Overview —

TAO Advance is a digital test engine that helps educators deliver secure, reliable, WCAG 2.1 AA compliant online tests to students.

The objective of this project was to determine whether WCAG 2.1 AA accessibility compliance results in an assessment platform that is actually usable for visually impaired users.

In this project, I created a research plan detailing research questions and areas to be explored, compiled a screener to recruit visually impaired participants, moderated research sessions, analyzed data, and presented findings and recommendations to stakeholders.

Preparation —

Research Questions

Does WCAG 2.1 AA compliance equal usability?

Which question types are more usable for visually impaired users?

How do visually impaired users perceive digital assessments?

The ideal participant

The aim of this research project was clear: test the accessibility of our test engine. To assist in writing the screener questions, an ideal persona was defined. Its characteristics included having taken a test before, having a visual impairment, and using assistive technology regularly.

These characteristics were used to create the screener questions and guide the selection of possible participants.

A persona of an individual who has a visual impairment.

Recruitment process

Recruiting involved creating a set of ideal participant characteristics, writing a screener, and launching the recruitment process on our tool. The screener asked questions that helped us understand what visual impairments participants had, what tools they used, and how they had previously taken tests. The responses to the screener were exported into a spreadsheet, cleaned up, and color coded to help determine which participants to proceed with. Three participants were selected; they were informed and asked to book a time within the recruitment tool.

Table showing potential participants and their responses

What would be tested

The study was designed to last 90 minutes and consisted of a short user interview, usability testing, and wrap-up questions.

The test for participants to take was designed to validate the accessibility of particular test elements and question types. While realistic content was used in the questions, care was taken to keep it at a 5th grade level. This was done to make participants feel more comfortable and to reinforce that, even though they were taking a test, they weren't being tested: they were testing our platform.

Evaluation —

The evaluation of TAO Advance began with a short user interview with each participant to understand their current assistive technology use, their previous digital and paper-based testing experiences, and their perceptions of digital tests.

Participants were asked to go through the test as they normally would and answer each question to the best of their ability. After answering a question, they were instructed to go through the page again with their screen reader and highlight anything that was confusing or sounded off. They were then asked specific questions related to the question type to validate their understanding of it.

Finally, before moving on, participants were asked to rate the ease of answering the question type on a scale of 1 to 5. These steps were repeated for each area of the test.

| Participant | Visual impairment | Screen reader | Customisation and shortcuts | Braille knowledge | Screen reader on phone |
| --- | --- | --- | --- | --- | --- |
| P1 | Blind | JAWS on Windows | Increased speed, shortcuts | No | VoiceOver on iPhone |
| P2 | Low acuity & reduced visual field | NVDA and JAWS | Increased speed, shortcuts | No | VoiceOver on iPhone |
| P3 | Blind | NVDA on Windows | Customized pitch, speed, shortcuts | Yes | VoiceOver on iPhone |

Below is an example of a participant answering a question using their computer and screen reader of choice. It's a great illustration of how important it is to follow coding best practices and accessibility guidelines. A large amount of information must be quickly understood by an assistive technology user in order to grasp what's on the screen. If there are any gaps in that information, a user can quickly become confused or even stuck.

Results —

While the tool is WCAG 2.1 AA compliant, participants faced usability issues in some areas.


While WCAG 2.1 AA guidelines provide a strong foundation for web accessibility, compliance alone doesn't guarantee a usable experience. Extra care and thought must be given to accessibility issues that go beyond the WCAG guidelines.
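A hypothetical markup sketch (not TAO Advance's actual code) of how two snippets can both satisfy automated accessibility checks yet read very differently through a screen reader:

```html
<!-- Passes automated checks: the image has alt text, but it tells a
     screen reader user nothing useful about the chart the question
     refers to. -->
<img src="rainfall-chart.png" alt="chart">

<!-- Goes beyond compliance: the alt text conveys the information a
     sighted test taker would get from the image. -->
<img src="rainfall-chart.png"
     alt="Bar chart of monthly rainfall; June is highest at 120 mm">
```

The file name and alt text here are illustrative; the point is that a checker can verify the presence of a text alternative, but only a human can judge whether it is equivalent.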

Participants preferred multiple choice, associate, text entry, and inline choice question types

These question types were straightforward for participants to grasp. Participants had trouble with every drag-and-drop interaction, primarily because of the concept of "draggable".
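A hypothetical sketch (not TAO Advance's actual markup) of why draggable elements are hard for screen reader users to model, and a more familiar alternative:

```html
<!-- A draggable item: even with ARIA labelling, "draggable" gives a
     screen reader user little sense of where it can go or how. -->
<div draggable="true" role="button"
     aria-label="Paris, draggable item">Paris</div>

<!-- An equivalent selection pattern: a labelled select element per gap,
     which screen readers announce as a familiar, predictable control. -->
<label for="capital-france">Capital of France</label>
<select id="capital-france">
  <option>Paris</option>
  <option>Rome</option>
  <option>Madrid</option>
</select>
```

The question content and `id` values are invented for illustration; the design choice shown is replacing spatial drag-and-drop interactions with native form controls that assistive technologies already understand.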

Visually impaired participants prefer digital assessments IF accessibility has been taken into account

Digital assessments offer assistive technology users a higher level of autonomy than paper-based assessments. Paper-based assessments often require an additional person to read out each question and answer for the user.

Challenges —

The biggest challenge I faced was ensuring that the UX team's chosen tools were accessible to visually impaired users.

I began by identifying all of the interaction points a potential participant would face before speaking with me in the research session. These included our participant recruitment platform, electronic signature platform, and video conferencing tool. Each provider's accessibility statement and VPAT were examined to determine their stance on, and current compliance with, accessibility. This was followed by a manual test using the keyboard and a screen reader to verify the statements made and to ensure each tool was indeed usable for screen reader and keyboard users. Of the three tools used, one was not accessible: the video conferencing tool. As a result, other video conferencing tools were investigated until one was found that fit the needs of this study.

Summary —

My role

I created a research plan detailing questions and areas to be explored, created a screener to recruit visually impaired participants, conducted usability testing, and presented findings and recommendations to stakeholders.

My impact

My findings helped answer whether our test runner is usable for screen reader users, while identifying issues those users currently face. These issues were shared with stakeholders, translated into 34 actionable items, and prioritized in the product backlog.

Next steps

The product team is prioritizing the recommendations to address the gaps in usability that visually impaired users currently face.

We will continue to test updates, new features, and new flows as they are introduced to ensure the test runner remains usable for all.
