So, you’ve been asked to make your designs and software accessible. You know what that entails and how to get started, but how to measure success is still unclear. Well, there are many ways to measure digital accessibility: analyzing how a team incorporates accessibility, auditing against legal requirements, and gathering user feedback are just a few.
We’ve put together our guide to measuring your success in accessibility, covering not just the WCAG criteria but also the roles and responsibilities within a tech team.
Measure against accessibility principles
Let’s start with the foundations of accessibility, the POUR principles:
- Perceivable
- Operable
- Understandable
- Robust

These, along with the WCAG criteria, underpin the majority of accessibility laws, so measuring against them is a sure way to create an inclusive user experience.
Web Content Accessibility Guidelines (WCAG) explain how to make web content more accessible to people with disabilities. WCAG covers websites, applications, and other digital content. It’s an international standard developed by the World Wide Web Consortium (W3C) Web Accessibility Initiative (WAI).
There are three levels of conformance:
- Level A is the minimum level.
- Level AA includes all Level A and AA requirements. Many organizations strive to meet Level AA.
- Level AAA includes all Level A, AA, and AAA requirements.
How do you measure meeting the perceivable criteria for digital accessibility?
The perceivable principle is the first, declaring that all information and components must be presented to users in a way they can perceive. For example, whether someone struggles to hear or to see, the web page should still communicate its content in a way that works for them.
Measuring the success of these criteria includes looking at how text alternatives for non-text content are communicated. How many transcripts accompany videos or audio? How many images with relevant information have a description that communicates said relevant information?
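One way to quantify this is an alt-text coverage score over a page’s images. Here’s a minimal sketch, assuming a simplified `ImageRecord` shape; in a real audit this data would come from the DOM or a crawler:

```typescript
// Sketch: measure alt-text coverage over a simplified inventory of page
// images. The ImageRecord shape is an assumption for illustration.
interface ImageRecord {
  src: string;
  alt?: string;        // a missing or empty alt on an informative image is a failure
  decorative: boolean; // decorative images should carry an intentionally empty alt
}

function altTextCoverage(images: ImageRecord[]): number {
  const informative = images.filter((img) => !img.decorative);
  if (informative.length === 0) return 1; // nothing needs describing
  const described = informative.filter(
    (img) => typeof img.alt === "string" && img.alt.trim().length > 0
  );
  return described.length / informative.length;
}
```

A coverage below 1.0 flags informative images that lack a text alternative; decorative images are excluded because they should carry an intentionally empty `alt`.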
This criterion focuses on time-based media. For example, captions should be synchronized with the video, and sign language should be added for level AAA. Providing audio descriptions, or alternatives to the sound, helps those with various sight impairments.
A good experience comes from testing features like these. While it’s not explicitly mentioned in the criteria, test what good caption timing looks like: captions need to stay in sync with the audio as much as possible, but without revealing information before the audio does.
Related reading: Testing the usability of sound
The adaptable criteria require content to be flexible for users and still be perceivable. Users should be able to manipulate the interface in ways they need to consume the content.
Testing the success of these criteria means unleashing your inner quality analyst by manipulating the interface in as many realistic ways as possible. Zoom in with the browser or operating system tools. Increase the font setting. Rotate the screen between landscape and portrait. Does the content still make sense, or do things move to weird places? Success here means content remains accessible and viewable, with no text pushed off-screen.
Next is distinguishable content, making content perceivable through color, contrast, styling, and audio controls. Meeting this criterion involves looking at how meaning is conveyed in multiple ways.
Do links stand out from text through color and styling, like being red with an underline? How distinguishable is the text from its background color? Is there sufficient contrast? Can users who change the contrast or resize text still achieve what they need to?
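Contrast in particular can be measured exactly. This sketch implements the WCAG 2.x relative-luminance and contrast-ratio formulas; level AA requires at least 4.5:1 for normal text and 3:1 for large text:

```typescript
// WCAG 2.x relative luminance for an sRGB colour (channels 0–255).
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between foreground and background: (L1 + 0.05) / (L2 + 0.05),
// where L1 is the lighter of the two luminances.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}
```

Black text on a white background yields the maximum ratio of 21:1; anything at or above 4.5:1 passes AA for normal text.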
For audio, people should be able to control sounds played by a website, and anything that plays automatically for longer than three seconds needs a way to pause, stop, or control its volume. Background sounds should also be substantially quieter than the foreground sound, like background music under a speech; at level AAA, WCAG asks for at least 20 decibels lower.
How do you measure the operable criteria for digital accessibility?
Second is the operable principle, stating that users can interact with components in ways they need.
Testing this criterion means using just the keyboard. Try tabbing through the interface, selecting links, clicking buttons, and achieving your user journeys. How does it feel to use? Can you reach every component and trigger its action? How about if a screen-reader user goes through the same journeys: do they know which components they’re focused on, and do they know how to use them? Are keyboard shortcuts ergonomic and reachable for human fingers?
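Some keyboard issues can also be caught automatically. One well-known signal is a positive `tabindex`, which overrides the natural focus order. This sketch flags it, assuming a simplified `FocusRecord` shape rather than real DOM elements:

```typescript
// Sketch: flag elements whose tabindex overrides the natural tab order.
// Positive tabindex values are widely discouraged because they pull
// keyboard focus away from the visual/DOM order. The FocusRecord shape
// is an assumption; real data would come from the DOM.
interface FocusRecord {
  selector: string;
  tabindex?: number;
}

function positiveTabindexes(elements: FocusRecord[]): string[] {
  return elements
    .filter((el) => (el.tabindex ?? 0) > 0)
    .map((el) => el.selector);
}
```

Automated checks like this only complement manual tab-through testing; they can’t tell you whether the journey actually feels usable.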
While helpful in secure user experiences, timeouts still need to provide sufficient time for users of all abilities to read and use the content. Users should be able to adjust timeouts and auto-updating information through pause, adjust, or extend controls. Success here means users achieve their goals without a timeout being triggered, and even when one occurs, no data is lost.
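A sketch of that extend behavior, with a clock injected so the logic can be tested without real timers (the names here are illustrative, not from any specific library):

```typescript
// Sketch: a session timeout that users can extend before expiry.
// The clock is injected so the logic is testable without real timers.
class ExtendableTimeout {
  private deadline: number;

  constructor(private now: () => number, private limitMs: number) {
    this.deadline = now() + limitMs;
  }

  remainingMs(): number {
    return Math.max(0, this.deadline - this.now());
  }

  // Reset the deadline; WCAG 2.2.1 suggests letting users extend
  // the time limit at least ten times via a simple action.
  extend(): void {
    this.deadline = this.now() + this.limitMs;
  }

  expired(): boolean {
    return this.remainingMs() === 0;
  }
}
```

In a real interface, a warning dialog would call `extend()` when the user asks for more time, and any form data would be preserved past expiry.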
Seizures and physical reactions
A slightly different measure of success for accessibility is the absence of harmful reactions to the design. Reactions of shock and amazement are fine, but seizures, pain, and other negative physical reactions must be designed out. Flashes and animation timings are the biggest culprits, so research accessible limits, such as keeping content to no more than three flashes per second, especially before testing. It’s safe to say that no adverse physical reactions is the measure of success for this criterion.
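The three-flash threshold can be checked mechanically. Given a list of flash timestamps, this sketch reports whether any one-second window contains more than three flashes; it is a simplification of WCAG 2.3.1, which also allows flashes below certain luminance thresholds:

```typescript
// Sketch: does any one-second window contain more than three flashes?
// Input is a list of flash timestamps in milliseconds.
function exceedsFlashThreshold(flashTimesMs: number[]): boolean {
  const times = [...flashTimesMs].sort((a, b) => a - b);
  for (let i = 0; i + 3 < times.length; i++) {
    // Four flashes inside a 1000 ms window means more than three per second.
    if (times[i + 3] - times[i] < 1000) return true;
  }
  return false;
}
```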
Criteria for a navigable experience involve providing ways for users to locate content and understand components’ purpose. This expands from previous criteria surrounding keyboard accessibility by including criteria like descriptive headings, labels, and page titles.
Just because components are reachable via keyboard-only methods does not mean someone who cannot see the page can mentally navigate the experience. The more complicated an experience, the more difficult it is to track what’s happening, especially when users can’t cognitively offload onto the page. Measuring the success of whether a sequence is meaningful or labels are fully descriptive can include timing how long it takes to find information, how direct users’ routes to that information are, and the overall success rate.
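Those findability measures can be summarized from test sessions. A minimal sketch, assuming a simplified `FindTask` record per participant task:

```typescript
// Sketch: summarise findability tests — success rate and median
// time-to-find — from simplified session records (an assumed shape).
interface FindTask {
  found: boolean;
  seconds: number;
}

function findabilitySummary(tasks: FindTask[]) {
  const successes = tasks.filter((t) => t.found);
  const successRate = tasks.length ? successes.length / tasks.length : 0;
  const times = successes.map((t) => t.seconds).sort((a, b) => a - b);
  const mid = Math.floor(times.length / 2);
  const medianSeconds =
    times.length === 0
      ? null
      : times.length % 2
      ? times[mid]
      : (times[mid - 1] + times[mid]) / 2;
  return { successRate, medianSeconds };
}
```

Tracking these numbers across rounds of testing shows whether navigation changes are actually making information easier to find.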
The input modalities category expands on keyboard navigation by adding pointer and motion criteria. What’s accessible via keyboard-only methods should also be accessible via mouse-only or motion-based methods. Keyboard shortcuts should have corresponding pointer gestures, and touch targets should be large enough that tapping them doesn’t feel like a game of whack-a-mole.
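Target size is one of the easier criteria to check numerically: WCAG 2.2 asks for at least 24×24 CSS pixels at level AA (2.5.8) and 44×44 at AAA (2.5.5). A sketch, with an assumed `Target` shape:

```typescript
// Sketch: flag touch targets below a minimum size in CSS pixels.
// WCAG 2.2 asks for 24x24 at level AA (2.5.8) and 44x44 at AAA (2.5.5).
interface Target {
  name: string;
  width: number;
  height: number;
}

function undersizedTargets(targets: Target[], minPx = 24): string[] {
  return targets
    .filter((t) => t.width < minPx || t.height < minPx)
    .map((t) => t.name);
}
```

Running the same check at both thresholds quickly shows how far the interface is from each conformance level.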
How do you measure being understandable in digital accessibility?
The understandable principle states content must be understandable in a variety of ways. This category doesn’t just look at impairments but also includes concerns for culture and language.
Speaking of language, the first criterion under Readable is that the language of the content is declared and identifiable. Let’s say your page is declared to be in English, but a paragraph or two is in another language, like German. For screen-reader users, the German content will still be announced, but with English pronunciation rules, which leads to some interesting results. Incorrectly declared languages can also break features like automatic browser translation, because the browser doesn’t know which language it’s translating from.
Readability also involves reading levels and abbreviations. The reading age of your content should match that of your users. Definitions for abbreviations and unusual words should also be provided. Testing the usability of your content for ease of reading and comprehensibility will help measure the success of your experience.
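Reading level can be estimated with a formula such as Flesch Reading Ease. This sketch uses a naive vowel-group syllable counter, so treat the score only as a rough signal, not a substitute for testing with real readers:

```typescript
// Naive syllable estimate: count groups of consecutive vowels.
// Real readability tools handle silent e, diphthongs, etc. more carefully.
function syllables(word: string): number {
  const groups = word.toLowerCase().match(/[aeiouy]+/g);
  return Math.max(1, groups ? groups.length : 0);
}

// Flesch Reading Ease: higher scores mean easier text
// (roughly, 60–70 reads as plain English for a general audience).
function fleschReadingEase(text: string): number {
  const sentences = Math.max(1, (text.match(/[.!?]+/g) ?? []).length);
  const words = text.split(/\s+/).filter(Boolean);
  const syllableCount = words.reduce((n, w) => n + syllables(w), 0);
  return (
    206.835 -
    1.015 * (words.length / sentences) -
    84.6 * (syllableCount / words.length)
  );
}
```

Running this over your published copy gives a quick way to compare pages against the reading level you’re aiming for.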
Mental models, design patterns, principles, and best practices aim to provide consistency and predictability in user experiences. Accessible experiences are no different, with WCAG issuing guidance on predictability in this context.
The success of a predictable, accessible experience is one with consistent navigation and consistent identification of components and their behaviors on events like focus and input. Can people who have never seen the interface before use the system without guidance?
From entering hobbies to submitting address details when purchasing a product, input options are the bread and butter of many websites. Form labels, error prevention, and suggestions are key to any form experience for users and businesses. The success of input assistance needs to come from usability testing, ensuring a diverse group of people can achieve their goals, regardless of what forms they’re presented with. Measure by variables like how many errors prevent people from submitting and how many people give up on the form.
Related reading: Testing mobile experiences
How do you measure the robustness criteria in digital accessibility?
The robustness principle centers on a design’s compatibility with assistive technologies, and success here comes from testing with them. Measurement can feel daunting because there are multiple types of assistive technology, each with multiple tools and multiple versions. Screen readers, for example, are a text-to-speech type of assistance, with popular software like JAWS for Windows or VoiceOver for Mac. Ultimately, robustness comes from code that meets accessibility standards: each component exposes the appropriate attributes, like name, role, and value, so that user agents can parse the content.
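The name/role/value idea can be illustrated with a heavily simplified slice of the accessible-name computation. The real algorithm (the WAI-ARIA accessible name specification) has many more steps; this sketch only shows the core precedence of `aria-labelledby` text, then `aria-label`, then the element’s own content:

```typescript
// Sketch: a heavily simplified slice of accessible-name computation.
// The ElementInfo shape is an assumption — labelledByText stands in for
// the text already resolved from aria-labelledby targets.
interface ElementInfo {
  labelledByText?: string;
  ariaLabel?: string;
  textContent?: string;
}

function accessibleName(el: ElementInfo): string {
  return (
    el.labelledByText?.trim() ||
    el.ariaLabel?.trim() ||
    el.textContent?.trim() ||
    ""
  );
}
```

A component whose computed name comes out empty is one a screen reader can only announce by role, which is exactly the kind of gap robustness testing should surface.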
What sets you up for success on robustness is narrowing down which assistive technologies you need to be compatible with. Start by gathering statistics on which assistive technologies, platforms, and browsers your audience uses; it’s the same strategy as declaring which browser versions you support.
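One way to turn those statistics into a support matrix is a simple share threshold: support any assistive technology and browser combination above a chosen percentage of sessions. The threshold and data shape here are assumptions:

```typescript
// Sketch: derive a support matrix from usage statistics — keep any
// combination above a chosen share of sessions.
function supportedCombos(
  usage: Record<string, number>, // e.g. "JAWS + Chrome" -> session count
  minShare = 0.05
): string[] {
  const total = Object.values(usage).reduce((a, b) => a + b, 0);
  if (total === 0) return [];
  return Object.entries(usage)
    .filter(([, count]) => count / total >= minShare)
    .map(([combo]) => combo)
    .sort();
}
```

Whatever threshold you pick, publish the resulting matrix so the whole team tests against the same combinations.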
Measuring accessibility efforts within the team
Accessibility efforts come from each role within the team. We’ve gone through some typical roles on a product team and highlighted questions their role should be asking and what success looks like for them.
Measuring digital accessibility as a team
First, let’s look at the success of accessibility on the team as a whole, starting with the team’s relevant skills, the team process, and empathy towards the topic. Accessibility isn’t an easy topic, so it’s important to nominate champions responsible for advocating for accessibility in meetings and throughout the team process. Champions need to come from a range of roles within the team and be trained properly; developers, for example, should know how to build accessibility into the software.
Next, a team should look holistically at its process to see where non-functional requirements like accessibility are being included. Do tickets include acceptance criteria that detail accessibility requirements? Are researchers including impaired users in studies? Are developers building accessible code, and are quality analysts testing for accessibility?
Measuring digital accessibility as product owners
Accessibility needs priority and budget, which are usually granted by business and product owners. As a business analyst or product owner, make sure each ticket incorporates acceptance criteria for accessibility. Tickets need to be estimated with the extra workload in mind and given the time they require. Expect a higher cost at the beginning, too, as the team adapts to the new process and faces new challenges.
There’s usually tech debt around non-functional requirements when they’re introduced to software that has existed for some time. One measure of success is how much of this technical debt is tackled in the sprints. Start by analyzing the current state, calculating how many accessibility bugs exist, and prioritizing the next steps in fulfilling accessibility criteria. Measure accessibility via the number of bugs, user feedback, and your objectives and key results (OKRs), such as user engagement or sales.
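That debt-tackling progress can be tracked with a simple ratio: the share of the original accessibility-bug backlog resolved by each sprint. A minimal sketch:

```typescript
// Sketch: accessibility-debt burn-down — the share of the original bug
// backlog resolved by each sprint. Input is the count of open
// accessibility bugs at the end of each sprint (an assumed shape).
function burnDown(openPerSprint: number[]): number[] {
  const start = openPerSprint[0] ?? 0;
  if (start === 0) return openPerSprint.map(() => 1);
  return openPerSprint.map((open) => (start - open) / start);
}
```

A flat or falling burn-down curve is an early warning that accessibility work is being deprioritized in planning.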
Measuring digital accessibility as researchers
Part of the effort for accessibility is empathy on the team. If a team doesn’t understand the value of what’s being asked of them, it can diminish the team’s shared vision and morale. Our success as researchers comes from sharing users’ stories and building empathy between users and the team.
As researchers, we can start by looking at how many impaired users are involved in studies, research, and testing. We need to ensure a variety of impairments are considered and that it’s not a one-time study.
We need to observe how many team members are participating in the research. How many people are adding questions to the discussion guide? How many are watching interviews? How many team members read the insights and use them to influence their decisions? How much pushback exists when discussing accessibility requirements?
Measuring digital accessibility as designers
As designers, we must go through videos, transcripts, and insights to best understand who we’re building for. Experience principles are a useful way to align team members, particularly design, around the vision of the software. Including accessibility in the agreed-upon experience principles is a huge first win and a confirmation that accessibility is being prioritized.
Audits of our work, both before and after the solution is built, help us stay on top of accessibility. The earlier in the design process we review our wireframes, prototypes, and even sketches, the easier it is to catch issues, reducing later costs such as redesigns and rework in development. Prototypes also need to be made accessible to be suitable for usability testing methods.
Related reading: Avoid deceptive patterns in design
The greatest way to measure the success of topics like accessibility in a product is through users’ actions. The more users are involved in researching and testing designs, and the more opportunities they have to share feedback, the easier it is to see whether the software meets user needs.
Accessibility requirements have been shown to benefit everyone, so it’s safe to measure success through your normal measures of success, like OKRs. Introduce accessibility into your process, train your team members, and gather insights from users; success will soon follow.