How to conduct usability testing

Posted on December 11, 2023
10 min read

In product development, nothing is more important than user satisfaction. Usability testing is necessary to optimize user satisfaction and product usability: it's a critical type of research that collects qualitative and quantitative data, helping your team build a product that aligns with the target audience's needs, pain points, and expectations. 

This comprehensive guide details the usability testing process and offers tips for creating effective tests. 

What's usability testing?

Usability testing is a form of research in which test participants interact with your product to complete tasks and provide you with feedback about the user experience (UX).

Post-launch, users may abandon your product and switch to a competitor's offering if they cannot complete tasks without difficulty or frustration. Seeking feedback about these areas of confusion before launch helps your team pinpoint areas where your product needs improvement. To that end, usability testing examines your product's functionality and validates the intuitiveness of your user interface and design before the product ever hits the metaphorical shelf. 

Check out some usability testing examples to see how it works in the real world.

When should you do usability testing?

Customer-centric product development relies on many factors, but one non-negotiable element is usability testing, which can provide value at any stage of design. Ideally, you should approach it as an iterative process: test early and often, starting during the product discovery stage and continuing through post-launch.

During discovery, even before the product exists, you can conduct prototype testing to collect user feedback and use it to build a product that truly addresses your target users' pain points, needs, and expectations. 

However, it doesn't stop there. Continue testing at every phase of the product development cycle to observe user behavior and gain a better understanding of what works well and what needs improvement. Iterative testing keeps your customer at the forefront of the design cycle and gives your team the human insight it needs to create the most valuable product.

Steps for conducting usability testing

Here are some steps to follow to run a usability test:

1. Plan the logistics

A little planning goes a long way, and conducting successful usability testing can require quite a lot of it. You'll need to consider multiple factors, such as:

  • How and when usability testing will fit into the project's overall timeline
  • How many and which team members should be involved
  • Whether you will conduct moderated or unmoderated usability testing
  • Whether you will conduct your testing remotely or in person (and, if in person, the testing location)
  • What the team's research goals are
  • How much time, money, and other resources the team must allocate for testing
  • Who your target audience is
  • Which usability testing tools best suit your project

2. Find participants

Technology has changed how companies gather users' insights and perspectives for unmoderated and moderated experience research. Previously, recruiting participants was a slow process limited by geography, time, and resources. Now, teams can access participants globally with the click of a button, accelerating the research process significantly. With AI and machine learning capabilities, teams can synthesize results into findings in seconds.

To find participants, consider these methods:

  • Recruitment agencies: Using recruitment agencies can be costly, but they can reduce your workload by finding and selecting desirable candidates for you. 
  • Website pop-ups: Add pop-ups to your website inviting visitors to participate. If they already use your products or services, chances are they'll be interested in providing in-depth feedback to help improve your products further.
  • Social media posts: If your brand has a large following and plenty of social media engagement, post on social media to find potential participants for your study. Add relevant hashtags to reach more participants who represent your target user base.
  • The UserTesting Contributor Network: Gather diverse perspectives from any audience quickly and easily. 

3. Create your test

During usability testing, you should ask participants to complete tasks they would regularly encounter while navigating your website, app, or other digital interface. For example, if you own an eCommerce business, one of your tasks may look like this: 

"You're buying a dress for your daughter's birthday party, but you're on a budget, and she loves the color purple. Try to find a purple dress that's less than $100." 

A task like this allows you to test the most important functions of your site. For an eCommerce site, the ability to find and purchase products is essential. This task also shows whether participants can easily find key filters, such as filters for color or price. 
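If it helps to keep tasks measurable, you can draft each one alongside an explicit success criterion and a rough target time. Below is a minimal sketch in Python of what that might look like; the field names, target times, and the second task are illustrative assumptions, not part of any particular testing tool.

```python
# Hypothetical, minimal structure for usability test tasks: each task pairs a
# scenario prompt with an explicit success criterion and a rough target time.
tasks = [
    {
        "id": "find-purple-dress",
        "scenario": (
            "You're buying a dress for your daughter's birthday party, but "
            "you're on a budget, and she loves the color purple. Try to find "
            "a purple dress that's less than $100."
        ),
        "success_criterion": "Participant reaches a purple dress priced under $100",
        "target_seconds": 120,  # assumed benchmark, adjust to your own product
    },
    {
        "id": "filter-by-price",  # illustrative second task
        "scenario": "Narrow the dress listings to items under $100.",
        "success_criterion": "Participant applies the price filter",
        "target_seconds": 60,
    },
]

# Print a quick outline of the test plan.
for task in tasks:
    print(f"{task['id']}: allow up to {task['target_seconds']} seconds")
```

Writing tasks down in this form also makes it easier to score results consistently later, because every task already states what counts as success.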

Later in this post, we'll provide several tips to help you create the perfect usability test.

4. Conduct the study

During the study, allow participants to complete the assigned tasks without assistance or guidance. It's important to assess and note how long it takes users to navigate your site and complete their tasks. 

Ask participants to think out loud, which will allow you to observe their thoughts while they use the product. Taking note of their thoughts and feelings as they interact with your product can foster a deeper understanding of user behavior. 

After participants complete each task, ask for their feedback. Ask whether they found your product or experience functional and easy to navigate, whether they could complete the assigned tasks successfully, and whether they enjoyed the interaction.

5. Analyze the test results

After conducting the study and gathering your data, analyze the results. Aim to do this soon after the study concludes, while the results are still fresh in your mind and relevant. If multiple participants ran into the same issues during the study, examine the problem further and make any needed adjustments or improvements. By analyzing the results, you can identify usability problems and act on your findings to improve the overall user experience.
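To give a rough sense of the quantitative side of this analysis, here is a small, self-contained Python sketch that computes a success rate and median time on task for each task and flags any task that several participants failed. The session records are invented for illustration; this isn't output from a real study or from a specific analysis tool.

```python
from collections import defaultdict
from statistics import median

# Hypothetical session records: one entry per participant per task.
sessions = [
    {"task": "find-purple-dress", "completed": True,  "seconds": 95},
    {"task": "find-purple-dress", "completed": False, "seconds": 240},
    {"task": "find-purple-dress", "completed": True,  "seconds": 130},
    {"task": "filter-by-price",   "completed": False, "seconds": 180},
    {"task": "filter-by-price",   "completed": False, "seconds": 200},
]

# Group sessions by task so each task can be summarized on its own.
by_task = defaultdict(list)
for record in sessions:
    by_task[record["task"]].append(record)

for task, runs in by_task.items():
    successes = sum(run["completed"] for run in runs)
    success_rate = successes / len(runs)
    median_seconds = median(run["seconds"] for run in runs)
    # Flag tasks that two or more participants failed for a closer look.
    flag = "  <- investigate further" if len(runs) - successes >= 2 else ""
    print(f"{task}: {success_rate:.0%} success, median {median_seconds}s on task{flag}")
```

Numbers like these don't replace qualitative feedback, but they make it easier to spot which tasks deserve attention first.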

Tips for creating optimal usability tests

Asking the right questions during your test is crucial. A poorly constructed test can provide misleading user feedback, which could lead to unnecessary or even harmful changes to the product. Creating the ideal test can be complex and depends heavily on the product in question and your team's specific circumstances, but here are some points to keep in mind that will help you avoid writing, launching, and conducting a sub-optimal test.

Rely on the scientific method

Although usability testing doesn't always happen in a controlled laboratory setting like traditional scientific experiments, using elements of the scientific method can greatly enhance the effectiveness and credibility of usability testing. Simply put: 

  1. Formulate a hypothesis: Define a clear hypothesis or set of expectations about how users will interact with your product. For instance, you might hypothesize that users will find a certain feature intuitive.
  2. Design the test: Plan the test methodology, including tasks, scenarios, and metrics to measure. Design your test to validate or invalidate your hypothesis.
  3. Gather data: Conduct the usability test, gathering both quantitative and qualitative data. This can include metrics like task success rates, time on task, user satisfaction ratings, and qualitative feedback through observations or user interviews.
  4. Analyze and interpret the data: Review the data for patterns, outliers, and insights. Quantitative data gives numerical information, while qualitative data provides context and user sentiments. Analyze these together to understand how users interacted with the product or prototype.
  5. Draw conclusions: Based on your analysis, determine whether your findings align with your initial hypothesis. Identify strengths and weaknesses in the product's usability. These conclusions should guide decisions for future design changes or improvements.
  6. Iterate and improve: Use your conclusions to make targeted improvements. This involves implementing changes, refining features, or adjusting designs based on user feedback and identified issues. Continuously iterate on the product to enhance its usability and overall user experience. 

Using the scientific method as a foundation ensures your test methodology is rational and sound, bringing rigor and structure to the testing process.
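As a concrete illustration of steps 3 through 5, the sketch below checks an assumed hypothesis ("at least 80% of users will complete the task") against observed results using an exact one-sided binomial test. All numbers, including the 80% threshold and the 0.05 significance level, are illustrative assumptions rather than figures from this article.

```python
from math import comb

def binomial_p_value(successes: int, trials: int, hypothesized_rate: float) -> float:
    """One-sided exact binomial test: the probability of observing this many
    successes or fewer if the true success rate were hypothesized_rate."""
    return sum(
        comb(trials, k) * hypothesized_rate**k * (1 - hypothesized_rate) ** (trials - k)
        for k in range(successes + 1)
    )

# Hypothetical result: 9 of 15 participants completed the task, against a
# hypothesis that at least 80% would.
p_value = binomial_p_value(successes=9, trials=15, hypothesized_rate=0.80)
print(f"p-value = {p_value:.3f}")

if p_value < 0.05:
    print("Evidence against the hypothesis: the task is likely harder than expected.")
else:
    print("Results are consistent with the hypothesized success rate.")
```

With only a handful of participants, treat a check like this as a sanity test of your hypothesis rather than a definitive verdict; the qualitative observations from the sessions usually carry at least as much weight.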

Mitigate test fatigue

The onset of test fatigue can vary significantly based on multiple factors, such as the complexity of tasks, the length of the test, the user's engagement level, and the user's tolerance for testing. The presence of a moderator can keep participants engaged and provide breaks or encouragement, which can delay the onset of fatigue.

In unmoderated tests, in which users complete tasks independently, fatigue may set in earlier. In an unmoderated environment, average users can experience test fatigue after as little as 15 to 25 minutes. 

To prevent test fatigue, it's important to:

  • Limit test duration: Keep tests concise and focused, and break longer tests into shorter sessions if necessary. If you have several hypotheses you'd like to investigate, you may need to conduct three or four separate tests to address them all.
  • Vary tasks: Incorporate a mix of tasks to maintain user interest and engagement. Alternating between different types of activities can help combat monotony.
  • Provide breaks: Build short breaks into longer tests so users can rest and recharge.
  • Engage users: Incorporate interactive elements or questions to keep users engaged throughout the test.
  • Gather feedback: Monitor for signs of fatigue and gather feedback on the testing experience itself to refine future tests.

Creating tests of the right length is a skill you'll improve with every test you conduct.

Avoid unnatural test flows

Another thing to consider is the structure of your test. Start with general tasks, such as exploring the homepage, using the search feature, or adding an item to a basket. Then, move in a logical, connected flow toward more complex tasks.

For example, you shouldn't ask someone to follow this flow:

  1. Create an account
  2. Find an item
  3. Check out
  4. Search the site
  5. Evaluate the global navigation options

The order of that workflow simply doesn't make logical sense. A more sensible workflow would be:

  1. Evaluate the global navigation options
  2. Search the site
  3. Find an item
  4. Create an account
  5. Check out

If your task will require the user to do something that is complex or has a high risk of failure, consider putting that task near the end of the test. This will help prevent the user from perceiving the product as broken and assuming all tasks will be impossible.

Be wary of leading language

A simple way to render your test results useless is by using leading language. Leading language refers to wording or phrasing that subtly guides or influences someone's thoughts, decisions, or actions toward a particular direction or outcome. It's common in various forms of communication, such as persuasive writing, advertising, sales pitches, or even in everyday conversations.

In usability testing, it's paramount to avoid leading language so you don't receive biased feedback from participants. Using leading language can inadvertently prompt participants to respond in a way that aligns with the tester's expectations or desired outcomes, potentially skewing the results. Even if it's not your intent, leading language can cause users to complete a task in a way that minimizes thinking, problem-solving, or natural behaviors.

For example, asking a leading question like, "Wasn't it easy to navigate through that menu?" presupposes a positive response and might influence participants to agree, even if they had difficulties. Instead, neutral and open-ended questions like, "How would you describe your experience navigating through the menu?" encourage participants to share their genuine thoughts without being influenced by suggestive language.

Use your words intentionally when writing test tasks. Know what you are testing, lead when necessary, avoid leading when it is detrimental to the test's goals and hypotheses, and be aware of the possible interpretations of your words. By using neutral language, you can gather more authentic and unbiased feedback, giving you a clearer understanding of user experiences and perceptions without inadvertently steering their responses.

Test the correct people

If your product is designed for highly tech-savvy young women, the insights you find may not be on target if your test group comprises middle-aged men with novice-level tech skills.

For best results, map your participants to existing personas or market research, and select accordingly when setting up your test. Spending adequate time targeting, recruiting, and screening the correct test subjects is a good investment. With UserTesting's expanded enterprise features, you can write screening questions that let you target specific demographics within our testing panel. 

Involve your development team

If you are testing a complicated flow in a live test environment, you may need assistance from your development team to prepare the testing environment. This sounds simple, but you’d be surprised how easy this step is to overlook. 

Prematurely launched tests can produce poor results when the product isn't fully prepared, and unexpected issues can cause disruptions for testers. Your team members are your allies, so include them in test planning and implementation as needed. When the back end is not ready or hasn't yet been built, it's a good idea to seek feedback on sketches, wireframes, or prototypes instead.

Allow time for a pilot test

Have someone unfamiliar with your testing approach and your product run a pilot test before you conduct official usability testing. This simple step can help catch spelling and grammar errors, awkward or leading wording, possible misconceptions, and technical product issues. Tweaking your test after the pilot test run will lead to better findings and help improve your test creation skill set in a low-risk manner.

Conduct productive usability tests with UserTesting

You should now have a solid grasp of how to conduct usability testing to obtain the most useful feedback from your test participants. For more information about how UserTesting can help your team conduct successful usability tests and create the best possible products and experiences, contact us today.
