9 Steps for Creating the Perfect User Test

By Jessica DuVerneay | May 2, 2013

In this three-part series, Jessica DuVerneay, a Los Angeles-based information architect and user researcher at The Understanding Group, will share testing tips on When to Test, What to Test, and How to Test. DuVerneay has been happily using UserTesting.com for the past 3 years, and teaches workshops for both practitioners and clients on Lean Unmoderated User Testing and other IA topics.

How to Create the Optimal User Test

Once you’ve decided to create your first user tests, there are a few common errors to avoid. A poorly constructed test can provide direction that is misguided and harmful to your product.

While writing the ideal test is more nuanced than I can adequately cover here, there are some points to keep in mind up front that will help you avoid writing and launching a sub-optimal test:

1. Know When to Test & What to Test

Understanding the best time to test and what to test can help you leverage user testing strategically. A little bit of planning can go a long way.

2. Rely on the Scientific Method

Using the scientific method as your base approach ensures your test methodology is rational and sound. Simply put: understand the problems your product is facing, create hypotheses about these problems, test the hypotheses, review your results, make changes, and test again.

3. Ask the Right Questions: Generating Hypotheses


Hypotheses can come from many sets of quantitative and qualitative data, including analytics, user research, customer journeys, team insights, heuristic evaluations, competitive analysis, and any other relevant sources.

List out what you think the problems are based on the aggregate data your team can gather, isolate what you KNOW are problems (you don’t need to test something you already KNOW is a problem), and prioritize which problems you’d like to investigate during the upcoming round of user testing. Surprising findings from "Test A" can often become hypothesis seeds for "Test B."

As a bonus, this stage is a great place to get all members of a team on board, build consensus, and get buy-in around testing activities and timelines.

4. Consider Test Length


Keep your tests to a reasonable length. While you want to get the most out of each test, remember that in an unmoderated environment, test fatigue is likely after 12-15 minutes for the average user. Because some tasks are more complex than others, judging the right test length is a skill you will acquire the more you test.

If you have 15 hypotheses you'd like to investigate, you may find you need to write 3 or 4 separate tests to address them all.

5. Avoid Unnatural Test Flows


Another thing to consider is the test structure. Start with general tasks (exploring the homepage, using search, adding an item to a basket) and move in a logical, connected flow towards more involved tasks.

You wouldn't ask someone to create an account > find an item > check out > search the site > then evaluate the global navigation options; it just doesn't make any sense. A more sensible approach would be to evaluate the global navigation > search the site > find an item > create an account > check out.

If your task will require the user to do something that is complex or has a high risk of failure, consider putting that task near the end of the test to avoid the perception that the product is broken and all tasks will be impossible.

6. Be Aware of Leading Language

A simple way to write a useless test is through leading language.

Leading language means using a specific word or term that directly tells users how to complete a task, minimizing the thinking, problem solving, and natural behavior you want to observe.

Consider this example: If a test needs to establish that a user can follow a company on Twitter, which of these would be better to ask?

  A. “From the home page, please follow Company X on Twitter.” (Most Leading)
  B. “What social media options are available for Company X? Please use the one that is most relevant to you.” (Less Leading)
  C. “If you were interested in staying in touch with Company X, how would you do that? Please show at least two options.” (Least Leading)

The answer is: it depends. Option A is perfect if you are testing the findability of a newly designed Twitter icon from the homepage. On the other hand, Option C would be more appropriate for testing preferred user flows or desired functionality and content.

Use your words intentionally when writing test tasks. Know what you are testing, lead when necessary, avoid leading when it is detrimental to the test's goals and hypotheses, and be aware of the possible interpretations of your words.

7. Test the Correct People


If your product is designed for young women who are highly tech savvy, the insights you find may not be on target if you test your product with middle-aged men who are novice-level technologists.

For best results, map the folks you will be testing to existing personas or market research, and select accordingly when setting up your test. Spending the time you need to target, recruit, and screen the correct test subjects is a good investment.

Recruiting the right testers can be the greatest bottleneck in lean UX. Recently, my company had a trial enterprise account with UserTesting.com. One of the expanded enterprise features is the ability to write specific screener questions, which allowed us to target an elusive population segment from UserTesting.com’s testing panel. This saved us a great deal of time and money and was simple to use.

8. Check In with Your Dev Team

If you are testing a complicated flow in a live test environment, you may need assistance from your dev team to prepare the testing environment.

This sounds simple, but you’d be surprised how easy this step is to overlook. I’ve seen tests that were launched before developers gave a final “ok” lead to miserable results, because the product was just not ready. I’ve also seen odd loopholes where dummy account sign-in data had to be cleared manually from a database between each test for the product to work. In that instance, the first test went great, while the subsequent seven testers were frozen out.

Your team members are your allies, so be sure to include them in the test planning and implementation as needed. When the back-end is not ready or has not even been built, I highly recommend getting feedback on your prototypes.  Prototyping tools like TestFlight, PopApp.in, Justinmind Prototyper, and Axure in conjunction with UserTesting.com make it possible to test your clickable web and mobile prototypes with your target users.

9. Allow Time for a Pilot Test

Have someone unfamiliar with your testing approach and product do a test run prior to releasing your test to the public.

You will see grammar errors, awkward task wording, possible misconceptions, and technical product issues of which you were not aware. Tweaking your test after the pilot run leads to better findings and improves your test-creation skills in a low-risk manner.

Thank you for reading this series - it’s been fun to write, and I hope it helps practitioners better leverage the benefits of lean unmoderated user testing for product improvements.


About the author(s)
Jessica DuVerneay

Jessica DuVerneay is a Los Angeles-based information architect and user researcher at The Understanding Group. DuVerneay has been happily using UserTesting for the past several years and teaches workshops for both practitioners and clients on Lean Unmoderated User Testing and other IA topics.