12 Tips for Top Test Results

October 24, 2011
The key to getting useful test results is writing a good test. So we asked for advice from the experts in our office who’ve run thousands of tests. Here’s what they said:

1. Clarify your goals and how you’ll use results.

Before you begin testing, you have to be able to answer this question: “What do I hope to learn?”

  • Do customers understand what my website does within 5 seconds?
  • What distracts or confuses visitors as they try to place an order?
  • How quickly do users accomplish a given goal (“find a men’s sweater”)?
  • Do customers prefer our site or our competitor’s?
  • What happens when customers search for our offering on Google?

2. Consider using both broad tasks and specific tasks in your tests.

  • Broad, open-ended tasks help you learn how your users think. For example, “Go to the site and take a minute to look around. What’s the first thing that grabs your attention? What are your first impressions? Can you tell what business this company is in? How does it feel to you?”

This can be useful when considering branding, content, and layouts–some of the intangibles of user experience. This approach is also instructive early on, when you’re testing broad concepts rather than usability. You can run tests on very simple prototypes like uploaded PDFs or JPGs, or a single HTML page.

  • Specific tasks help pinpoint where users get confused or frustrated while trying to accomplish a particular goal on your site. For example, “You’re shopping for a gift. 1. Search for the item, specifying that you want to pick it up in a store near you. 2. Review the results page. Sort by the best reviews…”

This approach is great for understanding what path(s) users take and whether they can easily use a particular feature on your site.

3. Provide context up front.

Say you want to validate a new search feature on your e-commerce site. What’s the best way to do this? You could ask test participants to think of an item they actually need to buy: a steam iron or a hotel room in Hawaii or a gift for a child. Or perhaps they’re looking for a new job.

Use the scenario field to set the stage for your test.

Tip: UserTesting can help run shopping cart tests, where users buy an item on your site with a prepaid Visa gift card that we provide. This will help you detect the places where an interested buyer often fails to complete a purchase. Contact us for details.

4. Test the instructions.

If a test has too many twists and turns, you’re testing users, not the website. Ask someone else on your team to try the test (not just read the steps) and have them note any unclear or confusing instructions. Run a pilot test and see if you get the intended results.

Save big picture questions for the summary, when test participants have completed all tasks and have a moment to reflect.

Tip: Request the same test participants for later rounds. This allows you to test the experience of repeat visitors and find out whether you’ve addressed their feedback.

Enter the participant’s username in the designated field when setting up your test.

5. Be concise.

Long, unmoderated tests can result in frustrated users. Most tests are between 10 and 15 tasks long. You can add more tasks, but you may get less helpful feedback if the test participants become fatigued.

6. Beware of leading questions and bias.

You may already anticipate some issues users will encounter. Look for language that hints at this: “Was it hard to find the Preferences page?” “How much better is the new version than the original home page?” A more neutral approach produces fairer results: “Compare the new version of the home page to the original. Which do you prefer?”

This doesn’t mean you shouldn’t ask difficult or negative questions. But try to keep tasks straightforward and language as neutral as possible. Save overall questions about how the user felt for the summary, so that reflection doesn’t disrupt workflow.

Similarly, watch for a halo effect. With in-person moderated usability testing, participants often have a bias toward positive feedback, because they don’t want to hurt the moderator’s feelings. Remote unmoderated user testing carries less of this bias, but you should still be aware of the potential. (Have you noticed a halo effect on any of your tests? How have you worked around it?)

7. Ask test participants to meet your requirements.

UserTesting lets you identify a target audience so we can recruit appropriate test participants. Include demographic requirements (gender, age, income, location) to match your target audience, and provide additional requests in the Other Requirement field. Effective screeners are often based on behavior: daily hours of web usage, playing games on mobile devices, having posted a resume online, leasing a car, living in South Florida.

Very specific requirements may take a little longer to fulfill, but the results will match your project goals.

8. Run a private test.

You can create tests in UserTesting and administer them to your own selected customers and stakeholders. We’ll send you a private URL that you forward to customers.

To do this, check “I want my own customers to be the users for this test.” You can then compensate them directly using whatever method you prefer.

9. Invite live customers browsing your site in real time.

Take advantage of our partnership with Ethnio to intercept customers browsing your website. Email us to discuss this for an upcoming test.

10. Include enough test participants.

A rule of thumb for each stage of research: you’ve done enough when you start hearing the same comments or seeing the same patterns over and over. According to experts like Jakob Nielsen, this can occur with as few as five users.

If you don’t start to see common issues emerging after 10 users, your test may be too broad, or your audience may require segmenting.

Running one or two users at a time will provide interesting feedback but might skew your thinking if those test participants aren’t representative.

11. Run separate tests for distinct demographics and groups of users.

For example, feedback from new customers will generally differ from feedback from regular users of your site. Men may shop online differently than women. Separate these groups up front in your test request so you can tell the results apart later.

12. Take advantage of follow-up questions.

You can send follow-up questions to a test participant by clicking the “Want to ask this user tester a question” link on the video page. Clarify what a participant meant at 3:40 when she said, “Hmm… what is that?” Perhaps something in your tasks and instructions should be addressed now or in future tests.


What are you doing to achieve meaningful results for your website? Add a comment below, email us, or tweet @usertesting.