Lessons From An Expert: What to Test

Posted on March 29, 2013

In this three-part series, Jessica DuVerneay, a Los Angeles-based information architect and user researcher at The Understanding Group, will share testing tips on When to Test, What to Test, and How to Test.

As discussed in the last part of this series, When to Test, unmoderated testing can add value at any stage of the product design cycle. Knowing when to test is the first step - knowing what to test is the second.

1. UX / IA / Usability Issues


This is the area of inquiry where most companies or IA/UX professionals will begin. Unmoderated tests can identify high-level issues or drill down into a specific flow or conversion point, depending on how you structure your tasks and scenarios.

There are some basic measurable indicators to consider during usability testing. (Note: the indicators listed below draw heavily on Measuring the User Experience by Tullis and Albert.) A quick sketch of how a few of them can be rolled up from raw session data follows the list.

  • Task Completion: Can a user complete the task? How far do they progress correctly through the task completion funnel? What issues are impeding completion? Do users even want to complete the task in the first place?
  • Time on Task: How long does it take the user to complete the task? Is it long enough that the user loses interest or focus? Does the task take too little time to complete?
  • Errors: Did users make avoidable errors? Did users make unavoidable errors? Were they able to recover from them? What would make the errors less likely to occur, or easier to recover from?
  • Efficiency: Was the system efficient in the effort and number of steps it took users to complete a task? Which steps were superfluous or confusing? Which steps can be streamlined or eliminated? Which steps need to be added?
  • Learnability: For repeat visitors, is the system learnable? Do task success and perceived efficiency increase? Do time on task and the number of errors decrease? Do certain key actions and flows become tacit?
  • Perceived Severity of Identified Issues: Are any of the issues noticed by the testing team not actually registered as issues by users? Which issues are most important to the achievement of key business flows? Are any of the issues deal breakers?
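
To make a few of these indicators concrete, here is a minimal sketch in Python of rolling raw session results up into task completion rate, time on task, and error counts. The session data, field names, and numbers are invented purely for illustration:

```python
from statistics import mean, median

# Hypothetical results from five unmoderated sessions for a single task.
# Field names and values are illustrative only.
sessions = [
    {"completed": True,  "seconds_on_task": 74,  "errors": 0},
    {"completed": True,  "seconds_on_task": 112, "errors": 1},
    {"completed": False, "seconds_on_task": 205, "errors": 3},
    {"completed": True,  "seconds_on_task": 96,  "errors": 0},
    {"completed": False, "seconds_on_task": 159, "errors": 2},
]

completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
times = [s["seconds_on_task"] for s in sessions]
errors = [s["errors"] for s in sessions]

print(f"Task completion: {completion_rate:.0%}")  # 60% for this made-up data
print(f"Time on task: mean {mean(times):.0f}s, median {median(times):.0f}s")
print(f"Errors per session: mean {mean(errors):.1f}")
```

With only a handful of participants, treat numbers like these as directional signals to pair with what users actually say, not as statistically conclusive results.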

It is worth mentioning that what constitutes success for one product may not indicate the optimal state for another.

Furthermore, while aggregating quantitative data is valuable, do not make the mistake of ignoring the insights found in qualitative information (opinions, ratings, exclamations, and comments), as they can guide some of the most effective product changes.

2. Competitive Testing


One of the lesser-known benefits of unmoderated testing is the ability to show a test subject a competitor’s product without the risk of moderator bias. Learning from a competitor - which flows, content, and design patterns are successful and which should be avoided - can be particularly useful in the startup space where lean UX teams may have to make complex high-stakes product decisions on limited research. 

Questions to consider during competitive testing might include:

  • Which product do users prefer? Why?
  • Which product was easier to use? Why?
  • What are some errors my competitor made that I can avoid during product development?
  • What are some successful features that my competitor has that I might want to consider for product roadmapping?
  • How do users of different demographics (age, gender, OS, income, etc.) respond to each product? Is one better for my target market?

These tests are relatively easy to set up yourself, but UserTesting also offers assistance with competitive testing through their Agency and Enterprise subscription packages.

3. Preference Testing


I’ve seen it, my colleagues have seen it, and you’ve probably witnessed it as well - a product is ready to launch, and some inane battle about “The Blue Version vs. The Orange Version” threatens to delay that launch by weeks.

Preference testing can help teams determine which approach to run with, in a lean and cost-effective fashion, independent of internal personality conflicts or petty political battles.

While the statistically meaningful solution is to launch both versions and validate success via A/B testing, companies often do not have the bandwidth or time to create and support multiple versions of a site. Akin to competitively testing your product against itself, preference testing with remote users can help you gather a few user opinions to back up the decision about which version to launch.
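
For a sense of what “statistically meaningful” looks like in an A/B launch, the sketch below runs a standard two-proportion z-test in Python. The visitor and conversion counts, and the variable names, are invented for illustration - they are not from a real test:

```python
from math import erf, sqrt

# Hypothetical A/B results; visitor and conversion counts are invented.
a_visitors, a_conversions = 1000, 112   # e.g. "The Blue Version"
b_visitors, b_conversions = 1000, 87    # e.g. "The Orange Version"

p_a = a_conversions / a_visitors
p_b = b_conversions / b_visitors
p_pool = (a_conversions + b_conversions) / (a_visitors + b_visitors)

# Standard error under the null hypothesis that both versions convert equally.
se = sqrt(p_pool * (1 - p_pool) * (1 / a_visitors + 1 / b_visitors))
z = (p_a - p_b) / se

# Two-sided p-value from the normal CDF.
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p_value:.3f}")
```

A handful of unmoderated preference sessions will never clear that kind of bar, and that is exactly the point: they give you quick, directional evidence when building and supporting two live versions is not feasible.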

Preference testing may provide direct user insights around:

  • Visual design & Branding
  • Interaction design
  • Copy
  • Navigation approaches
  • Use of imagery
  • Page layout and information hierarchy
  • Any other contentious issues at your organization 

As UserTesting.com allows test subjects to access any content that can be hosted at a URL, showing a live, ready-to-launch product is not necessary. Linking to multiple versions of a visual comp or an HTML wireframe can provide great insight from users.

4. Validation of Fit & Finish


I saw this tweet a few weeks ago:

This is a sentiment I wholeheartedly agree with. If you test nothing else, or at no other point, please do a Validation of Fit & Finish test of your product prior to launch.

Fit and finish is a “measure twice, cut once” approach that will prevent unknowingly faulty products from being released. It will also build confidence among leadership who may have been removed from the design process by letting them see a handful of users interact positively with the product just prior to launch.

Validation of fit and finish should address:

  • One to four main user flows (keep this as simple as possible)
  • The main conversion point of the system
  • Overall opinions of the visual design and interaction design

The main question for a Validation of Fit & Finish should be “Are there any showstoppers that should delay launch?” While other usability or preference issues that surface can certainly inform future usability testing efforts, they should never obscure the main goal: to feel confident about launching and to identify any insurmountable, fatal flaws prior to launch.

Other Approaches: Copywriting Validation, Lightweight Multi-Platform QA, Persona Info

While the information you can gather from unmoderated user testing about Usability Issues, Competitive Analysis, Preference and Fit and Finish should be ample motivation to jump in and add user testing to your design cycle immediately, there are additional ways to use testing to improve your product. Some examples:

  • Personas: As mentioned in the last article in this series, I’ve augmented other user research activities with data I’ve collected from unmoderated testing to flesh out personas as needed.
  • QA Testing: I’ve seen unmoderated user testing implemented as a lightweight QA process - running the same test on a product in multiple desktop and mobile environments can provide invaluable insights for a small team with limited QA/UAT capabilities.
  • Copywriting: Likewise, copywriting and taxonomies can be tested to some extent with unmoderated testing. Again, the success of these inquiries will depend largely on how the tasks and scenarios are crafted.

Don’t limit your testing strategy or use of unmoderated testing to tried and true approaches that you are comfortable with. Think about the nature of what you need to know to make your product better, and try new approaches as your schedule allows. Your product, your team, and hopefully your users, will thank you for it.

The Right Test at the Right Time

This diagram is an example of how the different types of testing outlined in this article might work within a single design cycle - showing you not just when to test, but also what to test.

When creating your test, note that a single test can address multiple topics. For example, I’ve written a battery of tests for a product that addressed specific IA & UX issues, persona info, and competitive analysis at the same time, quite effectively.

Up Next – How to Test

In the final installment of this series, I will share some thoughts on How to Test. I’ll address a few common mistakes to avoid when creating, analyzing and presenting the findings of your unmoderated test. And, in case you missed it, you should check out part 1 of this series about When to Test.

Until then, feel free to comment below with questions or thoughts, or email me at jessica@understandinggroup.com. Thanks for reading!
