Unmoderated usability testing is a form of qualitative research in which users complete predetermined tasks using a design or interface. In these unguided studies, only the contributor is present during the session. The contributor uses a usability platform, like the UserTesting platform, and completes tasks while answering questions out loud. These remote tests can be completed at the contributor's own pace, at a time and place of their choosing, which makes them highly convenient for both participants and researchers. Unmoderated testing tends to be less time-consuming and more flexible than moderated usability testing, since contributors can complete their tests independently without disrupting the researcher's daily workflow.
What’s the difference between unmoderated and moderated user testing?
The differences between moderated and unmoderated user testing come down to the depth of insights, guidance, cost, and time. Moderated tests aren’t necessarily better than unmoderated tests, and vice versa. Many organizations need both throughout the development process, and factors at play include budget, the development stage, and your research objective(s).
Depth of insights
While some unmoderated test types require contributors to speak aloud and turn on their camera, doing so may feel less natural than a 1:1 or group interview. Some unmoderated test types capture only a participant's voice and screen (not their facial expressions), and others include only silent screen recordings, which can limit the depth of the insights you receive. And with non-verbal test types like card sorting or surveys, results are restricted to participants' answers alone.
Moderated user testing, on the other hand, is a great way to observe body language, facial expressions, and subtle behaviors and responses, which can be highly insightful details. Some tasks may evoke joy, confusion, or another emotion worth noting. Observing and developing a rapport with your test participant can help establish trust and lead to candid feedback that might not be possible with other qualitative research methods. However, a downside is that a moderator's presence may cause a participant to give biased feedback. They might tell the researcher what they think the researcher wants to hear, or avoid criticizing what they're reviewing for fear of hurting someone's feelings.
Guidance
Because unmoderated tests lack a moderator, fine-tuning test scenarios and questions becomes even more important to ensure that participants understand the tasks clearly. To minimize the risk of a participant speeding through the tasks or questions, we recommend reminding participants to slow down or asking them to spend a designated amount of time on a certain task. And if you anticipate any technical errors or non-functional elements, it's always a good idea to tell participants what to do if they encounter a barrier to task completion, whether that's moving on to the next task or describing what they expected to happen.
In moderated testing, by contrast, the moderator can always guide participants back in the right direction. This added guidance comes in handy if a participant goes off-topic with their responses, or if they hit a hurdle with a partially functional prototype or a technical error. Moderators aren't just there to ask questions; they explain tasks in more depth to alleviate confusion, clarify misunderstandings, and help establish trust.
Cost
The beauty of remote unmoderated usability testing is that it can be done anytime, anywhere, and you typically have actionable feedback within a day or less. Because a moderator isn't needed, the cost is typically much lower than moderated tests, enabling you to run more tests with a wider variety of contributors. Additionally, if you conduct a test and later (understandably) decide that you need to rework the tasks or questions, sending out a new round of remote unmoderated tests is more cost- and time-friendly than redoing moderated usability tests.
Conversely, moderated usability testing tends to be more expensive than unmoderated testing, especially when conducted in person, due to expenses like renting a testing location. Even when run virtually, moderated tests carry extra costs, not least the cost of time. Both in-person and remote moderated testing require more planning and scheduling, and potential cancellations or no-shows can lead to significant project delays. Additionally, moderated test participants earn a higher rate, which needs to be accounted for in your budget.
Time
Unmoderated tests can be as short as 10 to 20 minutes, which is why they're so well-liked by participants, especially those with tight schedules. And as soon as sessions are recorded and uploaded to a user feedback platform, the results are instantly yours to review and dissect.
Meanwhile, moderated testing is known to take longer than unmoderated testing because of the added administrative effort. This test type requires thorough planning and recruiting to find target participants and a qualified moderator. And if you're designating a moderator from your own team, researchers will need to allocate time both for conducting the study and for analyzing the results afterward.
Another thing to note: participants need room in their schedules to take part in the study, as moderated tests can take up to an hour. Additionally, researchers may receive moderated results back more slowly. Depending on your time constraints, these factors can play a significant role in which method you choose.
When should I use unmoderated testing?
Unmoderated testing is best for validating concepts and designs quickly with a diverse group of contributors. This type of testing is ideal if you:
- Need a large sample size
- Require feedback quickly
- Want to observe a contributor interacting in their natural environment
- Are strapped for time and don’t have the bandwidth to moderate a test
And depending on your project needs, you might find this testing type beneficial for the following scenarios:
- Evaluating a live website or app, especially if it just launched
- Assessing a website or app prototype to test its viability
- Observing real-world experiences (also known as ethnographic research), like unboxing a product