UX testing: how to find the right target audience with screener questions

By UserTesting | July 12, 2022

Users are at the heart of everything we do here at UserTesting, and of the UX industry as a whole. They tell us which needs must be met, inform our designs, and reveal strengths and weaknesses in our products. You've crafted your UX testing process and identified your ideal usability testing tools, but who is your exact target user?

In a perfect world, usability testing involves users who fully represent your product’s target market. And ideally, they’ve adopted the right mindset, speak the right lingo, and have the best insight into an ideal experience.

Though that perfect user may not exist, the good news is that whether or not you need to recruit exactly the right target user depends on the nature of your test. Let’s dive in.

Table of contents 

  1. When should you test with your exact target market?
  2. What are screener questions and why are they important? 
  3. How many questions should be in a screener?
  4. How many contributors are needed for a usability test?
  5. How to write a screener question: best practices for recruiting users for UX testing
  6. 4 examples of audience recruitment for UX testing

When should you test with your exact target market?

You know you need to conduct usability testing early and often. But the decision of when to use screening questions depends on your needs and goals.

Many UX thought leaders encourage researchers not to be too specific when recruiting user testing participants. After all, the vast majority of products should be clear and intuitive enough that anyone can figure them out. For example, someone who’s less technologically inclined should be able to make a purchase online without getting confused, and a young child should be able to easily check a weather app to make sure their soccer game won’t get rained out.

Of course, there are circumstances in which UX researchers need to capture insights and collect data from a target audience, whether it’s a specific age group, occupation, or customer. If you’re working on a Human Resources tool, you need to conduct usability testing with HR professionals, since they’re the only ones who can judge whether the tool supports their day-to-day work. If you’re collecting customer feedback on the usability of a subscription app, you’ll need to target current subscribers and not just any smartphone user.

If you’re in one of those circumstances, and you’re testing remotely, then you can leverage screener questions.

What are screener questions and why are they important? 

Screener questions, or screeners, are multiple-choice questions that either eliminate the wrong participants from taking part in your study or give access to the right ones. 

Although it might seem easiest to accept every willing participant, that’s not always ideal, especially if your study has specific needs. It’s best to recruit participants whose attributes are similar to those of your product’s existing or potential users. Asking the right screener questions ensures you’re enlisting the intended participants for your test.

Here’s an example of a screener question designed to target plus-sized users:

Plus-Size User Selection Screener Question for UX Testing: Interactive graphic showing a multiple-choice question with four options in a vertical list on a blue background. The question is 'Please select your clothing size from the options below:' with radio buttons to the left of each option. The options listed are 'XS - S', 'M - L', 'XL - XXL', and '3XL and above', with the intent for respondents to choose one that reflects their clothing size

In the above example, only participants who select the third or fourth option will be allowed to proceed to the test, and the rest will be rejected. 
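The accept/reject logic behind a screener like the one above can be sketched in a few lines. This is purely an illustrative sketch, not UserTesting’s actual implementation: only participants who pick one of the pre-approved options proceed to the test.

```python
# Illustrative sketch of screener accept/reject logic. The option labels
# come from the clothing-size example above; everything else is assumed.
QUALIFYING = {"XL - XXL", "3XL and above"}  # the third and fourth options

def screen(answer: str) -> bool:
    """Return True if the participant may proceed to the test."""
    return answer in QUALIFYING

print(screen("3XL and above"))  # a qualifying answer
print(screen("M - L"))          # a rejected answer
```

The key design point is that the qualifying set is invisible to the participant: nothing in the question itself hints at which answers pass.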

Screeners may seem simple, but crafting them properly is often quite challenging. If a user misunderstands a question, you could end up with test participants who don’t fit the needs of your study, negatively impacting your results.

How many questions should be in a screener? 

It’s important to remember that there’s no one-size-fits-all approach. Some teams may only need one screener question, for instance when running an initial test for preliminary research that won’t count toward their final results. Meanwhile, other teams may need multiple rounds of screener questions to properly recruit for an ethnography study, which entails finding customers who own a physical product, have it nearby, and are willing to show how they use it during the study.

Here at UserTesting, we recommend setting up between three and five screener questions. With any more, contributors can get discouraged or even exit before completing all of the screeners, either because they assume they won’t qualify or because they find the process too time-consuming.

UserTesting’s Research team knows firsthand how important (and challenging) it is to write solid screener questions, and we have a wealth of experience in capturing the correct user group for your needs. Below are some guidelines and examples that can help you get exactly the right participant for your next remote usability test.

How many contributors are needed for a usability test?

Recruiting the right number of participants for your usability test directly affects the quality of the data you collect.

The number of users you need depends on the type of usability testing you conduct. Here at UserTesting, we recommend five to eight participants for qualitative research, while quantitative research (think card sorting or tree testing) can enlist 30 or more contributors.

If you’re testing a design intended for a broad range of users, it may make more sense to test a larger sample. Consider starting a study with five or fewer participants; based on the results, you can improve your design and test again until it’s as effective as it can be. For instance, after reviewing the first few test results, you may realize that contributors aren’t spending as much time as you expected discussing their feedback, leaving some insight on the table. That calls for editing the task instructions to be more detailed.

Keep in mind that there is no universally right number; it will be influenced by the margin of error you find acceptable. By starting small, however, you give yourself leeway to make adjustments and address mistakes, saving you time and resources in the long run.
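To see why margin of error pushes quantitative studies toward larger samples, here’s a back-of-the-envelope sketch (a standard statistics formula, not a UserTesting-specific rule) of the 95% margin of error for a task-success proportion observed with n contributors:

```python
import math

# Rough sketch: margin of error for an observed proportion p with n
# contributors, at 95% confidence (z ~= 1.96). Worst case is p = 0.5.
def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

for n in (5, 30, 100):
    print(f"n={n:3d}: +/- {margin_of_error(0.5, n):.0%}")
```

A five-person study leaves a margin of roughly plus or minus 44 points, while 30 contributors narrow it to about 18 points, which is one reason small samples suit directional qualitative work while quantitative claims call for 30 or more.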

How to write a screener question: best practices for recruiting users for UX testing

Many of the principles involved in writing effective screener questions are similar to those recommended for crafting excellent multiple-choice questions. Here we delve deeper into these guidelines, emphasizing clarity, neutrality, and inclusivity to optimize your screening process.

Always include a "None of the above," "I don't know," or "Other" option

One key best practice is to always provide options such as "None of the above," "I don't know," or "Other." This inclusion is critical for two main reasons:

  1. Comprehensiveness: It covers scenarios where you might not have listed an applicable answer, ensuring that every participant can respond accurately.
  2. Avoidance of bias: It prevents participants from choosing answers randomly, which can occur if they feel none of the options apply to them. This is particularly important in screener questions as random selections can lead to unqualified participants inadvertently being included in your study, skewing your results.

Including these options safeguards against these issues, helping to maintain the integrity of your data collection.

Provide clear and distinct answers to avoid overlap

To ensure that every response accurately reflects the participant's true situation or opinion, frame your answers to be clear and mutually exclusive:

  • Example: If you're asking about clothing sizes, avoid ranges that overlap like "0-6" and "6-12". Instead, use distinct ranges such as "0-5" and "6-12" to ensure a participant who is a size 6 has a clear category. This clarity prevents confusion and ensures the accuracy of the data collected.
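A quick way to sanity-check mutual exclusivity is to verify that no two numeric answer ranges share a value. This is an illustrative helper we made up for this example, not a UserTesting tool:

```python
# Hypothetical helper: return True if any two (low, high) answer ranges
# overlap, i.e. some participant would fit more than one option.
def ranges_overlap(ranges):
    ordered = sorted(ranges)
    # After sorting, only adjacent ranges can overlap.
    return any(prev[1] >= curr[0] for prev, curr in zip(ordered, ordered[1:]))

# "0-6" and "6-12" both claim size 6; "0-5" and "6-12" do not.
print(ranges_overlap([(0, 6), (6, 12)]))   # overlapping
print(ranges_overlap([(0, 5), (6, 12)]))   # mutually exclusive
```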

Avoid leading questions and simplify choices

Leading questions or yes/no formats can bias responses, as participants might try to give the answer they believe you want to hear. To combat this:

  • Neutral phrasing: Use instructions that ask participants to select the option that most closely applies to them, followed by a list of neutral statements. This approach reduces the likelihood of biased answers.
  • Detailed instructions: Explain clearly how they should think about their answers. For instance, if the question involves frequency of use, define what terms like "frequently" and "rarely" mean in the context of your question.

Implementing these practices effectively

  • Test your screeners: Before finalizing your screeners, conduct pilot tests to see how real users interpret and answer your questions. This testing can reveal if certain questions are confusing or leading.
  • Iterate based on feedback: Use the insights from your pilot tests to refine the questions. This iterative process helps enhance the reliability and validity of your screeners.

By adopting these enhanced practices, you ensure that your screener questions are not only well-designed but also effective in selecting the most suitable participants for your UX testing.

How to check that your screener is capturing the right users:

Depending on your needs and what you’re studying, you may need someone with a particular background (like a medical degree) or someone who is going through a particular experience (like shopping for a new car). If that’s the case, we recommend that in addition to screeners, you use the first task of your test to verify this:

Use initial tasks to confirm screener accuracy

In addition to your screener questions, we recommend incorporating an initial task in your usability test that directly relates to the screener criteria. This task serves as a secondary confirmation that participants meet the necessary criteria:

  • Example task: If your screener was designed to identify participants who are currently shopping for a new car, the initial task might be: “You indicated in the screener questions that you are currently shopping for a new car. Please describe what kind of car you are looking for, where you have looked so far, and any preferences or concerns you have regarding your new car.”

Listen carefully to participants' descriptions

The responses to this initial task can provide a wealth of qualitative data that confirms whether participants truly fit the profile you need:

  • Depth of response: Listen for detailed descriptions and specific language that align with someone actively engaged in the process you’re studying. For instance, a participant genuinely shopping for a car might mention specific models, dealerships visited, and factors influencing their decision.
  • Consistency: Check for consistency between the screener responses and the details provided during this task. Discrepancies might indicate misunderstandings or inaccuracies in the screening process.

Providing an exit option

This early task not only helps verify participant suitability but also allows participants who realize they might not fit the criteria to gracefully exit the study:

  • Respect for participants: Offering an exit option after the initial task respects the participant's time and improves the overall quality of your data by ensuring only well-suited individuals continue.

Iterative improvements based on initial tasks

Use the insights gathered from these initial tasks to refine your screener questions:

  • Feedback loop: Analyze common discrepancies or areas where multiple participants seem to misunderstand the screener. Adjust the wording or structure of your screener questions based on these findings.
  • Continuous validation: Regularly revising your screener based on real participant interactions ensures its ongoing effectiveness and relevance, particularly as your user base or product changes.

4 examples of audience recruitment for UX testing 

While your recruitment requirements can be customized, the following four examples cover common types of screeners. Look at other usability testing examples to help generate ideas for your own questions.

1. Screeners based on familiarity with a product

One of the most common kinds of screener questions is one that captures users’ level of familiarity with a product or a brand. Sometimes researchers need fresh users to test out a new tutorial for their app; other times they’re looking for insight from their most frequent users.

Whatever the case, you don’t want to ask point-blank if users fit the mold; people are naturally inclined to say yes just to please you! Instead, ask users to indicate their familiarity, and then define the different levels of familiarity, like in this example:

An image featuring a user experience screener question with a bold header that reads 'How would you describe your experience with [Brand/Product Name]?' Below the header, there are five numbered options, each representing a level of familiarity with the brand or product. The options range from complete unfamiliarity to frequent use, allowing participants to self-select their experience level for more targeted UX research.

This screener makes it easy for users to tell which category they belong in, and it isn’t obvious which answer the researcher is looking for. 

As you can see in the above example, it’s helpful to define the Novice, Intermediate, and Expert levels by a few elements: overall content levels, usage levels, and consideration levels. This keeps a user from overestimating their experience level with your product and accidentally skewing your results.

2. Screeners based on a product's frequency of use

Similar rules apply to the related—and equally popular—frequency-of-use screener. A researcher may be interested in finding users who regularly check the weather on their phones, or users who used to play a game but have given up on it over time.

Image of a user experience survey question asking about the frequency of mobile weather application usage. The question reads 'Regarding your use of mobile weather applications, which of the following statements best describes your usage pattern? Please select one.' Below are five options ranging from no usage to daily usage, including a choice for those who have stopped using such apps over a year ago. The options are clearly numbered, allowing respondents to easily choose the one that matches their habits

As with experience levels, it’s important to define frequency in solid terms, not just “rarely,” “sometimes,” or “often.” 

Another common screener related to the frequency of use might have to do with how recently a user has participated in a certain activity. For example, many ecommerce product researchers prefer to hear from users who purchase items online fairly often. Meanwhile, many travel product researchers want to hear from those who are planning a trip within the next year.

In these cases, it may be wise to create two screeners: one to confirm that they purchase items online or have an upcoming trip, and then a follow-up screener to determine timeframes.

Image of a survey question asking 'Are you currently planning to go on a vacation?' followed by four answer choices. The options range from actively planning a vacation, not planning at all, considering but not started planning, to having just returned from a vacation. The design is simple, with a clear and readable font on a blue background, intended for respondents to self-select their current stage in vacation planning.

The question above confirms that the user is planning to go on a vacation soon.

Image displaying the second question in a travel research survey series, focused on the timing of future vacations. The question asks 'When are you planning to go on your next vacation?' followed by a list of time frames as options. The choices are 'Within the next month', 'In 1 to 3 months', 'In 4 to 6 months', 'In 7 to 12 months', and an additional statement for those planning a vacation more than a year from now. The options are clearly enumerated for easy selection, set against a blue background.

This question filters for users who are planning their trip within four months to a year, and ensures that users’ timelines align with what you’re looking for. 
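The two-screener funnel above can be sketched as a chain where a contributor must give a qualifying answer to each question in turn. The option labels are taken from the example screens; the structure itself is an illustrative sketch, not how the UserTesting platform is implemented:

```python
# Illustrative sketch of chained screeners: each question maps to the set
# of answers that qualify. Option wording is approximated from the example.
SCREENERS = [
    ("Are you currently planning to go on a vacation?",
     {"I am actively planning a vacation"}),
    ("When are you planning to go on your next vacation?",
     {"In 4 to 6 months", "In 7 to 12 months"}),
]

def qualifies(answers: dict) -> bool:
    """True only if every screener received a qualifying answer."""
    return all(answers.get(question) in ok for question, ok in SCREENERS)

# Passes the first screener but plans to travel too soon, so is filtered out.
print(qualifies({
    "Are you currently planning to go on a vacation?":
        "I am actively planning a vacation",
    "When are you planning to go on your next vacation?": "In 1 to 3 months",
}))
```

Ordering matters in practice: the broad confirmation question comes first so that most unqualified contributors exit early.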

3. Screeners based on industry or occupation

When you need users within a particular occupation, multiple screeners are a helpful way to zero in on a single characteristic.

For example, a massage therapy retailer might want to hear from people in the massage therapy industry.

Massage therapy is a very specific profession, and it would be hard to come up with an exhaustive list of options inside of one screener question. But you also want to avoid asking a yes or no question. 

Therefore, you might start by listing broader professional categories, including health (which would encompass massage therapy). Then in a follow-up screener, you may have users indicate the role that they occupy within the Health industry. Consider the below example: 

Image showing the first screener question in a user survey designed to categorize professional industries. The question asks 'Which of the following industries best describes your current profession?' with seven options listed below, including Health and Wellness, Education, Technology, Finance, Hospitality, Retail, and an option for those who prefer not to disclose their industry. The layout is simple with a blue background, aimed at quickly guiding respondents to identify their professional industry category.

The first screener question gives broad categories to keep from overwhelming the user.

The image displays a follow-up screener question for survey participants who previously identified with the Health and Wellness industry. It asks 'Within the Health and Wellness industry, what is your specific professional role?' A list of roles is presented, including Medical Practitioner, Mental Health Professional, Alternative Medicine Practitioner, Massage Therapist, and other related professions. This detailed screener is designed to refine participant selection to the exact professional role within the industry.

The second question narrows it down to specific roles. As you can see, each screener question weeds out more of the contributors who don’t qualify, narrowing the pool down to the ones who do.

4. Screeners that deal with personal information

The last type of screener that the UserTesting Research Team frequently relies on involves users providing sensitive information, such as their income, race, Facebook profile, or body type.

For example, there are apps that help couples who are trying to conceive determine peaks in fertility. Obviously, reproductive health isn’t a topic that everyone is comfortable discussing in a user test. Therefore, it’s important to give users a disclaimer and only accept contributors who are willing to be open about this personal information.

Here’s an example of how this screener can look:

The image displays a screener question asking for participant consent to discuss sensitive topics related to reproductive health and fertility for a user experience study. The text emphasizes confidentiality and improvement of user experience as the sole purpose of the information shared. Two response options are provided: one for consent to share personal fertility stories and discuss reproductive health for research, and another for those who are not comfortable sharing such personal information.

By being transparent with users, you’re giving them an opportunity to opt out of the study if they’re not comfortable sharing personal information—and now you’re that much closer to finding those who are. 

For further information on UserTesting’s data protection guidelines and HIPAA, read our help article.

Get started and recruit your product’s right target audience

The above examples only scratch the surface of the ways researchers capture their ideal users. Often these screeners are used in combination, like screening for males who frequently track their fantasy football data but have never heard of Rotowire.

Shaping screeners can be challenging, so we encourage you to always perform a dry run to ensure that your screeners are effective—and give yourself room for adjustments. When you use your imagination (or enlist the help of the UserTesting team) and leverage the best practices, the recruiting possibilities are endless.


About the author(s)
UserTesting

With UserTesting’s on-demand platform, you uncover ‘the why’ behind customer interactions. In just a few hours, you can capture the critical human insights you need to confidently deliver what your customers want and expect.