When you’re conducting UX research, your goal is to get consistent and reliable feedback on how users naturally interact with a product.
You’re looking to find pain points in the user experience and identify any areas where users become confused. But if your test plan is faulty, you won’t be able to tell what actually confused the participants: the product or the test itself.
This is especially true when you’re conducting remote, unmoderated UX research. Since you don’t have the ability to backtrack or correct yourself in the moment, your test plan has to act as its own moderator. You need to be sure that the test plan will guide the participants through the test without confusing them, handholding them, or leading them to the “right” answer.
These six tips for writing questions will help you avoid errors in your test plan, saving you the time and hassle of having to redo faulty research. Memorize them, print them out, and stick ‘em to the wall... and you can be confident that your test questions will produce sound responses.
1. Every word matters
Take your time and read (and then re-read) every word you’ve written in your test plan. It takes patience, but you’ll rest easy knowing that you can trust your own data.
And don’t forget to use plain language!
Pay close attention to any words that could be confusing, too technical, or easily misinterpreted. Test participants often interpret questions very literally, so always ask yourself, “How could someone misunderstand this?” when you write a question.
[clickToTweet tweet="Use plain language. Ask, 'How might someone misinterpret this?'" quote="Use plain language. Ask, 'How might someone misinterpret this?'"]
2. Ask one question at a time
If you ask multiple questions within a single task, you run the risk of getting incomplete answers.

Bad example: How long have you been doing business with this company, and what was the original reason you decided to use their services?
To make sure you get answers to both parts of the question, try splitting it up into two separate questions:
- How long have you been doing business with this company? (A written response question)
- What was the original reason you decided to use this company’s services? (A verbal response question)
3. Don’t use leading questions
The way you word your questions can skew your test participants’ responses. Be careful to keep your questions neutral rather than leading.
Some researchers feel pressured by their design teams to only find the positive parts of the user experience. But remember: although getting only positive feedback may feel good, it won’t help your team build a better product.

Bad example: How much do you love this app?

Good example: On a scale of 1-5 (1=Strongly Dislike, 5=Strongly Like), how much do you like or dislike this app?
[clickToTweet tweet="Flattery won't help your team build a better product" quote="Flattery won't help your team build a better product"]
4. Define key concepts prior to asking a question
Make sure your test participants understand what you’re asking them to do!
This is particularly important for measuring task success. If you are going to ask respondents to self-report on whether they successfully completed a task, it’s imperative that you define exactly what success looks like.
Tell your test participants exactly what successful completion will look like, so they don’t have to wonder whether they completed the task.
5. Every participant needs to interpret your question the same way
Any room for creative or personal interpretation leads to bogus and inconsistent data. Check to make sure you’re not using any words that have multiple meanings or regional differences.
6. Test your questions with one participant before launching a large-scale study
To save time and budget, run a simple pilot study with one person before launching the full test. You’ll often notice mistakes in your question construction that you can correct before moving forward with the larger sample. Once you’re confident that your test plan is error-free, you can release the test to the rest of your participants.