Understand and avoid imposter participants in your experience research

By UserTesting | September 29, 2023

As the number of digital products on the market continues to grow, organizations are increasingly adopting human insight platforms and customer feedback methods to inform their decisions. To meet that need, services like ours offer access to test participants faster than ever before. Accomplishing this, of course, means offering individuals a financial incentive to take tests.

While most participants genuinely wish to provide thoughtful feedback and useful insights, there will always be some who are just looking to make a quick buck and move on, with little regard for honesty as they take a test. We refer to these individuals as “imposter participants.” If just one of these low-quality participants makes their way into your study, it can have a significant negative impact on your research process or, even worse, your results.

In this article, we’ll let you know what UserTesting is doing to address these imposter participants, and give you useful tips on how you can avoid participant fraud in your own studies. 

What is participant fraud? 

In the realm of usability testing and customer feedback, participant fraud takes two primary forms:

1. Participants misrepresenting who they are

2. Participants being dishonest with their feedback 

Here are a few examples of what imposter participants tend to do in a testing environment:

  • Lie about personal information such as profession, age, gender identity, income, or ethnicity
  • Exaggerate personal or professional experience
  • Purposefully disregard directions
  • Use AI to generate answers to questions
  • Speed through tests without giving questions any thought

How organizations are affected by participant fraud

Imposter participants are nothing new and not specific to UserTesting. Organizations in many different industries have had to deal with similar challenges throughout the years. Today, however, insights gained from remote usability testing are more vital to companies than ever before. In many cases, these tests are depended upon to make crucial financial and strategic decisions. The consequences of error during this kind of research can be extremely costly.

When participant fraud occurs, organizations aren’t able to trust their data. Time and resources are lost as researchers are forced to manually review results. If imposter participants aren’t properly identified, it can cause product problems to go unseen or nudge teams in the wrong strategic direction, leading to even more issues down the line.

Doing our part

Whether you’re a UserTesting customer, already working with another testing partner, or exploring your options, it’s important to make sure the experience research platform you use is taking a proactive approach to combating participant fraud.

Combating participant fraud is a top priority at UserTesting, and one we’re constantly monitoring and working to solve. We understand the importance of maintaining trust in our participant networks, and have implemented various measures to block and remove imposters. Specifically, we’re focused on three areas:

Reducing misrepresentation

Some participants misrepresent their location or other characteristics, like product or industry experience, in order to qualify for more tests. To combat this trend, we use AI-powered device fingerprinting and PayPal location information to remove participants who may not be who they say they are. We’re also working to limit participants’ ability to change demographic data too frequently, while balancing that with the need to keep data fresh and accurate. To that end, we’re developing AI solutions that flag contradictory responses to screener questions across multiple tests and against demographic data. For example, if a participant claims to be a corporate lawyer in one test and an IT professional in another, or lists their job as a plumber in their profile, we know something is off.
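To make the idea concrete, here is a minimal sketch of how a cross-test consistency check could work. The field names, occupation groupings, and function names are purely illustrative assumptions on our part, not UserTesting’s actual implementation.

```python
# Illustrative sketch: flag participants whose reported occupations
# conflict across screener answers and their profile. The mapping and
# field names are hypothetical, not UserTesting's schema.

OCCUPATION_GROUPS = {
    "corporate lawyer": "legal",
    "paralegal": "legal",
    "it professional": "technology",
    "software engineer": "technology",
    "plumber": "trades",
}

def occupation_group(answer: str) -> str | None:
    return OCCUPATION_GROUPS.get(answer.strip().lower())

def flag_contradictions(profile_occupation: str, screener_answers: list[str]) -> list[str]:
    """Return human-readable flags when occupations reported across tests
    fall into different groups, or conflict with the profile."""
    claims = screener_answers + [profile_occupation]
    groups = {claim: occupation_group(claim) for claim in claims}
    known = {group for group in groups.values() if group is not None}
    if len(known) > 1:
        return [f"Inconsistent occupations reported: {groups}"]
    return []

# Example: a lawyer in one screener, an IT professional in another,
# and a plumber in the profile -> flagged for review.
print(flag_contradictions("plumber", ["Corporate Lawyer", "IT professional"]))
```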

Investing in panel specialists

It’s also important that we continue to invest in our dedicated team of specialists whose main responsibility is addressing feedback quality concerns. This team reviews practice tests, first paid tests, and any 1- or 2-star rated tests. They routinely audit UserTesting networks and remove bad actors who have violated our Contributor Terms of Service and Code of Conduct. We continue to empower them with additional tools that help them detect and quickly remove speeders, professional testers, and participants with multiple accounts.
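As one example of what a “speeder” check might look like, here is a hedged sketch that flags sessions completed far faster than the typical session for the same test. The data shape, threshold, and function name are our own illustration, not UserTesting’s internal tooling.

```python
# Hypothetical heuristic: flag participants whose completion time is far
# below the median duration for a given test. Threshold is illustrative.
from statistics import median

def flag_speeders(durations_by_participant: dict[str, float],
                  min_ratio: float = 0.4) -> list[str]:
    """Return participant IDs whose completion time is less than
    min_ratio of the median duration for this test."""
    typical = median(durations_by_participant.values())
    return [
        pid for pid, seconds in durations_by_participant.items()
        if seconds < min_ratio * typical
    ]

# Example: a 3-minute session on a test where most people take ~20 minutes.
print(flag_speeders({"p1": 1260.0, "p2": 1140.0, "p3": 180.0, "p4": 1320.0}))
# -> ['p3']
```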

Artificial intelligence

Our product and engineering team is focused on ensuring that participants provide honest feedback. That means exploring solutions that restrict a participant’s use of tools like ChatGPT to “create” feedback. We’re also investing in our own AI solutions that help monitor the quality of the feedback you receive, flagging anything that looks suspicious. For example, we’re working on solutions that recommend screener question adjustments that will make it harder for imposter participants to guess their way into a test.
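Purely to illustrate the kind of signal an automated quality check might raise, here is a simple, hypothetical flagger for responses that are unusually short or lean on generic boilerplate phrasing. The phrase list and threshold are invented for this example; real detection of pasted or AI-generated feedback is considerably more involved.

```python
# Illustrative only: naive text-quality flags. Phrases and thresholds are
# assumptions for this sketch, not a production detector.

GENERIC_PHRASES = [
    "as an ai language model",
    "in today's fast-paced world",
    "overall, the experience was seamless and intuitive",
]

def flag_feedback(text: str, min_words: int = 20) -> list[str]:
    flags = []
    words = text.split()
    if len(words) < min_words:
        flags.append(f"very short response ({len(words)} words)")
    lowered = text.lower()
    for phrase in GENERIC_PHRASES:
        if phrase in lowered:
            flags.append(f"generic/boilerplate phrase: '{phrase}'")
    return flags

print(flag_feedback("Looks fine."))
# -> ['very short response (2 words)']
```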

How to decrease and prevent participant fraud in your own testing

Now that we’ve covered some of our priorities in combating participant fraud, let’s take a look at a few things you can do to make your research less susceptible to imposters. The single most effective way to reduce participant fraud is to make your tests difficult to guess through. The best way to do that is with an effective screener. 

Create a thorough screener

Here are a few things to keep in mind when drafting screener questions:

  • Avoid yes/no questions as much as possible
  • Don’t use leading questions
  • Don’t ask for multiple answers within the same question
  • Refresh saved screener questions periodically to keep imposters guessing
  • Include unique “dummy” answers (illustrated in the sketch after this list)
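To show what a “dummy” answer can look like in practice, here is a minimal, hypothetical sketch of a multiple-choice screener question with accept, reject, and disqualify outcomes. The structure, wording, and the fictitious “CarMatch Prime” option are purely illustrative and not tied to any platform’s screener format.

```python
# Hypothetical screener question with a "dummy" (disqualifying) option.
# Anyone who selects the made-up option is guessing and gets screened out.

SCREENER_QUESTION = {
    "prompt": "Which of these activities have you done in the past 30 days?",
    "type": "multiple_choice",
    "options": {
        "Shopped for a new or used car": "accept",
        "Booked international travel": "reject",
        "Refinanced a mortgage": "reject",
        # Dummy answer: a plausible-sounding but fictitious service.
        "Used the CarMatch Prime concierge service": "disqualify",
        "None of the above": "reject",
    },
}

def evaluate(selected: list[str]) -> str:
    outcomes = [SCREENER_QUESTION["options"][choice] for choice in selected]
    if "disqualify" in outcomes:
        return "disqualify"  # picked the dummy answer
    return "accept" if "accept" in outcomes else "reject"

print(evaluate(["Shopped for a new or used car",
                "Used the CarMatch Prime concierge service"]))
# -> 'disqualify'
```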

In addition to creating an effective screener, one of the most important actions a customer can take to reduce the number of imposter participants is to experience a test from the participant’s side first-hand. We encourage all test creators to sign up as participants themselves. This will help you understand how test notifications, screeners, scenarios, and tests are experienced.

Ask for further verification

Once participants have made it past your screener questions, consider adding one or two more questions at the beginning of the test to further validate that they’re a good fit. For instance, if the screener asked participants whether they’re currently shopping for a car, you might have a pre-test question that asks them what kind of car they’re shopping for.

Use screeners or additional lines of questioning to have participants substantiate their experience or demonstrate familiarity with the topic at hand. Ask them to expand upon their qualifications or give their thoughts on their industry. The more specific you can make your questions, the more obvious it will be whether someone is misrepresenting their identity or experiences.

Finally, know that it’s okay to ask for identity verification on tests. Just be sure to follow any applicable data protection laws, your obligations under our Data Processing Agreement, and our PII best practices first. If you ask for verification, you should warn participants in advance that you will be asking for personal information such as a LinkedIn profile or place of employment. And always make sure you give them the chance to opt out.

Save your favorite participants

If you’ve found reliable participants who provide good feedback, it’s a good idea to add them to a saved list of favorites. This allows you to send tests to those trusted participants in the future.

Analyze your data

The last way to identify imposter participants is during the analysis phase. Reviewing visual cues and responses to the validation questions at the beginning of the test is the first step. You might also note that some responses are especially brief, lack depth and detail, or have an unusually high number of pauses. Further, answers across multiple questions may be inconsistent in their chronological details. And while it’s hard to identify an outlier with a small sample size, healthy skepticism is warranted if one response is diametrically opposed to the other perspectives and also shows some of the other signals (e.g., brevity).
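If you want to triage sessions before reviewing them in full, a lightweight pass over the transcripts can surface some of these signals. The sketch below is a rough illustration under assumed field names ("answer", "pause_count") and thresholds; it is not a substitute for actually watching the sessions.

```python
# Hedged sketch: per-response signals a researcher might scan for during
# analysis. Field names and thresholds are illustrative assumptions.

def analysis_flags(responses: list[dict]) -> dict[str, list[str]]:
    """Each response dict is assumed to carry 'question', 'answer',
    and 'pause_count'. Returns flags keyed by question."""
    flags: dict[str, list[str]] = {}
    for response in responses:
        issues = []
        if len(response["answer"].split()) < 15:
            issues.append("brief answer, little depth or detail")
        if response["pause_count"] > 5:
            issues.append("unusually high number of pauses")
        if issues:
            flags[response["question"]] = issues
    return flags

session = [
    {"question": "Describe your last car purchase.",
     "answer": "It was fine I guess.", "pause_count": 7},
    {"question": "What frustrated you about the checkout flow?",
     "answer": "The payment step hid the total until the very end, "
               "which made me second-guess whether the discount applied.",
     "pause_count": 1},
]
print(analysis_flags(session))
```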

Reporting problems

When you encounter issues with feedback you receive from UserTesting participants, it’s important to report them to our support teams. You can do that in a number of ways, including providing a 1-star rating and/or reporting a problem from the sessions tab on UserTesting. In UserZoom you can also exclude participants from your studies to avoid biasing your results. Our Panel Operations and Support teams will review your tests and take appropriate action if the participants are found to be in violation of the Terms of Service or Code of Conduct. To find out more about reporting issues, please read our Knowledgebase article on the topic. 

Don’t let participant fraud delegitimize your experience research practice. With the right screening, validation, and testing strategies, you can greatly improve the quality of your insights and avoid wasted resources.

 


About the author(s)
UserTesting

With UserTesting’s on-demand platform, you uncover ‘the why’ behind customer interactions. In just a few hours, you can capture the critical human insights you need to confidently deliver what your customers want and expect.