3 Most Common Mistakes Marketers Make When Running User Tests

August 5, 2015
As marketers, we rely on data to guide our decisions. We run A/B tests. We use robust analytics tools. We find out where customers are converting, and where they’re not. We know exactly what our customers are doing.

But there’s one major problem: we don’t always understand why.

If you’re like most marketers, you’ve probably thought your job would be easier if only you understood what’s actually going on inside your customers’ heads. The only way to be sure what customers are thinking is to ask them. And the way to do that is by user testing.

More and more marketers are discovering the impact they can make on their bottom line by improving their user experience. They’re running user tests to find new ideas for A/B tests, optimize their sales funnel, and build user-centered campaigns.

But a lot of marketers fall into the same three traps when they run their studies, and those traps keep them from getting actionable insights. In this article, I'm going to share the three most common mistakes marketers make when running user tests, and what you can do to avoid them.

1. Only looking for the good

A big mistake we see marketers make is asking questions that are overly optimistic, to the point of biasing their test results. For example, in one test we analyzed, a marketer asked participants, “How would you describe this website: clean, easy-to-use, helpful, or well-designed?” All of the available options were positive. But what if the user had a negative experience?

[Screenshot: the “How would you describe this website?” question, with only positive answer options]

What if your participants wouldn’t use any of those words to describe your site? Biased questions like this won’t give you the kind of feedback you need to improve your campaigns and hit your numbers.

Here’s the thing: we don’t intentionally look only for the good things that users like. But at some point it happens to all of us. And it’s easy to understand why. It’s hard to hear critical feedback, especially when it’s about something you spent a lot of time and effort working on. Nobody wants to hear that their baby is ugly.

But if your campaigns don’t resonate with your audience, then you’re going to lose out on tons of leads and sales. That’s why we need to get harsh feedback, even if it hurts to hear. If we don’t know where things aren’t working, or where we’re missing our mark, then we can’t improve.

If you can become the rare kind of marketer who loves taking harsh feedback and using it to improve conversions on your site, then you’ve got a huge advantage over almost everybody else in your field.

2. Waiting for a problem or a redesign

We also see a lot of marketers wait for one of two things to happen before they start running user tests:

  • Their metrics tell them they have a problem, or
  • They have a major site redesign

This is a huge mistake, because research shows that every $1 invested in user experience brings $2 to $100 in return. And companies that use human-centered design methods on an ongoing basis see a 228% higher ROI than the S&P 500.

[Chart: Design Value Index. Photo Source: Harvard Business Review]

The most effective marketers use a process of experimentation to consistently test, iterate, and grow. Instead of waiting until they’re losing customers and sales, they test proactively in order to prevent problems before they arise.

As Brian Balfour said, “The realm of digital marketing is changing extremely fast, and the rate of change is accelerating.” The best marketers test early and often to stay ahead of the curve. They proactively seek user feedback in order to find new tactics to improve their user acquisition and retention rates.

And even if they do have a big site redesign coming up in the next year, they’re being proactive about it. Instead of waiting for the redesign process to begin, they start getting user feedback early in order to figure out what’s working, what’s not, and which changes need to be the highest priority.

That way when the redesign actually does start they already have a game plan. Instead of waiting until the last minute to brainstorm a bunch of ways to improve their site, they have a stockpile of ideas and optimization opportunities based on the insights they gained from their users.

3. Not launching dry runs

The third mistake we see is that marketers often launch a study with a large number of participants without doing a dry run first (sometimes called a “pilot test”). I recently made this mistake myself.

I launched a study with 25 participants without performing a dry run to make sure that my test script was effective, and it was a disaster. The way I worded one of the tasks confused 15 of the test participants, and the results of 60% of my tests just weren’t useful.

This is a natural mistake for anybody who’s new to user testing, and who isn’t a research professional. Thankfully, it’s also an easy mistake to avoid. Once you complete your test plan, launch it to one participant.

The results of your dry run will show you if there’s anything in the test script itself that’s confusing to the user (or that might produce poor results). Even the most experienced UX researchers do this because there’s no way to predict how users will respond to your test until you see the results.

If the dry run participant gets confused or lost, revise your test script and make sure your tasks and questions are worded as clearly as possible. Once you’ve made your changes, do another dry run to see if your updated test works more effectively.

Keep revising and testing until you feel confident that your test plan is going to produce useful insights. Then launch it to however many users you need for your study.

Conclusion

User testing is your secret weapon for getting into the mind of your users and understanding why things are happening. The feedback you get will help you find new A/B test ideas, optimize your sales funnel, and build marketing campaigns that users actually look forward to.

You don’t have to be a pro researcher to get great results; you just have to remember to do these three things:

  • Avoid Bias — Look for where users are getting stuck and confused, instead of only looking for what they like.
  • Test Often — Test consistently rather than waiting for a problem, or until you redesign your website.
  • Perform a Dry Run — Make sure the questions and tasks in your test script produce useful results before launching your full study.

And if you want to learn how to evaluate the way customers perceive your brand, understand your cross-channel customer journey, and optimize your landing pages and forms, check out The Marketer’s Guide to User Testing.