A/B testing, sometimes known as split testing, is a randomized process of presenting users with two different versions of a website—an 'A' version and a 'B' version—to observe which one performs better. Key metrics are then measured to see whether variation 'A' or 'B' is statistically better at increasing business KPIs. Determining and implementing the winning variation can boost conversions and support continuous improvement in customer experience.
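To make "statistically better" concrete, here is a minimal sketch of one common way to compare two conversion rates: a two-proportion z-test, using only Python's standard library. The visitor and conversion counts below are hypothetical, and real experiments usually also involve planning sample size and significance thresholds in advance.

```python
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical results: 200 conversions out of 5,000 visitors for A,
# 260 out of 5,000 for B.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below a pre-chosen threshold (0.05 is a common convention) is typically read as evidence that the difference between the variations is unlikely to be a coincidence.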
A related technique is multivariate testing, which tests changes to multiple page elements—and their combinations—at once, rather than two complete versions of a design.
Why is A/B testing valuable?
A/B testing clarifies which version of your product aligns with your audience, which can settle internal debates. By comparing conversion rates, you'll know which design changes are helping boost sales and conversions. For example, if you test two call-to-action (CTA) variants and one drives more conversions than the other, it's evident that the winning CTA is more effective. And when you enhance your A/B testing with qualitative data, you'll better understand your target customer and how to create experiences that resonate with them.
The most significant value of A/B testing is that it challenges assumptions and helps businesses make decisions based on data rather than on gut feelings. In particular, A/B testing is a valuable methodology as it can be applied to almost anything, whether email subject lines, color preferences, website information architecture, or even new processes. Tests can be conducted on something as small as a single copy change or as large as a website redesign.
When should A/B testing be used?
Remember, A/B testing shouldn't be a one-time test. If you find interesting results in one study, we recommend repeating the study to see whether the results hold. Conducting repeat tests reduces risk, helps confirm that the results weren't coincidental, and builds confidence that you're pursuing the best possible solution for your product.
What are the drawbacks of A/B testing?
Due to the quantitative nature of A/B testing, designers receive few qualitative insights to explain the reasoning behind users' choices. Although these experiments are driven by hypotheses—often with controlled variations between the designs—there may be alternative reasons behind the success of one variation over another. All you know is that one design change resulted in more conversions than the other. However, it's difficult to determine whether further improvements can yield the same or better results unless you conduct more testing—which requires more time.
Where to come up with A/B test ideas
The best way to come up with A/B test ideas is to listen to your customers and prospects. As designers, researchers, or marketers, we easily become biased from sitting so close to our product every day—and we forget to take off our rose-colored glasses. Get a new lens by consulting first-time visitors or prospects. Here are some methods you can use:
- Web analytics: Ask yourself or your team: where is your product losing revenue? Where are the drop-off points? You can find answers by combing through funnel performance, high-traffic pages, low-converting pages, and user flows.
- Mouse tracking analysis: Mouse tracking lets you record what people on your site do with their mouse, and this gives you insights into where visitors are paying (or not paying) attention. Find this information by leveraging platforms that offer click maps, attention maps, or scroll maps.
- Form analytics: Imagine if you knew exactly which form field people hesitate to fill out, which field they left blank even though it was required, and which area caused the most error messages. That's exactly what some tools and platforms can tell you, giving you performance data for every single form field.
- On-page surveys: On-page survey tools are a great way to find out why people aren't converting. Not everyone will respond, but you'll gather interesting data that informs the kinds of tests you should run. For example, through an optional form, ask visitors, "If you're not going to [sign up/subscribe] today, can you tell us why not?"
- Chat transcripts: These records are a treasure chest of ideas where you can directly see customers’ complaints and pain points.
- User testing: Conduct usability testing with a human insight platform. For example, you may send multiple users through your organization’s funnel, to your competitor’s site, and finally to a leading organization’s site or app in your industry. You'll learn what others are doing right and where you need improvement.
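The funnel-performance review described above can be sketched in a few lines of code. This is an illustrative example only: the step names and visitor counts are hypothetical stand-ins for numbers you would export from your own analytics tool.

```python
# Hypothetical funnel: visitor counts per step, as exported from an analytics tool.
funnel = [
    ("Landing page", 10000),
    ("Product page", 6200),
    ("Checkout", 1900),
    ("Purchase", 700),
]

def drop_off_report(steps):
    """Return (step, visitors, % lost vs. the previous step) for each funnel step."""
    report = []
    prev = None
    for name, count in steps:
        lost = 0.0 if prev is None else (prev - count) / prev * 100
        report.append((name, count, round(lost, 1)))
        prev = count
    return report

for name, count, lost in drop_off_report(funnel):
    print(f"{name:<14} {count:>6}  -{lost}% from previous step")
```

The step with the steepest percentage drop (here, checkout) is where an A/B test is most likely to pay off, since even a small lift at a high-loss step compounds through the rest of the funnel.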