We often talk about the fact that usability testing allows you to learn more than what analytics can teach you. What we don’t talk about enough is that analytics can—and should—guide usability testing efforts.
Louis Rosenfeld puts it this way:
You can’t know why things are happening if you don’t know what is happening.
Analytics is a great starting point, in part because analytics can reveal problems that usability testing could never uncover. Usability testing is typically conducted with a small, representative sample of visitors. It also takes place on a fairly limited number of pages. So if a site has 10,000 products, and there’s a problem with one page or product category, it’s highly unlikely that usability testing would reveal the problem. Analytics excels at this though, since it covers every user and every page. Let analytics find the problem; then use usability testing to learn why the problem is happening.
In the mind-bending film The Matrix, the “normal” world known by the main characters is revealed to be a computer program, in which humans’ minds are unwittingly trapped. The program can be viewed in code form on a computer screen. One of the characters remarks that he doesn’t even see the code anymore when he looks at it; instead, his mind automatically translates the code. Where the average person sees just a bunch of code, he ‘sees’ the actual people, cars, trees, clouds, and buildings that the code represents.
In a similar way, we can train ourselves to not just see a bunch of numbers and charts in our analytics package, but to identify great testing opportunities to find out why something happened, and hopefully, how to improve our numbers. When you use this technique to narrow your testing focus, you can save time and money in the qualitative research phase.
This article is focused on using Google Analytics to identify testing opportunities, but you can also find opportunities through data from click maps, scroll maps, form analytics, social analytics, etc.
You already know who your target audience is; but within that group, there are a number of subgroups, and many of them can be identified using analytics. Segment your data by the following metrics to further drill down and find out which testers to recruit, and which devices you want them to use.
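As a rough sketch of that kind of segmentation (the session records and field names below are made up for illustration, standing in for an export from your analytics package), you could group visit data by device and compare averages per segment:

```python
from collections import defaultdict

# Hypothetical session records; in practice these would come from
# your analytics export (field names here are assumptions).
sessions = [
    {"device": "mobile",  "pages": 1, "duration_s": 12},
    {"device": "mobile",  "pages": 2, "duration_s": 40},
    {"device": "desktop", "pages": 5, "duration_s": 180},
    {"device": "tablet",  "pages": 3, "duration_s": 95},
]

def segment_stats(sessions, key):
    """Average pages per visit and time on site for each segment."""
    groups = defaultdict(list)
    for s in sessions:
        groups[s[key]].append(s)
    return {
        seg: {
            "visits": len(rows),
            "avg_pages": sum(r["pages"] for r in rows) / len(rows),
            "avg_duration_s": sum(r["duration_s"] for r in rows) / len(rows),
        }
        for seg, rows in groups.items()
    }

by_device = segment_stats(sessions, "device")
# Segments with unusually low pages/visit or time on site are
# candidates for recruiting testers on that device type.
```

The same `segment_stats` call works for any field you record per visit (age bracket, traffic source, and so on), which is the drill-down the reports above automate for you.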
It’s imperative today to have UX parity across device types, and you’ve no doubt made some efforts toward that goal. Has your testing plan caught up to your intentions? Are you looking at bounce rates, exit rates, time on page, pages per visit, and flow reports across device types, to see which devices need extra testing?
Also watch out for the rapid changes to mobile devices. We’re still seeing new screen sizes as manufacturers bridge the gap between tablets and phones, which can affect the user experience. And even operating system upgrades can impact your stats. iOS 7 introduced a new level of “swipe ambiguity,” as the Nielsen Norman Group recently highlighted. Safari now supports horizontal swiping to navigate to the previous page, so website owners who already employed in-site swipe navigation may notice an increase in pages per visit from iOS, while simultaneously seeing a drop in time on page. User tests are excellent at uncovering the reasons for statistical changes like these, which might not be immediately evident from analytics alone.
Google has been rolling out Demographics and Interests reports in their analytics product (the feature isn’t supported yet in Universal Analytics). This promises to be one of the best ways for marketers and UX specialists to determine which audience segments are not performing well for certain pages or flows. You’ll need to take an extra step to enable the Demographics and Interests reports in Google Analytics, if you haven’t already.
In the example of a bookstore above, notice the strong correlation between Age and Average Visit Duration. Are younger visitors simply finding what they’re looking for more quickly? Possibly, but there’s something else going on here. Looking at the revenue numbers and calculating Revenue Per Visit, we find that Revenue Per Visit is 500% higher among visitors in the 25-54 age range than among those aged 18-24. For some reason, this bookstore is not connecting well with its youngest adult visitors.
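The Revenue Per Visit calculation itself is simple division; here’s a sketch with hypothetical numbers (not the bookstore’s actual figures) chosen so the older bracket comes out 500% higher:

```python
# Illustrative numbers only; substitute the visits and revenue
# from your own segmented report.
segments = {
    "18-24": {"visits": 4000, "revenue": 2000.0},
    "25-54": {"visits": 6000, "revenue": 18000.0},
}

def revenue_per_visit(seg):
    return seg["revenue"] / seg["visits"]

rpv = {age: revenue_per_visit(s) for age, s in segments.items()}

# A Revenue Per Visit of 3.00 vs. 0.50 means each 25-54 visit is
# worth six times an 18-24 visit, i.e. 500% higher.
ratio = rpv["25-54"] / rpv["18-24"]
```

Running this comparison across every segment in the report quickly surfaces which audiences deserve a closer qualitative look.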
So at this point we can clearly see that we’d love to look over the shoulder of visitors in the 18-24 age range via a user test. There are certainly some other questions we could ask first (such as, “Where are these visitors coming from, and what’s their intent when they arrive at the site?”), and we’ll take a look at some of those questions next.
Do you start your tests on your home page? That might seem like a natural and obvious choice, but your visitors aren’t always starting on your home page, so your tests shouldn’t either.
By running a Behavior Flow Report, you can see where your visitors are coming from. A visitor coming from social vs. clicking on an ad vs. running a branded search query will have different expectations; and those expectations will shape their perception of your site, as well as the path they take.
To access a Behavior Flow Report showing traffic by source, see Behavior > Behavior Flow.
The report shows a great view of data by source, your most popular landing pages, where drop-offs are occurring, and the paths users are taking through the site. In the example below, I can see that the football home page is a popular starting point, and that most of the visitors who start on that page are coming from Google. But many of those people are not visiting any other pages on the site. This seems like a problem worth investigating. A possible next step would be to find out which search queries are bringing people to that page (which will be more difficult given Google’s shift to [not provided], but alternative methods exist), and have testers perform that same query, starting at google.com.
Another flow report, Goal Flow, provides a great visualization of what might be going wrong with a campaign or any other conversion path (as long as your goals are configured properly). In the example below, showing a campaign-filtered view of a gym memberships conversion funnel, three issues are instantly visible.
We could immediately run some tests to watch what’s happening to cause people to backtrack. Another opportunity would be to identify which source has the highest dropoff rate from the landing page, and start a test at that source. (Bonus points for combining that source data with age or gender data to be even more focused.)
Advertising campaigns can cost a lot of time and money, so it’s great that they offer so many data points to analyze for opportunities for testing and optimization.
Clickthrough rates can help you determine which ads are working and which aren’t, but they don’t tell you why (at least not explicitly), making it a bit of a guessing game to determine your next steps, other than disabling poorly-performing ads.
It’s a good idea to get several ads in front of real users (you can do this in a one-at-a-time, slideshow format, or present all ads on the screen at once), and have them answer questions like, “which ad makes you most interested in this product or service, and why?” By combining this qualitative data with your CTR data, you can start making better decisions about which direction to go with your ads, and which messaging (or color schemes, or CTAs, etc.) to keep or drop.
Look at both time on page and bounce rates to determine what context to provide for the tester, and what questions to ask. A landing page with an unusually high bounce rate—especially with a low time on page—might point to a mismatch between the expectations that the ad is producing, and what the landing page is delivering.
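As a sketch, you could flag landing pages that combine a high bounce rate with a short time on page; the page stats and thresholds below are assumptions to tune against your own baselines:

```python
# Hypothetical per-landing-page stats from an ad campaign report.
pages = {
    "/landing/spring-sale": {"bounce_rate": 0.82, "avg_time_s": 9},
    "/landing/new-feature": {"bounce_rate": 0.35, "avg_time_s": 75},
}

def expectation_mismatch_candidates(pages, bounce_min=0.7, time_max_s=15):
    """High bounce plus low time on page suggests the ad set an
    expectation the landing page isn't meeting."""
    return [
        url for url, s in pages.items()
        if s["bounce_rate"] >= bounce_min and s["avg_time_s"] <= time_max_s
    ]

candidates = expectation_mismatch_candidates(pages)
```

Each flagged page becomes a test scenario: show testers the ad first, then the landing page, and ask whether the page delivered what the ad promised.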
If you like to get straight to the bottom line, you might be in the habit of skipping past intermediate stats like CTR, visits, and bounce rates, and jumping straight to your conversion rate. After all, that’s the only number that matters, right? But low conversion rates are only a starting point for identifying testing opportunities. Once you’ve identified a poorly-performing campaign, use some of the other techniques from this article to determine specifically where things are going wrong. For example, a segmented Flow Report, mentioned earlier, is a great way to visualize the conversion path to find out where problems are occurring.
Even some simple stats can highlight pages that should be tested.
Another way to identify difficult-to-find pages is to look at your internal search data. Find out which queries are occurring most frequently. Analytics is telling you that plenty of people are resorting to search to find certain information, but why is that happening? Are visitors browsing first, or are they immediately relying upon search? During the test, have them try to find the items that the queries suggest they’re looking for. Find out where they’re looking, and why and when they’re giving up.
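A minimal sketch of mining an internal-search log for its most frequent queries (the log format here is an assumption; most analytics packages can export something similar):

```python
from collections import Counter

# Hypothetical internal-search log, one entry per search performed.
queries = ["store hours", "return policy", "Store Hours",
           "gift cards", "store hours", "return policy"]

# Normalize case and whitespace so variants count as one query.
top_queries = Counter(q.lower().strip() for q in queries).most_common(3)

# Each frequent query becomes a task for your next user test:
# "Find this information without using the search box."
```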
When you’re assessing performance, stats like high bounce rates, high exit rates, and low time on page are often considered bad. But in fact, there are times when a high bounce rate is just fine, but a low bounce rate needs to be investigated. We can look past the surface to find out which numbers—high or low—reveal important testing opportunities.
To find testing opportunities around bounce or exit rates, look for two things in particular: 1) user intent, 2) user expectations.
What’s a good bounce rate? This common question is usually met with an accurate but frustrating answer: “It depends.” And typically that means, “Find out whether the page is supposed to be retaining visitors.” For example, the page listing your store hours is likely to have a higher-than-average bounce rate, since the visitor intends to find the store hours and then leave. So it’s not necessary to test all pages with high bounce rates.
On the other hand, if you see a low bounce rate for a page that should be answering a very specific question, the page might be worth testing. If your store hours page has a bounce rate of 12%, it’s time to learn why. Look at a flow report to find out where visitors are going after visiting that page, or run a user test to determine whether something is going wrong.
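One way to sketch this is to tag each page with its assumed intent and flag pages whose bounce rate contradicts that intent; the page list, intents, and thresholds below are illustrative:

```python
# "answer" pages should bounce (visitor finds the info and leaves);
# "funnel" pages should retain visitors and move them forward.
pages = {
    "/store-hours": {"intent": "answer", "bounce_rate": 0.12},
    "/pricing":     {"intent": "funnel", "bounce_rate": 0.78},
    "/contact":     {"intent": "answer", "bounce_rate": 0.81},
}

def surprising_pages(pages, answer_floor=0.5, funnel_ceiling=0.4):
    """Flag pages whose bounce rate contradicts their intent."""
    flagged = []
    for url, p in pages.items():
        if p["intent"] == "answer" and p["bounce_rate"] < answer_floor:
            flagged.append((url, "low bounce on an answer page"))
        elif p["intent"] == "funnel" and p["bounce_rate"] > funnel_ceiling:
            flagged.append((url, "high bounce on a funnel page"))
    return flagged

flagged = surprising_pages(pages)
```

Here the store-hours page is flagged for bouncing too little and the pricing page for bouncing too much, while the contact page behaves as expected; the flagged pages are your testing candidates.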
An unexpectedly high bounce or exit rate can typically be traced to the page not meeting the visitors’ expectations.
Before we figure out how to run the test, we first need to determine which pages to test. Look for high exit rates on pages that are intended to convert, such as signup pages, checkout pages, and middle-of-funnel pages. You’ve worked very hard and likely paid a lot of money to get visitors to this point, so protect your investment by testing to find out why these pages aren’t paying off.
Here’s the key to learning how to improve those pages: When running the test, start the test prior to the problem page, so that the testers’ expectations can be developed organically. Call it “expectation incubation.” You might consider analyzing your traffic sources for a page, and finding out which sources are causing the highest bounce rates. (See the earlier section on Flow Reports.) Then you can figure out where to start testing.
In this example, we’re looking at the sources for a company’s Features page. It turns out that traffic from Facebook is bouncing far more than traffic from other sources.
Perhaps the test would reveal that one of the company’s Facebook posts or campaigns was telling people “See why we beat the competition,” only to drop the visitors onto a Features page with no competitive comparison. These visitors were approaching the page with an expectation that the page isn’t meeting. Analytics alone can’t give you this kind of insight, but analytics plus testing (and in this case, even just some analysis of the Facebook posts and campaigns) can steer you in the right direction.
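The source comparison behind that analysis can be sketched like this, with hypothetical visit and bounce counts standing in for the Features page’s real report:

```python
# Hypothetical source-level stats for a single landing page.
sources = {
    "facebook": {"visits": 1200, "bounces": 960},
    "google":   {"visits": 3000, "bounces": 1200},
    "direct":   {"visits": 800,  "bounces": 280},
}

def bounce_rate(s):
    return s["bounces"] / s["visits"]

# The source with the highest bounce rate is where to start the
# user test, so testers' expectations form the way real visitors' do.
worst_source = max(sources, key=lambda name: bounce_rate(sources[name]))
```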
Look for an unusually short time on page, combined with a high bounce or exit rate. Maybe you’re simply overwhelming your visitors, or maybe the page lacks credibility.
For pages with a long time on page but high bounce rate or exit rate, evaluate the page’s purpose. If the goal of the page is to move visitors further along in the funnel, find out why they’re spending time on the page but ultimately deciding to leave. Is the content ultimately not convincing enough? Is the CTA not clear and compelling? Is there any concern about privacy or how easy it will be to cancel the free trial?
If the page is just a content piece intended to serve very top-of-funnel visitors (such as a blog post), perhaps a long time on page plus a high bounce rate isn’t a problem; but check out your micro conversions. If you’re not getting comments, newsletter signups, or whitepaper downloads, run a user test to find out why.
Of course, using analytics to find usability testing opportunities isn’t the only way these two methods interact.
For instance, after making changes based on usability tests, analytics should again be used to assess the impact on the entire audience. (Usability testing provides faster feedback, but ultimately the numbers from the entire audience need to be evaluated.)
Also, if usability testing reveals a problem, analytics can help you determine whether the problem is as widespread in the full population as it is within your sample group. If you’re considering investing time and money to address a problem you identified during testing, this extra validation can give you a confidence boost to move forward.
What other ways have you used analytics to reveal great testing opportunities?