Reading the Matrix: How to see testing opportunities in analytics data

Posted on November 14, 2013
13 min read

We often talk about the fact that usability testing allows you to learn more than what analytics can teach you. What we don’t talk about enough is that analytics can—and should—guide usability testing efforts.

Louis Rosenfeld puts it this way:

You can’t know why things are happening if you don’t know what is happening.

Analytics is a great starting point, in part because analytics can reveal problems that usability testing could never uncover. Usability testing is typically conducted with a small, representative sample of visitors, and it takes place on a fairly limited number of pages. So if a site has 10,000 products and there’s a problem with one page or product category, it’s highly unlikely that usability testing would reveal the problem. Analytics excels at this, though, since it covers every user and every page. Let analytics find the problem; then use usability testing to learn why the problem is happening.

Seeing Past the Numbers

In the mind-bending film The Matrix, the “normal” world known by the main characters is revealed to be a computer program, in which humans’ minds are unwittingly trapped. The program can be viewed in code form on a computer screen. One of the characters remarks that he doesn’t even see the code anymore when he looks at it; instead, his mind automatically translates the code. Where the average person sees just a bunch of code, he ‘sees’ the actual people, cars, trees, clouds, and buildings that the code represents.

The Matrix code

Like The Matrix code, analytics can seem random and daunting, but there’s a ton of meaning if you read it correctly.

In a similar way, we can train ourselves to not just see a bunch of numbers and charts in our analytics package, but to identify great testing opportunities to find out why something happened, and hopefully, how to improve our numbers. When you use this technique to narrow your testing focus, you can save time and money in the qualitative research phase.

This article is focused on using Google Analytics to identify testing opportunities, but you can also find opportunities through data from click maps, scroll maps, form analytics, social analytics, etc.

Identify Users and Devices to Test

You already know who your target audience is, but within that group there are a number of subgroups, and many of them can be identified using analytics. Segment your data by the following dimensions to drill down further, and to find out which testers to recruit and which devices you want them to use.

Devices

It’s imperative today to have UX parity across device types, and you’ve no doubt made some efforts toward that goal. Has your testing plan caught up to your intentions? Are you looking at bounce rates, exit rates, time on page, pages per visit, and flow reports across device types, to see which devices need extra testing?

Device comparison in Google Analytics

In this example, mobile visitors are staying on the page nearly as long as their tablet and desktop counterparts, but are bouncing at a much higher rate.
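
If it helps to automate that comparison, here’s a minimal sketch in Python. It assumes a hypothetical CSV export named device_metrics.csv with Device, Visits, and Bounces columns, and the 25% outlier threshold is an arbitrary example:

```python
import csv

# Hypothetical GA export: one row per device category.
# Assumed columns: Device, Visits, Bounces
with open("device_metrics.csv", newline="") as f:
    rows = list(csv.DictReader(f))

total_visits = sum(int(r["Visits"]) for r in rows)
total_bounces = sum(int(r["Bounces"]) for r in rows)
overall_rate = total_bounces / total_visits

for r in rows:
    rate = int(r["Bounces"]) / int(r["Visits"])
    # 25% above the overall rate is an arbitrary example threshold.
    flag = "  <-- test on this device" if rate > overall_rate * 1.25 else ""
    print(f'{r["Device"]:<10} bounce rate: {rate:.1%}{flag}')
```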

Also watch out for rapid changes in mobile devices. We’re still seeing new screen sizes as manufacturers bridge the gap between tablets and phones, which can affect the user experience. Even operating system upgrades can impact your stats. iOS 7 introduced a new level of “swipe ambiguity,” as the Nielsen Norman Group recently highlighted. Safari now supports horizontal swiping to navigate to the previous page, so website owners who already employed in-site swipe navigation may notice an increase in pages per visit from iOS, along with a simultaneous drop in time on page. User tests are excellent at uncovering the reasons for statistical changes like these, which might not be immediately evident from analytics alone.

Gender, Age, and Interests

Google has been rolling out Demographics and Interests reports in its analytics product (the feature isn’t supported yet in Universal Analytics). This promises to be one of the best ways for marketers and UX specialists to determine which audience segments are not performing well for certain pages or flows. You’ll need to take an extra step to enable Demographics and Interests reports in Google Analytics, if you haven’t already.

Demographics chart from Google Analytics

In this example, the strong correlation between age and visit duration, and the disparity in Revenue Per Visit across age groups, beg for qualitative testing. You can access this report through Audience > Demographics > Age.

In the bookstore example above, notice the strong correlation between Age and Average Visit Duration. Are younger visitors simply finding what they’re looking for more quickly? Possibly, but there’s something else going on here. Looking at the revenue numbers and calculating Revenue Per Visit (revenue divided by visits), we find that Revenue Per Visit is 500% higher among visitors in the 25-54 age range than among those in the 18-24 range. For some reason, this bookstore is not connecting well with its youngest adult visitors.
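
The Revenue Per Visit arithmetic is worth making explicit. Here’s a minimal sketch in Python using made-up numbers (not the report data above) to show the calculation:

```python
# Revenue Per Visit (RPV) = revenue / visits, computed per age bracket.
# These numbers are illustrative only, not from the report above.
segments = {
    "18-24": {"visits": 12000, "revenue": 3000.00},
    "25-34": {"visits": 10000, "revenue": 15000.00},
    "35-44": {"visits": 8000,  "revenue": 11400.00},
}

for bracket, s in segments.items():
    rpv = s["revenue"] / s["visits"]
    print(f"{bracket}: ${rpv:.2f} per visit")

# With these sample numbers, 25-34 earns $1.50 per visit versus
# $0.25 for 18-24: 500% higher, echoing the disparity described above.
```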

So at this point we can clearly see that we’d love to look over the shoulders of visitors in the 18-24 age range via a user test. There are certainly some other questions we could ask first (such as, “Where are these visitors coming from, and what’s their intent when they arrive at the site?”), and we’ll take a look at some of those questions next.

Use Segmentation in Flow Reports to Learn Where to Start Your Tests

Do you start your tests on your home page? That might seem like a natural and obvious choice, but your visitors aren’t always starting on your home page, so your tests shouldn’t either.

Entrance points for Southwest's website

By running a Behavior Flow Report, you can see where your visitors are coming from. A visitor coming from social vs. clicking on an ad vs. running a branded search query will have different expectations; and those expectations will shape their perception of your site, as well as the path they take.

To access a Behavior Flow Report showing traffic by source, see Behavior > Behavior Flow.

Behavior Flow diagram

Accessing a Behavior Flow Report showing Source data.

The report shows a great view of data by source, your most popular landing pages, where drop-offs are occurring, and the paths users are taking through the site. In the example below, I can see that the football home page is a popular starting point, and that most of the visitors who start on that page are coming from Google. But many of those people are not visiting any other pages on the site. This seems like a problem worth investigating. A possible next step would be to find out which search queries are bringing people to that page (which will be more difficult given Google’s shift to [not provided], but alternative methods exist), and have testers perform that same query, starting at google.com.

Behavior flow with Source

Don’t trust intuition for determining which pages to start your tests on. See what the real starting pages are, and identify the ones that are underperforming.

Another flow report, Goal Flow, provides a great visualization of what might be going wrong with a campaign or any other conversion path (as long as your goals are configured properly). In the example below, showing a campaign-filtered view of a gym memberships conversion funnel, three issues are instantly visible:

  1. The conversion rate on the landing page is abysmal.
  2. Something is causing people to backtrack to Step 1 when they should be completing signup.
  3. Those who don’t backtrack are dropping out of the funnel when they’re close to completing the purchase.

Goal flow report

This sample goal flow report for a gym shows a very weak landing page, but also a problem with a huge number of visitors backtracking or leaving the funnel right before they should be completing signup.

We could immediately run some tests to watch what’s happening to cause people to backtrack. Another opportunity would be to identify which source has the highest drop-off rate from the landing page, and start a test at that source. (Bonus points for combining that source data with age or gender data to be even more focused.)
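
To illustrate that bonus step, here’s a minimal sketch in Python with hypothetical per-source funnel counts (this isn’t pulled from GA’s API; imagine transcribing the numbers from the Goal Flow report):

```python
# Hypothetical per-source funnel counts: visits that reached the
# landing page vs. visits that continued to the signup step.
funnel_by_source = {
    "google / cpc":       {"landed": 5200, "continued": 390},
    "facebook / social":  {"landed": 3100, "continued": 93},
    "email / newsletter": {"landed": 800,  "continued": 120},
}

def dropoff_rate(counts):
    return 1 - counts["continued"] / counts["landed"]

for source, counts in funnel_by_source.items():
    print(f"{source:<20} drop-off: {dropoff_rate(counts):.1%}")

# Start your user test with the worst-performing source.
worst = max(funnel_by_source, key=lambda s: dropoff_rate(funnel_by_source[s]))
print(f"Recruit testers arriving via: {worst}")
```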

Look for Advertising Campaign Optimization Opportunities

Advertising campaigns can cost a lot of time and money, so it’s great that they offer so many data points to analyze for testing and optimization opportunities.

Clickthrough rates

Clickthrough rates can help you determine which ads are working and which aren’t, but they don’t tell you why (at least not explicitly), making it a bit of a guessing game to determine your next steps, other than disabling poorly-performing ads.

It’s a good idea to get several ads in front of real users (you can do this in a one-at-a-time, slideshow format, or present all ads on the screen at once), and have them answer questions like, “which ad makes you most interested in this product or service, and why?” By combining this qualitative data with your CTR data, you can start making better decisions about which direction to go with your ads, and which messaging (or color schemes, or CTAs, etc.) to keep or drop.
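
As a toy illustration of combining the two data sets (all numbers are made up, and how you weight the two signals is a judgment call):

```python
# Hypothetical data: clickthrough rates from the ad platform plus
# "most interesting" votes from a 20-person user-test panel.
PANEL_SIZE = 20
ads = {
    "ad_a": {"ctr": 0.021, "votes": 4},
    "ad_b": {"ctr": 0.008, "votes": 11},
    "ad_c": {"ctr": 0.019, "votes": 5},
}

for name, d in ads.items():
    preference = d["votes"] / PANEL_SIZE
    print(f"{name}: CTR {d['ctr']:.1%}, tester preference {preference:.0%}")

# A low-CTR ad that testers prefer once they actually see it (ad_b
# here) may deserve a targeting or placement test before being dropped.
```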

Landing page stats

Look at both time on page and bounce rates to determine what context to provide for the tester, and what questions to ask. A landing page with an unusually high bounce rate—especially with a low time on page—might point to a mismatch between the expectations that the ad is producing, and what the landing page is delivering.

Test example:

  1. View this Facebook ad for 2-3 seconds: www.[company].com/images/bahamas-sun-350.jpg
  2. If you were to click on this ad, what would you expect to see on the page you are sent to?
  3. Visit www.[company].com/BahamasVacation. Look at the page for 5 seconds.
  4. Is this what you were expecting? Is it better or worse? How does the page differ from what you were expecting?

Conversion rates

If you like to get straight to the bottom line, you might be in the habit of skipping past intermediate stats like CTR, visits, and bounce rates, and jumping straight to your conversion rate. After all, that’s the only number that matters, right? But low conversion rates are only a starting point for identifying testing opportunities. Once you’ve identified a poorly-performing campaign, use some of the other techniques from this article to determine specifically where things are going wrong. For example, a segmented Flow Report, mentioned earlier, is a great way to visualize the conversion path to find out where problems are occurring.

Find Pages With High Optimization Potential

Even some simple stats can highlight pages that should be tested:

  • A high number of pageviews could indicate that a page is important to your visitors. The sheer volume of visits means that the page should be thoroughly tested and optimized, even if just for micro-conversions.
  • An unexpectedly low number of pageviews could indicate that a page is difficult to find. This is obvious, but identifying such pages isn’t. Noticing that something is missing (e.g., a page that should be in the top 10 for number of visits) is more difficult than seeing something that’s there. So to find out which pages are difficult to find, list the pages that you think should be among the top 10 or 20 most popular, then compare that list against your analytics (a quick sketch of this comparison follows below). When you notice what’s missing, you’ve found something worth testing.
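
Here’s that comparison as a minimal Python sketch, with hypothetical URLs standing in for your own list and your analytics report:

```python
# Pages you believe should rank among your most-visited
# (hypothetical URLs for illustration).
expected_top = {"/pricing", "/features", "/signup", "/support", "/blog"}

# Top pages as actually reported by analytics, most visits first
# (again, illustrative data).
actual_top = ["/blog", "/features", "/about", "/careers", "/signup"]

# Pages you expected to be popular but that never show up are
# prime candidates for a findability test.
missing = expected_top - set(actual_top)
print("Hard-to-find candidates:", sorted(missing))
# -> Hard-to-find candidates: ['/pricing', '/support']
```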

Internal Search Data

Another way to identify difficult-to-find pages is to look at your internal search data. Find out which queries are occurring most frequently. Analytics is telling you that plenty of people are resorting to search to find certain information, but why is that happening? Are visitors browsing first, or are they immediately relying upon search? During the test, have testers try to find the items that the queries suggest they’re looking for. Find out where they’re looking, and why and when they’re giving up.
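
A minimal sketch of the tallying step, assuming you can export the raw queries from your site search report:

```python
from collections import Counter

# Hypothetical internal-search log: one entry per on-site search.
queries = [
    "return policy", "store hours", "gift cards",
    "return policy", "shipping cost", "return policy",
]

for query, count in Counter(queries).most_common(3):
    print(f"{count}x  {query}")

# Turn the top queries into test tasks: have testers find each item
# by browsing, and note where they look and when they give up.
```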

Rethink “Positive” and “Negative” Stats

When you’re assessing performance, stats like high bounce rates, high exit rates, and low time on page are often considered bad. In fact, there are times when a high bounce rate is just fine, while a low bounce rate needs to be investigated. We can look past the surface to find out which numbers—high or low—reveal important testing opportunities.

High Bounce Rates and Exit Rates

To find testing opportunities around bounce or exit rates, look for two things in particular: 1) user intent, 2) user expectations.

User Intent

What’s a good bounce rate? This common question is usually met with an accurate but frustrating answer: “It depends.” And typically that means, “Find out whether the page is supposed to be retaining visitors.” For example, the page listing your store hours is likely to have a higher-than-average bounce rate, since the visitor intends to find the store hours and then leave. So it’s not necessary to test all pages with high bounce rates.

On the other hand, if you see a low bounce rate for a page that should be answering a very specific question, the page might be worth testing. If your store hours page has a bounce rate of 12%, it’s time to learn why. Look at a flow report to find out where visitors are going after visiting that page, or run a user test to determine whether something is going wrong.

Bounce rates

A simple evaluation of bounce rates can reveal great testing opportunities. The high bounce rate on the Directions page isn’t of much concern, due to the visitors’ intent. But a bounce rate of 41% on a Features page, which should be sending visitors further down the sales funnel, is cause for alarm. This page should be tested.

User Expectations

An unexpectedly high bounce or exit rate can typically be traced to the page not meeting the visitors’ expectations.

Before we figure out how to run the test, we first need to determine which pages to test. Look for high exit rates on pages that are intended to convert, such as signup pages, checkout pages, and middle-of-funnel pages. You’ve worked very hard and likely paid a lot of money to get visitors to this point, so protect your investment by testing to find out why these pages aren’t paying off.

Here’s the key to learning how to improve those pages: When running the test, start the test prior to the problem page, so that the testers’ expectations can be developed organically. Call it “expectation incubation.” You might consider analyzing your traffic sources for a page, and finding out which sources are causing the highest bounce rates. (See the earlier section on Flow Reports.) Then you can figure out where to start testing.

Example:

In this example, we’re looking at the sources for a company’s Features page. It turns out that traffic from Facebook is bouncing far more than traffic from other sources.

Bad bounce rate from one source

Look for differences in bounce and exit rates among traffic sources. Then start the test there, to help testers adopt the same expectations as the visitors that are bouncing.

Perhaps the test would reveal that one of the company’s Facebook posts or campaigns was telling people “See why we beat the competition,” only to drop the visitors onto a Features page with no competitive comparison. These visitors were approaching the page with an expectation that the page isn’t meeting. Analytics alone can’t give you this kind of insight, but analytics plus testing (and in this case, even just some analysis of the Facebook posts and campaigns) can steer you in the right direction.

Time on Page

Look for an unusually short time on page, combined with a high bounce or exit rate. Maybe you’re simply overwhelming your visitors, or maybe the page lacks credibility.

For pages with a long time on page but high bounce rate or exit rate, evaluate the page’s purpose. If the goal of the page is to move visitors further along in the funnel, find out why they’re spending time on the page but ultimately deciding to leave. Is the content ultimately not convincing enough? Is the CTA not clear and compelling? Is there any concern about privacy or how easy it will be to cancel the free trial?

Analytics line showing long time on page but high bounce rate

If this is a blog post, these numbers aren’t necessarily a problem. If it’s a page closer to the middle of our funnel, we’d want to test to see why people are leaving after spending so much time on the page.

If the page is just a content piece intended to serve very top-of-funnel visitors (such as a blog post), perhaps a long time on page plus a high bounce rate isn’t a problem; but check out your micro-conversions. If you’re not getting comments, newsletter signups, or whitepaper downloads, run a user test to find out why.
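
Pulling the last two patterns together, here’s a minimal sketch (hypothetical page stats and hand-assigned funnel stages) of how you might triage pages for testing:

```python
# Hypothetical page stats: avg time on page (seconds), bounce rate,
# and a hand-assigned funnel stage. Illustrative data only.
pages = {
    "/blog/ten-tips": (260, 0.82, "top"),
    "/features":      (245, 0.78, "middle"),
    "/pricing":       (30,  0.75, "middle"),
}

for url, (seconds, bounce, stage) in pages.items():
    if stage != "middle" or bounce <= 0.5:
        continue  # top-of-funnel content gets a pass on high bounce rates
    if seconds > 120:
        print(f"{url}: engaged but leaving; test the content and CTA")
    else:
        print(f"{url}: leaving fast; test first impressions and credibility")
```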

More Ways Analytics Can Help in Testing

Of course, using analytics to find usability testing opportunities isn’t the only way these two research methods interact.

For instance, after making changes based on usability tests, analytics should again be used to assess the impact on the entire audience. (Usability testing provides faster feedback, but ultimately the numbers from the entire audience need to be evaluated.)

Also, if usability testing reveals a problem, analytics can help you determine whether the problem is as widespread in the full population as it is within your sample group. If you’re considering investing time and money to address a problem you identified during testing, this extra validation can give you a confidence boost to move forward.

What other ways have you used analytics to reveal great testing opportunities?
