How empathy drives amazing UX at Canadian Tire: Q&A with Steve McGuire, Associate Manager of Usability and Optimization

June 13, 2017

In a recent webinar, Steve McGuire, Associate Manager of Usability and Optimization at Canadian Tire, shared how his team uses empathy to drive amazing UX and how to spread this empathy to other team members in the user testing process.

We had a great Q&A session with Steve and included some of our favorite questions below, or you can watch the full webinar here. Enjoy!

How do you use screeners to narrow down the customers that you test with?

It depends on what it is that we’re trying to test. We might use screeners that are very specific, down to wanting to see only people who are part of our loyalty program, use our mobile app, and keep shopping lists on it. They can get quite specific, but sometimes they’re also more generic, based on our main personas. It depends on what we’re looking for.

How does your team decide which information and data collected from user tests is important and should be acted on?

We have to really observe what’s happening. Like I said, we like to look for patterns. It’s great if at least one of each person’s users either tripped over the same thing or made a similar comment. Then we discuss, “Wow, is this a pattern? Is this going to be a finding?”

We also encourage people, when they’re watching videos, not to just listen to what the user is telling them. You have to observe what the user is actually doing, because people don’t always represent themselves accurately in what they’re describing, and sometimes the observations are really key.

We like to do everything by consensus. We want to feel comfortable. If I’m on the whiteboard writing down findings, I’m always checking in with the group to ask, “Is this making sense? Do you guys feel okay with this?” We move forward that way, and then from there, we peel out the findings that we think we could turn into recommendations. Especially if the user’s tripping over some part of the software and we see that finding, we highlight that and say, “Hey, this is something we’d turn into a recommendation.”

Recommendations are something we’ll discuss after we’ve nailed down what we think the key findings are.

Can you share one of your most interesting and empathetic testing experiences and what it led to?

I can tell you a story from a test I remember running back in the early days of using UserTesting, one I used to tell the group.

Midway through the test, the participant’s cat (I think its name was Freckles) jumped onto the stove and banged some pots around, and in a baby voice she said, “Oh, Freckles, get off that stove.” The great part about that is it showed we were right there with her while she was testing. That’s proof that we’re right inside her natural environment.

How do you balance full-blown research projects that are more time-consuming with quick and dirty tests?

It’s generally driven by what you’re afforded to be able to do. When you’re given the cycles to do as much research as you can, you do. That’s very fortunate when you’re allowed to give yourself lots of time to really understand what it is you’re trying to do. A lot of times that’s driven by the product at its root. We really try to incorporate an element of research into everything we do, even if it is just a quick project that needs to get out the door.

Sometimes, if you really don’t have a lot of time, that research goes back to that idea of sharing your work. When something’s happening fast, it generally means not a lot of people are going to be totally aware of it, so you just go and run a mini user test on a colleague who may be sitting on another floor or in the next work area, completely unaware of what you’re working on. Generally, we always want to do as much research as we can to get it right. When I say research, I mean producing designs, iterating on those designs, and testing those designs before we start building anything and the rubber hits the road.

How do you present the results of usability testing to a mixed group of stakeholders and developers?

That’s something that we’re starting to work on more. I think in an ideal state, we’d be doing more workshops and things like that to say, “Wow, here’s the results of our tests. Let’s have a workshop to explore what it is we’re going to do about them.”

In light of that, in our UserTesting group, we create reports. The key for our reports is that we try to make them really consumable. If you have a lot of data you really want to include in a report, consider putting it in an appendix, but put the big headlines up front. Each one of your slides needs to count, because people, particularly senior people in the organization, aren’t going to have a lot of time to go over your report. You want to surface the big headlines up front and write a very lean report that can be easily shared.

At what point do you bring in UserTesting? Is this something you do with low fidelity wireframes or after you’ve built something out a little further?

It depends. Generally, we try to create the highest fidelity wireframe we can in the shortest amount of time. Sometimes we find that the closer things are to reality, the easier it is for people to get past the idea that the color’s not right, or the data’s wrong, or whatever it is, even though we just want to test whether the flow is frictionless.

Once again, it’s back to time and what you’re afforded to do with what you have. My preference is always to put out the quickest, highest fidelity wireframe that we can. You can also do it by sketching boxes on a pad of paper and saying, “Hey, does this make sense?”

What metrics do you use to measure your success?

When it comes to measuring empathy, I think that’s a tricky one. It’s not an easy thing to measure. When you see people in your organization getting curiously engaged with what we’re doing, which is keeping the customer at the nucleus of everything we do, that’s the measurement of empathy in the organization.

In terms of measuring the success of the testing itself, for us it’s generally when our findings, and subsequently our recommendations, are acted upon.

We usually stack our qualitative stuff and our quantitative stuff together, and it’s a pretty powerful argument. When the business acts on those, that’s really a good measurement of success.

How does quantitative research play a role alongside empathy and qualitative design?

There’s a lot of A/B testing. We use Adobe Test and Target, so I guess you could say we’re perpetually testing all the time. The analysis is essentially quantitative, so it’s numbers. We can look at those numbers and say, “Hey, these are the facts,” whereas on the qualitative side it’s not always facts; it’s an interpretation based on the results. We actually end up doing more quantitative testing with tools like Google Analytics and Adobe Test and Target, and we’re able to run more tests more quickly because of the robustness of those tools. That’s how it works.

The magic is really that we try to schedule a lot of our qualitative and quantitative tests to run together. We try not to lead ourselves into the desired outcome. We’ll say, “We’re going to do this thing. Why don’t you do this?” Then we’ll come back and explore, without bias, whether that argument is supported by both. When it is, it’s really powerful. That’s the kind of thing the business feels compelled to take action on.
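
As a rough illustration of how the quantitative side can back up a qualitative finding, here is a minimal sketch of checking whether an A/B test shows a meaningful lift in conversion rate. This is not Canadian Tire’s actual tooling or data; the counts and the helper function are hypothetical, and in practice the numbers would come from a tool like Google Analytics or Adobe Target.

```python
# Hypothetical sketch: comparing conversion rates from an A/B test.
# The visitor and conversion counts below are made up for illustration.
from math import sqrt
from statistics import NormalDist


def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (observed lift, two-sided p-value) for variant B vs. variant A."""
    rate_a = conversions_a / visitors_a
    rate_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (rate_b - rate_a) / std_err
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return rate_b - rate_a, p_value


lift, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"Observed lift: {lift:.2%}, p-value: {p:.3f}")
```

If the p-value is small and the usability sessions surfaced the same friction point, that combination of quantitative and qualitative evidence is the kind of argument Steve describes the business as compelled to act on.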

What other tools does your team use besides UserTesting?

We use Adobe Creative Suite for our wireframes. After the whiteboard, we’re able to go straight into Photoshop a lot of the time to build those quick, high-fidelity prototypes.

When it comes to prototypes, all the popular tools, I think, are fairly good and capable now. I like Axure because it’s quite deep. If I want, I can do a lot of extras; there are plugins you can create for it, and you can simulate things like real data. It allows me to create those if statements. If I want to go deep and make the prototype complex, it gives us the ability to do that.

Other than that, whiteboards, whiteboard markers, Sharpies, and sticky notes are the other great tools of the trade. Those are the tools we’re using right now.

Are you on an agile schedule? How long does it take to get from the start of the qualitative research to the end of it?

Everybody here, including a lot of our designers and content providers, has adopted an agile process. Our ideal cadence for putting out a test is once a week. We don’t always get a finished report out that quickly.

The whole company’s going agile. First, it started with our technology; I was supporting a lot of that, so I was into the IT agile process early. Now marketing, content, and creative have all adopted an agile process. It’s the way we work: we’re all building, iterating, learning, and measuring.

Can you share your thoughts on competitive testing?

It’s definitely something we seem to be doing a lot of now. Previously, we focused a lot on testing our own app, our own customers’ expectations, and the observations around that: expanding our understanding of what customers are generally looking for (and why), and sometimes debunking myths about why we think customers go to a certain place, or why they’re doing what they’re doing and where they’re doing it.

Competitive testing gives us a larger view of what people who buy stuff online are really doing. That’s been helpful, and it’s really boosted our understanding of our competitors. I think it’s going to be a part of what we do going forward.