User research and product development – Q&A with NerdWallet's Head of User Research, Jeff Gurr

May 5, 2017

In a recent webinar, Jeff covers the pros and cons of NerdWallet's embedded research structure, what the team has learned about working with product designers and PMs, and the toolkit of techniques they've developed to maximize speed and insights.

We had a great Q&A session with Jeff and have included some of our favorite questions below, or you can watch the full webinar here. Enjoy!

 

How does user testing give a better perspective than focus groups?

For us in the personal finance world, it's really just a lot easier to do one-on-one. It can be really tricky to do focus groups because of some of the things that go along with our product type, so it really depends on the type of product you're researching and whether focus groups or one-on-one sessions work for you.

On the flip side, where I’ve seen focus groups work well is when a group has a core understanding of what the functional value proposition is and that’s not changing.

Are there any downsides to embedded research?

We have to be really diligent about prioritizing our research efforts around the biggest problems in the company, because when you get into the pods, each little problem or nuance can seem really important and you can go down some research rabbit holes.

We need to make sure that we step back and are able to attack a lot of the bigger projects. When you’re doing it on a project basis, it’s a little bit easier to prioritize.

How big is your research team? Are your researchers shared across multiple Scrum teams?

We started out as two at the start of 2016, and we’re five at the moment.

We're actively hiring, as I mentioned, and we're hoping to get to eight to ten by the end of this year. We're definitely growing.

When I talked about the embedded model, yeah, we’re mostly dedicated to Scrum teams. There are a couple of verticals at NerdWallet that are big enough that they have multiple Scrum teams within one category, like credit cards for example.

In those cases we often have just one researcher helping multiple Scrum teams, which adds another layer of challenge, but we try to keep researchers focused on big topic areas so we can avoid some of the problems I talked about, like context switching.

Can you explain what lookalikes are as they relate to recruiting test participants?

There are two different ways you can get recruits. You could have people who are in the moment of the experience that you want to test, so you know for sure that they are your user.

The other option is to replicate: I know the attributes of the type of person who would solve that problem on NerdWallet, so I can try to find people who match them. That's often what we do for financial decisions, because we know a lot of the characteristics, both the behaviors and the attitudes, of people who use NerdWallet.

As I mentioned, we have those challenges of recruiting on our site. Lookalikes are just our understanding of who that person is and then finding them in other ways. A great example is using the UserTesting screener questions to help us recruit people who are in the midst of having just made a financial decision.

That’s what I mean by lookalikes. If you can’t find the people in the moment, you may be able to find people who embody what that user is like and who will have behaved very similarly to that user.

How long are your design sprints?

We really like the one-week model. We'll typically do one or two days of understanding the problem. We'll try to bring in an expert, and then Wednesday and Thursday are really about prototyping and Friday is about testing. We really try to keep that whole week.

I will say that, obviously, full weeks are really hard to block out, especially getting engineers to step away from writing code, so we have adapted the format for teams that are going through their second or third sprint with us.

If we have a really tight problem that we want to solve and it's a team that we know is already working together really well, we may shorten that to two or three days, or maybe five afternoons.

What metrics are the most helpful for getting stakeholder buy-in for research?

I would say whatever metrics carry weight at your organization. NerdWallet's a software-based, conversion-focused company, so we want people to come to our site and take the actions that we want them to take. Click-throughs, time on page, bounce rate: all of these metrics are important for our business, and they're how our different leaders and our product managers are measured.

One shortcut to this is to look at your stakeholder’s strategic planning and see what metrics are mentioned there and try to drive those metrics. If you’re driving the ones that already have attention within the company, I think it’s going to be a lot easier to drive impact than trying to come up with new metrics that don’t already have a natural place in the cadence and language of the company.

Do you typically test on prototypes or working code, and if so, how?

We try to run tests with all fidelities.

At the lowest fidelity, it can be just a wireframe, black-and-white boxes, all the way to a mid-level fidelity that has maybe some clickability, to a final visual design that hasn't been coded yet, to something that's been coded and is either on our staging environment or out in production.

We try to test along all points on that spectrum. Obviously, the testing requirements change a little bit and what you’re able to learn changes a little bit, but as part of that testing mentality that I talked about, we try to test at all different phases.

There are tools out there that are helpful for that. One that we use a ton is InVision. Most of our designers use Sketch, so creating clickable prototypes at all the different fidelities is really easy with InVision and Sketch. We use InVision a lot, and we're able to use it in both our live sessions and through UserTesting.

What type of UX stack do you have outside of UserTesting to analyze users on the site and develop UX ideas?

There are a few other tools that we use. We use ClickTale a little bit and the heat maps that they provide, and a few other pieces. But I would say it’s mostly UserTesting and then we have an in-house analytics team.

A lot of the pods have dedicated analytics support. Those teams help us identify phenomena happening on the site that they want to understand, and user research is really helpful for getting at the why behind them.

How do you get executive buy-in for embedded UX employees?

That was our million-dollar question.

Figure out what metrics are important to your business and show how the research is helping drive those metrics forward. Again, we got a little lucky: we found a champion, and we did research that directly and significantly impacted metrics pretty quickly.

The fastest way to get the executives’ attention is to deliver results, especially around metrics that they care about.

How do you educate cross-functional teams on basic research skills?

The way we spread the knowledge is through a few brown bags, where people can grab their lunch and come listen to us talk on different themes. We created a presentation around best practices for something like UserTesting that we've gotten pretty good leverage out of, and we've shared it a couple of times with a couple of different teams.

Researchers on our team are the only people who can launch UserTesting tests, so all the tests come through us and we can scan through them. We'll spend 10 or 15 minutes with the team and suggest things like, "Hey, should you change this? This isn't how we usually do it."

After a few times of doing that and analyzing the results with them, they get pretty good at it, and they can start to see the patterns that we follow and how we test things.

I would say develop a document or something that you can share with multiple teams. I know our UserTesting best practices document has traveled around the company a little bit.

Does NerdWallet have a process for balancing bigger research initiatives with the smaller, weekly design sprints?

It’s a great question because it’s something we’re figuring out at the moment.

I think what's interesting about NerdWallet is that in many ways we're a company of many different companies. You've got the people working on credit cards, investing, mortgages, and all the other products I talked about. But there is something that connects NerdWallet customers, so we as a research team try to step back and ask whether there are any big topics we want to study and address as a company, topics that will help the whole company and not just our individual pods.

An example of a project we did recently was trust. As you can imagine, people who come to NerdWallet might not know us. Many of you probably didn't know about NerdWallet until I talked about it today. How do they know that we're unbiased, that we're providing recommendations in their best interest and trying to be consumer first? Well, they don't, so we have to make sure that we're sending the right signals of trustworthiness and assurance to people who are on our page.

How do you balance user goals with business revenue goals?

It’s a slippery slope. That’s the thing that we’ve tried to avoid at NerdWallet. We’ve always tried to be customer first.

I think, luckily, it's not a challenge we've had to face a lot as a research team, because it's a mentality that comes down from leadership at NerdWallet. They have that long-term view. There are definitely times we could have done short-term things to create revenue, but they've instilled that long-term mentality.

We're in a fortunate situation where we don't have to answer that too often, but if you do, I would try to shift the conversation to that long-term view: what implications will this have long term, and are we willing to take those risks?