In a recent webinar, Leah Russell, Vice President of User Experience at HomeAdvisor, shared how her team incorporates regular user research to address the unique challenges facing both the B2B and B2C sides of the business.
We had a great Q&A session with Leah and included some of our favorite questions below, or you can watch the full webinar here. Enjoy!
It came out of need. When we originally created the team here, we did not have a dedicated research role. It was all designers. My experience had taught me that there is a real need for the researcher role.
We grew into that role. That's how we arrived at our current ratio of one researcher to seven designers. As the researcher and I have discussed, she is never bored. She is never twiddling her thumbs. I think there is a high potential that we will grow that into more resources focused on research in the near future.
There wasn't a magical formula. We started with this role and will see what that person can handle, what their capability is, and then determine future numbers based on that.
Ah, this is a debate I actually have quite often.
It's not perfect by any means, but the value you can get from people coming in far outweighs the negative. We definitely recruit people who are in as realistic a setting as possible. So, people who are already in that mindset, who are not do-it-yourselfers.
We try to find the most realistic people possible. Then we try to use as realistic of scenarios as possible. A lot of times, we'll start the test by asking them about their current project, which we know they have because we've screened for that during the recruiting process.
Then we'll talk to them about their current project, get them in that mindset, and try to make that the basis of the usability, rather than randomly saying, "Oh, you're looking for a plumber? Here we're going to have you talk about remodeling your entire kitchen."
Good recruiting is essential. That's really the key to getting realistic feedback.
A lot of our reporting exists to record, for posterity, what we tested and what we found.
The reports have now transitioned into capturing a lot more of the detail. In those recap sessions, we talk about higher-level, bigger topics, and the report can capture those along with screenshots showing what we're referencing and what we've found.
The reports also capture the smaller details. There's a lot of value in documenting because people have been involved throughout the process, but not everybody who needs to see it has been involved. So, we provide those reports to our executive team.
We have four sessions each day over the two days, which gives us a total of eight. Usually, a valid test is five to seven people, so eight is a bonus.
When we are testing over the two days, that usually means we are testing multiple formats. That gives us a good idea of each of those formats at the end of the day, for example, web on day one and the app on day two.
If we are only testing one format, or if the sessions are shorter, say, 30-minute sessions rather than 45 minutes, we can usually fit those into a single day.
Yes, we can get insights from that number of people. We actually try to complete around 20 user tests per month and then we divide it up by format.
We'll do maybe six on app, seven on desktop, and seven on mobile web, or something like that. Then we get a good representation at each of those levels.
We do a lot of A/B testing. We will do multiple designs that then will be put out for A/B testing.
We use A/B testing when we need to get volume, or when it is purely an "either/or" and we just want to test something as simple as a button color or the wording on a button.
When getting the opinion of eight people is not going to give us enough information to make a decision, we can put something out, A/B test it, and really get real-time information. What was the conversion rate on button A versus button B?
Whereas, when we are testing in the lab, it is more qualitative. We are looking for a group of people to help us set a direction, rather than come to a definitive answer. That's probably the biggest difference. A/B testing gives us definitive numbers that tell us we should go with button B, whereas the in-person testing really allows us to set a direction and say, "Hey, maybe that's not the right button, we should explore other buttons to move forward."
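The "definitive numbers" Leah describes usually come from comparing the two variants' conversion rates for statistical significance. As a minimal sketch of how that comparison works (the traffic and conversion numbers below are hypothetical, not from the webinar, and `conversion_z_test` is an illustrative helper, not a HomeAdvisor tool), a two-proportion z-test using only the Python standard library:

```python
from math import sqrt, erf

def conversion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is button B's conversion rate
    significantly different from button A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, z, p_value

# Hypothetical 50/50 traffic split: 5,000 visitors per variant
p_a, p_b, z, p = conversion_z_test(conv_a=400, n_a=5000,
                                   conv_b=460, n_b=5000)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.3f}")
```

With a p-value below the conventional 0.05 threshold, the team could pick button B with confidence, which is exactly the kind of call that eight lab sessions can't make but high-volume A/B traffic can.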
We work in two-week sprints here. The development teams work in the sprints and we work ahead of those sprints. We don't necessarily work within a specific two-week window. We participate in their sprint planning and roadmap planning so we know what projects are coming.
We specifically have a UX backlog meeting, which is separate from the agile backlog with the development team, where we meet with the product owners and we go through the projects that they have coming up in their next few sprints or even farther out.
We make sure that we're ahead of that and working on it at the appropriate time. If we think a project is going to take significant UX time, the product team gives us a heads up and we start on it early enough.
We've got a great relationship with the product team, and we've really hit a nice cadence where they give us plenty of notice and we are able to complete our work before it goes into the strict development sprint. Having those UX backlog meetings, knowing what's coming, and fitting in that work has worked really well for us.