In a recent webinar, Tira Schwartz, Principal UX Researcher at Redfin, shared how her team has evolved its UX strategy from basic usability testing to developing deeper understandings of Redfin’s 10,000 clients, the agents who work with them, and the millions of users who visit the website.
We had a great Q&A session with Tira and included some of our favorite questions below, or you can watch the full webinar here. Enjoy!
I’ve certainly had challenges myself in getting buy-in. I think one thing that’s important to remember is that this goes back to that personality test that I was talking about. People are influenced in different ways and through different mediums. With some people, all you have to do is show them a video clip and they’re bought in. They just feel really connected with what they saw in the video clip. They see a pain point, and now, they’re completely supportive.
There are other people who might need to see it in person. They might need to have a dialog. They might need to go out shadowing. They need to have a conversation with the person who’s feeling that pain. Then, there are other people for whom all you have to do is tell a great story, and that’s enough to motivate them.
The most success that I’ve had is when I truly understand my audience and what it takes to motivate them. In an ideal world, you’re offering a cocktail of all these things. In order to get buy-in, whether it’s executive or just a peer, you really have to understand what resonates most with them.
The easiest way to measure your ROI, in my opinion, is to align your business metrics with your UX metrics and seek out a win/win solution. When your business goals are one and the same as the goals that you have for your users, that’s basically the Mecca of UX design. If your organization is already tracking business metrics, then it’s just a freebie. The tough thing is when they’re at odds with each other, and at that point, you probably have bigger problems than trying to measure your ROI. At that point, I think it’s a longer conversation.
How tightly are you integrated with your product team and what are the pros and cons of working together?
I would say that I am very tightly connected to the product team. We actually all report into the VP of product, so we have a design director, and then the head of design. We’re communicating on a daily basis. We sit right next to each other. In this case, we’re really tightly connected.
Pros and cons? I think it’s really just about figuring out what to focus on and how to partner effectively: what is your role and what are you responsible for researching, and what is the role of product and what are they responsible for researching?
I do love that when someone owns the research themselves, they’re much more motivated to then act upon it. That’s one of the benefits of empowering anybody to conduct research.
When you are no longer learning new things. That’s sort of one of the ways you measure the value of your qualitative research and the saturation of your insights, the saturation of your sample size, as well. You no longer need to talk to more people if you just keep hearing the same thing over and over again and it’s not changing the insights and outcomes of your research.
I prefer user stories. In my mind, the difference is that the user story, first of all, it’s real. A lot of the personas are just sort of cobbled together, so it’s all these different facts about all these different people summarized together. At that point, then it starts to feel inauthentic.
I’ve actually been on teams where I feel like the personas were not as effective as they could be. Whereas a user story just is what it is. You’re not over-promising with it. You’re just saying, “This is a real story about a real human and the real problems that they’re facing.” I just think people respond to it a little bit more seriously. They give it a little bit more attention.
It might take more than just one question. I actually feel really lucky. I totally recognize that at Redfin, the goals that our users have are just so clear-cut.
Really, it’s just about getting deeper into the conversation. You just have to get more specific and contextual. That might close some doors, but at the same time, then, it gives you that focus that you need.
I strongly believe in triangulation. If you have a research question and you only use one type of methodology or data point to answer that question, then you’re going to be a little bit limited in what you know. The best way to increase your validity, your understanding of the answer to that research question, is to just try different methods.
For me, we do these user stories ongoing, but at the same time, we send out these large-scale surveys to ask people very similar questions. Then, we’re getting large sample size numbers. They’re not as rich. They’re not that deep, qualitative insight where you’re getting people to talk and show you things, but it is a way to compare your answers.
With triangulation, you’re literally just taking multiple methods and figuring out where you can fill in the gaps to paint a richer, broader story, and also to increase the validity of the work.
I could talk for a long time about that. If I’m going to take the systematic answer, it’s to look at what the goals of the UX research are, and whether this person is able to achieve those goals. Are they able to bring new insights to the organization? Do they have a good grasp of what the existing insights are, and where there might be flaws in them, or where you need to develop them more richly or fill gaps and holes in terms of the insights that people have?
It’s sort of this combination of understanding the culture and then filling in the gaps in the insights that people have. I want to make sure that people are able to contribute to the overall product design process and product development process effectively.
Sometimes that means being really practical and knowing how to offer practical solutions based on the insights. Then, of course, being methodologically sound: knowing what it takes to build a solid study, which tools they should be using, knowing the limitations of those tools, and doing it all efficiently, as well.
Yeah, definitely. The example that I gave was the cautious buyer. That was her behavioral segmentation. She behaved really cautiously.
Then, another example, which kind of feeds from what I was talking about, is an indecisive person: someone who one day wants one thing, and the next day wants another thing. That actually happens a lot. We hear our agents saying, “They’re favoriting homes all over the place. There’s no consistency. How do we get them to focus, and how do we get them to really hone in on what they’re looking for?”
If you know that people of a certain age group click on this button more than people of another age group, what does that tell you in terms of the design? If you think about, well, I’m designing this, but I know that two-thirds of the people who are coming to this experience are really indecisive, so what does that mean, as a designer, for how I design this? I find that the designers are much more receptive to information like that than what year they were born in.
We do in-house usability tests, and that’s really for when we need to have rich, interactive conversations or we’re not quite ready to put a prototype up on InVision. We bring people in about once every three weeks, we have it scheduled ongoing, and we get really good insights through that.
We partner a ton with our market researcher to do surveys, and the surveys are incredibly helpful. They have the limitations of the say-versus-do problem. You also don’t get a video clip out of it. You might get a quote, if you’re lucky, from an open-box question, but the surveys also are incredibly helpful.
Then, of course, there’s product instrumentation and A/B testing that we can sort of collaborate on. One of my favorite things to do with UserTesting is just pilot an A/B test and get some early insights into why people are behaving a certain way, not just how many are behaving a certain way. That way, we know why we should be targeting C versus A or B. The A/B test never would have told us that, but a user test would.
The other thing that I didn’t mention yet is that we do a ton of shadowing. We shadow our own agents. We go to open houses. Whenever someone on the team buys a home or sells a home, they write up this rich document about every single experience that they had, so we try as much as possible to get out and do some contextual field types of work.
In an ideal world, the research was done a month ago and you’re not being reactive, you’re being proactive. When I’m doing my job really well and someone says, “We have questions about blah-blah-de-blah,” I say, “Great. We have all the answers. We got them a month ago.” That way, then, you’re not dealing with the time constraint. You’re just ahead of the curve. You’re seeing the trends before they happen in terms of what questions the team is going to have moving forward.
But that’s not always the case, especially when you’re a team of one. At the end of the day, it’s just a trade-off between how confident you need to be in your results and how rich the research needs to be, versus how much time you have. In my mind I have this graph that I draw that basically shows that the more time you have, the higher confidence you can have in your results. If you only need 70% confidence, then you’re all set with a few days of work.