How research should inform product design - Q&A with Steve Fadden, Director of Analytics UX Research at Salesforce

Posted on April 19, 2017
8 min read

In a recent webinar, Steve Fadden, Director of Analytics UX Research at Salesforce, and Lecturer at the School of Information, UC Berkeley, covered some ideas for conducting research to inform the development of concepts, designs, and solutions in an agile environment. We had a great Q&A session with Steve and included some of our favorite questions below, or you can watch the full webinar here. Enjoy!

How does the quality of feedback in mockup tests compare to interaction with an interactive prototype?

I think the short answer is it depends, right? As any good engineering psychologist would say, it's always dependent on context. I will say this: as you increase the fidelity of your concept, whether that's the engagement of your story or the visual, auditory, or whatever modality your interface uses, the feedback you get will become more specific. That's good when you're trying to get very specific feedback: for example, how do you feel about this icon, this specific interaction, this design, or even this story?

I would say use fidelity to your advantage. If you really want to focus a person on a specific piece then focus the fidelity in concert. If you really are trying to do a vague exploration or a "boil the ocean" type of study, keep things broad and keep them vague.

One of the beautiful things about formative methods is that they're really quick to deploy. (Shout out to our sponsors here!) There are a lot of platforms that are out there. To be honest, email's a good one that I use. You can send a screenshot if you have a trusted panel. You can send a screenshot and just say, "Hey, I really appreciate your feedback from yesterday. Here's an approach we're thinking about. What are your thoughts?" And you can tune that quality accordingly.

How do you identify what techniques of research you need for a particular project?

If you're a student at UC Berkeley, you can take my class! All joking aside, I mean, a lot of it comes down to having a researcher, right? Researchers are trained to do this. That's what we're supposed to do, so ideally you have access to some sort of research professional.

If you don't, then I think it kind of comes down to where you find yourself on that arrow of software development. If you're in that early concept phase, you have the luxury of exploring ideas, right? You'd want to use broader tests, like a concept test, or even just uncover problems through critical incidents.

You want to talk about the problem space quite a bit, but if you've already homed in on solutions, your critical incident really turns more into a validation: we have the right type of user, and now I want to start making sure that the way I'm narrowing in on my design or my implementation of the design makes sense. Now you're starting to go into impressions. You're starting to go into things like usability and the like.

Again, it really depends, but think about where you are in your software development life cycle. It's also important to declare where research can't help, right? Remember, the Agile principle is that you want to maximize the amount of work not done.

Don't run a study just to run a study. Don't run a study just to get some numbers. If you know you're going to launch something anyway, then instead of running a study to make yourself feel better, maybe run a study so you can give customer support some advance feedback: "Hey, here are some problems that are likely to come in through calls or tickets."

Do you have suggestions on how to recruit participants to test with? Especially within a B2B environment?

The answer is "it depends," and unfortunately, it often depends on your infrastructure. At a place like Salesforce, we have an infrastructure that includes recruiting. However, some teams don't have that. If you're at a smaller organization or a startup, or you're working on a very, very specialized area in the business-to-business world, you might have to go to social media. You might have to do a Craigslist recruit. You might need to reach out to your friends.

I think the short answer is to start local. Start with your people, right? Start with the folks on your team. They are good surrogates. If you have people on teams that aren't working directly on the products you work on, you can use them to start getting initial feedback. You don't want to design into an echo chamber, of course, so you want to reach out beyond that. But you can do family and friends recruiting. Chances are there's somebody in sales, tech support, or development at your company who has a friend working in a somewhat similar domain.

Use that to start building up that snowball and ask people, "Hey, would you be willing to provide recurring support?" You can set up a panel, maybe through email, maybe through a listserv, maybe through some kind of organization like UserTesting. If you can build up a group of people you can reach out to often, it's a great way to get feedback even when you don't have access to that 100%-on-the-mark user you really need.

Social media's great. Twitter, Facebook, etc. If you have tons of money, go to a professional recruiter, right? You can get them to build a panel, but I think you can do a lot in your own backyard to start getting some answers soon.

Do you run separate or parallel sprints for UX activities in Agile? If so, how many sprints ahead do you run UX sprints before the execution sprints?

I personally do not on our team. We're pretty product-focused. When we are looking at far-distant sprints, it's usually for more strategic types of questions. A lot of times we're working hand-in-hand with our product teams; sometimes we don't have the resources, so we're not able to support them. At a company the size of Salesforce, as at many other enterprise companies I know about, you often have many groups within research, some focused on near-term research and others focused on more distal research.

For example, we have groups here, and I think we're blessed, that are focusing on really strategic initiatives, like looking at the personas of our customers using actual data from pretty significant segmentation studies, where we look at how people from different walks of life as Salesforce users compare in terms of the functionality they report using and the needs they have. The teams doing that persona research may not necessarily be working on the sprint 0 or sprint N plus one work that we're working on, but we have the luxury of being able to take advantage of their strategic research to help inform the direction we're going.

In an ideal world, I would say it would be awesome to be working in parallel. I don't know of many companies that have the resources and the luxury to be able to do that.

Is it important to have different users for testing each time you test?

The short answer is to go back to that triangulation idea, right? You don't want to always be sampling from your echo chamber. I mean, we had an election here in the United States where you saw polls that diverged from reality; that's the only political statement I'm going to make in this webinar. I think it's important to make sure that you're always sampling some different audiences. However, those audiences need to represent the target you hope will ultimately be using, benefiting from, or purchasing your solutions. If you've got a group of, say, 20 customers that you can rely on for quick feedback, that's awesome. Use them. I have that here. I've even got customers that I'll email or even send a text to and say, "Hey, can you help?"

I think some feedback is better than no feedback, because when we design as designers, as scrum team members, as developers, as quality engineers, we have our own biases, and the decisions we make are laden with those hidden biases.

We're looking into democratizing UX research throughout our organization, empowering others to do small-scale usability testing. What advice do you have for techniques that would benefit these burgeoning researchers?

I think, "Bravo for doing that!" Again, I can't speak on behalf of Salesforce, but I know there are efforts to do similar things here, and as a Ph.D. researcher who teaches this stuff, I applaud it. I think research is meant to be a team game. I don't think research is supposed to be the province of people who live only in the ivory tower, although I do think there are benefits to having specialized researchers on your team.

In terms of democratizing research, I think perhaps the number one skill is understanding how to check your biases at the front door.

I work with a lot of designers who are really open. I admire how open they are when they go into these design critiques. It would terrify me if I were a designer to go in and have people shoot at my designs. But I think as researchers we need to be just as open to having people shoot holes in our research design, and even welcome criticism from non-researchers.

I think it's really important to understand how to check that bias problem, and it comes down to asking critical incident questions. It's not about a question that says, "Hey, how awesome would this feature be?" or, "How awesome would it be to be able to do this with this tool?" Instead, it's figuring out, "How can I ask that as a critical incident question, one that surfaces whether or not this person has had problems that could be solved by this awesome feature or function that I'm creating?"

Do you have any parting words of wisdom on how to dip your toes into Agile for the first time?

If I had to distill it down into a simple "go get them" type of message, I would say don't be deterred. There's a piece of wisdom we use here: if you see something that's not working, change it. I think that's really at the heart of Agile. If there's something that's not working, change it. Be open about it. Make mistakes. Make them boldly. Identify what's working. Have retros. Ask yourselves, "Okay, what do we wish we had done?" "Hey, I know we weren't able to do research. What would you have wished to know? From my perspective as a researcher or as a designer, maybe this is how we could approach it in the future."

Basically, Agile is about having conversations openly and embracing the idea of formative research: learning what's working and what's not working. If you stick to a rigid plan, you're just kind of reliving the mistakes of Waterfall. Embrace the squishiness and the protean, ever-changing nature of Agile and just run with it. The benefit of a two-week sprint is that if you're making mistakes, you'll be able to make some new ones in a week or two.
