When people talk, listen completely. Most people never listen. ― Ernest Hemingway
Here’s a hard question: If the tools to build digital experiences become easier and easier, how do you earn engagement and adoption in an increasingly noisy world? One way to answer this is with an empathy map.
At Tadpull, a NetSuite eCommerce software and services firm, we’ve found that the best thing you can do is step away from the keyboard and get out and talk to your users to build empathy for their needs and desires. While this certainly isn’t a new idea, finding a way to capture those conversations and turn them into tangible deliverables is where many teams fall down.
Somewhere between the inspiration and the code, the user’s emotional hooks are often garbled, lost, or even ignored.
But what if there was a way for those of us who have to research, design, code, and market digital experiences to have frameworks and tools to help us organize all the feedback into specific buckets? From these juicy bits, we could distill coherent goals that get to the heart of the logic and emotion of why users are willing to engage with our brand.
Capturing user feedback can often feel like drinking out of a firehose.
To handle this chaos, we use a framework called empathy mapping which comes from the field of design thinking or human-centered design.
Despite the fancy words, empathy maps are really just a simple tool for actively listening when you speak with users and quickly organizing that feedback in a four-quadrant system, categorizing what users say, think, feel, and do. Nielsen Norman Group sums it up nicely with their definition:
An empathy map is a collaborative visualization used to articulate what we know about a particular type of user. It externalizes knowledge about users in order to 1) create a shared understanding of user needs, and 2) aid in decision making.
Let’s say that we’re testing a new mobile app prototype for online banking targeting mature users. Now, we likely come to this design challenge with a host of assumptions and biases on what these people want and need.
But we also have a limited budget for creative and code, and we need to suss out the key features for our next coding sprint and hit the mark on our next release to build our user base and trust with key stakeholders.
Here’s how to do it.
Sit down with your team and identify one or two key questions you’d like to solve for right now. While everyone will likely have a different agenda, your goal should be to get everyone to vote on the areas in which they feel most strongly. These might not be obvious. In fact, they should be the murkiest (and thus carry the most risk for blowing up your budget or timeline).
For example, “What are the three things a user between the ages of 50 and 65 would want to do—with one hand—on our mobile banking app while they’re standing in line at a coffee shop?”
A real-life scenario might look something like this:
A user aged 55 with arthritis wants to shop our eCommerce store using their thumb and checkout for payment while standing in line for coffee.
Note: Stating the context in which this experience will occur is a subtle detail with big implications. A coffee line is filled with distractions and people jostling, but it’s often where most of us fit in our small tasks throughout the day, like shopping or banking.
Having a solid question with parameters will help you focus and ensure buy-in from everyone on the team.
Pro Tip: If you have a stakeholder with strong opinions and biases, be sure to get their buy-in at this stage on what the one crucial question needs to be. Come to this conversation with any analytics data you can find, such as shopping cart abandonment on mobile by age; it will instantly build your credibility. If done right, they will feel engaged in the process, and getting this one crucial step correct ensures everything else flows efficiently (and prevents you from having to redo the user interviews because they don’t trust your work).
If you’re using a research tool like UserTesting or running UX studies in person, it’s absolutely critical to have a tight script written out in advance. You want to think through:
Pro tip: Be extra explicit in your instructions here. If you were running a user test for the example above, you might want to assure users up front that you’re not judging their personal brand preferences.
Instead, remind them that you’re just testing ideas and want to learn what they feel is broken or could be easier. Reassure them that there are no wrong answers and you are highly interested in observing their experience at specific points in the journey.
Finally, a “we need your help making this better” is a great line to get them invested and participating in earnest.
We recommend running at least one or two pilot tests prior to the full study to eliminate waste. This can often be done offline with a friend or family member. Just make sure they match the demographics and technology savviness of your target market to ensure consistency. A Millennial will likely approach mobile shopping very differently than a Baby Boomer.
Conduct the pilot exactly as you would with any other participant, and remember to avoid bias and resist the urge to coach your participants. Wherever you hit friction is where your script is failing; make a note to refine it.
You’ll be surprised: parts of your test script that seem straightforward to you can be slightly confusing to a participant, and as a result they go down the wrong path with their feedback. You end up with garbage in, garbage out.
So be sure to find the kinks and blind spots and adjust your script accordingly. Always refer back to your one big question to make sure you are getting at the information you need to make product feature or design decisions.
Here’s where most people lose their steam with qualitative research in user testing. Quite a bit of work can go into Steps 1-3, and many people tend to think that once they’re done, they’ve checked the box for testing and they’re ready to dive into design or code. Yet, there’s a bit more work to be done to mine the data for key insights.
This can feel daunting, and you might be tempted to quickly flip to specific parts of your tests rather than take in the whole experience, or to focus on a particularly charismatic or passionate user whose great feedback skews the average.
But as Hemingway says, “Most people never listen.” However, it’s by listening to the feedback and synthesizing the data that you come up with insights that help you move the needle and prioritize resources.
Here is where the art of active listening comes in. Empathy mapping provides a framework for synthesizing the data after the interviews, and this is where your true signals lie: clusters of patterns between emotion and logic. Our goal is to separate the noise of “I don’t like that image of the product” from the real gems of “Oh, I love feature X and use a similar feature every day when shopping with another brand.”
To achieve these patterns, we need a systematic approach to organizing all the comments from our user interviews.
An empathy map walks you through a handy framework for organizing all your users’ feedback into four simple squares: say, think, do, and feel.
Before moving forward, take a moment to refresh your team on your objectives, and what outcomes would be considered a success. In our example, this might be something like this:
A user with arthritis can shop our store using their thumb, checkout on mobile and do it all while standing in line for coffee.
Grab a pack of Post-It notes and, off to the side of the canvas, assign each of your testers their own color. Each tester then has a corresponding color code on the map, which makes it much easier to attribute and understand who said what.
I’d also suggest printing out a large copy of the empathy map and hanging it up in a conference room or even against a window. You can get your own copy of Tadpull’s empathy map for user feedback here.
Get yourself and your team up and moving as you watch the tests and organize the feedback. It makes synthesizing the qualitative data a much more fun experience.
The four buckets of say, think, feel, and do mirror what users do when they go through an experience. This simple rubric becomes quite powerful because it gives you a system to organize all their behavior and comments in a highly visual way.
The “Say” quadrant is where we capture feedback like “I don’t like that product image.” Consider it a place to park tidbits from the user’s stream of consciousness as you observe them. Not everything has to go here, but if something strikes you as interesting, jot it down on a Post-It.
The “Think” quadrant tends to reveal a bit about a user’s beliefs and the logic with which they approach the experience. For example, say one tester has security concerns and doesn’t want to shop on mobile for fear of getting their credit card hacked. We’d capture “security issues for mobile payments.” That could be a key insight, especially if other testers express the same concern, and may be a clue as to whether this market is ready for such an experience. If enough users report it, it will show up physically as a cluster of Post-It notes on the map, and you can instantly see that it is something to address.
The “Do” quadrant focuses on how users experience the app or website directly. We might note how many times they attempted to log in, along with remarks like, “Ugh, I can never remember my password and hate resetting it on mobile.” Or they might try to check out as a guest and end up in the wrong part of the site, such as “Settings.”
For this quadrant, we’re just trying to capture their specific behaviors and actions impartially, without making any snap judgments. Make notes like “Guest checkout -> Settings -> user would bail at this point” to capture their actions. Bonus points for sketching these interactions in a wireframe to better communicate with developers and designers. A picture of the patterns you observe in what users do is worth a thousand words (and 10X in time savings).
The “Feel” quadrant is where the real listening happens, and the magic unfolds for designing a remarkable experience. Your goal is to read between the lines and empathize with how users’ emotions fluctuate throughout their time with your property. If the app surprises them with its ease of use or remarkable utility, you know you’re onto something. “Wow, I can add to cart and check out with just two taps and don’t have to put on my glasses to do it? I love this!”
Remember, people will buy on emotion and backfill with logic. The empathy map helps you zero in on this. No emotion = no logic = no adoption.
At the end of all the mapping, take a step back and observe the colors. Did one tester provide tremendous insights, as evidenced by a ton of lime-green Post-It notes plastered all over the map? If so, she might be someone you can rely on going forward as a power user.
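If your team ever wants a digital mirror of this analog exercise, the say/think/feel/do rubric and the per-tester color coding translate naturally into a small data structure. The sketch below is purely illustrative: the `EmpathyMap` class, tester names, and notes are all hypothetical, not part of any Tadpull or UserTesting tool.

```python
# A minimal digital sketch of the say/think/feel/do rubric described above.
# Tester names and notes are hypothetical examples.
from collections import Counter, defaultdict

QUADRANTS = ("say", "think", "feel", "do")

class EmpathyMap:
    def __init__(self):
        # quadrant -> list of (tester, note) pairs, like color-coded Post-Its
        self.notes = defaultdict(list)

    def add_note(self, quadrant, tester, note):
        if quadrant not in QUADRANTS:
            raise ValueError(f"unknown quadrant: {quadrant!r}")
        self.notes[quadrant].append((tester, note))

    def notes_per_tester(self):
        """Count notes by tester, e.g. to spot a prolific power user."""
        return Counter(t for pairs in self.notes.values() for t, _ in pairs)

emap = EmpathyMap()
emap.add_note("say", "Tester A", "I don't like that product image")
emap.add_note("think", "Tester A", "Worried about mobile payment security")
emap.add_note("feel", "Tester B", "Delighted by two-tap checkout")
print(emap.notes_per_tester())  # Counter({'Tester A': 2, 'Tester B': 1})
```

Counting notes per tester is the digital equivalent of scanning the map for a wall of lime-green Post-Its.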
Next, step away for a day or two and let all this percolate in your mind. Your subconscious mind will chew away on the feedback, and the rest will make distilling your key insights a bit easier.
Finally, return to the map and begin the distillation process. While every comment may not be relevant, you might notice a few trends where users are constantly saying, “I hate logging into mobile financial apps,” and you can begin to see where your experience could improve.
An emerging technique to efficiently gather additional insights is to seek out online reviews and add that feedback to your Empathy Map as well. At Tadpull, our data science team often will look for datasets that exist online around where users congregate and leave feedback. In eCommerce, this can include sources like onsite product reviews which we can collect and collate and begin classifying using techniques like sentiment analysis.
Here, we’re mining for the tone of words and setting up filters that match that specific brand and age group. We can assign a ranking factor of 5 for a word like “love” and -5 for a word like “difficult.” Running the text blocks through these filters helps us objectively gauge how a brand or even an individual product is perceived. Next, we’ll compare this to the brand’s messaging or value propositions in the marketing mix to see if they align and where the gaps occur.
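As a minimal sketch of this kind of lexicon-based scoring, here is what the ranking-factor idea might look like in code. The lexicon below borrows only the two weights mentioned above (+5 for “love,” -5 for “difficult”); every other word and weight is a made-up example, and a production system would use a fuller lexicon or a library such as VADER.

```python
# A minimal sketch of lexicon-based sentiment scoring.
# Only the "love" and "difficult" weights come from the text above;
# the rest of the lexicon is a hypothetical illustration.
import re

SENTIMENT_LEXICON = {
    "love": 5,
    "great": 3,
    "easy": 2,
    "difficult": -5,
    "confusing": -3,
    "hate": -5,
}

def sentiment_score(review: str) -> int:
    """Sum the weights of every lexicon word found in the review."""
    words = re.findall(r"[a-z']+", review.lower())
    return sum(SENTIMENT_LEXICON.get(word, 0) for word in words)

reviews = [
    "I love how easy checkout was!",
    "The login flow is difficult and confusing.",
]
for review in reviews:
    print(review, "->", sentiment_score(review))
```

Positive totals suggest the experience aligns with the brand’s messaging; negative totals point to the gaps worth investigating.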
While this does require a fairly sophisticated team to pull off, there is a great hack for non-coders to use free tools online and build a basic word cloud that shows the frequency of terms used. We’ve found key stakeholders love this type of information as it represents an easy way to visualize the voice of their customers.
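The counting step behind such a word cloud is straightforward; here is a rough sketch, with hypothetical feedback strings. Free online word-cloud tools do essentially this before sizing each word by its frequency.

```python
# A minimal sketch of the term-frequency counting behind a word cloud.
# The feedback strings and stopword list are hypothetical examples.
from collections import Counter
import re

STOPWORDS = {"the", "and", "a", "to", "is", "was", "i", "it", "of", "but"}

def term_frequencies(feedback: list[str]) -> Counter:
    """Count how often each non-stopword appears across all feedback."""
    counts = Counter()
    for entry in feedback:
        for word in re.findall(r"[a-z']+", entry.lower()):
            if word not in STOPWORDS:
                counts[word] += 1
    return counts

feedback = [
    "Checkout was easy and the checkout flow felt fast",
    "Easy navigation, but checkout asked too many questions",
]
print(term_frequencies(feedback).most_common(3))
```

The most common terms become the largest words in the cloud, giving stakeholders a quick visual of the voice of the customer.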
Here’s an example from the user feedback from our annual Mountains & Metrics eCommerce conference in Bozeman, Montana:
Note: One of our core values at Tadpull is empathy and designing remarkable experiences for people, so it’s great to see those show up as the dominant themes in the text reviews after the conference.
In another example, one of our clients in the luxury resort space has a ton of feedback on TripAdvisor (as do their competitors). We can grab all this data and pull it into a database for analysis on the frequency of the words used and corresponding tone.
From this dataset, patterns emerge on what users are saying about their experience at the resort in a quantitative way. For example, they might comment frequently on the check-in experience or seasonal recreation opportunities.
Similar to the analog way, we’re looking for clustering of signals for what our users say, and this can be a fun exercise to see how your qualitative user research lines up with the quantitative results. All this can then be fed back into the online and even offline experience design exercise for the next sprint.
At this point, you might be thinking “Wow, that’s a lot of work” and you are correct. But don’t let the options overwhelm you. We’ve found that after running as few as five tests, we have enough insight to answer 80% of the original question we posed at the start of the process with our stakeholder’s buy-in.
Picking the right question to focus your efforts is what makes this work though.
In an agile world, this is usually enough of a signal to run another sprint for improving the experience. It’s easy to get fixated on getting perfect clarity on an issue before iterating, but over time this approach has diminishing returns with a real cost to time and budget.
So instead, return to the key question you defined in step 1. In our case it was:
What are the three things a user aged 55 or older would want to do, with one hand, while shopping and standing in line at a coffee shop?
The empathy map will help you zero in on the following:
While talking with humans is often a messy and confusing experience, using this kind of methodology helps tremendously with active listening – especially if you’ve never been trained in qualitative research methods. Instead of sifting through hours of video or transcripts and dumping the findings into a PowerPoint deck, you actively create a visual representation that captures actionable data and reveals the key insights directly from the users themselves.
The beauty in this analog approach is you can visually see a cluster of comments on the empathy map from the Post-It notes and you know there is a strong signal for emotion or logic.
Also, synthesizing this data suddenly becomes something you’ll look forward to and that the team can enjoy together. You’ll find yourself speaking in terms of the users themselves. For example, “Tester X is nervous about mobile security. We need a way to assure him his data is safe when entering his credit card info into checkout.”
Teams advocate on behalf of the people they feel they have a personal connection with.
And that’s what great digital experiences are all about: designing and building technology with heart.
It’s at the core of today’s best brands, not to mention it makes your job infinitely easier.
Henry Ford famously said,
If I’d asked people what they wanted they would have said a faster horse.
Those who doubt human-centered design often believe that true innovation happens independently of users through a sole visionary founder like a Steve Jobs or Henry Ford.
They claim ordinary people cannot articulate what’s possible with technology, nor define a useful purpose for it within their daily lives. To them, spending time with users is a waste if we truly want to develop leapfrog innovations.
But if we rewind the clock to the experience Henry Ford was designing for, it comes down to one simple thing: users said they wanted to go faster. And while they might not have envisioned a Model T, they certainly knew the challenges of the horse and buggy and could speak to the pain points of that experience around speed.
The same is true for the iPhone. People carried an iPod, a GPS for driving, a laptop for computing tasks, and a clunky flip phone for calling or texting. Not to discredit Apple’s engineering and design prowess, but by observing and speaking with users, you can imagine how Apple quickly saw a hungry market for one device that could deliver all of the above with seamless functionality. The problem was thus clearly defined by users’ behaviors and habits. Users don’t care how we build these experiences. They just want to accomplish their goals.
Gathering feedback will only get you so far. Having empathy for your users and using that to guide your design decisions empowers you to truly listen to what your users want and need.
To learn how UserTesting can help you understand your customers through on-demand human insight, contact us here.