
Insights on 2017 CX trends and 2018 predictions

January 18, 2018

In a recent webinar, Stephen Fleming-Prot, Senior UX Research Consultant at UserTesting, reviewed some of the CX trends we saw in 2017 and how they can inform your 2018 research plans. He discussed how different organizational teams are discovering more research options, working better together, and leveraging cutting-edge technologies to make customer experiences better than ever.

We had a great Q&A session with Stephen and included some of our favorite questions below. Enjoy!

Is there any specific research on AI or machine-learning testing that you can recommend?

There’s a lot of research going on. I was at a conference a couple of months ago, and there were a lot of AI topics. I think staying close to the professional organizations, like UXPA and the Human Factors and Ergonomics Society, is your best bet; those are going to have some great resources on the core research happening today. We also have a few articles on the UserTesting blog that you can check out, which link to additional resources.

All things being equal, how do moderated study costs compare to unmoderated studies?

The challenge is that the cost of a moderated study is really the time the moderator needs to commit to being there for each session, and each one has to be scheduled, too. Scheduling takes time. Then we have to find participants, and we’ve got to schedule those participants as well.

Think about the last time you tried to schedule a meeting. Coordinating with multiple individuals—including you, as the moderator—is a challenge under regular circumstances. And now you’ve got to do that with each participant, and then still run the session with each of those participants.

Unmoderated studies, on the other hand, don’t rely on the schedule of the moderator or the participants, so sessions can happen at any time, day or night, 24/7. They don’t have to be scheduled, and they can run at the same time.

When we run unmoderated sessions, for example, all of those participants are potentially taking the study at the same time. So, that allows us to get that study filled within a day, as opposed to having to schedule multiple sessions over the course of many days or even weeks.

One of the biggest benefits of unmoderated versus moderated is that you get to tighten up that scheduling timeline and reduce the bottleneck of, “Well, when do the participant’s schedule and the researcher’s schedule fit perfectly together in that one-hour period?”

Many companies rely mostly on big data to understand what their customers are actually doing. Isn’t qualitative research just nice to have?

That’s a great question, and it’s something we wrestle with all the time when we’re talking with our customers. We often get asked why we can’t just do more quantitative testing or add more participants.

Qualitative research is different. We’re gathering different kinds of information. With quantitative information, you get lots of data points, but you don’t necessarily understand the why. You just know that people are dropping off at this point on your site, for example. You know that people are signing up and then not renewing each month, or not renewing after six months. You know that lots of people are coming to your website but not providing their email address or signing up to get newsletters. You know those things are happening, but you don’t know why.

That’s where qualitative research comes into play. You get those insights into why, so that you aren’t just making educated guesses but really understanding what the problem might be, making changes, seeing whether those changes work, and repeating the process until they do. With that qualitative feedback, you have better insight into which changes to make to achieve the outcomes you want. Then your quantitative analysis, your big data, will tell you whether those changes were successful.

Is voice interaction something that a lot of companies are focusing on now? How do you feel we should study voice-based systems like Alexa?

Some businesses have more voice interaction with their customers than others. I would encourage you to look at the experiences that already exist out there. In the user experience field, we talk about the mental model: based on our past experience, what do we expect of the way the world, or our tools, will work? Google Home, Alexa, Cortana, and Siri, for example, have all been shaping the mental model people bring to voice interfaces and how they expect to interact with them.

As far as research goes, you can schedule live interviews remotely, bring folks into a lab, or even go on site visits with participants and observe that interaction firsthand.

From an unmoderated perspective, with a tool like UserTesting, participants on mobile devices can use their camera to record what’s going on in their environment. We can observe them interacting with whatever voice technology they have, like a smart speaker, a computer, or another device or interface. For example, UserTesting conducted a study before the holidays to learn how consumers use conversational interfaces like Google Home and Amazon Echo to help around the holidays.

Have you noticed an increase in companies testing their apps and websites? Is usability testing becoming a competitive tool worldwide now?

We have been seeing more customers think about user research beyond just that tight agile sprint of design and development. More and more customers are expanding out of the design-and-build phase and thinking about what to research beyond it.

As far as it being a competitive advantage, it shouldn’t be—it should just be a given. That would be like saying software development is a competitive advantage. Nope, that’s just a given. It’s just a given that you do software development. It’s just a given that you do usability research.

Doing that broader research, doing more discovery research, understanding the experiences participants are having with your live design once it’s released: those are the areas where we see more companies expanding, and where there are going to be more opportunities to learn early and fail early. Have your bad designs early, rather than spending all your time developing a bad design, releasing it, and only then realizing it’s a bad design.

Want to learn more?

If you’d like to learn more about how UserTesting can help you understand your customers through real-time human insights, contact us at support@usertesting.com.