How Autodesk creates better digital experiences with UserTesting – Q&A with User Experience Manager, Lisa Seaman

April 26, 2017

In a recent webinar, Lisa Seaman, User Experience Manager at Autodesk, shared how she tests and optimizes Autodesk’s websites with UserTesting. She discussed how her team uses the platform and why it has become such a crucial tool for their agile approach. We had a great Q&A session with Lisa and have included some of our favorite questions below, or you can watch the full webinar here. Enjoy!

What’s the best way to structure your questions to users?

I often start broad, having participants go through the task or flow, and then I ask more specific, pointed questions afterward. I don’t want to bias my participants in any way. Toward the end, after they’ve gone through a task, I might go back and ask specific things, like “Did you see this link?” or “What do you think this phrase means?”

It’s definitely a bit of an art and a science, and I always run a pilot test. Sometimes I’ll need to revise my test two or three times. I’d encourage you to run one test, then watch it. Were any of the questions confusing? Did you bias the participants? Do you need to add another question to really elicit the feedback you want? Revise it, run another one-person pilot, and once you have it set up the way you want, launch it for more participants.

Do you ever run tests where participants need specific knowledge of your products?

Yes, and when I do, I will have a screener question like, “Do you use 3D design software?” and then “What is the primary software product you use for your 3D design?” I will have some Autodesk products in there, but I’ll also throw in options that will kick people out: choices that people who really don’t use 3D software would pick. When they get into the test, I often make the first couple of questions “What is your role?” and “What software do you use?”

I’m trying to make them prove to me that they do use our software, or sometimes a competitor’s software. It doesn’t have to be our software, but I will make them prove to me in the first couple of tasks that they’re the person I’m looking for.

What do you tell the product manager about the time required for testing, and how do you fit that into your testing schedule?

Ideally, it’s nice to have a week or two. We don’t always have that much time. I work with my stakeholders ahead of time. We talk about what we want to learn, but I actually wait until I have the design to write the tasks.

I wouldn’t write the tasks until you have the design, because you don’t know what the links are, and you don’t want to bias your test takers by using words that are on the page. I wait until I have the page so I can write tasks that don’t bias the participants. At the shortest, maybe we have two or three days, but ideally it’s nice to have two weeks.

How do you decide when to conduct user testing and when to run a more quantitative A/B test, and which do you do first?

We run an A/B test when we want quantitative data. We can then add a user testing study on top of it to see if we learn something we hadn’t thought about.

Other times, we talk with our stakeholders and really have a conversation about what it is they want to learn. Are they trying to drive toward a success metric, or do they want feedback on “Does this work well? Is it easy? Is it confusing?” In that conversation, we determine the best method.

There are many tools in the toolbox. It might be A/B testing, user testing, or it might be a survey. Depending on the situation, we might pick one of those tools.
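On the quantitative side, here’s a minimal sketch, using hypothetical numbers rather than anything from Autodesk, of the kind of check an A/B test gives you: comparing conversion rates between two variants with a two-proportion z-test.

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test comparing conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)           # pooled conversion rate
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))         # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical traffic and conversion counts, purely for illustration:
p_a, p_b, z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2380)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z = {z:.2f}  p = {p:.4f}")
```

A significant result tells you which variant wins; the follow-up user testing study is what tells you why.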

Do you usually look for certain demographics, such as age, gender, and familiarity with technology, or are you using screener questions?

Sometimes, I don’t care who it is. Take the test where we were looking to document that a sign-in link wasn’t working: I just wanted anyone who had Internet Explorer 8. I didn’t care if they used AutoCAD.

For other tests, it matters. A colleague of mine was running one where she specifically wanted people who used 3D design software, but not Autodesk software. She wanted someone new to Autodesk software, so she set up her screener to screen out people who selected Autodesk software.

My secret sauce is in the screener. Make sure you’re adding options that people will choose and get kicked out. The screener isn’t just a list of all the things that qualify someone for the test; it’s also interspersed with things that will kick people out of the test.

Also, don’t make your screener too narrow; if it is, the test will take forever to fill and you won’t get any participants.
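As an illustration of that interspersing, here’s a minimal, hypothetical sketch in Python; the question, options, and verdicts are ours, not an actual Autodesk screener:

```python
# A minimal sketch of screener logic with disqualifying options mixed in
# among the qualifying ones. All options and verdicts are hypothetical.
SCREENER = {
    "question": "What is the primary software product you use for your 3D design?",
    "options": {
        "AutoCAD": "accept",             # qualifies for the test
        "Revit": "accept",
        "A competitor's 3D product": "accept",
        "Photoshop": "reject",           # a plausible pick for non-3D users
        "Microsoft Word": "reject",
        "I don't use 3D design software": "reject",
    },
}

def screen(answer: str) -> bool:
    """Return True if the participant qualifies; unknown answers reject."""
    return SCREENER["options"].get(answer) == "accept"

print(screen("AutoCAD"))          # True
print(screen("Microsoft Word"))   # False
```

The point of the reject options is that someone who doesn’t really use 3D software has plausible choices to pick, rather than being able to guess their way into the test.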

When something is underperforming on your site, how do you use UserTesting to figure out why?

Let’s say you have something that’s underperforming according to your web analytics data. If you have a flow going from A to B to C, can you see where people are dropping off? You’re just investigating how the feature works for people, and then you can watch and learn. Maybe you don’t know ahead of time exactly what you’re trying to prove; you’re just investigating and getting feedback from people who go through the task.
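To make that concrete, here’s a minimal sketch, with hypothetical step names and counts, of reading drop-off out of funnel data; the step with the biggest drop is the one worth pointing a user test at:

```python
# Spot the biggest drop-off in an A -> B -> C flow from analytics counts.
# Step names and numbers are hypothetical illustrations.
funnel = [("A: landing page", 10_000), ("B: sign-in", 6_200), ("C: download", 1_800)]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = next_count / count
    print(f"{step} -> {next_step}: {rate:.0%} continue, {1 - rate:.0%} drop off")
```

Analytics tells you where people drop off; watching participants go through that same step is what tells you why.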