4 methods for getting actionable feedback on non-functional prototypes

By Devin Harold | June 2, 2020

Imagine that you have a non-functional prototype, and you’re trying to decide your next best step during the product development process. You want to make customer-centric decisions, but you need to continue development quickly—while also focusing on areas that will have the most impact on your customers.

However, when running qualitative research on incomplete prototypes, people often ask customers what they want or prefer instead of observing their actual behavior. The problem with collecting feedback this way is that what customers say and what they do often differ.

You know the benefits of prototyping. But what’s the best way to get customer feedback on an incomplete prototype?

Customer feedback shines when you ask people about their understanding of an experience (or their expectations of specific features) in the context of completing a realistic task. While you may not be able to uncover usability issues (since you can’t watch customers interact with a completed prototype), you can still paint an adequate picture of their mental model of the current state of the system in question.

4 methods for testing non-functional prototypes

Below are four tried-and-true methods for evaluating an experience even if it’s only a single screen, or if the full task flow isn’t yet built out in the prototype. Follow these suggestions to ensure that you gather actionable insights to drive product development—and avoid relying too heavily on customer preference.

1. Information scent

If a prototype (or design) isn’t final, you can still test where customers would go to complete a given task. Begin the session by informing them that the prototype they’ll interact with won’t be fully functional, and that you want to understand what they’d expect to happen if it were.

When administering tasks, do as you normally would for any usability test and ask plenty of ‘what would you expect’ or ‘where might this take you’ questions. You’ll notice that users focus heavily on the headlines, content, and hierarchy provided within the incomplete prototype to ascertain where they might carry out the task. This is how you evaluate the information scent provided by the UI.

When the scent of information is strong and aligns with expectations, users feel more confident that they’re on the right track. It’s important to treat the task wording the same as you would for any other usability test, because the end goal is ultimately the same. You wouldn’t want to ask “where would you click” for every task, because clicking might not be their first reaction.

By doing this, you’re building a representation of your customers’ mental model based on the information and keywords provided by the incomplete prototype. When the conceptual model matches the customer’s mental model, you can be more confident that they’ll be able to complete tasks once the entire flow is built out.

2. Affordances

If a particular system, widget, or UI isn’t fully functional, you can still learn whether customers understand what functions are available to them. This is similar to evaluating information scent; however, it’s specific to the functionality of individual components (like a carousel, car configurator, or another complex interface). In other words, you’re not evaluating entry points into a flow; you’re identifying whether the intended functionality matches the perceived functionality, based on the visual indicators provided by the UI.

You can do this by asking customers to complete a task using the experience as they normally would, but asking them to explain instead of do (normally a no-no in traditional usability testing).

For example, you might ask questions like these:

  • Based on what you’re seeing, how would you expect to ____?

  • What might happen if you clicked ____?

This works especially well if the UI you’re evaluating has varying states, such as an on and off state. You might also ask what they’d expect to change between these states. If customers can’t perceive how a specific component or experience might work—even hypothetically—chances are they’ll have a harder time in the wild, or while completing a larger, more complex task.

3. Counterbalancing

If you have two or three early concepts to get in front of customers but don’t have the time to build each of them out into fully functional flows, you can still get great feedback on customer comprehension and expectations using a within-subjects design.

The challenge is this: how do you get great feedback without creating favoritism or bias based on the order in which the concepts are shown?

Avoid order bias by creating a Latin square, which lets you re-order the conditions for each customer tested so you can uncover true preference. If the concepts are different enough and your tasks are consistent, then with just six participants you can see which prototype caused the most confusion and which was preferred the most—and why. You might even find that different parts of each prototype work well for customers in different ways, resulting in a single (Frankenstein) direction. Be sure to bake in enough relevant tasks (versus just preference questions) to be able to effectively compare your different concepts.
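To make the counterbalancing concrete, here’s a minimal sketch in Python of one way to generate and assign Latin square orders. The concept labels and participant IDs are hypothetical placeholders, and it uses a simple rotation; if you’re worried about carryover effects between adjacent concepts, you’d want a balanced (Williams) square instead.

```python
from itertools import cycle

def latin_square_orders(concepts):
    """Build a Latin square of presentation orders: each concept
    appears exactly once in every position across the set of orders."""
    n = len(concepts)
    # Row i is the concept list rotated left by i positions.
    return [[concepts[(i + j) % n] for j in range(n)] for i in range(n)]

# Hypothetical concepts and participants for a three-concept study.
concepts = ["Concept A", "Concept B", "Concept C"]
orders = latin_square_orders(concepts)

# Cycle through the square so six participants cover each order twice.
participants = [f"P{i + 1}" for i in range(6)]
for participant, order in zip(participants, cycle(orders)):
    print(participant, "->", ", ".join(order))
```

With three concepts and six participants, each of the three orders is seen by exactly two people, so no concept benefits from always being shown first.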

4. Assess desirability 

At the end of the day, sometimes we just want to know what our customers think of the visual aesthetics of a UI or design system, and we may, for the moment, care less about usability or functionality.

Instead of asking customers directly what they want, follow a more prescriptive approach that will get you answers that align more closely with your brand values or design system goals. One approach is Microsoft’s Desirability Toolkit. This method assesses visual aesthetics in a structured way: you assign various attributes to an experience or visual design—choosing positive, negative, and neutral adjectives—then arrange them randomly within a table.

You’ll want a good mix so that you’re not biasing the data with 80% ‘good’ words and only 20% ‘bad’ words to choose from. Then ask the customer to pick, based on their personal opinion, the top five adjectives from the table that they feel best represent the design. It’s great to follow up on why they chose those adjectives, which opens up a rich conversation.
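As an illustration, here’s a minimal sketch in Python of how you might build the randomized adjective table and tally selections across participants. The word lists below are hypothetical stand-ins, not the actual toolkit vocabulary (the real toolkit uses a much larger set).

```python
import random
from collections import Counter

# Hypothetical word bank: a deliberate mix of positive, negative,
# and neutral adjectives.
POSITIVE = ["clean", "inviting", "trustworthy", "fresh", "professional"]
NEGATIVE = ["busy", "dated", "confusing", "cold", "intimidating"]
NEUTRAL = ["simple", "familiar", "ordinary", "bold", "minimal"]

def build_adjective_table(columns=5, seed=None):
    """Shuffle the full word bank into a grid so the layout gives no
    hint of which words are 'positive' or 'negative'."""
    rng = random.Random(seed)
    words = POSITIVE + NEGATIVE + NEUTRAL
    rng.shuffle(words)
    return [words[i:i + columns] for i in range(0, len(words), columns)]

def tally_selections(all_picks):
    """Count how often each adjective was chosen across participants."""
    return Counter(word for picks in all_picks for word in picks)

# Example: three participants each picked their top five adjectives.
picks = [
    ["clean", "simple", "fresh", "familiar", "professional"],
    ["busy", "clean", "bold", "confusing", "simple"],
    ["clean", "inviting", "simple", "fresh", "minimal"],
]
for word, count in tally_selections(picks).most_common(5):
    print(f"{word}: {count}")
```

The tallies tell you which adjectives dominate; the follow-up conversation tells you why, which is where the real insight lives.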

By using this method, you're taking a more rigorous approach to assessing what customers like about a product's visual interface, how it stacks up against your brand promise, and why they feel the way they do. This method would be great for benchmarking brand perception over time, especially when building out visual design systems. 

Get actionable feedback from non-functional prototypes 

If you think you can’t get actionable customer feedback from a non-functional prototype, think again. While this isn’t usability testing and certainly doesn’t replace it, you can still gain valuable insights, building the plane as you fly it.

It goes without saying that what people say and what they do often differ, so it’s still important to find time for usability testing in your customer feedback system. But by testing early and often—even when it seems you’re not ready—you can still gather the insights needed to accurately evaluate a design’s effectiveness without simply asking customers whether they like it.


About the author
Devin Harold

Devin is a Senior Manager of UX Research at Verizon, where he works closely with design teams and business leads to program, field, and synthesize consumer experience research to understand the potential pains and gains of new products and services.