How AI user research fuels purpose-built products

Posted on June 4, 2025
5 min read

According to a recent UserTesting study, more than 75% of AI chatbot users increased their usage over the past three months, including those who barely used AI before. That’s a clear sign that generative AI is no longer a novelty but a regular part of our daily lives.

But here’s the problem: As excitement builds, so does the pressure to “add AI” to every product. Teams are scrambling to launch AI features not because users are asking for them—but because leadership, competitors, or media headlines say they should.

The result? Features launched without context. AI enhancements bolted onto products without clarity. And worst of all, users confused by tools they don't want or understand.

In an Insights Unlocked episode, Dr. Sam Howard, former Head of UX at Curio, shared his perspective on how to manage this balance between what AI can do and what users need. 

“When exciting technologies come around, there’s a temptation to focus on what’s possible instead of what’s needed,” he said. “That’s why embedding user insights from the outset is critical.”

The pressure is real and understandable

It’s easy to understand why AI integration feels urgent. The adoption of AI is happening at breakneck speed.

GPT-style chatbots, AI-powered writing tools, and automated workflows are no longer cutting-edge, but the norm. 

Leadership teams want to signal innovation. Investors want to hear “AI” in your pitch deck. And customers? They’re curious and cautious in equal measure.

This mounting pressure can push teams into reactive mode: spinning up a chatbot in weeks, integrating AI into search bars, or generating AI summaries “just in case” it boosts engagement.

But there’s a catch: AI features are only valuable if they’re user-aligned.

When teams move too fast without understanding real user needs, they risk building features that users don’t adopt, wasting valuable design and engineering time, and creating tools that require later rework or removal. Even worse, misaligned AI features can undermine trust, especially if they generate irrelevant or incorrect outputs that confuse or frustrate users.

In other words, the pressure is real. But so is the cost of moving too fast.

Recognizing the AI feature trap

So, what does the AI feature trap actually look like?

Often, wanting to adopt AI starts with good intentions but no clear direction. It can be difficult to filter out market noise: A new AI tool is trending. A competitor just launched something similar. You’re told to “just test it out.”

Suddenly, an AI feature is live. But it doesn’t quite fit the flow of your product. It’s hard to find, harder to trust, and doesn’t solve a specific problem.

Too often, these AI features are built into existing products or flows with the tech in mind first, rather than the user experience. 

When there’s no clear connection between what users need and what AI is delivering, it leads to friction. Users ignore the feature, abandon tasks midway, or seek alternatives that feel more intuitive and trustworthy.

Common warning signs include:

  • No clear customer pain point is being addressed
  • No user research or discovery was conducted pre-launch
  • The AI feature feels disconnected from the product’s core value
  • You’re adding it because “we need to do something with AI,” not because your audience asked for it

Take chatbots, for example. Many companies rushed to add AI chat interfaces to their sites—but without understanding how users actually wanted to engage. The result? Confused users, low adoption, and a return to traditional support workflows.

That’s not innovation. That’s noise.

How AI user research focuses on real problems

Here’s the fix: get back to basics. Ask yourself: what job is the user trying to get done?

The most successful integrations start with a clear, validated problem, surfaced through some level of AI user research. From a UX perspective, teams should ask themselves: 

  1. Is the onboarding process too long and manual?
  2. Are users struggling to find answers in their help center?
  3. Is content creation or data analysis slowing users down?

These are the kinds of challenges where generative AI can shine, especially when the task is content-heavy, repetitive, or complex enough to benefit from automation or summarization.

Take WestJet, for example. The Canadian airline was developing a voice-driven AI assistant, “Ask WestJet,” to improve the traveler experience. 

Using UserTesting, the team uncovered key friction points in the customer journey and discovered how people phrased questions across different platforms, like Alexa and Google. 

They found that travelers often received inaccurate information about baggage fees and restrictions. So they responded with a tailored voice assistant experience, including a baggage size calculator for Google Voice. 

The result? Fewer complaints, more accurate responses, and a boost in brand trust.

The WestJet story is a prime example of an AI feature built with intent and with the customer journey at its core, one that actually solved travelers’ problems.

Designing AI experiences with intention

Once you’ve identified a validated problem, it’s time to design the solution intentionally.

This means considering not just what AI will do, but how it will behave.

Teams can start by asking themselves:

  • What does the user expect AI to do in this situation?
  • Will it increase speed, confidence, or satisfaction?
  • What should its tone and personality feel like—friendly? Formal? Expert-level?

Small experiments go a long way here. Rather than launching a full-featured AI tool, start with low-stakes prototypes or microtests. 

Let users interact with early versions, even if it’s just a clickable mockup or a Wizard of Oz test, where users believe they’re interacting with a fully functioning AI system while, behind the scenes, a human is actually performing the AI’s tasks.
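
To make the Wizard of Oz idea concrete, here is a minimal sketch of what such a prototype harness might look like in Python. Everything here is illustrative: the class name, the `operator` callback, and the canned reply are assumptions, not part of any real product. The point is simply that the "AI" front end routes each user message to a hidden human, so the team can study the experience before any model exists.

```python
class WizardOfOzPrototype:
    """A fake 'AI assistant' front end for early user testing.

    User messages are routed to a hidden human operator instead of a
    model, so teams can observe how people react to the experience
    before investing in real AI engineering.
    """

    def __init__(self, operator):
        # `operator` is whoever plays the AI. In a live session this
        # would be a function that prompts a human in another window;
        # any callable taking a message and returning a reply works.
        self.operator = operator
        self.transcript = []  # (role, message) pairs for later analysis

    def ask(self, user_message):
        self.transcript.append(("user", user_message))
        reply = self.operator(user_message)  # the human answers here
        self.transcript.append(("assistant", reply))
        return reply


# In a live test the operator might be:
#   operator=lambda msg: input(f"[operator] user asked {msg!r}: ")
# For illustration, stub in a canned operator instead:
bot = WizardOfOzPrototype(operator=lambda msg: f"Sure, here's help with: {msg}")
print(bot.ask("How do I reset my password?"))
```

The transcript gives the research team a record of exactly what users asked and how they phrased it, which is the kind of qualitative signal the next paragraph is about.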

For example, if you’re building an AI-powered writing assistant, test its first draft generation in a sandbox environment. Watch how users react. Do they understand what the tool is doing? Are they more efficient or more confused?

At this stage, qualitative feedback is critical. It helps teams adjust tone, flow, and expectations before any code is written.

Designing with intention ensures your AI feels like a helpful assistant, not an intrusive gimmick.

Build AI features that actually matter

AI doesn’t need to be everywhere. It just needs to be useful.

When teams prioritize purposeful AI user research, pausing to test, listen, and iterate, they avoid the AI feature trap and build solutions that actually improve the user experience.

Because at the end of the day, no one wants “more AI.” They want fewer problems. 

As Curio’s Sam Howard expressed it, “AI is the method, not the selling point. The user cares about the result—how the product makes their life better—not the technology behind it.”
