Episode 216 | March 30, 2026

UX research for AI: building trust in experiences

Explore UX research for AI and how to build trust in AI experiences with insights on usability, emotion, and human-centered design.

The truth about UX research for AI (it’s not just about usability)

For years, great user experience was defined by usability: could users complete tasks efficiently and without friction? But as AI becomes embedded in everyday products, that definition is expanding—and fast.

In a recent episode of Insights Unlocked, Priyanka Kuvalekar, senior UX researcher at Microsoft, explains why UX research for AI requires a fundamentally different lens. It’s no longer enough to ask whether something works. The real question is whether users believe in what it’s doing.

“Evaluating AI experiences requires going beyond traditional usability metrics,” Priyanka explains. “Task completion and satisfaction scores matter, but they don’t tell you whether someone trusts the AI.”

That shift—from usability to trust—is redefining how teams approach design, research, and product strategy.

The shift from usability to trust

Think of traditional UX like building a well-lit road. If the path is smooth, clearly marked, and easy to follow, users reach their destination without frustration. But AI introduces a new variable: unpredictability.

Now, it’s not just about whether the road is smooth—it’s about whether users trust where it’s taking them.

This is where building trust in AI experiences becomes critical. Trust isn’t a single metric; it’s a combination of factors:

  • Transparency in how the AI works
  • Consistency in outputs
  • Accuracy and reliability
  • Clear communication of limitations
  • A sense of user control

Priyanka emphasizes that trust is built when AI systems behave responsibly and predictably.

“Trust is built through an AI experience that is transparent, that keeps humans in the loop and in control,” she says. “It’s built when the AI drives accurate and consistent results without hallucinating.”

Without this foundation, even the most advanced AI features risk being ignored—or worse, abandoned.

How UX research methods are evolving for AI

As expectations change, so do research methods. Traditional usability testing still plays a role, but it’s no longer sufficient on its own.

Priyanka describes a more nuanced approach to UX research methods tailored for AI:

  • Evaluating behavioral signals, not just self-reported feedback
  • Tracking emotional responses like hesitation or frustration
  • Measuring how trust evolves over time within a session
  • Assessing AI outputs for tone, accuracy, and appropriateness

“One approach is behavioral coding of recordings,” she explains. “You go through interactions and identify moments of hesitation, confusion, frustration, and disengagement.”

This approach reflects a broader shift toward emotional UX research, where what users feel becomes just as important as what they do.

Instead of relying solely on surveys or ratings, researchers are analyzing real interactions—watching where users pause, question, or disengage. These micro-moments often reveal more than any post-task questionnaire.
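
To make behavioral coding concrete, here is a minimal sketch of how a researcher might tally coded moments once a session recording has been annotated. The event schema and code labels below are illustrative assumptions, not a scheme prescribed in the episode.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CodedEvent:
    """One annotated moment from a session recording (hypothetical schema)."""
    session_id: str
    timestamp_s: float  # seconds into the session
    code: str           # e.g. "hesitation", "confusion", "frustration", "disengagement"

def summarize_session(events: list[CodedEvent]) -> dict:
    """Tally coded moments and note when the first disengagement occurred."""
    counts = Counter(e.code for e in events)
    disengagements = [e.timestamp_s for e in events if e.code == "disengagement"]
    return {
        "counts": dict(counts),
        "first_disengagement_s": min(disengagements) if disengagements else None,
    }

# Example: a short session with two hesitations and one disengagement.
events = [
    CodedEvent("s1", 42.0, "hesitation"),
    CodedEvent("s1", 97.5, "hesitation"),
    CodedEvent("s1", 180.0, "disengagement"),
]
print(summarize_session(events))
```

Aggregated across participants, counts like these let teams compare how trust-related signals shift between design iterations, rather than leaning on a single post-task score.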

Why consistency and transparency matter more than ever

AI systems don’t just need to be correct—they need to be consistently correct. A single unexpected or incorrect response can erode trust quickly.

Priyanka highlights consistency as a key pillar of AI user experience:

“Inconsistency directly erodes trust,” she says. “You need to evaluate how the AI responds across different users, tasks, and sequences.”
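
One simple way to start putting a number on that is to rerun the same task or prompt several times and compare the outputs. The sketch below uses plain string similarity from Python's standard library as a rough proxy; it won't capture semantic consistency, but it can flag obvious outliers worth a closer look.

```python
import difflib
from itertools import combinations

def consistency_score(responses: list[str]) -> float:
    """Mean pairwise similarity of responses to the same prompt (1.0 = identical)."""
    pairs = list(combinations(responses, 2))
    if not pairs:
        return 1.0
    return sum(difflib.SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

# Hypothetical: the same question asked in three separate sessions.
responses = [
    "You can export the report from the Share menu.",
    "Use the Share menu to export your report.",
    "Reports can't be exported.",  # the kind of outlier that erodes trust
]
print(round(consistency_score(responses), 2))
```

In practice, a check like this would be paired with human review of the outlier responses rather than treated as a verdict on its own.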

This is especially important in areas like voice AI user experience, where interactions feel more human and expectations are higher. When users are speaking naturally, they expect the system to understand nuance—and respond appropriately every time.

Transparency also plays a major role. Users need to understand what the AI is doing, why it’s doing it, and when it might be wrong. This is where concepts like human-in-the-loop AI become essential—ensuring users remain in control rather than feeling replaced.

The role of accessibility and inclusive design in AI

AI has the potential to make digital experiences more accessible—but only if it’s designed with inclusivity in mind from the start.

Priyanka is a strong advocate for accessibility and inclusive design in AI, emphasizing the importance of recruiting diverse participants and testing across a wide range of user needs.

“I start from an inclusive recruit,” she explains. “Diverse participants in how they speak, the language they use, their interaction styles—and then evaluate how the AI responds to each.”

This approach ensures that AI systems don’t just work for a narrow audience but deliver value across different abilities, backgrounds, and contexts.

It also helps uncover edge cases that might otherwise go unnoticed—cases that can significantly impact trust and usability.

From tactical research to strategic impact

One of the biggest challenges researchers face is moving beyond validation and into strategy. In fast-moving product environments, it’s easy to fall into the role of simply testing what’s already been built.

But Priyanka argues that researchers play a critical role in shaping AI product design from the ground up.

“It’s more important than ever for researchers to provide that foundation of your AI innovation,” she says.

She emphasizes the importance of:

  • Building strong cross-functional partnerships
  • Participating in early-stage product discussions
  • Challenging assumptions with user data
  • Connecting insights to business outcomes

Interestingly, she also challenges the idea that tactical and strategic work are separate.

“Something as small as changing where your AI presence appears can have a huge strategic impact,” she notes, pointing to how even minor design decisions can influence transparency and trust.

Using AI to enhance research workflows

While researchers are evaluating AI, they’re also increasingly using it to improve their own workflows.

Priyanka describes how tools like Copilot and research analysis platforms are helping streamline tasks like:

  • Survey creation
  • Thematic analysis
  • Insight generation

These tools allow researchers to move faster—but not at the expense of human judgment.

“I think of it as humans partnering with AI,” she says. “The AI can help you generate themes, but you still bring your own judgment.”

This balance is key. AI can accelerate research workflows and strategies, but it can’t replace the contextual understanding that comes from direct interaction with users.
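
As a rough sketch of that partnership, the example below separates the AI's draft from the researcher's review. `propose_themes` is a stand-in for whatever model or research platform a team actually uses (it returns canned output here so the sketch stays runnable); the human review step is the part the episode emphasizes.

```python
def propose_themes(verbatims: list[str]) -> list[str]:
    """Stand-in for an AI call (e.g. a Copilot or research-platform feature)
    that drafts candidate themes from raw participant quotes."""
    # A real workflow would call a model here; canned output keeps the sketch self-contained.
    return ["Unclear AI explanations", "Desire for manual override", "Tone feels robotic"]

def review_themes(candidates: list[str]) -> list[str]:
    """Human-in-the-loop step: the researcher keeps, edits, or rejects each draft theme."""
    approved = []
    for theme in candidates:
        decision = input(f"Keep theme '{theme}'? [y / n / type a correction] ").strip()
        if decision.lower() == "y":
            approved.append(theme)
        elif decision.lower() not in ("n", ""):
            approved.append(decision)  # researcher typed a corrected theme
    return approved

verbatims = [
    "I wasn't sure why it suggested that change.",
    "I wanted a way to undo what the AI did.",
]
print(review_themes(propose_themes(verbatims)))
```

The design choice is deliberate: the AI never writes directly to the final theme list, and every item passes through a researcher's judgment first.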

Building AI fluency as a core skill

As AI becomes central to product development, researchers—and product teams more broadly—need to develop a new kind of literacy.

Priyanka calls this AI fluency: an understanding of how AI systems work, how they fail, and how to evaluate them effectively.

This includes familiarity with concepts like:

  • Hallucinations
  • Prompt design
  • Model evaluation
  • Bias and guardrails

“Being AI fluent cannot be an afterthought,” she says. “You want to be able to follow the conversation and ask the right questions.”

This knowledge empowers teams to make better decisions—and to build products that are both innovative and responsible.

The growing importance of product-market fit in AI

Another emerging theme is the importance of AI product-market fit. With so many AI-powered features being developed rapidly, not all of them deliver real value.

Priyanka frames this as a critical question for teams:

“Does your AI truly add value to a workflow? Is it transformative? And at minimum, is it trustworthy enough for users to actually want to use it?”

This perspective shifts the focus from novelty to impact. Just because something can be built doesn’t mean it should be.

Teams need to ensure that AI features solve meaningful problems—and do so in a way that users trust and understand.

Human judgment remains the differentiator

AI is changing how products are built, tested, and experienced—but it’s not replacing the need for human insight. If anything, it’s making it more important.

The most effective teams are those that combine the speed and scale of AI with the empathy and judgment of human researchers.

Priyanka puts it simply:

“It’s not about AI replacing what researchers do. It’s about humans partnering with AI to become more efficient and productive.”

That partnership is what enables teams to design experiences that are not only functional, but meaningful.

Designing for trust is designing for the future

As AI continues to evolve, the definition of a great user experience will continue to shift. Usability will always matter—but trust will be the deciding factor.

Designing for trust means:

  • Prioritizing transparency over complexity
  • Valuing consistency over novelty
  • Centering human needs over technical capabilities

It’s a shift that requires new methods, new mindsets, and a deeper commitment to understanding users.

And for researchers, it’s an opportunity to play a more strategic role than ever before.

As Priyanka reflects:

“It’s not just about building a great experience anymore—it’s about building something users genuinely trust.”
