
The responsible path to AI-accelerated customer insights

UserTesting’s point of view on introducing AI-powered features across its suite
This document outlines UserTesting’s guiding principles for introducing AI-powered capabilities across our platform. As the technology evolves, so will our approach — but our commitment to transparency, accountability, and human-centered insight remains constant.
Context and Definitions
AI is advancing quickly, and much of the conversation around AI in research is either overly abstract or overly absolute. In practice, teams are navigating real tradeoffs: where AI meaningfully accelerates work, where it introduces risk, and where human judgment remains essential. UserTesting’s point of view is shaped by direct experience building, testing, and deploying AI-powered capabilities in real research workflows — and by observing where these approaches succeed, and where they fall short.
Throughout this document, we use several terms with specific intent.
- AI moderators refer to AI systems that conduct or assist research sessions by asking questions and following up based on participant responses.
- Synthetic feedback refers to aggregated, simulated insights projected from prior real human data.
- Agentic workflows describe AI-initiated actions (such as suggesting or preparing studies) that operate within human-defined goals and controls.
- Responsible AI means systems that are transparent, inspectable, and designed so humans remain accountable for decisions.
These distinctions matter. In our experience, the biggest risk with AI in research is not that it produces no answer, but that it produces a confident, plausible answer that teams accept without inspection. This POV is designed to help teams move faster with AI while avoiding false confidence, preserving trust, and ensuring that customer insight remains grounded in real human experience.
1. How UserTesting incorporates AI across the research lifecycle
UserTesting’s point of view is grounded in the understanding that AI is most valuable when it reduces effort and increases signal, without removing human accountability.
Our guiding principle is that AI can accelerate interpretation, but humans should own meaning, impact, and decisions. In practice, this means AI is used to surface patterns, draft hypotheses, and reduce manual effort — while humans remain accountable for interpreting results, weighing tradeoffs, and deciding what actions to take.
AI and human roles will not always be mutually exclusive, and capabilities will shift over time. This division of labor reflects where primary accountability should sit today, not a permanent boundary.

2. Trust, transparency, and responsible AI
At UserTesting, responsible and ethical AI usage is foundational.
UserTesting designs AI systems to be:
- Inspectable: AI-generated outputs link back to underlying evidence (video, audio, transcripts, behavioral data)
- Transparent: Simulated or AI-generated outputs are clearly labeled
- Accountable: Humans retain ownership of decisions
- Non-deceptive: AI outputs are never presented in ways that could be mistaken for real human responses
This “trust but verify” approach is a core differentiator.
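To make these principles concrete, the sketch below shows one way an inspectable, clearly labeled insight record could be structured. All class and field names (EvidenceLink, Insight, simulated) are hypothetical simplifications for illustration, not UserTesting's actual schema; the point is that every non-simulated claim must carry links back to real session evidence.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class EvidenceLink:
    """Pointer from an AI-generated claim back to its source material."""
    session_id: str
    media_type: str      # "video", "audio", "transcript", or "behavioral"
    timestamp_s: float   # where in the session the evidence occurs

@dataclass
class Insight:
    summary: str
    simulated: bool                        # transparent labeling of AI-only output
    evidence: list[EvidenceLink] = field(default_factory=list)

def render(insight: Insight) -> str:
    """Refuse to present an unsupported claim as if it were human evidence."""
    if not insight.simulated and not insight.evidence:
        raise ValueError("non-simulated insights must cite real session evidence")
    label = "[SIMULATED] " if insight.simulated else ""
    cites = ", ".join(f"{e.session_id}@{e.timestamp_s:.0f}s" for e in insight.evidence)
    return f"{label}{insight.summary}" + (f" (evidence: {cites})" if cites else "")

print(render(Insight(
    summary="Participants hesitated on the pricing page",
    simulated=False,
    evidence=[EvidenceLink("sess-042", "video", 187.0)],
)))
```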
3. UserTesting’s position on AI-assisted research
AI-assisted research includes multiple distinct concepts, which we address separately.
A. AI moderators (AI-assisted data collection)
AI moderators represent a significant advancement in scaling qualitative research. They enable teams to conduct many more interviews in a fraction of the time — bringing elements of “qual at quant scale” to modern research workflows. In structured contexts, AI moderators deliver consistency, comprehensive coverage, and speed that would be impractical with human moderation alone.
AI moderators can be highly effective for:
- Structured follow-ups
- Maintaining consistency across large numbers of sessions
- Scaling coverage rapidly
- Assisting human moderators in real time
- Extending the depth and adaptability of survey-style research
Their effectiveness depends on context. In our research and product development work, AI moderators consistently perform well on structured tasks and scripted probes, enabling high-quality data collection at scale. At the same time, they are still developing the ability to fully interpret tone, hesitation, and emotional nuance—areas where experienced human moderators remain especially strong and where breakthrough insight often emerges.
Today, AI moderators are best suited for structured, low-to-moderate risk research where speed, scale, and consistency are priorities. For high-stakes decisions, deep exploratory research, or emotionally nuanced work, human moderation continues to play a critical role.
We expect AI moderation capabilities to advance rapidly. By combining UserTesting’s multimodal dataset and domain expertise with emerging AI systems, we see a clear path toward increasingly capable AI moderators that can support deeper exploration over time.
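To illustrate the structured follow-up pattern described above, here is a minimal sketch of a moderation loop with a human-defined cap on AI-initiated probing. The generate_follow_up function stands in for an LLM call, and the toy heuristic inside it is invented for the example; none of this describes our production moderator.

```python
MAX_FOLLOW_UPS = 2  # human-defined control: bound AI-initiated probing

SCRIPT = [
    "Walk me through how you would find pricing information on this page.",
    "What, if anything, was confusing about the checkout flow?",
]

def generate_follow_up(question: str, answer: str) -> str | None:
    """Stand-in for an LLM call that drafts one clarifying probe,
    or returns None when the answer needs no clarification."""
    if len(answer.split()) < 5:  # toy heuristic: probe short answers
        return f"Could you say more? You mentioned: '{answer}'"
    return None

def moderate(ask) -> list[tuple[str, str]]:
    """Run the scripted interview, letting the AI add bounded follow-ups."""
    transcript = []
    for question in SCRIPT:
        answer = ask(question)
        transcript.append((question, answer))
        for _ in range(MAX_FOLLOW_UPS):
            probe = generate_follow_up(question, answer)
            if probe is None:
                break
            answer = ask(probe)
            transcript.append((probe, answer))
    return transcript

# Example: drive the session from the console.
# transcript = moderate(input)
```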
B. Synthetic feedback (projected, aggregated insights from AI-generated responders)
AI-generated or “synthetic” feedback is an emerging area of innovation in market research. Adoption is already underway in some segments of the industry, particularly in marketing and brand research.
In UX and product research, the technology and best practices are still evolving. UserTesting is actively studying this space and engaging with customers to understand where synthetic approaches create real value—and where they introduce risk.
We expect synthetic feedback to become part of modern research workflows in various combinations with human insight. The future will likely involve flexible sequences and blends of AI-generated and real participant feedback, depending on the use case, risk level, and stage of development.
Where synthetic research can add value
When used appropriately, synthetic feedback can help teams:
- Explore early-stage concepts
- Stress-test assumptions
- Identify likely questions or objections
- Refine research design before launching live studies
- Facilitate internal discussion and iteration
At the same time, synthetic outputs do not replace the depth, emotional nuance, or lived experience that comes from real human participants—particularly in high-stakes or exploratory UX work.
UserTesting’s approach to synthetic feedback
As we develop capabilities in this area, our goals are to:
- Introduce synthetic feedback responsibly and transparently
- Ensure customers understand what is simulated and what is based on real participant data
- Enable directional learning without overstating confidence
- Pair AI speed with human validation where appropriate
Synthetic approaches may take multiple forms over time, including aggregated projections, persona-informed simulations, or other emerging models. We are exploring these thoughtfully and in collaboration with customers.
Doing this well requires high-quality, differentiated data and domain context.
UserTesting brings:
- Rich multimodal data (video, audio, behavioral interaction, transcripts)
- Years of UX-specific insight patterns
- Strong controls around quality, fraud prevention, and participant integrity
This foundation positions us to develop synthetic capabilities that are grounded in real-world human behavior, not abstract generalizations. We believe this dataset is the largest of its kind, giving us a strong basis for building accurate models for synthetic feedback.
As we introduce synthetic capabilities, we will:
- Clearly label simulated outputs
- Provide transparency into methodology and assumptions
- Avoid presenting AI-generated responses as indistinguishable from real participant feedback
- Support customers in determining when human validation is needed
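As a sketch of what "clearly labeled" could mean in code, the example below attaches a permanent simulated flag and a methodology note to every projected response. The SyntheticResponse shape and the persona-based simulate_feedback stub are hypothetical simplifications, not a description of our product.

```python
from dataclasses import dataclass
import datetime

@dataclass
class SyntheticResponse:
    persona: str
    text: str
    simulated: bool      # always True; never mistakable for a real participant
    methodology: str     # transparency into how the projection was produced
    generated_at: str

def simulate_feedback(prompt: str, personas: list[str]) -> list[SyntheticResponse]:
    """Stand-in for a model call that projects likely reactions per persona."""
    return [
        SyntheticResponse(
            persona=p,
            text=f"[model-projected reaction of '{p}' to: {prompt}]",
            simulated=True,
            methodology="aggregated projection from prior real sessions (illustrative)",
            generated_at=datetime.datetime.now(datetime.timezone.utc).isoformat(),
        )
        for p in personas
    ]

for r in simulate_feedback("new onboarding concept", ["first-time buyer", "power user"]):
    print(f"[SIMULATED] {r.persona}: {r.text}")
```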
This is an evolving space. We are investing, learning, and building with intention—ensuring that AI-generated feedback enhances, rather than undermines, trust in customer insight.

4. How AI advances the core pillars of the UserTesting Suite
UserTesting’s AI strategy is not a separate product track. It is embedded across the core pillars of the UserTesting suite, strengthening each pillar while preserving our commitment to real human insight.
Our high-level goal remains consistent:
Combine customer perspective with AI speed so teams can make smarter decisions, faster.
Below is an overview of how AI capabilities—both existing and in development—advance each pillar.
Pillar 1: An unmatched global panel
UserTesting’s global panel provides rapid access to high-quality participants—from broad consumer perspectives to highly specialized expertise. This human foundation ensures that all subsequent insights are grounded in genuine behavior.
AI plays a critical role in protecting and enhancing this foundation.
Existing AI capabilities
Recruitment and Fraud Detection (UserTesting Verified™)
UserTesting Verified™ applies machine learning, behavioral signals, and geolocation intelligence to detect and prevent fraudulent participation. This dual-layer system:
- Detects VPN use with 40% greater effectiveness
- Identifies suspicious behavioral patterns
- Flags anomalous activity for review by our Trust & Safety experts
- Conducts verification checks both before and during sessions
This hybrid model—automation combined with human oversight—ensures only high-integrity participants enter the feedback funnel. It has contributed to an industry-leading 4.1% moderated session no-show rate, helping enterprise teams move faster with confidence.
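A simplified sketch of this triage pattern follows. The signal names, weights, and thresholds are invented for illustration; a real system would learn them from labeled data. The key idea is the middle band, where automation flags and humans decide.

```python
# Hypothetical signal weights; a production system would learn these.
WEIGHTS = {"vpn_detected": 0.45, "geo_mismatch": 0.30, "rushed_responses": 0.25}

AUTO_REJECT = 0.8   # block outright
HUMAN_REVIEW = 0.4  # route to Trust & Safety reviewers

def risk_score(signals: dict[str, bool]) -> float:
    """Combine boolean risk signals into a score between 0 and 1."""
    return sum(w for name, w in WEIGHTS.items() if signals.get(name))

def triage(signals: dict[str, bool]) -> str:
    score = risk_score(signals)
    if score >= AUTO_REJECT:
        return "reject"
    if score >= HUMAN_REVIEW:
        return "human_review"  # automation flags, humans decide
    return "admit"

print(triage({"vpn_detected": True, "geo_mismatch": True}))  # human_review (0.75)
print(triage({"vpn_detected": True, "geo_mismatch": True,
              "rushed_responses": True}))                    # reject (1.0)
```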
Capabilities in development and exploration
We are continuing to invest in AI-powered quality and compliance systems, including:
- Behind-the-scenes models that flag low-quality or rushed sessions
- Improved participant-test matching to ensure relevance and fit
- Expanded behavioral anomaly detection to maintain panel authenticity at scale
AI in this pillar is designed not to replace human insight, but to safeguard it—ensuring that the speed of access does not compromise the integrity of feedback.
Pillar 2: Purpose-built feedback solutions
Purpose-built feedback solutions elevate how product, design, marketing, research, and CX teams design customer studies—ensuring the right participants, the right approach, and actionable insights every time.
AI strengthens this pillar by reducing friction across the research lifecycle—planning, conducting, and analyzing studies—while keeping human judgment at the center.
Existing AI capabilities
Generative AI features (optional, user-controlled)
- AI Insight Summary (LLM-based synthesis)
Automatically summarizes key learnings and themes from tests, surveys, and interviews—what people did, said, and where they struggled. Every AI-generated insight links directly to the exact video moment for verification, reinforcing our “trust, but click” philosophy.
- Insights Discovery
The Insights Discovery feature analyzes think-out-loud videos and surveys to uncover patterns and themes across multiple datasets. Researchers can customize the data scope—zooming in on a specific study or expanding across studies—and ask natural language questions to receive clear, actionable answers with linked citations to video timestamps and survey themes.
Foundational platform intelligence (always-on system models)
- Sentiment Analysis and Smart Tags
Machine learning models tag positive and negative feedback, moments of confusion or frustration, and key behaviors and quotes. These capabilities are designed to remove analysis bottlenecks so teams can run more tests and surface more insights without increasing manual workload.
Together, these capabilities reduce the work it takes to synthesize findings and make insights easier to act on.
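For intuition, here is a deliberately tiny sketch of utterance tagging. Real Smart Tags rely on trained machine learning models; the keyword rules below are a toy stand-in so the input and output shapes are visible.

```python
# Toy keyword cues standing in for trained sentiment/behavior models.
TAG_RULES = {
    "confusion": ("confused", "not sure", "where do i"),
    "frustration": ("annoying", "frustrating"),
    "positive": ("love", "easy", "great"),
}

def smart_tags(utterance: str) -> list[str]:
    """Tag one transcript utterance with zero or more smart tags."""
    text = utterance.lower()
    return [tag for tag, cues in TAG_RULES.items() if any(c in text for c in cues)]

transcript = [
    "I'm not sure where do I click to compare plans.",
    "Oh, that was easy once I found the menu.",
]
for line in transcript:
    print(smart_tags(line), "-", line)  # ['confusion'] then ['positive']
```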
Capabilities in development and exploration
We are building toward a more intelligent, guided research experience that feels like a built-in research assistant.
This includes:
- AI-assisted test creation and audience selection
- Suggestions to improve test plans and screener questions
- Guided workflows that support researchers step-by-step
- Automation of repetitive tasks such as formatting reports and organizing insights
We are also exploring AI-assisted moderation and synthetic feedback capabilities.
AI Moderators
AI moderators may run interview sessions autonomously in certain contexts, ask structured follow-up questions, monitor session quality, and support richer data collection during real human sessions.
Synthetic Feedback
Synthetic feedback can provide first-pass, early concept exploration—helping teams spot obvious issues, stress-test assumptions, and refine research design before launching live studies.
As this space evolves, we expect AI-generated and human feedback to be used in various combinations and sequences depending on risk and context. When synthetic capabilities are introduced, they will be:
- Clearly labeled as simulated
- Transparent about assumptions and methodology
- Used to generate directional insight—not to replace real human validation where nuance matters
AI accelerates research workflows, but human expertise remains responsible for interpretation, prioritization, and decision-making.
Pillar 3: Insights Intelligence Core
The Insights Intelligence Core connects insight streams across time and teams—revealing patterns, building institutional knowledge, and strengthening decision-making.
AI is central to transforming research from episodic activity into a living, evolving capability.
Existing AI capabilities
Insights Discovery already allows teams to query across studies and uncover patterns in behavioral, video, audio, transcript, and survey data. By combining multimodal signals, AI produces more comprehensive insights with fewer gaps.
These capabilities make insights more searchable, more connected, and easier to apply beyond the immediate study.
Capabilities in development and exploration
We are expanding AI’s role in enterprise intelligence through:
Path Flows behavioral pattern analysis
Combining clickstream, behavioral, and qualitative data to generate visual Path Flows that reveal common journeys and drop-off points.
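The underlying aggregation can be sketched very simply: count screen-to-screen transitions across sessions and inspect where journeys end. The session data below is invented, and real Path Flows combine far richer behavioral and qualitative signals.

```python
from collections import Counter

# Hypothetical clickstreams: ordered screens visited in individual sessions.
sessions = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "product"],   # drop-off after product
    ["home", "product", "checkout"],
    ["home", "search", "product"],   # drop-off after product
]

# Count every screen-to-screen transition across all sessions.
edges = Counter((a, b) for path in sessions for a, b in zip(path, path[1:]))

# Drop-off points: sessions whose final screen is not the goal state.
drop_offs = Counter(path[-1] for path in sessions if path[-1] != "checkout")

print(edges.most_common(3))  # dominant edges, e.g. ('home', 'search') x3
print(drop_offs)             # Counter({'product': 2})
```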
Insight customization and domain-tuned models
Features that allow customers to “teach” the AI their terminology and definitions of meaningful events—tailoring outputs to each organization’s language and priorities.
Specialized, use-case specific LLM optimization
We are exploring retrieval-augmented generation, fine-tuning, and other model optimization approaches grounded in our UX-specific multimodal dataset (video, audio, transcripts, behavioral metrics).
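To show what retrieval-augmented generation means in this context, here is a minimal sketch. The transcript chunks are invented, word-overlap scoring stands in for an embedding index, and the assembled prompt would be sent to an LLM in a real system; every identifier here is hypothetical.

```python
import re

# Invented transcript chunks, keyed by session id and timestamp.
CHUNKS = [
    {"id": "sess-01@120s", "text": "I couldn't find the pricing page at all."},
    {"id": "sess-07@045s", "text": "Checkout felt fast once I added the item."},
    {"id": "sess-12@310s", "text": "The pricing table was hidden below the fold."},
]

def tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query: str, text: str) -> int:
    """Word-overlap relevance; a real system would use embedding similarity."""
    return len(tokens(query) & tokens(text))

def retrieve(query: str, k: int = 2) -> list[dict]:
    return sorted(CHUNKS, key=lambda c: score(query, c["text"]), reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Ground the model: answer only from retrieved, citable evidence."""
    excerpts = "\n".join(f"- ({c['id']}) {c['text']}" for c in retrieve(query))
    return (
        "Answer using ONLY these excerpts, citing their ids:\n"
        f"{excerpts}\n\nQuestion: {query}"
    )

print(build_prompt("Why did participants struggle with pricing?"))
```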
Agentic workflows
Configurable automated chains where AI may identify patterns, suggest next steps, initiate a study, and prepare a report—while humans define goals, approve actions, and interpret results.
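The control structure matters more than the AI here: every consequential action sits behind an explicit human approval gate, as in the sketch below. The step names and deny-by-default approver are illustrative only.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    name: str
    run: Callable[[], str]
    needs_approval: bool  # human-defined control point

def run_workflow(steps: list[Step], approve: Callable[[str], bool]) -> None:
    """Execute an AI-suggested chain, pausing at human approval gates."""
    for step in steps:
        if step.needs_approval and not approve(step.name):
            print(f"stopped: '{step.name}' was not approved")
            return
        print(step.run())

workflow = [
    Step("detect pattern", lambda: "found drop-off on pricing page", False),
    Step("draft follow-up study", lambda: "study drafted", False),
    Step("launch study", lambda: "study launched", True),  # needs a human yes
]

# Deny by default: the chain halts before anything is launched.
run_workflow(workflow, approve=lambda name: False)
```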
We are also enabling insights to flow into customers’ internal systems—analytics dashboards, AI models, and experimentation tools—so research informs decisions wherever work happens.
In this model, insights do not stay inside UserTesting. They become embedded across the enterprise.
Foundational strengths enabling AI across all pillars
Across these pillars, our AI investments are supported by three foundational strengths:
Multimodal data foundation
UserTesting captures one of the most extensive multimodal UX datasets in the industry — spanning video, audio, behavioral interaction data, UI structure, and participant context across millions of research sessions. This depth and diversity of signal enables more context-aware AI systems that surface meaningful patterns while reducing the risk of incomplete or decontextualized insights.
Unified AI architecture (Model–Context–Protocol)
A centralized orchestration layer powers AI experiences across summaries, synthetic feedback, workflow automation, and integrations—ensuring consistency and scalability as capabilities evolve.
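Conceptually, such a layer is a single dispatch point where labeling, data controls, and logging are enforced once for every AI feature. The registry sketch below is a hypothetical simplification of that idea, not our actual architecture.

```python
from typing import Callable

# One registry for all AI features behind the orchestration layer.
HANDLERS: dict[str, Callable[[dict], str]] = {}

def capability(name: str):
    """Register a feature so shared policy applies to it uniformly."""
    def wrap(fn):
        HANDLERS[name] = fn
        return fn
    return wrap

@capability("summary")
def summarize(ctx: dict) -> str:
    return f"summary of {ctx['study']}"

@capability("synthetic_feedback")
def synthetic(ctx: dict) -> str:
    return f"[SIMULATED] projected reactions for {ctx['study']}"

def orchestrate(name: str, ctx: dict) -> str:
    # Single choke point: consistent consent checks, labeling, and logging.
    if not ctx.get("customer_opted_in"):
        raise PermissionError("customer data controls not satisfied")
    return HANDLERS[name](ctx)

print(orchestrate("summary", {"study": "onboarding test", "customer_opted_in": True}))
```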
Privacy, security, and trust by design
Responsible data practices are foundational to how we build and deploy AI. Customers maintain control over how their data is used, and AI-generated outputs are designed to be transparent and inspectable.




