Episode 221 | May 04, 2026

Designing with AI: outputs are easy—problems aren’t

Louis Rosenfeld and Llewyn Paine explore how AI is reshaping UX design, workflows, and the future role of designers.

The most dangerous idea in design right now is that speed equals progress.

For decades, designers have been trained to move faster—ship quicker, iterate more, optimize everything. But as artificial intelligence floods the creative process, that instinct is starting to look less like a competitive advantage and more like a liability. The tools are getting faster. The outputs are getting smoother. And yet, the underlying questions—about quality, intent, and meaning—are getting harder to answer.

That tension sits at the center of a recent Insights Unlocked conversation with Louis Rosenfeld and Llewyn Paine, two longtime observers of how technology reshapes design. What emerges is not a rejection of AI, but a reframing: the future of designing with AI may depend less on how quickly we adopt it, and more on how deliberately we use it.

A moment to pause—and a place to learn

Before diving deeper, it’s worth noting that Louis and Llewyn are not just theorizing. They’re actively convening the conversation through the upcoming Designing with AI 2026 virtual conference, taking place June 9–10. The event brings together UX designers, researchers, product leaders, and strategists who are grappling with these same questions in real time.

This isn’t a conference built on hot takes or polished demos. It’s designed for practitioners—especially those a few years into their careers—who are being told to “use AI” but aren’t given a clear roadmap for how or why. Attendees can expect case studies, cohort-based discussions, and a focus on what actually works in practice.

Registration is available through Rosenfeld Media’s conference site, and listeners of Insights Unlocked can use the code Insights75 for $75 off through May 30.

If most AI conversations feel like watching someone else drive, this one is about handing you the wheel.

The illusion of intelligence

One of the more subtle risks of AI is not that it fails—but that it succeeds too easily.

“The problem,” Llewyn explained, “is that it looks good at face value.” The outputs are polished, articulate, often indistinguishable from human work. Reports appear “insight-shaped,” as she put it, even when the underlying reasoning is thin.

This creates a new kind of design challenge. In the past, bad work was easier to spot: clunky interfaces, awkward flows, obvious gaps. Now, the danger lies in outputs that feel complete but lack substance—like a beautifully wrapped package with nothing inside.

For UX researchers and designers working with AI in UX design, this shifts the burden of proof. It’s no longer enough to produce something that looks right. The question becomes: is it true? Is it useful? Does it actually support better decisions?

Llewyn’s answer is deceptively simple: bring evidence.

At the Designing with AI conference, speakers are explicitly asked to back up their claims—not just demonstrate what AI can do, but show how it performs under real-world conditions. It’s a small but meaningful shift, from showcasing capability to validating impact.

From makers to orchestrators

If AI is changing what gets made, it’s also changing who does the making.

For much of modern design history, the designer has been positioned as a creator—a skilled practitioner shaping pixels, flows, and systems. But as AI takes on more of that execution, the role begins to shift.

“We’re moving toward a model where the designer is more of an orchestrator,” Louis said.

It’s a quiet but profound redefinition. Instead of producing every artifact, designers are increasingly coordinating between systems: guiding AI tools, aligning stakeholders, shaping inputs and evaluating outputs. The work becomes less about direct creation and more about directing the conditions under which creation happens.

Think of it less like painting and more like conducting. The value lies not in playing every instrument, but in knowing how they come together.

This shift is already visible in how teams operate. The boundaries between design, product, and engineering are blurring, as the same AI tools become accessible across roles. What was once specialized knowledge is becoming shared capability.

The risk, of course, is that everything starts to look the same.

When “good enough” is actually enough

AI has a peculiar talent: it raises the floor.

As Louis pointed out, many forms of writing and design don’t need to be exceptional—they just need to be adequate. A product description. A landing page blurb. Internal documentation. These are not masterpieces; they are functional artifacts with short lifespans.

In these cases, AI can be remarkably effective. It can take something that would have required an hour and reduce it to ten minutes. It can eliminate typos, smooth awkward phrasing, bring work up to a baseline level of competence.

“The spam I get these days is really great,” Louis joked. “No typos, no grammar errors.”

It’s a humorous observation, but it underscores a serious point. When AI can reliably produce “good enough” work, designers are forced to reconsider where human effort is best spent.

Not everything deserves perfection. But some things—core experiences, critical decisions, moments of trust—demand more than average.

The challenge is knowing the difference.

Designing for humans—and for machines

Perhaps the most disorienting shift in AI in UX design is the emergence of a new kind of user: the agent.

For decades, design has focused on the human-computer interface—making interactions intuitive, efficient, and meaningful for people. But as AI agents begin to act on behalf of users, that model starts to fracture.

“Agents may very well be the new users of our interfaces,” Llewyn said.

It’s a subtle but significant change. Instead of designing solely for human behavior, designers must now consider how systems interact with other systems. The interface is no longer a single layer; it becomes a chain: human → agent → product.

Reducing friction, in this context, means optimizing not just for human understanding, but for machine interpretation. It introduces new questions about structure, clarity, and control.

And it raises an unsettling possibility: what happens when the primary audience for your work isn’t human at all?

The pressure to move—and the need to think

Across industries, the adoption of AI is often driven by mandate rather than understanding. Leaders declare its importance. Teams are told to “figure it out.” The expectation is movement, not necessarily clarity.

Louis sees this as a familiar pattern—technology accelerating faster than our ability to make sense of it.

“We can’t slow the pace of the technology,” he said. “But we can create space for more thoughtful consideration.”

That space is increasingly rare. Designers are asked to experiment, produce, and adapt—all while the ground shifts beneath them. It’s like trying to map a landscape during an earthquake.

And yet, this is precisely where design’s value becomes most apparent.

The discipline has always been about asking better questions: What problem are we solving? For whom? Under what conditions? Those muscles—diagnosis, framing, sense-making—are more important than ever.

As Louis put it, “Playing with it… is not the same as solving.”

Learning, not knowing, is the real skill

If there is a unifying theme in the conversation, it’s this: no one has it figured out.

Not the designers. Not the researchers. Not the organizations mandating adoption. The pace of change makes certainty a moving target.

Which is why the Designing with AI conference leans heavily on storytelling—on case studies that reveal not just outcomes, but the messy process behind them. What worked, what didn’t, what changed along the way.

“It’s not so much ‘here’s what we did,’” Louis said. “But ‘here’s how we learned.’”

That distinction matters. In a field defined by rapid evolution, the ability to learn—quickly, critically, continuously—becomes more valuable than any single tool or technique.

It’s less about mastering AI and more about navigating it.

Finding your place in the shift

For many designers and researchers, the current moment feels disorienting. Roles are changing. Expectations are unclear. The tools that once defined expertise are now widely accessible.

There’s a sense, Llewyn noted, of people feeling “lost… kind of adrift.”

But there’s also opportunity.

As infrastructure expands—more compute, more capability—the need for thoughtful application grows. Someone has to decide how these tools are used, what problems they address, what trade-offs are acceptable.

That “someone” is increasingly the designer.

Not as a pixel-pusher or deliverable machine, but as a translator between possibility and purpose.

The work ahead

The future of designing with AI will not be defined by the tools themselves. It will be shaped by how we choose to use them—what we automate, what we preserve, what we question.

Speed will remain a factor. Efficiency will continue to matter. But neither is sufficient on its own.

What’s needed is a kind of discipline that feels almost countercultural: the willingness to pause, to interrogate, to resist the urge to equate output with value.

Or, as Louis put it more directly: “We need space to ask these questions… to slow things down, because the rest of the world is not doing that.”
