
Elijah Woolery
Co-host, Co-founder, Design Better Podcast
Amy Lokey
CXO, ServiceNow
Innovation moves fast—but failure is costly. In this live recording of the Design Better Podcast, hosts Aarron Walter and Eli Woolery sit down with Amy Lokey, Chief Experience Officer at ServiceNow, to discuss how experience-led organizations are reducing risk, testing early, and accelerating innovation across product, design, and CX. Hear how Amy brings together cross-functional teams to stay ahead of evolving customer expectations—by building with confidence and designing for impact.
Amy Lokey has led design and product teams at LinkedIn, Google, and ServiceNow where she's currently the chief experience officer.
At ServiceNow, Amy's team is helping shape how AI transforms our work, creating smart systems that can predict what we need, adapt on the fly, and make it easier to work with complex systems and connect with colleagues.
We're excited to talk with her about how her team approaches designing for enterprise level AI applications, including specific applications for agents and how they can help you in your day to day work.
Amy joins us today for a special live episode recorded on stage in San Francisco, California, at the UserTesting Disconnect City Tour.
This is Design Better, where we explore creativity at the intersection of design and technology. I'm Eli Woolery.
You can learn more about the show and listen to our conversations with guests like David Sedaris, Eileen Fisher, John Cleese of Monty Python, the band OK Go, and Pixar cofounder Ed Catmull at designbetterpodcast.com. Amy Lokey, welcome to Design Better.
Thank you so much, Eli. It's great to be here with all of you. Thanks for your time and attention.
Let's start by talking a little bit about the latter part of your career. We mentioned that you led design teams at Google and LinkedIn. So maybe tell us a little of the story of moving from those companies into your current role at ServiceNow.
Yeah. Absolutely. So I think one of the connective threads from LinkedIn to Google and then ServiceNow was just a growing passion around helping people accomplish what they wanna do in their professional world and their professional lives.
You know, before LinkedIn, I came from more of a hundred percent consumer product world, working at Yahoo and other companies.
At LinkedIn, we were building a social network for professionals, for businesses, for people to build their brands, to excel in their careers, to learn and develop their skills, to make business-to-business connections, for marketing purposes, all those things. And what really resonated with me and felt like it connected to my own values was that we were helping people reach financial stability. We were helping them provide for their families. We were helping them connect and get their next best opportunity, and that felt really good. It felt like very purposeful work.
So that then led me to spend a couple of years leading user experience for Google Workspace, which at that time was called G Suite. And there again, it just felt great to work on a number of productivity tools. My kids were using them in school to do their classwork.
I had used them for many years as a professional.
And it also really is the intersection of personal and professional in that world too. I use my Google Calendar to this day to manage across my family's activities. Right? And having that view of my personal schedule along with my work schedule is incredibly valuable.
So really, that started to move me more and more into enterprise design.
G Suite had kind of a fledgling app development product at that time. And as I was spending more time talking to customers about what they were trying to accomplish, I started to learn more about the value of, you know, business workflow: connecting teams across an enterprise, helping digitize tedious processes that might still be paper-based or spreadsheet-based. One thing led to another, and I started talking to folks at ServiceNow. I just thought ServiceNow was a really unique product opportunity. It's one platform that a customer can install and then build into, creating a number of solutions across the business, everything from employee experience to customer service solutions to technology products, plus a low-code app development platform.
So whatever we don't offer as an out-of-the-box product, our customers can build themselves. It just seemed like a fascinating product opportunity with an incredible diversity of products, interesting for me personally from an intellectual standpoint, but also still fairly squarely rooted in helping people be productive, have good experiences at work, have great customer service experiences, and helping businesses really evolve and transform how they operate in great ways based on the technology. So I just felt like that was the direction I wanted to head in from a career perspective, and it felt like really fulfilling work.
So you lead a pretty large team right now. Right?
How many folks are on there?
I do. I do. We are nearly a thousand people at this point. Pretty close.
It's a global team, you know, largely based in the United States, but also globally across India and Europe. We even have a team in Egypt. You know, we've got teams all over, and we're continuing to expand. I think I actually have my first team member in Costa Rica this year.
I'm really excited about that.
So, you know, we're continuing to grow and expand into a global market, looking for where great talent is and where we can also expand our product regionally as well.
So with a team that large, I imagine it can be tough to change course. And given all the instability right now and the rapidity with which things are changing, how do you think about that? How do you think about steering that big ship?
That's a very good point. So things seem to be moving faster than ever, and we certainly aren't slowing down at ServiceNow. We're in a very competitive market to take AI products to market, and we need to do that while continuing to maintain the trust and credibility we have with our customers, doing it in a safe and secure and ethical way. So if the complexity is growing and the pace is growing, I think it starts with having a really tight connection and relationship with my leadership team.
We meet on an ongoing basis. You know, the folks that I have working on some of our core AI experiences, we're meeting almost hourly. It seems like we're in constant communication. But I think it all starts with the relationships and strength of my leadership team, that we can work well together, that they can work well together without me there, and that there's a constant cascade of information to the team to keep everyone apprised.
We definitely take kind of a hub-and-spoke model with that, where we have central teams that focus on what we call enablement. For us, that's both team enablement, so getting the right resources, information, and patterns out to the team and disseminated across it; we have a number of mechanisms for doing that. But we also work within an ecosystem where we have a whole, you know, slew of partners that have UX teams that build and deploy ServiceNow, and customers that have UX teams that build on ServiceNow. So our enablement is both internal and external. We're making sure that whatever resources we're building for our own employees, we're thinking about how we also deploy those out to our ecosystem so we can get those design teams and UX teams up to speed on the latest and greatest.
Let's talk a little bit about agents. Who do you think the next James Bond should be?
Well, it's funny you say that. I thought it should have been Idris Elba, but funnily enough, we actually have him now as our spokesperson for ServiceNow. So, booked.
You can.
He's booked. Yeah. We got him. We nabbed him. So, you know, unfortunately, the double o seven franchise is out of luck because we got him, but I think he would have been amazing.
He would have been great.
Yeah.
Or they could maybe bring Sean Connery back to life with AI. Well, that would be... Who knows?
Interesting.
In all seriousness, Ray's talk earlier mentioned the sort of division between generative AI and agentic AI. I think at least myself, and I assume a lot of the folks in the room, are a lot more familiar with generative AI because we're using it day to day. We get all the use cases.
I'm, at least personally, much less familiar with the use cases for agentic AI, but I know it's a big part of the work that you do. So maybe you could just talk us through some specific use cases and how you're thinking about it.
Yeah. Absolutely. I've been kind of on the ground floor of all of our AI development since I joined the company. We had been investing in AI for a long time, even prior to ChatGPT being launched and OpenAI first releasing its large language models a couple of years ago. It's three years ago now. So, you know, AI has been built into our products since the get-go.
Generative AI, we realized, was really good at some key things, as you all know: content-based things. So summarizing content, generating content. Right?
And so we looked for a number of use cases where those activities happened most often. For us, I mentioned we do employee experience. That means we help employees with various services or support they need. It could be anything from inquiring about their benefits to getting software to updating their computer to understanding if they can take days off to going on leave, you name it. Anything across HR, legal, IT, anything you need to get your job done is what we call our employee experience suite. Customer service as well too.
A lot of those things you can kind of think of as a request-and-fulfill model. And on the fulfillment side, there's frequently a lot of tedious tasks, right, that lend themselves very well to this world of generative AI. So a lot of folks that work in those worlds, they get cases. You know, they have to get through a number of customer support cases or employee requests, what have you. So, you know, generative AI was great for things like summarizing a case, helping someone get up to speed, generating emails or chat responses in a really appropriate way.
Even starting to diagnose and suggest resolutions to complex cases based on a precedent or other information that existed in the ecosystem.
So those are the things that we first delved into. We also delved into the development world. ServiceNow is written in a kind of proprietary code language, and there's tons of it available for us to train models on.
So, you know, text-to-code kinds of experiences, developer experiences, those were highly, highly accurate and very quick to deploy. And so those were really the home runs for us in the first year or so of generative product solutions. And again, these were activities where we saw large volumes of people doing them, highly repetitive tasks, things happening at a high frequency that were repeatable, where you could actually come up with a pretty clear value prop for the customer on the time savings and the productivity savings for their employees.
And I know we've got a lot of user researchers here. This is based on user research. So we used a very foundational, system-usability-score type of measurement to look at how much time people were spending on these tasks, measuring that in a very quantitative way, and then looking at the reduction of that time in comparison. And then you could extrapolate a value prop to your customer.
We even developed pricing models based on that. So that was kind of the world where we started.
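To make that kind of value-prop math concrete, here is a minimal back-of-the-envelope sketch in Python. The task, the numbers, and the cost figures are entirely hypothetical, not ServiceNow's measurements or pricing model; it only shows how measured time on task before and after an AI skill can be extrapolated into an annual value estimate.

```python
# Hypothetical value-prop calculation from measured time on task.
# All figures below are made up for illustration.
minutes_before = 12.0          # avg. minutes to summarize a case manually (measured)
minutes_after = 3.0            # avg. minutes with an AI-generated summary (measured)
cases_per_agent_per_day = 25
num_agents = 400
working_days_per_year = 230
loaded_cost_per_hour = 45.0    # fully loaded hourly cost of an agent

minutes_saved_per_case = minutes_before - minutes_after
hours_saved_per_year = (minutes_saved_per_case * cases_per_agent_per_day
                        * num_agents * working_days_per_year) / 60
annual_value = hours_saved_per_year * loaded_cost_per_hour

print(f"Hours saved per year: {hours_saved_per_year:,.0f}")
print(f"Estimated annual value: ${annual_value:,.0f}")
```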
As we move into agentic AI, really, the way that works is we develop skills. Skills are like an individual, kind of atomic-level generative AI thing, like summarizing a case. Right?
Agentic AI is built on those same skills. So we took all the same skills that we had made generative, which typically had a human trigger them. Right? So a human would look at a case and click a button to summarize the case, draft an email, or write a knowledge base article.
Right? So they're all kind of human-initiated generative skills. Well, in making those agentic, you're basically just automating that trigger. You're saying: in the event that this happens, have that AI agent run off and execute that skill.
So that was kind of step one of agentic AI. And, again, very much individual skills doing individual tasks.
Now we're in a world where we have an orchestrator, and that's really the powerful part of this. With the AI orchestrator, you can kind of think of it as a team manager. Right? So the trigger happens, it kicks off the orchestrator, and the orchestrator is saying: what team do I need to solve this problem?
So it can handle a much more complex, maybe lengthy process and pull together a team of AI agents that are all essentially unique generative AI skills to accomplish that task. Right? And then we do it in such a way that we look at it as a read/write kind of analysis. The AI agents can autonomously go do all of those read-type activities.
So pulling together the research, doing the analysis, conducting a diagnostic, pulling that information all together in a succinct way for a human to evaluate, make a choice, and then execute on. And then when it comes to the write part of it, where you make a change, that's where that human is in the loop to execute on it. So that's kind of been our evolution, trying to say it in a very short way. There's obviously a lot more to it.
But we've moved from those fundamental generative capabilities into singular, skill-based agentic capabilities, to now orchestrated teams of AI agents. And coming up, there will be agent-to-agent frameworks where you do that across systems. Right? So our AI agents can collaborate with Microsoft's AI agents and so on to really do, you know, complex activities across systems.
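As a rough illustration of that progression, a human-triggered skill, an automated trigger, and then an orchestrator that assembles a team of skills while keeping a human in the loop for the write step, here is a minimal Python sketch. The class and function names are hypothetical, not ServiceNow APIs; it only mirrors the shape of the idea.

```python
from dataclasses import dataclass, field
from typing import Callable

# A "skill" is an atomic generative capability, e.g. summarize a case.
@dataclass
class Skill:
    name: str
    run: Callable[[dict], str]
    writes: bool = False   # read-style skills analyze; write-style skills change state

def summarize_case(case: dict) -> str:
    return f"Summary of case {case['id']}: {case['description'][:60]}..."

def draft_reply(case: dict) -> str:
    return f"Draft reply for case {case['id']} (pending human approval)"

@dataclass
class Orchestrator:
    """Picks a 'team' of skills for an event and runs them.
    Read skills run autonomously; write skills wait for a human in the loop."""
    skills: list[Skill] = field(default_factory=list)

    def handle(self, event: dict, approve: Callable[[str], bool]) -> list[str]:
        results = []
        for skill in self.skills:
            output = skill.run(event["case"])
            if skill.writes and not approve(output):
                results.append(f"{skill.name}: held for human review")
            else:
                results.append(f"{skill.name}: {output}")
        return results

if __name__ == "__main__":
    orch = Orchestrator(skills=[
        Skill("summarize", summarize_case),
        Skill("draft_reply", draft_reply, writes=True),
    ])
    case_created = {"case": {"id": "CS0012345",
                             "description": "Laptop will not boot after update"}}
    for line in orch.handle(case_created, approve=lambda text: False):
        print(line)
```

The design point the sketch tries to capture is simply that read-style skills run autonomously, while anything that would change state waits on human approval.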
Yeah. Talk a little bit more about that, because we spoke before about that Copilot integration and that agent-to-agent work.
Yeah. So we're very close partners with Microsoft. One of our bigger announcements last year was at our big customer event. We actually have that event again next week, by the way, and I'll be going to it.
So this time last year, we announced our first integration with Copilot, which is really cool. We had, you know, kind of a bot-to-bot interaction where you could be using, say, Copilot in Teams and say, "My laptop's not working very well. I think I might need a new one." And the Copilot agent could call in our bot, basically, and we could provide all the service that you needed through ServiceNow.
So it was one of the first forays into the Microsoft Copilot world, agentic, so to speak; at that point it was their virtual agent calling in ours.
What's coming up next week, I can't disclose a hundred percent yet, but you can imagine that same kind of framework will now work with teams of agents on both sides. And we're in a good position to do that at ServiceNow because we already have a platform that integrates really well with all these other systems. We already had the framework and the APIs, the access controls, the security, all that built in, which we can leverage to open up those lines of communication even at an agentic level.
So designing AI-enabled products for the enterprise is a lot different than for consumers. On the consumer side, there's a lot more liberty to ship something, and if it breaks, okay, well, we'll roll that back. Yes.
But if you have enterprise customers, that's not so easy.
Right? Correct. How do you think about that?
Yeah. That's been, I think, probably one of the trickiest parts in all this with enterprise.
We've learned so much along the way. I mean, first, you've really gotta test the technology in a very robust way. There's a degree of dynamic unpredictability. Obviously, you know, we're all very familiar with how generative AI can hallucinate. It might come up with the wrong answer. That's why you need humans in the loop.
At enterprise scale, that can be pretty serious. Right? Which is why we've been very, very careful to always disclose when we're using generative AI.
All of our initial forays, of course, were kind of human-driven, human-triggered, and so on.
Now, as we get into AI agents, the good news is we've had the time to develop these skills. Right? Again, they're all based on these fundamental generative AI skills. But we've learned along the way how critical it is to test with realistic customer data. So we've made big investments in making sure that the data we're using internally to test these models accurately represents our customers.
We also have kind of an early adoption methodology where a number of our customers can go into what we call our innovation lab, and they know that they're getting into an early-access situation. But they are excited to be the first adopters, they wanna give us feedback on the product, and they're willing to test it out.
And so, you know, some of our customers are definitely very, very risk tolerant, and they're ready to jump in with both feet. But they're also making sure that they're testing this really, really well.
But fundamentally, we always make sure that we're being really transparent with the end user that they need to, you know, trust but verify. Right? Like, it should be working at a pretty high level of performance and accuracy. We're testing that and hitting certain thresholds before we release it, but there's always a little room for error there. Yeah. Makes sense.
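Here is a small, purely illustrative sketch of what a pre-release quality gate like that could look like; the thresholds, field names, and metrics are assumptions for the example, not ServiceNow's actual release criteria.

```python
# Hypothetical pre-release gate: a skill ships only if it clears accuracy and
# hallucination thresholds on a customer-representative evaluation set.
def passes_release_gate(eval_results: list[dict],
                        min_accuracy: float = 0.95,
                        max_hallucination_rate: float = 0.02) -> bool:
    total = len(eval_results)
    correct = sum(1 for r in eval_results if r["correct"])
    hallucinated = sum(1 for r in eval_results if r["hallucinated"])
    accuracy = correct / total
    hallucination_rate = hallucinated / total
    return accuracy >= min_accuracy and hallucination_rate <= max_hallucination_rate

# Toy evaluation set: 97 correct answers, 3 hallucinated ones.
results = ([{"correct": True, "hallucinated": False}] * 97
           + [{"correct": False, "hallucinated": True}] * 3)
print(passes_release_gate(results))  # False: 3% hallucination rate exceeds the 2% cap
```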
Let's talk about how you stay connected with customers. Tell us about your research practice, how you get feedback.
Yeah. Absolutely. So I'm really proud of our research team. We've got, you know, again, a global team of researchers that we've grown over the years, led by a leader on my team who's amazing.
And, you know, when I joined ServiceNow about five years ago, we were very, very focused on foundational research, and I heard some folks talking about that in our last session. And I do agree that when you're in the earlier stages of a product company, when you're trying to figure out how to grow, where to develop, where you wanna place your bets in terms of your product strategy, foundational research plays a critical role. So our research team was heavily oriented toward almost product-market-fit research in those years. And that was at a time in ServiceNow's history when we were scaling tremendously.
We were moving from being kind of an IT technology company to all these other areas within the enterprise, from employee experience to customer service and beyond. So we needed to figure out, like, where was our product the right fit? How might we need to modify it? Those kinds of questions. A lot of what we were doing was influencing product management on where to take the product and where it could go.
Then we had a lot of product out in market that we needed to improve. Right? Like, we put a lot of stuff out there. Some of it was sticking, some of it wasn't, and we really had to look at the fundamentals of usability.
Right? Like, were people able to use these products? And so we've really invested in and matured our fundamental usability testing. And like I mentioned, a lot of that's based on a system usability score.
We created our own branded method that we call UX Quality, and it's based on time on task, accuracy, number of clicks, and qualitative measurements: how do people feel about it, what do they think about the aesthetics, did they feel like it was a good design? All of those things. And we've been rolling out those benchmarking studies now for probably about four years.
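Since the exact formula isn't spelled out here, this is just a minimal sketch of how a composite score like that could be assembled from those inputs. The weights, normalization, and function name are hypothetical, not ServiceNow's actual UX Quality method.

```python
# Hypothetical composite "UX quality" score combining time on task, task success,
# clicks, and a qualitative satisfaction rating. Weights are illustrative only.
def ux_quality_score(time_on_task_s: float, baseline_time_s: float,
                     task_success_rate: float, clicks: int, expected_clicks: int,
                     satisfaction_1_to_5: float) -> float:
    time_score = min(baseline_time_s / time_on_task_s, 1.0)   # faster than baseline caps at 1
    click_score = min(expected_clicks / clicks, 1.0)          # fewer clicks than expected caps at 1
    satisfaction_score = (satisfaction_1_to_5 - 1) / 4        # normalize a 1-5 rating to 0-1
    weights = {"time": 0.25, "success": 0.35, "clicks": 0.15, "satisfaction": 0.25}
    score = (weights["time"] * time_score
             + weights["success"] * task_success_rate
             + weights["clicks"] * click_score
             + weights["satisfaction"] * satisfaction_score)
    return round(100 * score, 1)

# e.g. a task done in 90s vs. a 120s baseline, 80% success, 12 clicks vs. 10 expected, 4.2/5 rating
print(ux_quality_score(90, 120, 0.80, 12, 10, 4.2))
```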
And in some of our products, we've done them pretty regularly. We do two big releases a year, we call them family releases, and with every big release we'll do another benchmarking study. It's just provided really clear, actionable feedback for our teams to act on. We had a product, for example, called Field Service Management.
So it's part of our customer service product. It's, you know, technicians that go out in the field and have to fix things. You can think of them as like the Comcast person that might come over and fix your broadband connection. Right?
So they're out there. They're mobile. They've got equipment on their truck. You need to do a lot to, like, help them find the right job to do and make sure you're assigning the right person to the right type of job, and you're looking at their logistics.
Anyways, our product in the early days wasn't so awesome. Right? It was really in its early stages.
It wasn't garnering much business. It wasn't easy to use. We've done these studies now probably about six or seven times, and we've taken it from scoring somewhere in the forties to about eighty-five percent in terms of usability. That's like a consumer-grade product at this point, and we've seen the revenue from that product follow the same trajectory.
That also helped garner investment in the team. The UX team has grown over that time too, and we can show this really great correlation: the investment in research, making those changes in the product, the adoption and sales of that product, and how well it's performing are all very, very closely connected. Right? There's causation there.
So that's been really cool. And then the other piece that I'm really proud of from a research impact standpoint is our internal tooling. This is part of our product suite too; it's kind of a developer operations or engineering management tool. It's a product that works for product managers as well as engineers, and we're building it out for our design team and research team too.
And so we log every single individual unique insight into this tool. Each insight is an individual record essentially in the tool.
And we're able to connect that to a design artifact, our design record, which ends up being the revised or proposed design that we'll build to address the research insight. That then also gets attached to an epic and a story and a PRD. All of this is in the same tool, and then we track that it shipped. So we have this really nice tracking mechanism now where we can go from insight to impact: we discovered this thing.
Here's the design that addressed it. Here's the epic and the story for the engineering artifacts that were built. We can see when it shipped, and then we can also now measure the throughput of our insights into product execution.
And we have an accountability model too. We look at the burn-down rate of how much UX debt is sitting on the shelf. Right? So if we have teams where we've got a bunch of insights and they're not being acted on at a regular rate, that's a red flag.
And so both the user experience quality scores and the insight-to-impact tracking that we do, we look at in executive review on a quarterly basis. Overall, we call it our UX health scorecard, and our executives have to speak to it in quarterly product reviews, which I think is pretty awesome. Usually they're good stories, but there's also an accountability model there if there's work we still need to do.
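As a rough sketch of that insight-to-impact model, here is a small Python example. The record fields, IDs, and the burn-down calculation (simplified here to the share of logged insights that have shipped) are hypothetical, not the actual internal tooling.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical model of insight-to-impact tracking: each research insight is a
# record linked to a design artifact, an engineering epic, and a shipped release.
@dataclass
class Insight:
    id: str
    summary: str
    design_record: Optional[str] = None    # revised/proposed design addressing it
    epic: Optional[str] = None             # engineering epic/story it is attached to
    shipped_release: Optional[str] = None  # release in which the fix shipped

def ux_debt_burn_down(insights: list[Insight]) -> float:
    """Share of logged insights that have actually shipped (higher is better)."""
    if not insights:
        return 0.0
    shipped = sum(1 for i in insights if i.shipped_release is not None)
    return shipped / len(insights)

backlog = [
    Insight("INS-001", "Dispatchers can't filter jobs by technician skill",
            design_record="DES-114", epic="EPIC-77", shipped_release="2025-R1"),
    Insight("INS-002", "Mobile job view buries parts availability"),
]
print(f"UX debt burn-down: {ux_debt_burn_down(backlog):.0%} of insights shipped")
```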
That's great. In the last panel, Leanne was saying that if you're a researcher, it's not just your job to do the research; you also have to be in sales and marketing, essentially. So do those tools kinda play into the sales and marketing part?
A hundred percent. Yeah. I mean, we've debated if it's the right time and place, but we do show team investment too. Right?
So if there's a lack of investment in research or in design, you know, we can show that as potentially contributing to the poor usability or adoption of a product. Right? And then we can also show, like I was talking about with Field Service Management, team health, continual progress against user research findings, and continual business results that match that investment. So, you know, we can show the payoff that can come from investing in user experience.
Yeah.
Let's shift over and talk a little bit about the future of the designer's role and the researcher's role in this new era. We're both associated with education: I teach, and you serve on an advisory board for SCAD. So I think we both get a little bit of a look into, like, how do we teach to this future of design? At least in my students, I'm seeing a lot of blurring of the boundaries between what might originally be considered design tasks or developer tasks, where the designer can, you know, vibe code their way into a really effective prototype, and vice versa, the engineer can use a lot of these tools to build out a first pass at wireframes or even higher-fidelity mockups. We were talking about this at lunch, and there's a feeling that we're not gonna be calling ourselves designers or developers in the near future.
We're gonna be builders or creators. So how do you think about that? How do you think about, you know, advising your own team to upskill and kind of be on track for this new future?
Yeah. Absolutely. It's such a great question. I think we're all wondering about how this field might change.
Yeah. I was with a number of my old colleagues from back in my Yahoo days the other day. We were at a talk, an event by the founding designer of Perplexity, and we were all kind of joking: are we gonna see, like, the rise and fall of UX as an industry in our lifetimes? I don't think so, but it was kind of interesting.
It will change a lot for sure. Like, when I was working back at Yahoo, we were all using Illustrator, and that was our best UI design tool because, unlike Photoshop, it was at least vector-based. Yeah.
And you could use Photoshop, but anyways, we all liked Illustrator for various reasons. Now you've got Figma and so on. So, I mean, the tooling keeps changing and evolving to meet our needs, which is really exciting. I think that will continue to accelerate. And like you mentioned, I had the chance to go to SCAD a couple of times and talk with their faculty and students about what they think about AI.
And I would say, like, a year ago, students and faculty were pretty nervous about it. You know, the faculty didn't know if they should allow their students to use it, but they kinda felt like they probably should, because the students were going to need to learn it at some point. And the students also felt, like, ethically, well, should I be using it? Is that cheating?
And so we just asked them for, like, full disclosure. Show us what you're doing. How are you using it? And one of the examples that really stuck with me is a fashion design student who had constructed this beautiful pink dress, a physical, actual, you know, prototype of a dress.
And normally, she would have wanted to explore a bunch of ideas, and she would have wanted to try it long or short or puffy sleeves or no sleeves or all these different variants. Right? But to do that physically in the real world would have been a tremendous amount of work and taken weeks and lots of materials and so on. So she had constructed this real dress.
She took some photos of it, and she brought it into Midjourney, and then she was able to do all these cool iterations on the dress. Right? Really push her imagination, try all these different things without the limitation of actually having to build or construct it herself. And we thought that was, like, one of the perfect examples of what AI can help us with as creatives.
Right? It's really that expansive ideation and creativity, that exploring of ideas. So to me, I keep coming back to AI as a tool. It's just like back in the day when I got to use the magic wand selection tool and it kind of stuck to the edges of a picture, and I was like, wow, that's really cool.
Now it's like you just click one button and it does it for you. You don't need to go into the lasso tool and, like, perfect it.
AI is that same kind of tool, but it's kind of exponential in what it can help us do, to where you don't really need to spend as much time building expertise in these really complex tools like Photoshop. But you still have to have the idea. And you still have to have the understanding of humanity, of human need, of society, of technology, of cultural appropriateness. Like, you have to understand the world we live in and what human needs are, and then you have to have the idea and the vision for how to meet those needs in a new and creative way, leveraging technology.
And I still think that our roles are very much as the interpreters of technology, helping people understand how to leverage it. And those interfaces may evolve. They may become simpler and more conversational.
But you have to have that idea and that vision. So I think we all just move into more of the role of a creative director and a visionary. But also, fundamentally, you have to understand how to articulate and direct those ideas. You still have to be able to communicate them in a way that an AI model can understand, or that your team can understand.
You have to be able to rationalize and articulate the idea and then know what good is. Like, know if it's gonna meet that need and be able to evaluate that and iterate on it. So I still think those fundamental skills exist, but I just think it's gonna be less about being an expert in a particular part of the craft, whether that's being a, you know, wizard at Photoshop or Figma or being able to code in any language.
Those things are maybe gonna be less important than really having that idea and knowing how to articulate it and see it through to a high-quality product that meets a human need.
Amy, what are you watching or reading or listening to right now that's got you inspired? Doesn't have to be work related.
Oh, goodness. Well, I mean, on a work-related front, I'm pretty religious about listening to Pivot twice a week. I love their podcast. I just find it's a great way to stay apprised of business, economy, and technology news. So I do listen to that quite a bit.
Gosh.
Reading. I have about ten books on my bedside table that I'm probably not making as much progress on as I would like, everything from, actually, I grabbed Brian Solis's book. He's here. He's gonna speak next. I've got three of his books on my bedside table. I'm gonna admit that I'm partially through all of them.
And so I kind of alternate between work books and fiction as well.
Gosh, what is it? I think it's called The Art Thief, something fictional that I read. It was really good. If you like art history and European travel and stuff like that, that was really, really enjoyable. So I try to mix it up. I was actually an English major, so I do love a good fictional story.
I'm also reading a random, kind of more spiritual book around, like, you know, past lives and stuff like that. I don't know if I believe all of it, but it's very interesting and thought-provoking. So I don't know. I'm all over the place between business and personal interests.
That's great. Awesome.
Well, I know we're running a little bit behind, but do we have time for an audience question or two, or should we wrap up?
Kinda get a signal from the team back here. Hard to see.
Is it okay to have an audience question? Do we have time or no?
Yeah. Well, let's do one audience question if there's anybody that has a question for Amy.
Over here.
Here I come.
Thank you. Hi. Will Jordan, product design at Box.
Hi. Nice to meet you.
Nice to meet you. Thank you.
What types or categories of insights or themes resonate the most with enterprise stakeholders, in your experience?
That's a really good question.
Well, it depends on the stakeholders. If I'm thinking about what resonates, what maybe has the most impact with our product teams making product decisions, if that's kinda what you mean: you know, we're looking at three altitudes of how to resonate with the customer, and I think this has been a really good learning for me in my time at ServiceNow.
There are three audiences that we have to build product for, and it has to serve them at these three different altitudes. So first, you have, like, kind of the C-suite. Right?
My peers and our C-level executives are frequently having conversations with CIOs and CEOs of massive, you know, Fortune 500 companies about how they solve their greater business needs. Right? Like, you know, we're talking to CVS about, like, how do you reduce turnover across all their retail stores. Right?
So you have to kind of look at it from a very high level: what does this business need to solve for at the most important strategic level? You know, I met with Stellantis, the big car manufacturer.
They're thinking about car flow: everything from supply chain to manufacturing to transit to getting cars to dealers and into the customer's hands. Right? What I love about my job is I get to learn about all these very diverse businesses and think about, like, if I was the CEO, what are the top problems I need to solve?
We have to think about how our product solves that. Then we have to think about the next altitude down, which is the person who decides to buy our product. We often call them something like a service owner. In the enterprise world, you're gonna have someone who chooses to buy the product.
They're like a decision maker that's gonna evaluate this technology solution and decide it's the best thing for what the employees need. You have to solve for them too. Usually for them, you're thinking about what's the ROI on this investment? How do we prove that out to them?
How do we make sure that the product's easy to configure and deploy and gets adopted? And their customers are ultimately the end users, who we also have to satisfy. So that's the third category that we have to think about. A lot of our demos, our vision pieces, the presentations that I give have to work at those three different altitudes. We need to solve for the end user, which could be the employee, the customer service agent, the developer, the productivity worker, the HR expert, who are using our software to do their core job.
And that's, like, you know, kind of more of the consumer product mindset: usability, a great product, it has to be valuable, they have to get what they need out of it. But we also have to think about how that then turns around to make the person who chose our software the hero. Right? So you have to prove the ROI on those things.
They might be thinking about metrics like employee engagement, productivity, efficiency, customer satisfaction, stuff like that. And then, ultimately, that overarching solution has to drive a business result that a CIO would value, you know, that makes them the hero for their board. Right? So I'd say that, to me, is the difference in enterprise design and research: we're thinking about those three different audiences, which we're trying to solve for at very different levels, but ultimately our product solution has to do all three.
Yeah. Thank you for the question.
Amy Lokey, thanks so much for being on Design Better.
Thanks so much for having me. It's great to be here with everyone. Thank you.