How to drive product waste out of your development process

    For decades, we’ve been forced to trade off between speed and quality when making software product decisions. But now, fast feedback lets you make and defend higher-confidence decisions based on real customer feedback without slowing down the project. The result: less rework cost, more effective products, and happier employees.


    Executive summary: Product waste burns engineering time

    Up to 50% of software engineering budgets are wasted due to corporate politics and the “fail fast” guesswork built into Agile development. The current level of product waste is a huge burden on companies that are facing intense pressure due to the economy, tariffs, and the challenges of AI. Leading-edge companies are discovering that product waste can be substantially reduced through fast feedback—the use of human insight systems to validate and defend product decisions in close to real time. This enables them to get more productivity from their engineering investment, and to move faster with high confidence that they’re building the right thing.

    This guide describes what they’re doing and how you can take advantage of fast feedback in your business.

    Product waste is a huge burden on engineering budgets

    Product waste—avoidable product rework driven by process problems and business politics—is an under-recognized drag on the productivity of software development teams. It’s hard to measure exactly how much rework is costing us, but people who have studied it closely say it’s massive. The USC Center for Software Engineering estimated that avoidable rework consumes 40%-50% of product development budgets (source), while product management consultant Rich Mironov estimated that 35% to 50% of software development budgets are wasted due to inadequate discovery (source).

    There are two main drivers of product waste:

    • The “fail fast” philosophy of Agile builds rework into the development process
    • Business politics and ineffective discovery often impose poor decisions on product teams

    Here’s detail on each driver.

    Fast failure: necessary but inefficient

    The idea of failing fast has become so central to Agile development that it’s almost a religious belief. It’s been the subject of countless articles and even full books. The concept has been with us for decades. In 2011, the Harvard Business Review called it “Failing by Design” (link). In Google Trends, the phrase “fail fast” first appeared in spring of 2006—the same time as the release of Amazon Web Services, which powered wide adoption of Agile in startups:


    Searches for “fail fast,” according to Google Trends

    We’ve become so focused on the “fail fast” mantra that it’s easy to forget that the goal of failing fast isn’t actually to fail, it’s to maximize your speed of learning, so you can evolve a better product quickly. Learning is essential because our guesses about how customers will think and react are shockingly bad.

    The trouble with guessing. In the day-to-day process of product development, product managers are accustomed to doing an enormous amount of guessing. When we surveyed PMs, more than half of them said they frequently guess about how customers will react to a new feature. We guess a lot because we’re all moving very fast, and there isn’t time to do traditional market research on most decisions. Instead we fail fast: make a quick guess, release a new version, and then track the analytics to see if things improve.

    Many of us don’t realize how inaccurate our guesses are. The A/B testing community has studied this issue for years, and carefully designed experiments show that even the most experienced managers correctly predict customer reactions only about a third of the time. You can read a summary of the research here.

    That sounds shocking, but it’s not because we’re all stupid; it’s because people are very complex. Experienced product managers recognize how hard it is to anticipate customer reactions. These quotes are from the book Trustworthy Online Controlled Experiments:

    • “I’ve been doing this for five years, and I can only ‘guess’ the outcome of a test about 33% of the time.” - Regis Hadiaris, Quicken Loans
    • “Get used to, at best, 70% of your work being thrown away.” - Farheed Mosavat, Slack
    • “80% of the time you are wrong about what a customer wants.” - Avinash Kaushik, Google
    • “90% of what Netflix tries is wrong” - Mike Moran, Netflix

    Failing fast burns time and money. It’s a more efficient way to develop software than old-school Waterfall processes only because traditional market research is so slow. Even with the high rate of failure for guesses, it’s more efficient to guess and iterate than it is to wait for research on every decision. But companies are starting to recognize that failing fast is more a necessary evil than an ideal state. The cost of guessing and iterating is high:

    • Because our guesses are so often wrong, fast failure produces an enormous volume of rework for engineers, raising development costs and burning out engineers, who don’t like redoing their work.
    • It also hurts brand image when customers use a less-than-optimal version of a product.
    • And it can cause a company to miss a business opportunity if a competitor gets to product-market fit faster.

    A quick history of Agile: How business conditions drive development process

    Everyone in tech knows that Agile improved the productivity of software companies. But many people don’t know what drove the shift to Agile. Although you might think technology changes would be the main driver, Agile was actually driven by changes in the logistics and business practices of software companies. You need to understand what happened in order to anticipate where Agile will go next. So here’s a brief look back at what drove Agile.

    In the days before the Internet, most software was installed on the customer premises (whether on a mainframe or a PC). For mainframes, software was often pre-installed in the computer. If not, it was delivered on physical media like punch cards and magnetic tape. (I started my career as a Mac developer, and we delivered software on floppy disks, distributed through the mail and in computer stores.)

    This physical distribution process drove the software development process. Because it was incredibly slow and expensive to send out a bug fix or other change, a program had to be excellent on the first release. Companies evolved an elaborate development process that often included these steps:

    • Do market research (this took months to implement; remember there was no Internet)
    • Create a Market Requirements Document (a very detailed description of customer problems and how to solve them; when I was at Apple, their MRDs were works of art and often longer than college term papers)
    • Create a Product Requirements Document (a formal specification of the product’s features and development schedule, responding to the MRD)
    • Development (which took a lot of time because you had to deliver a fully mature product on first release)
    • Extensive alpha and beta testing to ensure the absence of bugs
    • Shipment

    The whole process could easily take 18 months or more, which sounds incredibly slow now, but actually was the most efficient way to create software given the logistics of the time. Fixing a bug or tweaking a feature in mainframe software could mean sending people around to every customer site. In PC software it meant mailing replacement disks to every customer, a massive expense for a large developer and potentially catastrophic for a small one. It was far safer to spend extra time up front perfecting the product.

    The rise of the Internet and cloud computing in the early 2000s freed software companies to work differently. Software hosted in the cloud could be updated constantly. Even if the software was installed locally, fixes to it could be pushed over the Internet anytime at very low cost.

    Taking advantage of this change in logistics, software development teams evolved their process, in several ways:

    • Since it was easy to change things, software could be created and deployed in small iterative increments
    • That iterative process made elaborate MRDs and PRDs less necessary
    • Bug testing could also be less rigorous. Obviously you don’t want widespread catastrophic bugs, but minor bugs were now an annoyance rather than a disaster.
    • Analytics could be used to identify many problems rapidly, helping the team to quickly decide what to do next

    This new approach was named Agile development. The old development process was retroactively named “Waterfall,” a reference to its cascading nature.


    Waterfall vs. Agile development. Agile is faster and utilizes engineering throughout most of the process. (Conceptual chart, not to scale.)

    Agile substantially shortened the time needed to deploy new software. It also enabled more efficient utilization of engineers, since they didn’t have to wait around for the planning and bug-finding processes to be completed before and after their part of the work. (In Waterfall, organizations tried to sequence multiple development projects so the engineers would always be busy, but in practice there was always some downtime or inefficient use of people.)

    Internal waste: The price of politics

    The other common source of product waste is flawed decision-making within a company, driven by internal politics and poor communication. Rich Mironov is a well-known consultant and coach to product executives. He described this form of product waste in an excellent essay (source). Our interview with him gives more details on the problem (source). Mironov’s observations are very similar to stories we hear frequently from product executives. Here’s a summary:

    “Roadmap amnesia” is rampant in many companies. Roadmap amnesia happens when a company makes a well-reasoned long-term plan for its products, but then overrides the plan due to short-term opportunities or issues. Obviously it’s sometimes necessary to adjust, but Mironov says that in many companies the short-term issues almost always override the long-term ones. Here’s how he describes it:

    “Our CEO got a call from (insert big customer name): JP Morgan Chase, Ford, Deutsche Bank, whoever it is. And somebody said, ‘if you could just’—and it always includes the word just—‘do X.’ Roadmap amnesia is where everyone else, except the engineering and product folks in the executive suite, has their brains wiped of everything we’ve committed to and every reason we had and every decision we’ve made because a new prospect or a business opportunity includes the word money.”

    He also cited what he calls the “airline magazine problem:”

    “On Tuesday your CEO was on a United flight and they had some puff piece in their inflight magazine about Six Sigma. And he got off the plane and he said, ‘McKinsey’s quoted as saying Six Sigma’s going to turn our company around, get on it.’ And then on Thursday, the same executive was on a Delta flight—which has a piece on Agile or customer centricity or machine learning or fill in whatever bulls*** you want here—and they come off the plane on Thursday saying ‘well, clearly Six Sigma didn’t fix all our problems since Monday, and I just saw that Boston Consulting Group has told us that the future is all about machine learning and AI. When are we going to do them?’ We’ve all been here, right?”

    The problem is deepened by old-school thinking about product decision-making. Mironov says many companies “think of software development as a manufacturing process with classic assembly-line success metrics:  [a] building exactly what the ‘go-to-market’ team tells Engineering to build, [b] in the order demanded, and [c] done as fast/cheaply as possible.  Product Management (if it exists in old-style IT) is tasked with collecting each organization’s shopping list, then providing enough technical specifications that Engineering can build it.”

    The result is products that underperform. Mironov’s examples:

    • “Products released on time/on budget that don't deliver positive business outcomes.
    • “Dozens of new features that are rarely used and make the underlying products harder to navigate.
    • “Beautiful workflows designed for the wrong audience.
    • “Local optimizations that reduce total revenue.
    • “Single-customer versions that slowly consume our entire R&D organization with one-off single-customer support and upgrades.
    • “Acquired products that we thought could easily be integrated but (in fact) need total replatforming to interoperate with our main products.
    • “Project teams that ship v1.0 and are then re-assigned elsewhere, leaving orphaned software that is quickly discarded without essential bug fixes or v1.1 features.  
    • “Commitments to new capabilities that we know can’t work as promised.”

    The root cause of all these problems, Mironov says, is that product teams typically speak the language of technology and process, whereas most of the rest of the C-suite speaks the language of business, focused on revenue and profit. Unless product leaders can translate their work into business terms, backed by evidence that’s persuasive to other executives, they won’t be able to defend their priorities.

    As Mironov puts it, “Don’t go into that meeting with a spreadsheet with nine columns and 600 rows, and expect that you’re going to walk everybody in the executive team through your spreadsheet and convince them that they don’t want that $1 million deal…. Rather than ‘let me lecture you for a couple of hours about how Agile works,’ the product leader needs to explain why your sales team isn’t going to make quota this quarter and will get fired.”

    Fast feedback cuts product waste

    Fast customer feedback, enabled by human insight systems, is creating new ways to reduce both drivers of product waste. Within the product development process, instead of failing fast, you run a human insight test. You can get customer feedback almost instantly on virtually any idea or experience—within a sprint, without delaying the project. This lets you learn even faster than failing fast. The fast-feedback approach gives several big benefits to a product team:

    • It saves you time and money because you’re not wasting as many sprints.
    • It improves employee and customer satisfaction because they don’t have to go through the disappointment of failing and trying again.
    • It improves your marketing efficiency because you’re more likely to have a hit product at first launch.

    Fast feedback also helps a product executive push back on ad hoc changes to the product plan. When a change is proposed, it can be tested immediately. The customer video from fast feedback gives a product executive compelling evidence of customer reactions and thinking, moving the debate away from impressions and anecdotes and toward balanced consideration of business issues. That doesn’t mean fast feedback will kill every proposal for a change, but it’s a powerful tool to defend against the bad ideas.

    Here’s more information on the problems you can solve with fast feedback, and how you can deploy it.

    How fast feedback reduces waste in the development process

    Problem discovery

    • Time sink: slow, hard to find participants, scheduling difficult, no-shows
    • Teams often use imperfect substitutes

    Design

    • Test only at the end (too late)
    • Hard to communicate the why behind the design to engineering

    Development

    • Validate feature tweaks
    • Settle team disputes
    • Deal with executive “advice”
    • Validate the first-time experience

    Post-launch

    • Form a better hypothesis for analytics
    • Improve success rate of A/B tests

    Friction points in the development process that can be reduced through fast feedback

    Fast feedback in the discovery stage. This is the time when you listen intently to customer needs and ideas, so you can make high-quality decisions on their behalf. Everyone agrees that it’s a critical part of the product process, but it’s also a notorious time sink. It’s often difficult and slow to recruit the right participants for discovery, scheduling is a pain, and there are always no-shows. The interviews themselves are also time-consuming. 

    As a result, many companies skimp on the process. Imperfect discovery can give you a false sense of security because the feedback you get isn’t representative of reality:

    • It’s common for companies to interview their current customers rather than new ones, because they are easier to find. This puts you at risk of developing for your base rather than growing it.
    • Because it’s hard to recruit, companies will often talk to the same people repeatedly, perhaps in an advisory panel. Those panels quickly start to think much like your employees because they have far more context and interaction with you than does your typical customer.
    • Some companies settle for interviewing people who aren’t exactly the target customer but who are more readily available. That produces unknown biases in your learnings.
    • Companies often rely on feedback from industry analysts, which leaves you vulnerable to their blind spots and assumptions, and which also makes it hard for you to differentiate your products because the analysts are also talking to your competitors.
    • We’ve also seen companies focus on social listening (gathering feedback from social media comments) rather than customer interactions, which creates a huge risk that you’ll optimize for the needs of outspoken fanatics and power users who dominate online conversations, rather than typical customers.

    A good human insight solution will let you automatically recruit, schedule, and record interviews with your exact target customer within hours. You should also be able to do “self-interviews” in which the customers read your questions onscreen and answer them without the need for a moderator. This saves you time, and sometimes gets you more candid feedback (some people will tell their phones things that they would never say to a live human being).

    When discovery is done right, you should be able to complete it within days rather than weeks or months. This lets you do high-quality discovery for every project. With that discovery in hand, the project will start off in the right direction, and you can also share the discovery findings (including very persuasive customer videos) with the company to align stakeholders behind your plan and make later deviations less likely. The more customer reality you bring to stakeholders early on, the less likely they are to challenge your plans later.

    For more information on making discovery interviews easier, see our article here.

    Fast feedback for the design phase. (We assume here that your company does a separate design phase. If you don’t, then this advice applies to the beginning stages of your development process.)

    Why do you design before you build? Ultimately, it’s about reducing engineering cost. You want to get the design as perfect as possible before the engineers start putting weeks and months into implementing it. 

    The earlier you get feedback on your designs, the sooner you’ll identify problems, and the less rework you’ll need to do later. Many companies delay customer feedback until the end of design, reckoning that it will be more effective to test a finished high-fidelity interactive prototype. This is actually the worst time to get feedback, because there’s usually no time left to fix any problems you find.

    Instead, the best practice is to start gathering customer feedback from the earliest sketches, and run quick tests again after every significant iteration on the design. A good human insight solution will let you test images and even line drawings, and it’ll be fast enough to get you feedback without slowing down the process.

    The results from fast feedback tests also help designers communicate their ideas to the engineering team. It’s commonplace for engineers to push back on a particular design element if it’s hard to implement. Videos of customers reacting to a design are very persuasive evidence for why your design is appropriate and should not be changed.

    During development. There are several situations during development that benefit from fast feedback. The first is when you have to make a significant change to a customer-facing element of the product. For example, conditions in the market might change and require a tweak to how a feature is implemented. This sort of change might severely damage the overall customer experience, but you won’t know it unless the change is validated through a quick human insight test.

    The second situation is when you’re getting internal debate about a feature or other issue from team members or a senior executive. These situations can be slow to resolve and frustrating for the team due to the passionate beliefs of many product people. There can also be a layer of political tension if you’re dealing with a senior executive. Fast human insight can usually settle the disagreements within hours, with objective video evidence of how actual customers react to the issue. Few team members will argue with direct customer feedback, and it also enables a team to push back on a senior executive with evidence, without creating hurt feelings.

    We all aspire to operate in a company culture that makes decisions crisply and logically, but when that doesn’t happen, fast human insight is a powerful way to herd the cats.

    The third development situation where fast feedback helps is pre-launch validation of the first-time experience, especially for mobile apps. Failing fast can hurt you especially badly in mobile, because bad reviews on the app store can haunt you forever. Launching a mobile app is a bit like launching a movie; you’re much better off getting it right before you release it. A good human insight solution will let you test the full customer experience on an unreleased app.

    After launch. Human insight can make you more efficient by helping you form faster, better hypotheses for two common problems. 

    The first situation is when your analytics report a problem with user flow. Typically, you’ll see that customers are behaving as expected until they get to a particular point in the experience, and then they’re either going someplace unexpected or they’re dropping out altogether. For example, picture a checkout process where many customers are abandoning their carts when it’s time to enter their shipping address. Although analytics packages are great for identifying these situations, they don’t do much to diagnose them. Your team may have to run several experiments to figure out what’s wrong. It’s much faster to run a quick human insight test, take people to the problem step, and then ask them to describe what they’re thinking and why. This will enable your product team to make a high-quality hypothesis on what’s wrong, so they have a good chance of fixing the problem on the first try.

    The next situation is improving the efficiency of A/B tests. Depending on who you ask, between 50% and 80% of A/B tests fail to give a statistically significant winner. That can waste a lot of time, especially if you’re testing a part of the app or website that has relatively low traffic, and therefore needs a lot of time to run a test. Using fast human insight, you can pre-qualify the variations for an A/B test in hours, identifying the ones that are most likely to succeed, and flagging any problems in wording or design that might have caused an otherwise good variation to fail. This can significantly increase the success rate of your A/B tests. For more tips on mixing human insight with A/B tests, see our article here.
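To make the runtime problem concrete, here’s a back-of-the-envelope sketch of why low-traffic A/B tests take so long, using the standard two-proportion sample-size formula. The baseline rate, lift, and traffic numbers below are illustrative assumptions, not figures from this guide:

```python
import math
from statistics import NormalDist


def ab_sample_size(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Per-variant sample size needed to detect a change in conversion
    rate from p1 to p2, via the two-proportion normal approximation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)


# Detecting a 10% relative lift on a 5% baseline conversion rate
# requires roughly 31,000 users per variant. At 1,000 visitors/day
# split across two variants, that's about two months of runtime.
n = ab_sample_size(0.05, 0.055)
print(n)
```

Pre-qualifying variants with a quick human insight test doesn’t change this math, but it raises the odds that the two months of runtime are spent on a variation that can actually win.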

    Think of fast feedback as lubricant that helps the whole product development process move faster, by removing friction from the process.

    How UserTesting helps

    UserTesting’s human insight solution is purpose-built to make it easy to incorporate fast feedback in business decisions. Key capabilities include:

    • A large, diverse, and vetted contributor network that can usually gather feedback from target customers in hours
    • Templates that make it easy for non-researchers to collect fast feedback on their own
    • Process guardrails that help to ensure quality insights
    • AI-driven analysis to make insight collection fast
    • The ability to easily clip and share video evidence of customer reactions, aligning product teams and stakeholders across the company
    • An extensive pro services team to help you run tests and manage the process

    The road to fast feedback: How to drive a cultural change

    Fast feedback is not just about running tests, it’s a cultural and process change in which everyone in product—ranging from individual product managers to C-level execs—recognizes the benefits of insight-driven decision-making. That recognition can be hard for some people. It means that individual product managers need to be humble and honest with themselves about the problems with guessing. Executives need to accept that customer feedback may fail to validate the great new idea they have. And the sales team needs to understand that the fix an important customer is asking for may not actually be the best decision for the company.

    We’ve helped many companies spread fast feedback throughout their product process. Every company’s culture is different, so the process of changing that culture varies as well. But we’ve seen repeated patterns and predictable points of failure that can be overcome with advance planning. The most successful processes have been documented in our guide to scaling insights, which you can find here.

    Here are four additional suggestions on how to drive change, from Rich Mironov:

    1. Document the results of poor decision-making

    Companies have very short memories about previous mistakes, so the product executive should document the consequences of poor product decisions the company made in the past due to guesses and unvalidated enthusiasms. 

    “Find the nicest possible way to show a stack of evidence that’s going to embarrass people slightly about things we actually did, by name. You remember the re-platforming project, which we never finished? And remember, we spent four million bucks and we closed one customer, which by the way made us $100K? It’s equivalent to helping somebody diet. They’re supposed to write down everything they eat, and then they notice that they’re snacking all day long, right? Until you drag them through the details, they’re never, ever going to see it.

    “The challenge, I think, is to get recognition from the C-suite that there’s a pattern here. We have to get past optimism and magical thinking. We have to get to this sort of hard-nosed, what’s-really-been-happening-here. Because you know, I’ve never met a CEO who didn’t think engineering was lazy. And over-resourced. And under-enthused. But when you unpack it, what I find over and over again is that we’re putting engineering on things that we shouldn’t. And that’s a political problem. It’s not a spreadsheet problem.”

    2. Include the full core team of a project in discovery

    The team will be more aligned and effective if it shares a consensus about exactly what customers want and need. The best way to build that consensus is to have all of them participate in discovery interviews, rather than leaving it only to the PM or researcher.

    “In discovery, I’m a big proponent that for each of those rounds of discussion, you want a product person and a designer/UX person and an engineer on the call—in real time.

    “You want to have them all on the call because the engineer hears different things from the same words. And the designer hears different things from the same words. So, having one of those three people do all the translating generally leaves a good portion of the learning on the floor. Your engineer’s really listening for scalability issues and security issues and data items and which system does it live in. And your designer’s thinking all about which step comes first and ‘do I put ’em all in one form?’ And your product person’s thinking a lot about why aren’t you using it, and whom do we have to get permission from to sell it to you, and does it make money.

    “You need a little meeting etiquette here, like only one of those three people gets to talk and it’s never the engineer. The person who talks is either your design/UX person, who is often very good at interviewing. Or your product person, who is often very good at interviewing leads.

    “And so once the team understands what the problem is, they get together around the whiteboard or the Miro or whatever it is, and the team figures out that we have five or six alternate solutions. And the team argues about which is better. Instead of (noise of chest thumping) ‘I’m the product manager. And I have an MBA from some famous university. And I’m right.’

    “Product managers have to have a little bit of humility and a little bit of empathy to know that half the time they’re wrong. And that it’s more important to get to the right answer or a right answer than for me to be self-important. A good test of product leadership is, can you leave your ego by the door?”

    3. To make a decision stick with the executives, translate everything into money

    Product teams generally have a culture focused on logic and process, and tend to be isolated from quarterly financial issues. Much of the rest of the company, especially sales and marketing, live and die based on quarterly financial results. The issues that product teams often think about the most aren’t even on the radar for much of the rest of the company. So in order to persuade the company, product has to translate its concerns into short-term financial consequences.

    “I have a little joke. It’s a story I tell to my CPO coaches: There’s a ‘reverse dog whistle effect.’ A dog whistle is a whistle dogs can hear, but you can’t hear it. The reverse dog whistle effect is anything that a product person says to the C-suite that’s not denominated in money cannot be heard. 

    “Nobody [in the C-suite] wants to hear about tech debt. Nobody cares about the roadmap. Nobody cares how hard engineering’s working or how many designers you have.

    “What you have to say is, ‘We can build that. We’re guessing it’s going to cost us two or three million bucks. That will push out the upgrade, which we all agreed around this table was worth $20 million. And our back-of-the-envelope guess is that we’ll get $50,000 for that $2 million spend.

    “We have to talk about these things with currency symbols, or nobody cares.”

    4. To establish your credibility, start with small measurable wins

    In any business change, it’s often most effective to focus on small, measurable, fast wins. They let you get the principles established quickly and build momentum to make deeper change.

    Mironov said, “rather than the big win, what you’re looking for is to be able to say, ‘we changed the headline on this email that invites folks to our webinar about whatever we sell. And this headline got 4% more signups than that headline.’ Because everybody downstream actually is desperate for more people to show up.

    “Find small examples where you can prove that the science works. Because the big examples take too long, they’re too expensive. And you have to build trust. Before anybody will let you spend a lot of money on something, you have to build trust by showing them the small wins.”

    Customer examples

    Music to match your moods at Deezer

    The music streaming service Deezer wanted to drive higher usage of its services by casual users. Discovery interviews conducted through UserTesting’s system revealed that casual users select music differently than more dedicated users. While the dedicated users tend to pick music in particular genres or playlists, casual users pick songs based on their moods. Armed with this insight, Deezer modified its AI-based music selection feature, Flow, to identify the emotions of users. It also classified more than 90 million songs by their emotional content.

    The result: Hundreds of thousands of customers adopted Flow, and their music listening was 35% higher than that of non-users. Deezer is continuing to use UserTesting to refine its service around the world.

    Internal innovation at Indeed

    Like many tech companies, Indeed, the world’s largest employment site, runs an internal incubator for new product ideas. Teams are given three months to develop an idea and prove customer traction. Live interviews through UserTesting are used in the initial discovery sprint, and throughout the three-month process, teams are required to continue testing with customers every week.

    The result: The incubator has funded 34 products since 2017, with at least six graduating into the main organization. New products include a universal resume for Japanese job seekers, and Indeed Hiring Events, which helps enterprise employers manage job fairs.

    Adobe: Feature optimization in Photoshop

    When Adobe launched a new image extraction feature in Photoshop, early customer adoption was lower than expected. The company tried to use feedback from online user forums to diagnose the problem, but realized that they were only getting feedback from experienced users, who couldn’t speak for new adopters. So they used human insight tests through UserTesting to connect with new users and get fast feedback throughout the design and development phases. 

    The result: In ten weeks, they changed the naming, interface, and menus of the feature. The changes made the feature much more accessible to novice users.

    AI development at Amazon Web Services

    Kendra is Amazon Web Services’ AI-enhanced tool that helps companies create their own search products. Kendra scans common data formats, both structured and unstructured, and then analyzes the contents to integrate that information into search results.

    The Kendra team used human insight tests to discover the pain points of search developers, and then tested prototypes as they iterated on the design. “UserTesting gives us speed…to look across a number of different things quickly, the speed to recruit, the speed to analyze, the speed to engage with a wide range of personas,” said Matt Menz, VP of Customer Experience at AWS.

    The result: Based on findings from UserTesting, Amazon added a dashboard to Kendra that helps search developers track the effectiveness of searches for users, fine-tune the search models, and improve them over time. Customers also use the platform to automatically fix or remove dead links and track user behavior.

    Quip: Fast high-confidence iteration

    Software startup Quip developed a business software suite designed to compete with Office and Google Docs. With such huge competitors, Quip needed to be nimble and quickly resolve any disagreements within the team. They used human insight tests to continually track user reactions throughout the development process, at one point completing 12 iterations in a single week.

    The result: The company was acquired by Salesforce for $750 million.

    To learn more…


    Webinar: How to drive product waste out of your development process
