Do patients want AI to replace their doctors?

Not today. The use of AI in healthcare is intriguing, but most people aren't yet ready to trust it. Data protection, proof of accuracy, and human-driven alternatives can help drive acceptance.

The generative AI leaders are targeting healthcare

The public release of ChatGPT in November 2022 set off a global phenomenon of generative AI usage and adoption. It took only five days for ChatGPT to reach over one million users, and by January 2023 the platform was estimated to have reached over 100 million users, making it the fastest-growing platform in history at that time.

A similar pace of AI adoption is being seen at the enterprise level. This is evident in a recent Gartner poll indicating that 70% of organizations are currently in the exploration stages of implementing generative AI into their business functions and 19% are in a later pilot or production stage.

Sam Altman, the CEO of OpenAI (the company that operates ChatGPT), enthusiastically describes the possibilities for generative AI systems to greatly expand our scientific knowledge and specifically calls out the ability for these systems to help cure human disease. In a message on Twitter (now known as X), he praised companies leveraging ChatGPT to expand access to health information to patients who would not otherwise be able to afford it. This is a view supported by many who identify healthcare as one of the top industries expected to be disrupted through the widespread proliferation and adoption of generative AI.

This view was echoed by Kevin Scott, the Chief Technology Officer at Microsoft. While being interviewed in a recent podcast, he told an anecdote about his immunocompromised brother, who lives in rural central Virginia with limited access to quality medical care. Scott imagined a scenario in which patients like his brother would be able to access better medical information through generative AI than they could get from a local doctor. And a recent study suggests that ChatGPT is already as good as or better than a doctor at diagnosing patients.

But not everyone shares an optimistic point of view about AI in medicine. Gary Marcus, professor emeritus of psychology and neuroscience at NYU and a long-respected authority on the topic of AI, said AI chatbots have been known to provide false and inaccurate information. Marcus pointed out, “We know these systems hallucinate, they make stuff up. There’s a big question in my mind about what people will do with the chat style searches and whether they can restrict them so that they don’t give a lot of bad advice.”

Dr. Nathan Price, Chief Science Officer of Thorne HealthTech and co-author of The Age of Scientific Wellness, shared an alarming story in which an AI system was asked to provide a solution to prevent cardiovascular disease. The system confidently recommended giving everyone carcinogens so that instead of dying of heart disease, they would all die of cancer.

How do patients feel about generative AI in healthcare?

Because there is so much disagreement, we wanted to ask actual patients what they think. We surveyed them in the US, and followed up with video self-interviews, all through the UserTesting platform (details at the end of this article). We found that most people are not yet ready to start putting their faith in the medical recommendations provided by AI chatbots.

As a baseline, we asked our participants to identify how often they used online information sources when they had questions about their health. Most people who have Internet access have used it to look up health information.

Among the people who have used the Internet for health information, search engines and healthcare websites like WebMD are the most often-used online resources:

[Chart: What online resources do you use to research personal health questions?]

We then asked what specific types of information they research online. 82% look up information on symptoms, and 75% learn more about a medical condition that they or someone they care about has been diagnosed with:

[Chart: What type of health information do you research online?]

We then dug into the emerging role of generative AI. Our first finding was that despite all of the buzz, usage of ChatGPT and services like it is still relatively low. Only 16% of people with Internet access have ever used ChatGPT or a system like it. Of that 16%, a little under half have used it for healthcare questions (7% of the total population).
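As a sanity check, those figures fit together as a simple product (a minimal sketch; the 0.44 value for "a little under half" is an assumed figure, since the article reports only rounded percentages):

```python
# Rough arithmetic behind the reported shares.
chatgpt_users_share = 0.16   # share of people with Internet access who have used ChatGPT or similar
health_use_within = 0.44     # assumed value for "a little under half" of those users

overall_share = chatgpt_users_share * health_use_within
print(f"{overall_share:.0%} of the total population")  # consistent with the reported 7%
```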

Among the small number of people who have used ChatGPT for health questions, 87% were at least somewhat satisfied with the information that they received, a very good score. 

In online self-interviews, those ChatGPT users said they used it to seek potential causes for symptoms they were experiencing, to explore treatments for existing health conditions, and to research side effects of recent surgeries and of medications they were taking. They told us they liked how easy it was to quickly receive direct answers to questions, rather than having to weed through the typical deluge of information returned by Google searches or sites like WebMD.

Trust is central to generative AI adoption in healthcare

The issue of trust is one that we explored at great length in our research. We asked participants to indicate how comfortable they would be using an AI chatbot in a number of different healthcare scenarios. None of the scenarios scored particularly high, but getting medication reminders and scheduling appointments scored the least badly.

The activity that scored worst, by a substantial margin, was using ChatGPT to discuss mental health-related questions. More than half of our participants indicated a low level of comfort in potentially using an AI chat system to discuss mental health issues:

[Chart: Comfort using ChatGPT in healthcare scenarios]

We asked people to explain why they felt so uneasy using an AI chatbot to discuss mental health-related questions. They told us that trust, compassion, and empathy-building with a health professional are crucial components of successful mental health treatment. It's hard to imagine a patient building an empathy-based relationship with a chatbot. And as we know from a notorious example, empathy is something that AI chatbots still have quite a bit of difficulty with.

These are pivotal insights for the healthcare and healthtech industries when you consider that mental health is often identified as a promising use case for AI chatbots. There are already quite a few dedicated mental health chatbots available to patients on the market today. This may be a challenging business model if most patients still strongly prefer to seek mental health treatment from an actual human.

But the mental health concern was just an extreme example of the uneasiness that many people feel about generative AI in healthcare. We asked participants about a wide range of potential concerns, and many people rated all of them as sources of concern, including fear of errors, unreliable information, loss of privacy, and even lack of certainty that a patient would be able to input the right question prompts or that they would be correctly understood by the AI chatbot.

[Chart: Concerns about using ChatGPT for health]

For anyone creating and designing experiences that give patients the ability to use an AI chatbot in place of healthcare professionals, it is important to take note of these concerns. Roughly one third of the participants in our study expressed the highest level of concern within each of the scenarios of using an AI chatbot in a personal health setting. In our interviews, some participants even expressed fear of what they described as a dystopian future in which only the wealthiest among us would have access to qualified humans when seeking medical assistance.

Information accuracy was also a major issue identified by the participants that we talked to. Many individuals in our study told us that they were not sure if guidance from an AI chatbot could be trusted. They pointed out that many chatbots still do not cite credible sources as the basis of information that gets returned from user prompts. Some participants told us that they wouldn’t be confident that they even know how to effectively craft a prompt to obtain useful health guidance.

How to increase confidence in generative AI

While much of the feedback that we received centered around uncertainty and skepticism, some people did express excitement and optimism that generative AI could enhance the patient experience, or at least transform the bad parts of it. 

Participants expressed optimism about ChatGPT’s potential to serve as an alternative for medical professionals, particularly for patients lacking healthcare access. They also conveyed dissatisfaction with the inefficiencies of healthcare administrative staff and a waning trust in the expertise of health professionals. Many believe that an AI chatbot could provide quicker and more dependable healthcare services, viewing technology like ChatGPT as a viable option for addressing these issues.

We gave people a list of changes that might make them more comfortable using an AI chatbot in healthcare. Everything rated highly. The highest score went to the ability to escalate a conversation to a human healthcare professional if needed. Participants also indicated that scientific evidence of the accuracy of the information presented would be helpful, as would certainty that their privacy would be protected.

[Chart: What would make you more comfortable using ChatGPT for personal health information?]

In follow-up interviews, people gave more details on why they’re worried about privacy. There are significant fears that health data and information entered into a chatbot may be misused in the future, for example to determine eligibility for insurance and care. For anyone designing experiences leveraging generative AI in a healthcare setting, it is vital to convey to users a sense of robust privacy protections.

Conclusion: To drive adoption, create trust

Trust remains both the biggest barrier and potential opportunity in the adoption of generative AI in healthcare. There are hidden landmines throughout the intersection of AI and personal healthcare. If you are thinking about how generative AI fits into your patient experience, there are four crucial considerations to design around:

  1. Show patients that their privacy and personal health information will always be protected. It is essential to demonstrate to patients that interactions with generative AI adhere to HIPAA or the equivalent norms and regulations that safeguard personal health information. This is crucial to building their trust.
  2. Wherever possible, show evidence of scientific rigor in any medical advice or recommendation dispensed by AI chatbots. Patients want to know that the information they receive is accurate and backed by scientific research. The current experience commonly offered by AI chatbots is not enough to gain the trust and confidence of patients.
  3. Make it easy for patients to reach out to an actual person, even if their experience is guided by a generative AI chatbot. For most healthcare scenarios, patients want to know that they can escalate to a human healthcare professional if they need to.
  4. Keep your finger on the pulse of the needs and sentiments of patients. With AI technology changing rapidly, attitudes toward it are also evolving quickly. Don’t assume that today’s answers will resonate tomorrow. Talk to your customers and your stakeholders on a regular basis. 

Research methodology: In summer 2023 we surveyed 1,055 randomly chosen US adults using the UserZoom platform. The survey was followed up with online self-interviews via the UserTesting platform to explore the reasons behind the survey findings. The interview participants gave permission to use their video comments in this report.
