Well, that was fast. Just one week after OpenAI announced it will spin out a healthcare version of ChatGPT, we’ve seen backlash and ripple effects in both the health and digital media domains. This isn’t surprising, given the high-stakes nature of healthcare combined with lingering trust issues around conversational AI and LLMs.
Before going into the latest, the quick background is that OpenAI announced it will launch a version of ChatGPT that’s purpose-built for healthcare queries. As we examined, it makes sense to silo such highly specific and sensitive information (monetization implications included), but there could be challenges.
That brings us back to the aftermath. Among the events of the past week, a number of physicians spoke out against patient-facing AI agents like ChatGPT. Meanwhile, Google has moved in the opposite direction by removing AI Overviews from certain medical queries. Let’s take those one at a time.
Specialized & Siloed
Starting with the backlash from the medical community, many doctors have supported the use of AI to assist them in diagnostics and other functions, but not in patient-facing ways. The difference is that hallucinations or misinformation can be filtered out by the doctor who stands between the AI and the patient.
The potential for harmful effects has already been seen in ChatGPT, prior to the spinout of a healthcare-specialized engine. Several doctors have decried the practice of patients showing up with ChatGPT printouts filled with outdated or incorrect information about their condition or recommended care.
But some doctors see positive steps in ChatGPT’s move toward specialized and siloed AI for healthcare. For one, it will be focused on a given patient and their medical history, versus all the other noise. The healthcare silo should also mean more private and secure patient data that isn’t used to train general models.
While those debates play out, Google has moved in the opposite direction, as noted. After an investigative story by The Guardian about damaging health misinformation in AI Overviews, the outlet reports that AI Overviews have been removed for many of those queries. Expect this to be a moving target.
Security & Sensitivity
Stepping back, these ripple effects shouldn’t come as a shock to anyone. Conversational AI is still prone to hallucinations, and the technology faces deserved trust issues with the broader public. For these reasons, conversational AI tools like ChatGPT tend to get the most traction when applied to low-stakes areas.
That pattern was evident in our SMB research: applied to business functions, it means there’s hesitance to let AI take over critical functions like payroll. This is a different flavor of mistrust than that around healthcare-focused AI (the point of this article), but high-stakes sensitivity is the common thread.
Healthcare info is high stakes simply due to the consequences of following advice that may be hallucination-driven. We’re talking things like drinking bleach to cure Covid, or other guidance that could have damaging or even fatal outcomes. No amount of disclaimers is going to quell that concern.
Beyond high-stakes fodder, there’s the added concern around security and sensitivity. Will users trust an unproven entity like ChatGPT with sensitive and private health information? People trust Google with that information, but it took years to earn that trust. We’ll see if ChatGPT can do the same.
Header image credit: National Cancer Institute on Unsplash