Paging Dr ChatGPT
A few years back I wrote about Google wanting to be the centre of the digital world for your health record [link to post]. They’re still banging on about it, but a new player has decided to barge into a crowded field, this time with added AI.
OpenAI, the company behind ChatGPT, has launched a new service focused on all things health. They have, in a fit of creativity, called it OpenAI for Healthcare [https://openai.com/index/openai-for-healthcare/].
And just like Google, they want to service the whole stack.
They want to go big (targeting the big-H health providers)
“We’re introducing OpenAI for Healthcare, a set of products designed to help healthcare organizations deliver more consistent, high-quality care for patients—while supporting their HIPAA compliance requirements.”
They want to go small (targeting small health providers and even users themselves)
“We’re introducing ChatGPT Health, a dedicated experience that securely brings your health information and ChatGPT’s intelligence together, to help you feel more informed, prepared, and confident navigating your health.” [https://openai.com/index/introducing-chatgpt-health/]
They want to be everything in between.
So what does that mean for the average user?
Is this any different to what Google or the other competitors in the medical space are doing?
Well, yes and no.
On the face of it, no: what OpenAI is doing is not so different to what everyone else in the sector is doing. They want to become the glue that holds the sector together, and to do that they need to integrate how the sector does business into their own setup. Everything from medical imaging analysis to treatment recommendations to recording patient/doctor appointments, and everything in between.
But yes, where OpenAI differs from, say, Google is the desire to make ChatGPT and its chatbots a central part of a user's medical life. They want to be the first port of call for users to discuss medical issues BEFORE they talk to a doctor. For this to work, they need as much of your medical information as possible so that ChatGPT has enough context to make the right calls.
I have some concerns.
Now it’s a fact that people are already uploading medical information into ChatGPT and other general chatbots. These bots don’t have the sort of guardrails or protections that would be required when dealing with sensitive personal information of a medical nature. They’ve also been proven to be less than reliable.
OpenAI argues that their bots can give better advice, and would be less likely to do things like recommend taking sodium bromide, if they have access to all the data needed to build context around the user.
Which is logical, from a certain point of view. Another approach might be to have a look at the main service, find out WHY it's telling people to do terrible things, and then fix it. You don't need a person's full medical history to do that.
Another thing that bothers me about this whole announcement is the distinct lack of input from the people who would be the subjects of all the information being pumped into this new system. You know, the patients, the people with disabilities or medical conditions.
Instead they say this:
“Over two years, we’ve worked with more than 260 physicians who have practiced in 60 countries and dozens of specialties to understand what makes an answer to a health question helpful or potentially harmful—this group has now provided feedback on model outputs over 600,000 times across 30 areas of focus. This collaboration has shaped not just what Health can do, but how it responds: how urgently to encourage follow-ups with a clinician, how to communicate clearly without oversimplifying, and how to prioritize safety in moments that matter.”
Which is great: you do need medical professionals involved in developing medical systems. But when you're building a system that is going to hold as much information about a person as this one would, and you want people to actively input their own data from fitness apps and so on, you need to know the challenges and concerns THEY HAVE as well.
The idea of a unified health record isn't new. The dream of a customised, unbiased expert in the patient's particular issues, able to pull all the relevant information together and make suggestions for their care, isn't new either. Hell, people have been using the internet for decades to try and work out what that rash is.
What IS new is introducing a pseudo-sentient machine layer that, no matter how well you train it, still has a chance to just make stuff up, and then appearing to give it the authority to make medical recommendations.
Finally, there's a question that applies to every operator of a chatbot: what happens when the chatbot goes rogue? Who is responsible for the consequences? There are cases wending their way through the courts in the US at the moment, but we're still very much in the FAFO stage.