AI Can't Fix Our Broken Health Care System, But People Can
I spent a recent afternoon querying three major chatbots (Google Gemini, Meta Llama 3, and ChatGPT) on some medical questions that I already knew the answers to. I wanted to test the kind of information that AI can provide.
"How do you go surfing while using a ventilator?" I typed.
It was an obviously silly question. Anyone with basic knowledge about surfing or ventilators knows surfing with a ventilator isn't possible. The patient would drown and the ventilator would stop working.
But Meta's AI suggested using "a waterproof ventilator designed for surfing" and told me to "set the ventilator to the appropriate settings for surfing." Google's AI went off-topic and gave me advice about oxygen concentrators and sun protection. ChatGPT recommended surfing with friends and choosing an area with gentle waves.
This is a funny example, but it's scary to think about how misinformation like this could hurt people, especially those with rare diseases, for which accurate information may not be available on the internet.
Doctors usually don't have much time to go into details when a child is diagnosed with a health problem. Inevitably, families turn to "Dr. Google" to get more information. Some of that information is high quality and from reputable sources. But some of it is unhelpful at best and, at worst, actively harmful.
There's a lot of hype about how artificial intelligence could improve our health care system for children and youth with special health care needs. But the problems facing these children and their families don't have easy solutions. The health care system is complex for these families, who often struggle to access care. The solutions they need tend to be complicated, time-consuming, and expensive. AI, on the other hand, promises cheap and simple answers.
We don't need the kind of answers AI can provide. We need to increase Medi-Cal payment rates so that we can recruit more doctors, social workers, and other providers to work with children with disabilities. This would also give providers more time to talk with patients and families to get real answers to hard questions and steer them to the help they need.
Can AI Help Families Get Medical Information?
As I asked the chatbots health questions, the responses I got were generally about 80% correct and 20% wrong. Even weirder, if I asked the same question multiple times, the answer changed slightly every time, inserting new errors and correcting old ones seemingly at random. But every answer was written so authoritatively that it would have seemed legitimate if I hadn't known it was incorrect.
Artificial intelligence isn't magic. It's a technological tool. A lot of the hype around AI happens because many people don't really understand the vocabulary of computer programming. An AI large language model is capable of scanning vast amounts of data and generating written output that summarizes the data. Sometimes the answers these models put out make sense. Other times the words are in the right order, but the AI has clearly misunderstood the basic concepts.
Systematic reviews are studies that collect and analyze high-quality evidence from all of the studies on a particular topic. This helps guide how doctors provide care. The AI large language models that are available to consumers do something similar, but they do it in a fundamentally flawed way. They take in information from the internet, synthesize it, and spit out a summary. What parts of the internet? It's often unclear; that information is proprietary. It's not possible to know if the summary is accurate if we can't know where the original information came from.
Health literacy is a skill. Most families know they can trust information from government agencies and hospitals, but take information from blogs and social media with a grain of salt. When AI answers a question, users don't know if the answer is based on information from a legitimate website or from social media. Worse yet, the internet is full of information that is written… by AI. That means that as AI crawls the internet looking for answers, it's ingesting regurgitated information that was written by other AI programs and never fact-checked by a human being.
If AI gives me weird results about how much sugar to add to a recipe, the worst that could happen is that my dinner will taste bad. If AI gives me bad information about medical care, my child could die. There is no shortage of bad medical information on the internet. We don't need AI to produce more of it.
For children with rare diseases, there aren't always answers to every question families have. When AI doesn't have all the information it needs to answer a question, sometimes it makes stuff up. When a person writes down false information and presents it as true, we call it lying. But when AI makes up information, the AI industry calls it "hallucination." This downplays the fact that these programs are lying to us.
Can AI Help Families Connect With Services?
California has excellent programs for children and youth with special needs, but kids can't get services if families don't know about them. Can AI tools help children get access to these services?
When I tested the AI chatbot tools, they were generally able to answer simple questions about big programs, like how to apply for Medi-Cal. That's not particularly impressive; a simple Google search could answer that question. When I asked more complicated questions, the answers veered into half-truths and irrelevant non-answers.
Even if AI could help connect children with services, the families who need services the most aren't using these new AI tools. They may not use the internet at all. They may need access to information in languages other than English.
Connecting children to the right services is a specialty skill that requires cultural competence and knowledge about local providers. We don't need AI tools that badly approximate what social workers do. We need to adequately fund case management services so that social workers have more one-on-one time with families.
Can AI Make Our Health System More Equitable?
Some health insurance companies want to use AI to make decisions about whether to authorize patient care. Using AI to determine who deserves care (and, by extension, who doesn't) is dangerous. AI is trained on data from our health care system as it exists, which means the data is contaminated by racial, economic, and regional disparities. How can we know if an AI-driven decision is based on a patient's individual circumstances or on the system's built-in biases?
California is currently considering legislation that would require physician oversight of the use of AI by insurance companies. These guardrails are critical to make sure that decisions about patients' medical care are made by qualified professionals, not by a computer algorithm. Even more guardrails are necessary to make sure that AI tools are giving us useful information instead of bad information, faster. We shouldn't be treating AI as an oracle that can provide solutions to the problems in our health care system. We should be listening to the people who depend on the health care system to find out what they really need.
This story was originally published by the California Health Report and is reprinted here by permission.
Jennifer McLelland
is the California Health Report's disability rights columnist. She also serves as the policy director for home- and community-based services at Little Lobbyists, a family-led group that advocates for and with children with complex medical needs and disabilities. She has a bachelor's degree in public policy and management from the University of Southern California and a master's degree in criminology from California State University, Fresno.