40 Million People Are Now Asking ChatGPT for Medical Advice. Should You?

Here's something wild: more than 40 million people are now turning to ChatGPT every single day for health advice. Not occasionally. Not as a curiosity. Daily. That's roughly the entire population of California asking an AI chatbot whether that weird rash is nothing or whether they should be in the ER right now.

And listen, I get it. The American healthcare system is a Kafkaesque nightmare of hold music, surprise bills, and insurance portals designed by what I can only assume are sadists. So when you've got chest pain at 11 PM on a Tuesday and the alternative is either a $3,000 ER visit or waiting three weeks for your PCP, yeah, asking a chatbot seems reasonable. The thing is, it's not just reasonable anymore—it's becoming the new normal.

OpenAI just dropped a report showing that health queries make up over 5% of all ChatGPT conversations globally. That translates to billions of health-related messages every week. Among ChatGPT's 800+ million users, roughly one in four asks at least one health question weekly. And now they've launched ChatGPT Health, a dedicated medical feature where you can upload your actual health records and get personalized advice.

So the question isn't whether AI is becoming your doctor's unofficial assistant. It's whether that's terrifying, inevitable, or both.

Why Everyone's Suddenly Consulting Dr. ChatGPT

The numbers are striking, but the why is more interesting. Sixty percent of U.S. adults surveyed have used AI for health questions in just the past three months. That's not early adopters or tech nerds—that's everyone.

What are they asking about? Per the data:

  • 55% are checking symptoms

  • 52% value being able to ask questions anytime

  • 48% need help understanding medical terminology or instructions

  • 44% want to learn about treatment options

But here's the kicker: 70% of health conversations on ChatGPT happen outside normal clinic hours. It's not replacing doctors—it's filling the void when doctors aren't available, which is, you know, most of the time.

And it's not just WebMD-style symptom searches. People are using ChatGPT to decode medical bills and find billing errors. They're feeding itemized hospital statements into the chatbot and discovering duplicate charges or Medicare rule violations they'd never have caught themselves. They're comparing insurance plans, drafting appeal letters for denied claims, and figuring out what's actually covered before they schedule that procedure.
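If you'd rather do the bill-review trick programmatically than paste pages into the chat window, here's a minimal sketch using OpenAI's Python SDK. To be clear, this is ordinary prompting, not any official health feature; the model name and file path are placeholders, and it assumes you have an API key set in your environment.

```python
# Minimal sketch: ask a model to flag possible duplicate or questionable
# charges on an itemized hospital bill. Assumes the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY environment variable.
# "itemized_bill.txt" and the model name are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

with open("itemized_bill.txt") as f:  # your exported or transcribed bill
    bill_text = f.read()

prompt = (
    "You are helping a patient review an itemized hospital bill. "
    "List any line items that appear duplicated, any services that seem to be "
    "billed twice under different codes, and any charges worth asking the "
    "billing office about. Quote the bill's own line items; do not invent amounts.\n\n"
    + bill_text
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Treat whatever it prints as a list of questions for the billing office, not a verdict; the model can misread a billing code just as easily as it can catch one.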

The American healthcare system has become so deliberately opaque that an AI trained on internet text is legitimately useful for navigating it. Think about how messed up that is. We've built a system so complex that artificial intelligence is now a consumer protection tool.

The Hospital Desert Problem

There's a darker layer here. Nearly 600,000 health-related ChatGPT messages per week come from U.S. "hospital deserts"—regions where people live 30+ minutes from the nearest hospital. Rural America has been systematically losing healthcare access for decades as hospitals close and doctors flee to higher-paying urban markets. AI isn't solving that problem, obviously. You can't ChatGPT your way to an emergency appendectomy.

But if you're in rural Montana with a sick kid at midnight and the nearest pediatrician is 90 miles away, having something that can help you assess whether this is "drive three hours to the ER now" or "call the doctor in the morning" has real utility. It's a band-aid on a gunshot wound, but when that's all you've got, you use the band-aid.

Enter ChatGPT Health: Your New AI Doctor (Kind Of)

Given this massive adoption, OpenAI just launched ChatGPT Health in January 2026—a separate, encrypted space within the app specifically for medical queries. You can connect electronic health records, sync fitness apps like Apple Health or MyFitnessPal, upload lab results, and get advice "grounded" in your actual health data rather than generic information.

The interface looks clean: dedicated Health tab with a heart icon, separate storage from regular chats, purpose-built encryption. Conversations here aren't used to train OpenAI's models. If you ask a health question in the regular chat, it'll prompt you to switch to the protected space.

OpenAI worked with over 260 physicians across 60+ countries to build this. The goal was to train the AI on when to suggest seeing a doctor, how to ask good follow-up questions, when to use cautious language. They're positioning it explicitly as a "personal health assistant," not a doctor. It won't diagnose or prescribe. It's meant to help you understand health data, prepare for appointments, track wellness goals.

The feature launched on a waitlist and is rolling out to all ChatGPT users on web and iOS soon. Some capabilities—like connecting medical records—are U.S.-only at launch because of data privacy laws in Europe and the UK.

Let's be clear about what this is: OpenAI recognizing that 230 million people worldwide are already asking ChatGPT health questions every week and deciding to channel that into a safer, more controlled environment rather than trying to stop it. They're not creating demand—they're responding to it.

The Upside: Information Asymmetry Meets Its Match

There are legitimate benefits here. Healthcare operates on massive information asymmetry—doctors know everything, patients know nothing, and that power imbalance shapes every interaction. AI can help level that.

Take medical jargon. Doctors speak a different language, and they often forget to translate. When your cardiologist mentions your "ejection fraction" and "left ventricular function" and you smile and nod because you don't want to look stupid, then go home and have no idea what just happened—that's where AI shines. You can paste your lab results into ChatGPT and get a plain-English explanation of what those numbers mean and which ones you should be concerned about.

Or treatment options. Your oncologist presents three treatment protocols with names like "FOLFOX" and "CAPOX" and you're supposed to make an informed decision while processing the fact that you have cancer. Having an AI that can explain the differences, the side effect profiles, the success rates in language you understand—that's valuable. It lets you show up to the next appointment with actual questions rather than just nodding while terrified.

The administrative nightmare is another real use case. Americans spend an absurd amount of time fighting with insurance companies, and most people have no idea how to do it effectively. ChatGPT can analyze your insurance policy, tell you exactly what's covered, help you write an appeal that hits all the right regulatory keywords. One analysis found the AI catching improper billing codes and duplicate charges humans would have paid without questioning.

For doctors, AI is already helping. By 2024, about 66% of physicians were using AI for documentation or decision support, up from 38% the year before. That's huge. If AI can handle the bureaucratic garbage—the prior authorizations, the billing codes, the endless documentation requirements—doctors can spend more time actually practicing medicine. Less burnout for them, better care for patients.

The Downside: When "Hallucinations" Can Kill You

Now for the terrifying part.

ChatGPT hallucinates. It generates confident-sounding bull that's factually wrong. Most of the time, this is annoying. When someone asks about medieval history and the AI invents a king who never existed, nobody dies. When someone asks "Is this chest pain serious?" and the AI says "probably just anxiety" when it's actually a heart attack, that's a different situation.

OpenAI itself warns that ChatGPT can give incorrect or even dangerous medical advice, particularly in high-stakes situations and mental health scenarios. The AI doesn't understand medicine—it's pattern-matching text. It can miss critical nuances, fail to recognize emergencies, provide false reassurance that delays life-saving treatment.

And here's the thing: people are trusting it anyway. One study found that 52% of Americans turn to ChatGPT when experiencing concerning symptoms. Almost one in three said they'd delay or skip seeing a doctor if the AI said their symptoms were low-risk.

Read that again. Nearly a third of people will not go to the doctor because a chatbot told them they're fine.

This is how people die. You've got early cancer symptoms or a developing infection or the warning signs of a stroke, and an AI trained on Reddit posts tells you it's probably nothing, and you believe it because it sounds authoritative and seeing a doctor is expensive and hard. By the time you realize something's actually wrong, you've missed the intervention window.

Doctors are reporting patients showing up to appointments convinced they have (or don't have) specific conditions based solely on ChatGPT's output, and it's undermining the diagnostic process. Instead of starting with a clean clinical evaluation, physicians are stuck disproving the AI's diagnosis while the patient insists the chatbot must be right.

The Mental Health Disaster Waiting to Happen

Mental health is where this gets especially dicey. ChatGPT is not a therapist. It has no training, no empathy, no clinical judgment, no duty of care. But people are using it like one anyway, discussing depression, anxiety, and crises with an AI.

There have been reports of harmful responses, and multiple lawsuits from families alleging that loved ones harmed themselves after following ChatGPT's advice during mental health crises. Several U.S. states have responded by banning certain mental health chatbot services or mandating human oversight.

This isn't theoretical. When you're in crisis, an AI giving you the wrong advice can be catastrophic. And unlike a human therapist who can pick up on tone, read body language, intervene if someone's in danger—ChatGPT can't do any of that. It's generating text based on probability, not clinical assessment.

OpenAI says it's working on this. The newer GPT-5 variants are supposedly better at using cautious language, asking follow-up questions, suggesting professional help. But "better" isn't "safe," and there's no amount of fine-tuning that gives an AI clinical judgment.

The Regulatory Void

Here's where it gets weird: ChatGPT Health operates in a regulatory gray zone. Because it doesn't provide direct diagnoses or prescriptions, it's not classified as a medical device. That means it avoids the rigorous FDA oversight that actual medical AI tools face.

OpenAI has strategically positioned this as an "information" or "wellness" product. That's smart business, but it means there's no authority rigorously vetting it for medical accuracy. The FDA announced plans to ease rules for AI health software to encourage innovation, which is a nice way of saying "we have no idea how to regulate this."

Some states are passing their own laws, creating a patchwork of rules that doesn't add up to comprehensive oversight. One physician warned: "Without strong regulatory oversight, we risk embedding bias or error into systems that were meant to improve access and accuracy."

Bias is real. If the training data is skewed, the AI's recommendations will be too. If it has more data on certain treatments or medications, it might favor those even when they're not optimal. If the data underrepresents certain populations—which, let's be honest, medical data absolutely does—the AI will give worse advice to those groups.

Privacy is another issue. Even with encryption, storing your health records in an AI system creates risk. Health data is incredibly sensitive. A breach could expose your entire medical history to bad actors. And once you've uploaded that data, you're trusting OpenAI's security forever. That's a big ask.

So... Would You Trust It With Your Health?

The honest answer is: it depends.

Trust ChatGPT to explain what "eGFR" means on your kidney function test? Sure. Trust it to compare insurance plans or catch billing errors? Absolutely. Trust it to tell you what questions to ask your cardiologist? Yeah, probably useful.

Trust it to determine whether your chest pain is a heart attack? Heck no.

The problem is that AI doesn't come with clear labels about when it's reliable and when it's not. It presents every answer with the same confident tone whether it's explaining basic anatomy or making a life-or-death judgment call it has no business making.

What we're seeing is a massive natural experiment in real-time. Forty million people daily are using AI for health decisions, and we have no idea what the outcomes look like. How many people are getting better information and making smarter choices? How many are delaying necessary care? How many are catching serious issues earlier? How many are getting misled into dangerous decisions?

We don't know. The data doesn't exist yet. And that should scare you.

The Deeper Problem AI Won't Fix

Here's what really pisses me off about this whole thing: AI in healthcare is a band-aid on a structural disaster. Americans are turning to ChatGPT because the actual healthcare system is broken. It's too expensive, too complex, too inaccessible. Rather than fixing those problems—universal coverage, price controls, more doctors, better distribution of care—we're offloading patient navigation onto AI.

That's not innovation. That's using technology to paper over policy failures.

If healthcare were actually accessible and affordable, if people could see doctors when they needed to without financial ruin, if medical bills were transparent and insurance wasn't a labyrinth of denial—would 40 million people be asking ChatGPT for medical advice? Maybe some would, out of convenience. But not at this scale.

AI isn't making healthcare better. It's making a broken system slightly more navigable while generating huge profits for tech companies. OpenAI isn't solving the hospital desert problem—they're monetizing it.

What Happens Next

The genie's out of the bottle. Hundreds of millions of people are already using AI for health decisions, and that number's only going up. ChatGPT Health is just OpenAI catching up to reality and trying to make it safer.

The best-case scenario is that AI becomes a genuinely useful tool for patient empowerment—helping people understand their health, navigate the system, prepare for appointments, catch errors. A well-informed patient is better for everyone, including doctors.

The worst-case scenario is that people start substituting AI for actual medical care, miss serious diagnoses, follow dangerous advice, and we see a spike in preventable deaths and complications before anyone figures out how to regulate this properly.

The likely scenario is somewhere in between, with a lot of messy trial and error and some really high-profile failures that eventually force regulatory action.

If you're going to use ChatGPT for health stuff—and statistically, you probably are—treat it like you'd treat a smart friend who read some medical articles. Useful for discussion, terrible for diagnosis. Good for understanding information, bad for making final decisions. Helpful for generating questions, not for answering them definitively.

And remember: the AI doesn't care if you live or die. It has no skin in the game, no liability, no duty to you. It's a tool. A powerful, potentially useful, definitely flawed tool. Use it like one.

But for goodness' sake, don't let it replace your doctor. Because when something goes wrong—and eventually, for someone, it will—you can't sue an algorithm.
