What to know before asking an AI chatbot for health advice

This image provided by OpenAI in February 2026 demonstrates a health chatbot on a phone app. (OpenAI via AP) (Uncredited, OpenAI)

WASHINGTON – With hundreds of millions of people turning to chatbots for advice, it was only a matter of time before tech companies began offering programs specifically designed to answer health questions.

In January, OpenAI introduced ChatGPT Health, a new version of its chatbot that the company says can analyze users' medical records, wellness apps and wearable device data to answer health and medical questions. Currently, there's a waiting list for the program. Anthropic, a rival AI company, offers similar features for some users of its Claude chatbot.

Both companies say their programs, known as large language models, aren't a substitute for professional care and shouldn't be used to diagnose medical conditions. Instead, they say the chatbots can summarize and explain complicated test results, help prepare for a doctor's visit or analyze important health trends buried in medical records and app metrics.

Here are some things to consider before talking to a chatbot about your health:

Chatbots can offer more personalized information than a Google search

Some doctors and researchers who have worked with ChatGPT Health and similar programs see them as an improvement over the status quo.

AI platforms are not perfect — they can sometimes hallucinate or provide bad advice — but the information they produce is more likely to be personalized and specific than what patients might find through a Google search.

“The alternative often is nothing, or the patient winging it,” said Dr. Robert Wachter, a medical technology expert at University of California, San Francisco. “And so I think that if you use these tools responsibly, I think you can get useful information.”

One advantage of the latest chatbots is that they answer users’ questions with context from their medical history, including prescriptions, age and doctor's notes.

Even if you haven't given AI access to your medical information, Wachter and others recommend giving the chatbots as many details as possible to improve responses.

If you're having worrisome symptoms, skip AI

Wachter and others stress that there are situations when people should skip the chatbot and seek immediate medical attention. Symptoms such as shortness of breath, chest pain or a severe headache could signal a medical emergency.

Even during less urgent situations, patients and doctors should approach AI programs with “a degree of healthy skepticism,” said Dr. Lloyd Minor of Stanford University.

“If you’re talking about a major medical decision, or even a smaller decision about your health, you should never be relying just on what you’re getting out of a large language model,” said Minor, who is the dean of Stanford's medical school.

Consider your privacy before uploading any health data

Many benefits offered by AI bots stem from users sharing personal medical information. But it’s important to understand that anything shared with an AI company isn't protected by the federal privacy law that normally governs sensitive medical information.

Commonly known as HIPAA, the law allows for fines and even prison time for doctors, hospitals, insurers or other health services that disclose medical records. But the law doesn’t apply to companies that design chatbots.

“When someone is uploading their medical chart into a large language model, that is very different than handing it to a new doctor,” said Minor. “Consumers need to understand that there are completely different privacy standards.”

Both OpenAI and Anthropic say users’ health information is kept separate from other types of data and is subject to additional privacy protections. The companies do not use health data to train their models. Users must opt in to share their information and can disconnect at any time.

Testing shows chatbots can stumble

Despite excitement surrounding AI, independent testing of the technology is in its infancy. Early studies suggest programs like ChatGPT can ace high-level medical exams but often stumble when interacting with humans.

A 1,300-participant study by Oxford University recently found that people using AI chatbots to research hypothetical health conditions didn’t make better decisions than people using online searches or personal judgment.

When presented with medical scenarios in comprehensive, written form, the AI chatbots correctly identified the underlying condition 95% of the time.

“That was not the problem,” said lead author Adam Mahdi of the Oxford Internet Institute. “The place where things fell apart was during the interaction with the real participants.”

Mahdi and his team found several communication problems. People often didn’t give the chatbots the necessary information to correctly identify the health issue. Conversely, the AI systems often responded with a combination of good and bad information, and users had trouble distinguishing between the two.

The study, conducted in 2024, did not use the latest chatbot versions, including new offerings like ChatGPT Health.

A second AI opinion can be helpful

The ability of chatbots to ask follow-up questions and elicit key details from users is one area where Wachter sees room for improvement.

“I think that’s when this will get really good, when the tools become a little bit more doctor-ish in the way they go back and forth” with patients, Wachter said.

For now, one way to feel more confident about the information you're getting is to consult multiple chatbots — similar to getting a second opinion from another doctor.

“I will sometimes put information into ChatGPT and information into Gemini,” Wachter said, referencing Google's AI tool. “And when they both agree, I feel a little bit more secure that that’s the right answer.”

___

The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute’s Department of Science Education and the Robert Wood Johnson Foundation. The AP is solely responsible for all content.