AMA Urges Legislation to Curb Medical AI Misinformation
The American Medical Association (AMA) has written a series of letters urging legislative safeguards against the misuse of artificial intelligence in medical and mental health care, The Jerusalem Post reports. The AMA cited instances in which AI and deepfakes have spread medical misinformation, impersonated clinicians, and enabled fraud. Axios quoted AMA CEO John Whyte: "We shouldn't have to make the public detectives to determine whether something's not a deepfake." The Jerusalem Post also cites a Nature report that researchers at the University of Gothenburg uploaded two fabricated papers describing a fictional disease, "bixonimania," which the report says was quickly absorbed and reused by AI systems including Microsoft's Copilot, Google's Gemini, Perplexity, and OpenAI's ChatGPT. A Google spokesperson responded in the coverage with a statement about in-app prompts and recommendations to consult qualified professionals for sensitive medical advice.
What happened
The American Medical Association (AMA) has written a series of letters urging legislative safeguards to prevent misuse of artificial intelligence in medical and mental health fields, The Jerusalem Post reports. The Jerusalem Post cites specific harms including AI-generated misinformation, fraud, impersonation of medical professionals via deepfakes, and chatbots giving misleading or dangerous health advice. "We shouldn't have to make the public detectives to determine whether something's not a deepfake," Axios cited AMA CEO John Whyte as saying.
Documented examples
Per reporting cited by The Jerusalem Post, a recent Nature article described an experiment in which researchers at the University of Gothenburg uploaded two fabricated manuscripts describing a fictional disease labeled "bixonimania." Nature reported that the fabricated content was subsequently absorbed and reused by AI services including Microsoft's Copilot, Google's Gemini, the Perplexity answer engine, and OpenAI's ChatGPT. The Jerusalem Post also cites a high-profile deepfake that impersonated CNN chief medical correspondent Dr. Sanjay Gupta in a fraudulent advertisement for an Alzheimer's cure; Gupta said on CNN's Terms of Service podcast that demonstrably false material continues to circulate widely.
Editorial analysis - technical context
Generative models are typically trained on large, heterogeneous web corpora, which exposes them to low-quality or fabricated content. Industry observers note that this kind of data contamination, combined with reliance on scraped sources, makes systems susceptible to amplifying false or synthetic medical claims. For practitioners, this raises the bar for rigorous source provenance, domain-specific verification layers, and conservative guardrails when deploying models in health-adjacent products.
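One minimal illustration of such a guardrail is a provenance check that screens the sources a model cites against an allowlist of trusted medical domains before an answer is surfaced. This is a sketch under stated assumptions, not any vendor's actual implementation: the `TRUSTED_DOMAINS` set, the `check_provenance` function, and the example URLs are hypothetical names chosen for illustration.

```python
# Hypothetical sketch: a source-provenance guardrail that classifies the
# citations attached to a model answer as trusted or unverified, and blocks
# the answer when any citation comes from an unvetted domain.
from urllib.parse import urlparse

# Assumed allowlist for illustration; a real deployment would maintain a
# curated, reviewed list of medically authoritative domains.
TRUSTED_DOMAINS = {
    "pubmed.ncbi.nlm.nih.gov",
    "www.cdc.gov",
    "www.who.int",
}

def check_provenance(citations: list[str]) -> dict:
    """Split cited URLs into trusted vs. unverified; 'pass' is True only
    when every citation resolves to an allowlisted domain."""
    trusted, unverified = [], []
    for url in citations:
        host = urlparse(url).netloc.lower()
        (trusted if host in TRUSTED_DOMAINS else unverified).append(url)
    return {"trusted": trusted, "unverified": unverified,
            "pass": not unverified}

result = check_provenance([
    "https://pubmed.ncbi.nlm.nih.gov/12345678/",   # placeholder PubMed URL
    "https://example.com/bixonimania-cure",        # fabricated-source stand-in
])
print(result["pass"])  # → False: the fabricated source fails the check
```

A check like this is deliberately conservative: it cannot validate the *content* of a trusted page, only its origin, so in practice it would sit alongside domain-specific fact verification rather than replace it.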
Editorial analysis - context and significance
The AMA's push for legislation places regulatory risk squarely on the roadmap for health-focused AI deployments. Industry patterns suggest that increased regulatory attention typically brings higher compliance costs, stricter validation expectations, and more conservative product claims from vendors operating in regulated clinical contexts. Clinicians and health-tech teams should expect scrutiny of the accuracy, provenance, and marketing claims tied to AI outputs.
What to watch
Track legislative proposals or hearings referencing the AMA letters, responses and transparency statements from major platform providers, and follow-up studies on how frequently fabricated scientific claims propagate into model outputs. Also monitor efforts to standardize provenance metadata, medically curated training datasets, and certification schemes for clinical AI tools.
Reported sources
The Jerusalem Post article of May 6, 2026, cites Axios for the John Whyte quote and cites a Nature report and examples involving Google, Microsoft, Perplexity, OpenAI, and CNN's Dr. Sanjay Gupta. A Google spokesperson was quoted in the coverage addressing in-app prompts and professional consultation recommendations.
Scoring Rationale
The AMA's formal push for legislative safeguards marks a notable regulatory development for health-related AI, raising compliance and validation expectations for practitioners. The story directly affects deployment risk and product requirements in clinical contexts.