Few U.S. Health Chatbot Users Rate Responses as Highly Accurate

What happened
Pew’s survey of more than 5,000 U.S. adults, reported in April 2026, shows growing consumer use of AI chatbots for health information but low perceived accuracy. Just over one in five Americans say they at least sometimes use chatbots for health questions; 7% say they use them often or extremely often. Among those who have used chatbots for health, only 18% rate the responses as very or extremely accurate.
Technical context
Health-focused generative AI has been a strategic target for multiple tech firms, and conversational agents are being positioned as consumer entry points into personalized health guidance. Yet these systems face well-known limitations: hallucination risk, variable grounding in current clinical guidelines, and inconsistent handling of personal medical context. User perception data matters because adoption and downstream behavior (self-diagnosis, treatment decisions, care-seeking delays) hinge on trust and perceived reliability, not just convenience.
Key details from sources
The poll also captures use of other sources: 85% of respondents at least sometimes seek out health information, with a majority often turning to clinicians; 66% consult other people with the same health issue, and 60% use major health websites (the report cites WebMD as an example). Nearly half of chatbot users call the tools very convenient, and more than 40% find them easy to understand, indicating strong UX advantages even where trust in accuracy is lacking. The coverage also notes industry momentum as companies roll out dedicated health chatbots and features that integrate personal health data, while experts and advocates warn of potential harms if the tools provide inaccurate or misleading clinical information.
Why practitioners should care
For ML engineers, product managers, and clinical informatics teams, this poll quantifies the user-experience trade-off: convenience and comprehension are strong, but trust in accuracy is weak. That gap signals priority areas for model evaluation and product controls: robust grounding in clinical sources, calibrated uncertainty, transparent provenance, safer prompts and disclaimers, and integration pathways that route critical queries to clinicians. Deployments should also instrument real-world user behavior, such as how often users act on chatbot guidance versus seeking professional care.
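To make the last two points concrete, here is a minimal sketch, in Python, of what a triage-and-instrumentation layer might look like. All names here (EMERGENCY_TERMS, route_query, log_event, the example user actions) are hypothetical illustrations, not part of the Pew report or any specific vendor's product.

```python
# Hypothetical sketch: route clearly high-risk health queries to a clinician
# pathway, attach an accuracy disclaimer otherwise, and emit structured events
# so teams can measure how often users act on chatbot guidance versus seeking
# professional care. Names and thresholds are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone
import json

# Assumed keyword list for demonstration; a real system would use a tuned
# risk classifier, not substring matching.
EMERGENCY_TERMS = {"chest pain", "overdose", "suicidal", "stroke", "can't breathe"}


@dataclass
class Routing:
    destination: str   # "chatbot" or "clinician_referral"
    disclaimer: str


def route_query(query: str) -> Routing:
    """Send obviously high-risk queries to a clinician pathway; everything
    else goes to the chatbot with an explicit accuracy disclaimer."""
    text = query.lower()
    if any(term in text for term in EMERGENCY_TERMS):
        return Routing(
            destination="clinician_referral",
            disclaimer="This may be urgent. Please contact a clinician or emergency services.",
        )
    return Routing(
        destination="chatbot",
        disclaimer="AI-generated information may be inaccurate; confirm with a healthcare professional.",
    )


def log_event(user_id: str, query: str, routing: Routing, user_action: str) -> str:
    """Emit a structured JSON event for downstream monitoring of user behavior."""
    event = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "destination": routing.destination,
        "query_chars": len(query),      # log length only, to avoid storing raw PHI
        "user_action": user_action,     # e.g. "followed_advice", "booked_appointment"
    }
    return json.dumps(event)


if __name__ == "__main__":
    routing = route_query("I have mild chest pain after exercise")
    print(routing.destination, "-", routing.disclaimer)
    print(log_event("u123", "I have mild chest pain after exercise", routing, "booked_appointment"))
```

The design choice worth noting is that routing and instrumentation are separated: the routing decision can be swapped for a proper risk model later, while the event schema stays stable for measuring chatbot-versus-clinician behavior over time.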
What to watch
Look for follow-up studies measuring outcomes (misdiagnosis, delayed care), vendor commitments on citation and provenance, regulatory guidance for consumer health AI, and product launches that emphasize clinician-in-the-loop flows. In the short term, practitioners should treat consumer-facing health chatbots as high-UX, low-trust interfaces and prioritize safety guardrails.
Scoring Rationale
The poll quantifies user behavior and trust in consumer health chatbots — important for product strategy, safety engineering, and clinical integration decisions. It's not a model or regulatory breakthrough, but it directly informs priorities for practitioners deploying health-facing AI.