Chinese Physicians Reveal Acceptance of AI Medical Tools

A nationwide cross-sectional survey of 4,024 in-service physicians across 29 provincial-level units (January-April 2024) measured acceptance of AI clinical tools using an extended UTAUT framework with a new "positive impact" dimension. The study validated the instrument psychometrically, used structural equation modeling to map causal pathways, and compared six classification models, selecting a balanced random forest with SHAP for interpretability. Hospital level, professional title, AI familiarity, and future optimism were tested as moderators. The combined SEM-plus-explainable-ML approach identifies performance expectancy, positive impact, and facilitating conditions as the principal drivers of adoption intent and produces an interpretable importance ranking that practitioners can use to prioritize training, integration, and procurement decisions.
What happened
A nationwide survey of 4,024 Chinese in-service physicians across 29 provincial-level units (January-April 2024) assessed acceptance of AI clinical tools using an extended Unified Theory of Acceptance and Use of Technology (UTAUT) that adds a positive impact dimension. The study validated the questionnaire with exploratory and confirmatory factor analysis, estimated causal paths with structural equation modeling, and paired that classical approach with explainable machine learning for prediction and feature ranking.
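Validation of this kind of questionnaire typically starts with internal-consistency checks before the exploratory and confirmatory factor analyses. As a minimal sketch (on synthetic data, since the source does not publish item wording or responses), Cronbach's alpha for one construct's Likert items can be computed as:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
# Synthetic 5-point Likert responses for a hypothetical 4-item construct,
# driven by one latent factor so the items correlate (illustrative only).
latent = rng.normal(size=(500, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(500, 4))), 1, 5)

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha: {alpha:.2f}")
```

Values above roughly 0.7 are the conventional threshold for construct reliability, which a validated instrument like this one would be expected to clear.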
Technical details
The methods combine psychometrics, SEM, and interpretable ML. Key elements:
- The survey operationalized the four core UTAUT constructs, performance expectancy (PE), effort expectancy (EE), social influence (SI), and facilitating conditions (FC), plus the new positive impact (PI) factor.
- Moderators included hospital level, professional title, AI familiarity, and future optimism to detect heterogeneity in effects.
- Six classification algorithms were compared; a balanced random forest was selected for the best predictive performance and its handling of class imbalance, with SHAP used to produce local and global explanation scores.
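The classification-plus-ranking step can be sketched as follows. The study's exact stack is not specified; common choices are imbalanced-learn's `BalancedRandomForestClassifier` and the `shap` package, but this dependency-light sketch substitutes scikit-learn's `class_weight="balanced"` option and permutation importance, on simulated construct scores rather than the survey data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Simulated stand-in for the survey: construct scores predicting adoption intent.
# Feature names mirror the paper's constructs; the data and weights are invented,
# with PE, PI, and FC dominating to echo the reported ranking.
features = ["PE", "EE", "SI", "FC", "PI"]
X = rng.normal(size=(2000, 5))
logits = 1.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + 1.0 * X[:, 3] + 1.2 * X[:, 4]
y = (logits + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# class_weight="balanced" approximates the balanced-random-forest idea
# by reweighting classes inversely to their frequency.
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_tr, y_tr)

# Permutation importance as a stand-in for SHAP's global importance ranking.
imp = permutation_importance(clf, X_te, y_te, n_repeats=10, random_state=0)
ranking = sorted(zip(features, imp.importances_mean), key=lambda t: -t[1])
for name, score in ranking:
    print(f"{name}: {score:.3f}")
```

On this synthetic setup the ranking recovers the planted signal (PE, PI, FC on top), which is the property that makes such rankings useful for prioritization.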
Context and significance
Mixing SEM and explainable ML addresses two common gaps in adoption research: causal mapping and practical prediction. SEM clarifies directional effects among constructs that guide implementation strategies, while the balanced random forest + SHAP output yields an actionable ranking of predictors for targeting training, infrastructure, and communication. The finding that performance expectancy, positive impact, and facilitating conditions are primary drivers signals that clinicians prioritize clinical utility, perceived net benefit, and operational support when deciding to use AI tools. Effort expectancy and social influence play smaller roles, suggesting adoption barriers are less about ease of use or peer pressure and more about demonstrable value and system readiness.
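A full SEM jointly estimates latent measurement models and structural paths; as a rough, assumption-laden proxy for the structural part alone, standardized regression coefficients show how each construct's direct effect on behavioral intention (BI) would be compared. The data and effect sizes below are synthetic, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1500
# Hypothetical observed construct scores; BI = behavioral intention.
PE, EE, SI, FC, PI = rng.normal(size=(5, n))
BI = 0.5 * PE + 0.1 * EE + 0.1 * SI + 0.3 * FC + 0.4 * PI \
    + rng.normal(scale=0.5, size=n)

X = np.column_stack([PE, EE, SI, FC, PI])
# Standardize predictors and outcome so coefficients act as
# comparable path weights (no intercept needed after centering).
Xz = (X - X.mean(0)) / X.std(0)
yz = (BI - BI.mean()) / BI.std()
beta, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
for name, b in zip(["PE", "EE", "SI", "FC", "PI"], beta):
    print(f"{name} -> BI: {b:+.2f}")
```

The pattern this sketch illustrates, large standardized paths for PE, PI, and FC and small ones for EE and SI, mirrors the qualitative finding the article describes.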
What to watch
Deployers should instrument pilots to measure the same constructs and feed labeled adoption outcomes into explainable classifiers to replicate feature rankings in their local settings. Regulators and hospital leaders should prioritize interoperability, outcome evidence, and training investments, since those factors are central to physician acceptance.
Scoring Rationale
The study delivers notable, practice-relevant evidence via a large, nationally representative sample and a hybrid SEM-plus-explainable-ML methodology. It is not a frontier-model breakthrough, but it provides actionable signals for deployers, procurement teams, and hospital IT, hence a mid-high relevance score for practitioners.