Expert Personas Reduce LLM Factual Accuracy

Researchers at the University of Southern California have published a preprint reporting that persona-based prompting improves alignment but reduces factual accuracy on knowledge-heavy tasks. On MMLU, adding expert-persona prefixes cut multiple-choice accuracy to 68.0%, versus 71.6% for the base model, while strengthening safety guardrails: JailbreakBench refusal rates rose by 17.7 percentage points. The authors propose PRISM, a gated LoRA routing method, to balance this trade-off.
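The preprint does not spell out PRISM's internals here, but the general idea of gated LoRA routing can be sketched generically: a learned, input-dependent gate scales a low-rank (LoRA) update before it is added to a frozen base weight, so the model can lean on the persona adapter for some inputs and fall back to the base model for others. The sketch below is a minimal illustration of that concept, not the paper's implementation; all names (`W`, `A`, `B`, `w_gate`) and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2  # hidden size and LoRA rank (illustrative values, not from the paper)
W = rng.normal(size=(d, d))        # frozen base weight
A = rng.normal(size=(r, d)) * 0.1  # LoRA down-projection (trainable)
B = rng.normal(size=(d, r)) * 0.1  # LoRA up-projection (trainable)
w_gate = rng.normal(size=d)        # gate parameters (hypothetical)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_lora_forward(x):
    """Blend the frozen base path with the LoRA update via an input-dependent gate."""
    g = sigmoid(w_gate @ x)           # scalar gate in (0, 1), computed from the input
    return W @ x + g * (B @ (A @ x))  # the gate scales only the low-rank delta

x = rng.normal(size=d)
y = gated_lora_forward(x)
```

When the gate saturates near 0, the output reduces to the base model's `W @ x`; near 1, the full persona adapter is applied, which is how such a router could trade off persona alignment against base-model factual accuracy per input.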
Scoring Rationale
Practical, well-evidenced findings and an actionable method (PRISM), though limited by preprint status and a single-group evaluation.