Solidroad Raises $25M To Automate Support QA

Solidroad raised a $25M Series A led by Hedosophia, with participation from First Round Capital, Y Combinator, and Sony Innovation Fund. The San Francisco- and Dublin-based startup builds an AI-native quality assurance and training platform that evaluates both human and AI customer interactions at scale, surfacing risk, identifying skill gaps, and triggering personalized coaching in real time. Customers include Ryanair, ŌURA, and Crypto.com. The funding will expand engineering and customer operations, accelerate product integrations, and support hiring to meet growing enterprise demand for consistent, company-specific evaluation rubrics and closed-loop training workflows.
What happened
Solidroad raised a $25M Series A led by Hedosophia, with participation from First Round Capital, Y Combinator, and Sony Innovation Fund. The company, co-founded by Mark Hughes (CEO) and Patrick Finlay (CTO), offers an AI-native quality assurance platform that reviews 100 percent of customer interactions across human and AI agents, flags risk, identifies skill gaps, and automatically converts evaluation data into targeted training.
Technical details
Solidroad applies a configurable evaluation layer that scores conversations against customer-defined rubrics rather than a one-size-fits-all model. Key technical and product capabilities include:
- Automated scoring and risk detection across human and AI agents, enabling full-coverage QA at scale
- Real-time triggers that launch personalized coaching, simulations, or micro-training from identified gaps
- Customer-configurable rubrics covering tone, policy compliance, and interaction standards to ensure brand-specific evaluation
- •Integrations with contact center and ticketing systems to feed scoring into workflows and analytics
The team reports customers running tens to hundreds of thousands of conversations per month through the platform; one cited customer processes more than 800,000 conversations per month. Solidroad claims up to 10x analyst productivity improvement, a 90% reduction in manual review time, and a typical 20% increase in QA coverage across deployments. Mark Hughes explains, "Our system uses AI that sticks to a customized rubric, which means it is evaluating everything at the same standard with the same guidelines."
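Solidroad's implementation isn't public, but the rubric-scoring pattern described above can be illustrated with a minimal sketch. All names, weights, and checks here are hypothetical; in a real deployment each criterion's check would likely be an LLM judgment prompted with the customer's guidelines rather than a keyword rule:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Criterion:
    # One customer-defined rubric item, e.g. tone or policy compliance.
    name: str
    weight: float
    check: Callable[[str], float]  # returns a score in [0, 1] for a transcript

def score_conversation(transcript: str, rubric: list[Criterion],
                       risk_threshold: float = 0.6) -> dict:
    """Score a transcript against a weighted rubric; flag it for
    coaching when the overall score falls below the risk threshold."""
    total_weight = sum(c.weight for c in rubric)
    score = sum(c.weight * c.check(transcript) for c in rubric) / total_weight
    return {"score": round(score, 3), "needs_coaching": score < risk_threshold}

# Hypothetical rubric: simple keyword checks stand in for model-based judgments.
rubric = [
    Criterion("greeting", 1.0,
              lambda t: 1.0 if "hello" in t.lower() else 0.0),
    Criterion("no_refund_promises", 2.0,
              lambda t: 0.0 if "guaranteed refund" in t.lower() else 1.0),
]

result = score_conversation("Hello! A guaranteed refund is on the way.", rubric)
# Low score on the heavily weighted policy criterion flags the conversation.
```

Because every conversation runs through the same rubric, the "needs_coaching" flag can feed directly into the kind of automated, targeted training the platform describes, which is what makes full-coverage QA consistent in a way sampled manual review is not.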
Context and significance
The round arrives as enterprises scale AI agents for routine work while human agents handle more complex, emotionally charged escalations. That shift raises the operational bar for coaching, compliance, and consistency. Solidroad sits at the intersection of three trends: growing adoption of AI agents, demand for closed-loop coaching systems, and the need for audit-ready evaluation of agent behavior. Its product addresses two persistent industry pain points: the subjectivity and low coverage of manual QA, and the gap between evaluation and actionable training.
For practitioners, Solidroad represents a practical approach to governance and continuous improvement for hybrid human-AI contact centers. The configurable rubric model reduces reliance on generic benchmarks and lets enterprises encode policy and brand voice into evaluation logic. From a controls perspective, full-coverage scoring can support audit trails, compliance checks, and automated escalation rules, but it also raises questions about false positives, bias in automated judgments, and how to validate the scoring models over time.
What to watch
Solidroad will use the funds to expand engineering and go-to-market teams across San Francisco and Dublin and to deepen integrations with enterprise contact center stacks. Watch for product moves around model explainability, drift detection for AI agents, and richer simulation generation to close the coaching loop. Also monitor how customers measure long-term impact on CSAT, escalation rates, and regulatory compliance as AI agents proliferate.
Scoring Rationale
A notable Series A for an enterprise AI startup addressing an operationally critical problem: consistent QA and coaching across human and AI agents. Useful for practitioners running contact centers, but not a paradigm-shifting release.
