DeepMind CEO Praises DeepSeek, Calls Hype Exaggerated

Demis Hassabis, CEO of Google DeepMind, said China's DeepSeek model "is probably the best work" to come out of China, CNBC reported on Feb 9, 2025. While praising its engineering, Hassabis said "there's no actual new scientific advance" and called the surrounding hype "exaggerated a little bit." CNBC also reported DeepSeek's claim that its model was trained at a fraction of the cost of leading AI players and on less-advanced chips, a claim some researchers have questioned.
What happened
Per CNBC, Demis Hassabis, CEO of Google DeepMind, said China's DeepSeek model "is probably the best work" to come out of China and described it as "an impressive piece of work." CNBC reported Hassabis also said the model demonstrates "extremely good engineering" but that "there's no actual new scientific advance" and that the surrounding hype is "exaggerated a little bit." CNBC reported that DeepSeek claimed its model was trained at a fraction of the cost of leading AI players and on less-advanced Nvidia chips, and that those cost and hardware claims have been questioned by other researchers.
Editorial analysis - technical context
The exchange frames two distinct technical narratives: one focused on engineering efficiency and cost reduction, the other on demonstrable scientific novelty. Industry observers note that models achieving strong empirical results through engineering and optimization, rather than new algorithms, still matter for production deployments because they shift the cost-performance frontier. For practitioners, that pattern raises implementation questions around benchmarking methodology, training-cost transparency, and hardware-software co-optimization.
Context and significance
Public comment from a senior figure at a major lab draws attention to how non-Western research groups can influence perceptions of progress and resource efficiency. Reporting that DeepSeek drew market attention and debate suggests external scrutiny will intensify around claimed training costs, chip choices, and reproducibility. For ML engineers and researchers, the episode underscores the need for independent cost and performance audits and for clear reporting of training recipes and hardware stacks.
What to watch
Watch for independent reproduction attempts, peer-reviewed follow-ups to DeepSeek's paper, and any detailed disclosures of training FLOPs, wall-clock time, and exact hardware configurations. Industry observers will also monitor whether additional labs publish replication studies or open-source checkpoints that clarify whether gains stem from algorithmic advances or from engineering and scale-efficiency techniques.
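For readers auditing such disclosures, the kind of sanity check involved can be sketched with the standard dense-transformer approximation C ≈ 6 · N · D (total training FLOPs ≈ 6 × parameters × tokens). The numbers below are illustrative placeholders, not DeepSeek's actual figures:

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute (FLOPs) for a dense transformer
    using the common C ~= 6 * N * D rule of thumb."""
    return 6.0 * params * tokens

def gpu_hours(total_flops: float, peak_flops_per_gpu: float, mfu: float) -> float:
    """Convert a FLOP budget into GPU-hours given per-GPU peak throughput
    (FLOP/s) and an assumed model FLOPs utilization (MFU)."""
    seconds = total_flops / (peak_flops_per_gpu * mfu)
    return seconds / 3600.0

# Hypothetical example: a 70B-parameter model trained on 10T tokens,
# on accelerators with 1e15 peak FLOP/s at 40% utilization.
flops = training_flops(70e9, 10e12)
hours = gpu_hours(flops, 1e15, 0.40)
print(f"{flops:.2e} FLOPs, {hours:,.0f} GPU-hours")
```

Comparing a claimed GPU-hour budget against such an estimate is one way independent researchers probe whether reported training costs are plausible.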
Direct sourcing note
All factual claims about quotes, cost assertions, and market reaction are drawn from CNBC reporting on Feb 9, 2025.
Scoring Rationale
Comments from a major lab CEO about a high-profile Chinese model are notable for practitioners because they focus attention on cost-efficiency and reproducibility rather than new algorithmic breakthroughs.