Research · llm · password generation · csprng
Large Language Models Produce Predictable Weak Passwords
Relevance Score: 8.9
New research from Irregular finds that LLMs such as Claude, GPT, and Gemini generate passwords that look complex but are highly predictable. Across 50 prompts, Claude Opus 4.6 produced only 30 unique passwords, repeating one 16-character string 18 times. Effective entropy estimates drop from ~98 bits to roughly 20–27 bits, making passwords crackable within about a million guesses and matching real-world exposures found on GitHub.
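The ~98-bit figure roughly corresponds to a 16-character password drawn uniformly at random from the printable ASCII set, since entropy for a uniform string is length × log2(alphabet size). As a minimal sketch (not from the article), Python's standard `secrets` module provides the CSPRNG-backed generation that the research contrasts with LLM output:

```python
import math
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Draw each character independently from a CSPRNG (os.urandom-backed)."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


def theoretical_entropy_bits(length: int, alphabet_size: int) -> float:
    """Entropy of a uniformly random string: length * log2(|alphabet|)."""
    return length * math.log2(alphabet_size)


if __name__ == "__main__":
    pw = generate_password()
    # 94 printable ASCII characters (letters, digits, punctuation)
    print(pw, f"~{theoretical_entropy_bits(16, 94):.1f} bits")
```

Unlike an LLM sampling loop, every character here comes from the operating system's CSPRNG, so repeated calls do not collapse onto a handful of favored strings.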


