Developer Reconstructs Anthropic Mythos Architecture

A 22-year-old developer, Kye Gomez, has published OpenMythos on GitHub, a from-scratch reconstruction of what Claude Mythos's architecture might be, according to Remio, which reports the repository contains no trained weights yet drew over 10,000 GitHub stars within two weeks. Reporting by 36Kr and Remio describes OpenMythos as implementing a Recurrent-Depth Transformer (RDT) with a mixture-of-experts (MoE) routing mechanism and looped inference; 36Kr says the reconstruction applies the same weights repeatedly along different expert paths and quotes Gomez summarizing the design: "MoE provides the breadth of domain knowledge, and the loop provides the depth of inference." Remio reports that Anthropic never publicly released Claude Mythos, keeping it inside a vetted consortium called Project Glasswing. Both outlets link the RDT hypothesis to a recent UCSD and Together AI paper, Parcae, cited as technical backing for looped-transformer scaling behavior.
What happened
A 22-year-old developer identified as Kye Gomez published OpenMythos, a community reconstruction of the architecture behind Claude Mythos, on GitHub, according to Remio. Remio reports the repository contains no pretrained weights and accumulated more than 10,000 GitHub stars in roughly two weeks. Reporting by 36Kr and Remio frames OpenMythos as a specification-level reimplementation built from public papers and public commentary about Mythos; neither outlet describes the repo as containing Anthropic code or proprietary models.
Technical details
Reporting by 36Kr describes OpenMythos as implementing a Recurrent-Depth Transformer (RDT) combined with a mixture-of-experts routing mechanism, in which a small set of shared weights is applied repeatedly (36Kr cites up to 16 loops) and different expert subsets are activated on each pass. Remio outlines the project structure as a Prelude, a Recurrent Block, and a Coda, and links the RDT hypothesis to Parcae, a recent UCSD and Together AI paper published in April 2026. 36Kr quotes Gomez: "MoE provides the breadth of domain knowledge, and the loop provides the depth of inference." These are described as reconstruction claims derived from public literature rather than recovered proprietary artifacts.
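To make the looped-weights idea concrete, the following is a minimal PyTorch sketch of an RDT-style model under the assumptions in the reporting: a Prelude that embeds tokens, a single shared Recurrent Block (attention plus a small MoE feed-forward) applied up to 16 times, and a Coda that projects back to logits. All module names, dimensions, and the top-1 routing scheme are illustrative assumptions, not code or details taken from the OpenMythos repository.

```python
# Illustrative sketch of a Recurrent-Depth Transformer (RDT) with MoE routing,
# assuming the structure described in the reporting: Prelude -> shared
# Recurrent Block looped up to 16 times -> Coda. Names and dimensions are
# hypothetical, not drawn from the OpenMythos repository.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoEFeedForward(nn.Module):
    """Token-level top-1 routing over a small pool of expert MLPs."""

    def __init__(self, d_model: int, n_experts: int = 4, d_ff: int = 2048):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gate = F.softmax(self.router(x), dim=-1)   # (batch, seq, n_experts)
        top_w, top_idx = gate.max(dim=-1)          # top-1 expert per token
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = top_idx == i
            if mask.any():
                out[mask] = top_w[mask].unsqueeze(-1) * expert(x[mask])
        return out


class RecurrentBlock(nn.Module):
    """One shared block (self-attention + MoE feed-forward), reused every loop."""

    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.norm1 = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(d_model)
        self.moe = MoEFeedForward(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.norm1(x)
        attn_out, _ = self.attn(h, h, h, need_weights=False)
        x = x + attn_out
        x = x + self.moe(self.norm2(x))
        return x


class RecurrentDepthTransformer(nn.Module):
    """Prelude -> looped Recurrent Block -> Coda, with weight reuse across loops."""

    def __init__(self, vocab: int = 32000, d_model: int = 512, n_loops: int = 16):
        super().__init__()
        self.n_loops = n_loops
        self.prelude = nn.Embedding(vocab, d_model)   # map tokens into the latent space
        self.block = RecurrentBlock(d_model)          # the single shared block
        self.coda = nn.Linear(d_model, vocab)         # project back to vocabulary logits

    def forward(self, tokens: torch.Tensor, n_loops: int = None) -> torch.Tensor:
        x = self.prelude(tokens)
        steps = n_loops if n_loops is not None else self.n_loops
        for _ in range(steps):       # same weights applied repeatedly;
            x = self.block(x)        # the router may pick different experts each pass
        return self.coda(x)
```

The design tradeoff being debated is visible in the loop: computational depth can be raised at inference time by running more passes through the same block without adding parameters, while the router may send tokens to different experts on each pass, which is how the quoted "breadth from MoE, depth from the loop" framing maps onto code.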
Industry context
Public reconstructions of high-profile, unreleased models concentrate community attention on reproducibility, model design hypotheses, and safety tradeoffs. Observers have previously seen similar cycles where withheld or previewed capabilities lead to intense reverse-engineering and speculative reimplementations; such projects often catalyze both technical follow-up work and debate about disclosure norms.
What to watch
Monitor:
- downstream forks or experimental implementations that attempt to train RDT variants
- whether safety-focused groups or open-science labs publish empirical comparisons between looped and stacked transformers referencing Parcae
- discourse from model vendors and platform hosts about hosting or moderating repositories that reconstruct frontier architectures
Remio and 36Kr do not quote Anthropic on the reconstruction or provide a company statement, and reporting indicates the original Claude Mythos model was not broadly released and was kept within a vetted consortium, per Remio.
Scoring Rationale
A community reconstruction of a high-profile, unreleased model is notable for practitioners because it focuses research attention on alternative architectures (RDT + MoE) and intersects with safety and disclosure debates. The story is industry-significant but not a paradigm shift.

