OpenAI acquires Weights.gg voice-cloning team and IP

The New York Times reports that OpenAI acquired Weights.gg, a small voice-cloning startup, earlier this year, and that the startup shut down its consumer service in March (The New York Times). Reporting by The Decoder and PitchBook says Weights.gg raised roughly $4 million and had around six employees; those outlets also report that the startup's engineers and intellectual property moved to OpenAI (The Decoder; PitchBook coverage via Techmeme/Intellectia). The New York Times says the startup's public repository included celebrity and political voice models, including reproductions of Samuel L. Jackson, Taylor Swift, Kanye West, members of Blackpink, Bugs Bunny, and public figures such as Donald Trump and Joseph R. Biden Jr.

Editorial analysis: For practitioners, the acquisition highlights the tension between consumer-facing voice-cloning tools and the safety and legal controls expected of production voice systems.
What happened
The New York Times reports that OpenAI acquired the small voice-cloning startup Weights.gg earlier this year, obtaining the company's engineering team and intellectual property, according to two people familiar with the deal who spoke on the condition of anonymity (The New York Times). The Times also reports that Weights.gg announced it was shutting down its services in March (The New York Times). Reporting by The Decoder and PitchBook indicates the startup had raised roughly $4 million in venture funding and employed about six people; The Decoder reports those staff now work across different groups at OpenAI (The Decoder; PitchBook coverage referenced by Intellectia/Techmeme). The New York Times documents that Weights.gg's public repository included voice models that imitated celebrities and public figures, citing examples such as Samuel L. Jackson, Taylor Swift, Kanye West, members of Blackpink, Bugs Bunny, Donald Trump and Joseph R. Biden Jr. (The New York Times).
Technical details
Editorial analysis (technical context): Public reporting describes Weights.gg as operating like a social network for sharing user-created voice models and tools. Industry observers note that such platforms typically combine small-footprint generative models, user-contributed model checkpoints, and easy-to-run inference pipelines, enabling rapid cloning from short audio samples. Those characteristics raise familiar trade-offs among model compactness, sample efficiency, and the difficulty of robustly detecting or watermarking synthetic audio.
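To make the detection trade-off concrete, here is a minimal, hypothetical sketch of key-based spread-spectrum audio watermarking. All function names and parameters are illustrative, not drawn from any product discussed above; production watermarking adds perceptual masking, synchronization, and robustness to compression that this toy scheme omits.

```python
import numpy as np

def embed_watermark(audio: np.ndarray, key: int, strength: float = 0.01) -> np.ndarray:
    """Add a low-amplitude, key-derived pseudorandom sequence to the signal."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    return audio + strength * pattern

def detect_watermark(audio: np.ndarray, key: int, threshold: float = 0.005) -> bool:
    """Correlate with the same key-derived sequence; a high score implies the mark."""
    rng = np.random.default_rng(key)
    pattern = rng.choice([-1.0, 1.0], size=audio.shape)
    score = float(np.mean(audio * pattern))
    return score > threshold

# Toy "audio": one second of noise at a 16 kHz sample rate.
rng = np.random.default_rng(0)
clean = 0.1 * rng.standard_normal(16_000)
marked = embed_watermark(clean, key=42)

print(detect_watermark(marked, key=42))  # marked audio correlates strongly with the key
print(detect_watermark(clean, key=42))   # unmarked audio scores near zero
```

The sketch shows why detection is brittle in practice: correlation-based checks require the verifier to hold the key, and simple signal processing (resampling, compression, added noise) can degrade the correlation that detection depends on.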
Context and significance
Editorial analysis: The acquisition arrives amid ongoing public and industry concerns about voice deepfakes and misuse. The New York Times frames the purchase against OpenAI's prior caution on releasing powerful voice-cloning research publicly in 2024, noting the company had withheld some voice-replication work for safety reasons (The New York Times). For practitioners, the transaction underlines two broader dynamics: first, the consolidation of specialized capabilities into major AI labs; second, the continued presence of high-quality, consumer-accessible voice-cloning models outside large labs' controlled releases, which complicates governance, detection, and rights-management efforts.
Notable artifacts reported in sources
- The New York Times lists celebrity and copyrighted voices found in Weights.gg's library, including Samuel L. Jackson, Taylor Swift, Kanye West, Blackpink, Bugs Bunny, Donald Trump, and Joseph R. Biden Jr. (The New York Times).
What to watch
Editorial analysis: Observers should track whether OpenAI publishes any technical documentation, safety analyses, or tooling that references components derived from Weights.gg; public disclosure would clarify whether the assets are being used for internal safety research, product features, or other purposes. The Decoder reports, citing its sources, that OpenAI does not plan to re-release a consumer product like Weights.gg and is instead integrating voice capabilities into existing products such as ChatGPT's voice features (The Decoder). Industry monitoring should also cover licensing and copyright enforcement activity tied to public repositories of voice models, as well as regulatory or platform-policy responses to hosted voice-cloning services.
Limitations of reporting
What is publicly reported about the transaction relies on anonymous sources and secondary databases. The New York Times says the deal terms were not publicly disclosed and cites unnamed people familiar with the acquisition (The New York Times). The Decoder and PitchBook provide funding and headcount figures that are presented as estimates rather than company filings (The Decoder; PitchBook coverage via Techmeme/Intellectia). OpenAI has not issued a public statement on the rationale for the acquisition in the sources reviewed.
Practical implications for engineers and product teams
Editorial analysis: Teams building or defending voice systems should treat the continued availability of high-quality cloning tech outside controlled releases as a persistent risk vector. Typical mitigation strategies in the field include deployment of watermarking, provenance metadata, multi-factor authentication for voice-driven actions, robustness testing against synthetic inputs, and legal/compliance processes to address copyrighted or impersonation content. Those are industry-wide patterns rather than claims about OpenAI's internal roadmap.
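As a concrete illustration of the provenance-metadata pattern mentioned above, here is a minimal sketch assuming a shared HMAC secret between the generating service and a downstream verifier. The function names and metadata fields are hypothetical; real deployments would favor asymmetric signatures and standardized manifests (e.g. C2PA-style content credentials) over this toy scheme.

```python
import hashlib
import hmac
import json

def sign_provenance(audio_bytes: bytes, metadata: dict, secret: bytes) -> dict:
    """Bind metadata to the audio with an HMAC over both (hypothetical schema)."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    digest = hashlib.sha256(audio_bytes).digest()
    tag = hmac.new(secret, digest + payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "hmac": tag}

def verify_provenance(audio_bytes: bytes, record: dict, secret: bytes) -> bool:
    """Recompute the HMAC; any change to the audio or metadata fails verification."""
    payload = json.dumps(record["metadata"], sort_keys=True).encode()
    digest = hashlib.sha256(audio_bytes).digest()
    expected = hmac.new(secret, digest + payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["hmac"])

# Illustrative use: tag synthetic audio at generation time, check it downstream.
secret = b"demo-shared-secret"            # placeholder; manage real keys securely
audio = b"\x00\x01\x02\x03" * 1000        # stand-in for encoded audio bytes
record = sign_provenance(audio, {"generator": "example-tts", "synthetic": True}, secret)

print(verify_provenance(audio, record, secret))            # intact record verifies
print(verify_provenance(audio + b"\x00", record, secret))  # tampered audio fails
```

Unlike watermarking, this approach does not survive re-recording or transcoding of the audio itself, which is why the mitigation strategies listed above are typically layered rather than used alone.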
Scoring Rationale
Notable acquisition with direct relevance to voice synthesis, safety, and IP management. The story affects practitioners building or defending voice systems but does not introduce a new model or standard-setting release.