
Sam Altman Proposed Robot Taxes, a Public Wealth Fund, and a Four-Day Workweek. Critics Called It a Cover Story.

LDS Team · Let's Data Science
OpenAI published a 13-page policy blueprint comparing the AI transition to the New Deal. It proposes taxing automated labor, giving every American a stake in AI profits, and building tripwires that automatically expand unemployment benefits when job losses hit preset thresholds. Policy experts say the ideas are not new. The timing is.

On April 6, OpenAI published a 13-page document titled "Industrial Policy for the Intelligence Age: Ideas to keep people first." It landed the same day as a New Yorker investigation questioning Sam Altman's credibility on AI safety. Altman's response to the timing was to go bigger: in an exclusive Axios interview, he compared the scale of the coming AI disruption to the Progressive Era and the New Deal.

"Some will be good. Some will be bad," Altman told Axios. "But we do feel a sense of urgency."

The urgency has a specific shape. OpenAI's CEO warned that advanced AI could enable a "world-shaking cyberattack" within one year and that using AI to develop novel pathogens is "no longer theoretical." His proposed solution is not to slow down. It is to tax what AI builds, distribute the proceeds, and shorten the workweek.

Five Proposals That Would Reshape the AI Economy

The blueprint, produced by OpenAI's global affairs team, lays out five interlocking policy proposals. Each one would require congressional action. None of them are small.

1. A public wealth fund, seeded by AI companies. OpenAI proposes a nationally managed investment fund, partially funded by contributions from AI companies themselves, that would "invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI." The returns would flow directly to American citizens. The model is Alaska's Permanent Fund, which has distributed oil revenue dividends to every resident since 1982.

2. A tax shift from payroll to capital. Because "robots, not humans, will be doing the bulk of the work," the blueprint recommends shifting the tax base away from payroll taxes and toward capital gains, corporate income, and new taxes on automated labor. The logic: if fewer humans earn wages, a tax base built on wages raises less revenue.

3. Four-day workweeks at full pay. OpenAI recommends piloting 32-hour workweeks, framed as an "efficiency dividend" from AI-driven productivity gains. Employers and unions would negotiate the transition.

4. Auto-triggering safety nets. The most technically specific proposal. The blueprint envisions tripwires tied to economic data: when AI displacement metrics hit preset thresholds, temporary increases in unemployment benefits, wage insurance, and cash assistance automatically kick in. When conditions stabilize, the measures phase out. No congressional vote required once the thresholds are set.

5. AI containment playbooks. Government-coordinated plans for scenarios involving dangerous autonomous AI systems that "cannot be easily recalled." This is the proposal that drew the least public attention and arguably carries the most weight.

The Critics Were Ready

The policy community's response was fast and pointed. The proposals themselves are not controversial. The source is.

Anton Leicht, a visiting scholar at the Carnegie Endowment for International Peace, described the blueprint as "comms work to provide cover for regulatory nihilism." The proposals represent "heavy political lifts," he said, with no clear mechanism for implementation.

Soribel Feliz, an independent AI policy advisor and former Senior AI and Tech Policy Advisor for the U.S. Senate, was more direct. "Most of these pillars have been the framework for every major AI governance conversation since ChatGPT came out," she said. The conversation "needs to happen at this level at this moment," Feliz added, but the gap between naming solutions and building real mechanisms remains the central challenge.

Nathan Calvin, VP of State Affairs and General Counsel at Encode AI, called the document "a real improvement from previous documents that were even more floaty" but pointed to a specific contradiction. OpenAI executives Chris Lehane, the company's head of global affairs, and Greg Brockman, its president, lead an industry lobbying group called the Leading the Future PAC. Calvin accused the PAC of "attacking politicians" who support the very policies the blueprint now endorses, citing opposition to New York congressional candidate Alex Bores, the author of the RAISE Act, and alleged intimidation tactics around California's SB 53.

Lucia Velasco, a senior economist at the Inter-American Development Bank and former UN AI policy head, acknowledged the problem the blueprint tries to solve while questioning its author. OpenAI is "the most interested party" in shaping policy outcomes, she said. The proposals "shape an environment in which OpenAI operates with significant freedom."

The Timing Problem

The 13-page blueprint did not land in a vacuum. OpenAI is preparing for an IPO, closed a $122 billion private funding round, and is under scrutiny for its nonprofit-to-for-profit conversion. The company that wants to tax AI profits is also racing to generate them.

This is the tension every critic identified: the company proposing the solutions is the company building the technology it warns about. Altman compared the moment to the New Deal. His critics note that FDR was not running Standard Oil while proposing antitrust law.

The New Yorker investigation, published the same day, questioned whether Altman's public statements about safety align with his internal decisions. The juxtaposition was not lost on policy observers.

Yet the proposals themselves are difficult to dismiss. A public wealth fund that distributes AI-driven returns to citizens is an idea that economists across the political spectrum have explored. Automatic safety net triggers tied to economic data are a mechanism that already exists in other policy domains. The four-day workweek has growing support from both unions and employers running pilot programs.

The question is not whether these are good ideas. It is whether the company most invested in AI acceleration is the right messenger.

The Counterargument for Taking It Seriously

There is a case for engaging with the blueprint on its merits, regardless of who published it.

OpenAI is one of a handful of companies with firsthand visibility into how fast AI capabilities are advancing. Its internal projections about job displacement and cybersecurity risks come from testing models that the public has not yet seen. If Altman is right that superintelligence is close, waiting for a less conflicted messenger means waiting until the transition is already underway.

The auto-triggering safety net proposal, in particular, addresses a problem that standard policy tools handle poorly: speed. Traditional unemployment insurance requires legislative action to expand. By the time Congress votes, the displacement wave may have already peaked. Tying benefit increases to preset economic thresholds removes the legislative delay.
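The mechanism can be made concrete with a minimal sketch. Everything here is an illustrative assumption, not from the blueprint: the thresholds, the multipliers, the base benefit amount, and the `benefit_for` function are all hypothetical, chosen only to show how a preset tripwire scheme expands and phases out benefits without a legislative vote.

```python
# Hypothetical sketch of an auto-triggering safety net: benefits scale up
# when a displacement metric crosses a preset threshold and phase back out
# when conditions stabilize. All thresholds, rates, and names below are
# illustrative assumptions, not figures from the OpenAI blueprint.

BASE_WEEKLY_BENEFIT = 450.0        # normal unemployment benefit (USD, assumed)

# Preset tripwires: (displacement-rate threshold, benefit multiplier),
# ordered from most to least severe so the first match wins.
TRIPWIRES = [
    (0.08, 2.0),   # severe displacement: double benefits
    (0.05, 1.5),   # elevated displacement: +50%
    (0.03, 1.2),   # early warning: +20%
]

def benefit_for(displacement_rate: float) -> float:
    """Return the weekly benefit implied by the current displacement rate.

    Once the thresholds are set, the benefit expands and contracts
    automatically with the published metric; no further vote is needed.
    """
    for threshold, multiplier in TRIPWIRES:
        if displacement_rate >= threshold:
            return BASE_WEEKLY_BENEFIT * multiplier
    return BASE_WEEKLY_BENEFIT  # conditions stable: measures phase out

print(benefit_for(0.02))  # 450.0 -- below every tripwire
print(benefit_for(0.06))  # 675.0 -- 1.5x tier
print(benefit_for(0.10))  # 900.0 -- 2.0x tier
```

The design choice the blueprint is implicitly making is the one the ordered list captures: policy response time collapses from a legislative cycle to a data-release cycle, at the cost of debating the thresholds up front.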

The containment playbook proposal, buried on page 11, may be the most consequential. It acknowledges what Anthropic's recent Mythos Preview release demonstrated in concrete terms: AI systems are approaching capability levels where recall after deployment becomes meaningfully difficult.

The Bottom Line

OpenAI published a 13-page document proposing that the industry it leads should be taxed, regulated, and constrained for the public good. The proposals borrow from a century of American economic policy: wealth redistribution, automatic stabilizers, shorter workweeks, containment planning.

Every critic quoted in this article acknowledged the same thing: the problems are real. AI-driven job displacement is accelerating. The tax base built on human wages is eroding. The safety net was not designed for this speed of disruption. The debate is not about whether to act. It is about whether to trust the messenger.

"Capitalism as we know it won't be enough," Altman told Axios. He may be right. He is also the person who stands to profit most from what replaces it.
