OpenAI Calls for New Deal-Scale Policy Overhaul to Prepare America for Superintelligence

The $852 billion AI company's 13-page blueprint proposes public wealth funds, robot taxes and four-day workweeks — drawing praise for ambition and scepticism about motive

Sam Altman is asking the U.S. government to tax, regulate and redistribute the wealth generated by the very technology his company is racing to build — a move without clear precedent in Silicon Valley history.

In a 13-page policy paper released Monday titled Industrial Policy for the Intelligence Age: Ideas to Keep People First, OpenAI argued that the coming era of artificial superintelligence — AI systems capable of outperforming the smartest humans — demands nothing less than a new social contract on the scale of the Progressive Era and the New Deal. The document lays out more than two dozen policy proposals spanning taxation, labour markets, energy infrastructure, social safety nets and national security, all framed around the premise that incremental regulatory adjustments will not be enough.

“We want to put these things into the conversation,” Altman told Axios in a half-hour interview accompanying the release. “Some will be good. Some will be bad. But … we do feel a sense of urgency.”

The Centrepiece: A Public Wealth Fund and Robot Taxes

The paper’s most provocative proposal is the creation of a nationally managed Public Wealth Fund that would give every American citizen a direct financial stake in AI-driven economic growth. The fund would be seeded in part by AI companies themselves and would invest in diversified long-term assets across both AI developers and firms adopting the technology. Returns would be distributed directly to citizens regardless of their existing wealth or access to capital markets.

Alongside the fund, OpenAI floats the idea of shifting the tax base away from payroll — which could be hollowed out as AI automates labour — and toward capital gains, corporate income and what the paper describes as “taxes related to automated labour.” This amounts to a form of robot tax, an idea first popularised by Bill Gates in 2017 but never seriously advanced in U.S. legislation.

The proposals arrive at a moment of intensifying anxiety. Goldman Sachs research has estimated that AI is already cutting roughly 16,000 U.S. jobs per month, with younger workers bearing a disproportionate share of the displacement. OpenAI’s paper acknowledges the risk head-on: without thoughtful policy, AI could widen inequality by compounding advantages for those already positioned to capture the upside.

Four-Day Workweeks and Portable Benefits

Beyond fiscal policy, the document sketches a reimagined labour market. OpenAI proposes incentivising companies and unions to pilot 32-hour, four-day workweeks at full pay — converting AI-driven productivity gains into what it calls an “efficiency dividend” of time returned to workers. It also envisions portable benefit systems decoupled from individual employers, following workers across jobs, industries and entrepreneurial ventures.

Access to AI itself would be treated as a foundational right, comparable to literacy and electricity. The paper calls for affordable access to foundational models for workers, small businesses, schools, libraries and underserved communities — a framing that positions AI infrastructure alongside broadband and rural electrification as a public good.

On the safety net, OpenAI proposes automatic economic tripwires: when displacement metrics cross predefined thresholds, temporary expansions of unemployment benefits, wage insurance and cash assistance would activate without requiring new legislation.

Containment Playbooks for Rogue AI

Perhaps the paper’s most sobering section addresses scenarios in which dangerous AI systems cannot be easily recalled — because model weights have been released publicly, developers are unwilling to restrict access, or the systems are autonomous and capable of self-replication. OpenAI calls for coordinated government-industry containment playbooks modelled on crisis-response frameworks from cybersecurity and public health.

The company also advocates for formal incident-reporting mechanisms, pre- and post-deployment auditing of the most powerful models, and international information-sharing networks among national AI safety institutes. Frontier AI companies, it argues, should adopt public-benefit corporate governance structures with explicit commitments to broad wealth sharing and long-term charitable giving.

The Trust Deficit

The timing of the release was, at best, awkward. The paper landed on the same day The New Yorker published a lengthy investigation into OpenAI that raised pointed questions about Altman’s trustworthiness, including allegations from former co-founder Ilya Sutskever that Altman had been deceptive about the company’s safety protocols. Internal board deliberations that led to Altman’s brief firing in late 2023 centred on the conclusion that he had not been “consistently candid.”

This backdrop colours every reaction to the document. Anton Leicht, a visiting scholar at the Carnegie Endowment for International Peace, wrote on X that the proposals amount to “fundamental societal changes and heavy political lifts” that are unlikely to materialise on their own. On his reading, the paper functions as “comms work to provide cover for regulatory nihilism” — big ideas floated to create the appearance of responsibility while the company continues building at full speed.

“OpenAI is asking policymakers to build a world that can handle the speed they’re planning to move at; deployment absorption instead of development friction,” he wrote.

Others were more charitable. Soribel Feliz, an independent AI policy adviser and former senior adviser for the U.S. Senate, said OpenAI deserves credit for putting such proposals on paper. The acknowledgement that American institutions and safety nets are falling behind AI development is correct, she said, “and the conversation needs to happen at this level at this moment.” But she cautioned that most of the underlying pillars — shared prosperity, risk mitigation, democratised access — have framed every major technology-policy discussion in recent years.

Wider Context: An Industry Positioning Play

OpenAI’s paper does not exist in a vacuum. It arrives as the company pursues a valuation north of $850 billion, prepares for a potential public offering, and navigates a political environment where AI regulation remains deeply unsettled. The European Union’s AI Act has established a compliance framework; in the U.S., federal legislation remains stalled while states experiment with patchwork rules. OpenAI itself has been accused of using aggressive lobbying to undermine California’s proposed AI transparency legislation, SB 53.

The document can therefore be read on at least two levels. As a genuine policy contribution, it represents the most detailed blueprint any major AI company has offered for managing the economic disruption its own technology may cause. As a positioning exercise, it allows OpenAI to frame itself as a responsible actor advocating for worker protections and wealth redistribution — even as critics question whether the company’s actions match its rhetoric.

TechCrunch noted that the proposals blend traditionally left-leaning mechanisms — public wealth funds, expanded safety nets, robot taxes — with a fundamentally capitalist, market-driven economic framework. That ideological flexibility may be deliberate: the paper is designed to appeal to both progressive policymakers concerned about inequality and centrist technocrats focused on competitiveness with China.

What Happens Next

OpenAI is backing the paper with money and institutional heft. The company announced a pilot programme of fellowships and research grants of up to $100,000, plus up to $1 million in API credits, for work that builds on the paper’s proposals. It will convene discussions at a new OpenAI Workshop opening in May in Washington, D.C.

Whether any of these ideas gain legislative traction is another matter entirely. A public wealth fund seeded by AI companies, higher capital-gains taxes and a national four-day workweek pilot are each, on their own, a heavy political lift in the current Congress. Taken together, they represent a transformation of the American social contract that would require sustained bipartisan will — a commodity in short supply.

But the paper’s significance may lie less in its specific prescriptions than in the fact that it was written at all. The company building what may become the most disruptive technology in human history is now on the record saying that disruption demands a policy response of historic scale. If superintelligence arrives on anything like the timeline OpenAI projects, the question is not whether these conversations need to happen, but whether they are happening fast enough.


OpenAI is accepting public feedback on the paper at newindustrialpolicy@openai.com.

