OpenAI has released two documents that together represent the most comprehensive economic policy framework ever proposed by an AI company, and the most candid acknowledgment by any major AI developer that the technology it is building poses a genuine threat to the economic security of millions of workers. The reports, released in the final week of April 2026, mark a significant departure from the industry's typical posture of emphasizing AI's job-creating potential while downplaying its displacement effects.

The first document, 'The AI Jobs Transition Framework,' authored by OpenAI Chief Economist Ronnie Chatterji, analyzes more than 900 occupations covering 153.7 million jobs — 99.7% of all US employment. The second, 'Industrial Policy for the Intelligence Age: Ideas to Keep People First,' proposes a sweeping set of government and corporate policy responses, including a 32-hour workweek, enhanced worker benefits, and a public AI wealth fund that would give citizens direct equity stakes in AI growth.

The Capability Overhang

The most striking finding in the jobs framework is what OpenAI calls the 'Capability Overhang': the gap between what AI can theoretically do in a given industry and what it is actually being used for. According to the analysis, this gap is enormous in every sector of the economy. AI can already perform far more tasks than workers currently use it for, meaning the disruption that has occurred so far represents only a fraction of what is technically possible.

The framework categorizes all US employment into four groups: 18% of jobs face genuine automation risk; 24% will see workforce shrinkage even though humans remain necessary for key tasks; 12% could actually grow because AI reduces costs and creates more demand (software developers and physical therapists are cited as examples); and 46% will see little change in the near term. The industries with the highest theoretical AI exposure are Business and Financial Operations, Computer and Mathematical roles, and Management — precisely the white-collar, knowledge-economy jobs that have historically been considered most insulated from automation.

"Exposure helps us understand where AI has technical capability. It cannot, on its own, tell us which jobs are most likely to be automated, redesigned, or expanded in the near term."

— OpenAI AI Jobs Transition Framework, April 2026

The Policy Proposals

The policy document is more provocative than the jobs analysis. Its central proposal — that governments should incentivize employers and unions to run 32-hour, four-day workweek pilots with no loss in pay — is framed as a mechanism for distributing AI productivity gains to workers rather than allowing them to accrue entirely to corporate shareholders. The proposal envisions workers converting the hours freed up by AI assistance into either a permanent shorter week or bankable paid time off.

Alongside the workweek proposal, the document calls for companies to increase retirement contributions, cover most healthcare costs, and subsidize childcare as AI reduces routine workloads. It also proposes a 'robot tax', a levy on companies that replace human workers with AI, with the proceeds funding a public AI wealth fund that would give every citizen a direct equity stake in the economic gains AI generates. The wealth fund proposal is explicitly modeled on Alaska's Permanent Fund, which distributes oil revenue to state residents annually.

The Skeptics

The proposals have attracted immediate criticism from economists and policy analysts who note the inherent tension in OpenAI's position. 'OpenAI wants other companies to pay workers more while also paying them for subscriptions to their services,' said Professor Gina Neff of the University of Cambridge Minderoo Centre for Technology and Democracy, speaking to the BBC. 'The ideas in this policy might work, but doing so will take a complete change in political headwinds.'

The deeper criticism is structural. OpenAI is a company that is actively accelerating the automation of knowledge work while simultaneously proposing that other companies bear the cost of the transition. The robot tax, if implemented, would apply to OpenAI's customers — the companies that deploy its models to replace human workers — but not to OpenAI itself, which profits from selling the tools that enable the displacement. This asymmetry has not been lost on critics.

OpenAI has framed both documents as 'the beginning of the broader conversation' rather than a definitive policy platform. That framing is either an appropriately humble acknowledgment of the complexity of the issues, or a way of generating favorable press coverage without committing to specific outcomes. What is clear is that the documents represent a significant evolution in how the AI industry is willing to talk publicly about the economic consequences of the technology it is building — and that evolution, whatever its motivations, is worth taking seriously.