The Trump administration is considering a significant reversal of its deregulatory stance on artificial intelligence, with officials deliberating an executive order that would establish a formal government review process for new AI models before they are released to the public. The New York Times first reported the deliberations on May 4, citing officials briefed on the discussions. A White House official declined to confirm or deny the report, saying only that any policy announcement would come directly from the president.

The proposed framework would create an AI working group bringing together senior technology executives and government officials to examine potential oversight procedures. The group would evaluate new frontier models against a set of safety and national security criteria before clearing them for public deployment. Key details of the review process, including whether it would be mandatory or voluntary and which agencies would hold enforcement authority, remain under deliberation.

"The change could be prompted by concerns about Anthropic's new AI model called Mythos, which cybersecurity experts warn could supercharge complex cyberattacks."

— Reuters, May 4, 2026

The Mythos Catalyst

The immediate catalyst for the policy reconsideration appears to be Anthropic's unreleased Mythos model. Cybersecurity researchers briefed on the model's capabilities have warned that its advanced coding proficiency gives it an unprecedented ability to identify software vulnerabilities and devise exploitation strategies at scale. Unlike previous frontier models that could assist with discrete coding tasks, Mythos is reportedly capable of end-to-end offensive security workflows — from reconnaissance and vulnerability discovery to exploit development and deployment — with minimal human guidance.

The concern is not merely theoretical. As documented in separate analyses published the same day, 2025 already saw non-technical actors using frontier AI models to conduct sophisticated cyberattacks: a 17-year-old in Osaka used AI to compromise 7 million user records, three teenagers with no coding background used ChatGPT to launch 220,000 attacks against Rakuten Mobile, and a single actor using Claude Code conducted an extortion campaign against 17 organizations in a single month. Mythos, if the warnings are accurate, would represent a qualitative leap beyond these incidents.

A Reversal of Historic Proportions

The proposed oversight framework would represent a dramatic policy reversal. On his first day in office in January 2025, President Trump revoked the Biden administration's October 2023 executive order on AI safety, which had required developers of AI systems posing risks to national security, the economy, or public health to share safety test results with the government before public release. Trump's AI blueprint, released in July 2025, explicitly prioritized a hands-off regulatory approach, loosening environmental rules and expanding AI exports to allies as part of a strategy to maintain American dominance over China in the technology.

The White House in March 2026 unveiled an AI policy framework for Congress that urged lawmakers to pre-empt state-level AI rules, protect children from AI harms, and shield communities from high energy costs associated with data center expansion — but that framework conspicuously avoided any mandatory pre-release review requirements for AI developers. The shift now under consideration would go substantially further, inserting the federal government into the model development and release pipeline in a way that the administration had previously resisted.

Industry Reaction and Implications

The AI industry is watching the deliberations closely. Major AI labs have long argued that mandatory pre-release reviews would slow innovation and create competitive disadvantages relative to Chinese AI developers, who face no equivalent restrictions. Anthropic, whose Mythos model appears to have triggered the policy reconsideration, has historically been the frontier developer most supportive of government engagement: the company has consistently advocated for responsible scaling policies and maintains a government affairs presence in Washington.

OpenAI, Google DeepMind, and Meta have each built safety evaluation teams that conduct internal red-teaming and dangerous capability assessments, but these processes are voluntary and their results are not systematically shared with government agencies. A mandatory review process would fundamentally change the relationship between AI developers and the federal government, potentially requiring labs to disclose model architectures, training data, and evaluation results to government reviewers before public deployment.

The deliberations reflect a broader tension that has defined AI policy throughout the current administration: the desire to maintain American AI leadership through deregulation and aggressive deployment, balanced against growing evidence that frontier AI capabilities are outpacing the security and governance frameworks designed to manage them. Whether the White House ultimately moves forward with an executive order, and in what form, will have significant implications for the pace and structure of AI development in the United States.