President Donald Trump’s White House is weighing whether the U.S. government should screen the most powerful AI models before they become available to the public, a significant shift from his previously laissez-faire approach to the AI industry.
In the most recent reporting on White House AI model vetting, the debate boils down to whether the government should intervene before frontier systems with coding or cyber capabilities are distributed to the public. That is not a subtle change. That is Washington asking whether the AI arms race has reached the stage where ‘ship it and see what happens’ no longer cuts it.
The proposal under consideration is an executive order that would establish a working group of government officials and tech executives to examine how such screening could work.
Per other reporting on the administration’s talks, the conversation has largely centered on sophisticated models that could enable cyberattacks or help identify software weaknesses.
That’s a bit of whiplash, obviously. The administration that pledged to dismantle the barriers to AI development now seems willing to put one in place. Maybe not a wall, maybe just a gate.
It follows anxiety over Anthropic’s latest system, Mythos, which reportedly unnerved cyber experts with its sophisticated coding and vulnerability-detection talents. The media also reported that the discussions included an approach to vetting models with national-security implications before their general release.
The anxiety is fairly logical: if a model can help defenders find bugs faster, it can likely help attackers find them faster too. That is the uneasy knot at the center of this argument.
For Trump, it is an important reversal of direction. When he signed an executive order in January 2025 to reduce impediments to American AI dominance, he rescinded the AI policies instituted by the previous administration, which he said obstructed innovation.
At the time, the message was: build fast, limit government oversight, and you will win. This time the message is more complicated: do build fast, but don’t hand everyone a cyber blowtorch without first checking the safety switch.
That friction is precisely why this story matters. AI firms want speed, because speed attracts users, money, and geopolitical influence. Security officials want caution, because the smartest AI models increasingly look less like consumer products and more like general-purpose coding, analysis, and perhaps cyber-warfare systems. Both are right. And that, frustratingly, is why making rules is hard.
The administration’s larger AI strategy focuses largely on speeding things up. America’s AI Action Plan puts U.S. AI policy in three buckets:
boost innovation
build AI infrastructure
lead in global diplomacy and security
The last item is carrying quite a load at the moment. When AI models matter for cyber defense, weapons, intelligence, and critical infrastructure, they become more than just another consumer technology. They become national security assets, and national security problems.
There is already some technical groundwork for thinking about risk; Washington is just debating the appropriate scale of enforcement. The National Institute of Standards and Technology has released an AI Risk Management Framework to help organizations deal with risks to people, businesses, and communities.
It’s not mandatory. There are no licenses involved. Yet the framework gives government officials a shared language for the messy business of mapping out harm, measuring risk, managing failures, and figuring out accountability when things go wrong.
All of this is happening as AI becomes increasingly embedded in government and defense. Days before the recent vetting conversation, the Pentagon agreed to bring AI technologies into classified systems as part of agreements with several big tech companies, as reported in “U.S. military announces new AI partnerships.”
Once frontier models are integrated into sensitive government operations, the game changes. An error becomes more than just a failed demo. A mishap becomes more than just a bad news story. Reality kicks in fast.
The tech industry won’t appreciate that uncertainty. Understandably so: when Washington starts talking about review boards, you don’t hear many cheers.
Critics will argue that pre-release checks could slow innovation, leak sensitive technical information, or hand an edge to foreign competitors with different incentives. None of those concerns are frivolous. In AI, a delay of several months can be like showing up to a Formula One race on a bicycle.
Still, the case for oversight is getting harder and harder to ignore. If the next generation of models can facilitate cyberattacks, speed up risky bio research, engineer better fraud, or automate disinformation campaigns, then “trust us, we tested it ourselves in the lab” may not fly with the public much longer. The demand isn’t about a passion for bureaucracy. It’s about the size of the blast radius.
What is most likely, at least over the next few years, is not a government licensing system for all AI models, which would be impossible to execute in practice.
Instead, officials might focus regulation on only the most advanced systems, such as those capable of enabling large-scale cyberattacks or those used directly by the government. Consider a requirement that AI developers answer a few questions before they can sell high-powered systems to anyone with a credit card.
Even so, it is a milestone. The White House is sending the private sector a strong message that frontier AI may have moved past the stage of being merely a promising technological tool and become a strategic risk. That does not mean the end of the AI boom, to be clear. It signals that AI has grown a few teeth worth checking.
Silicon Valley has long told Washington that the U.S. needs to race forward to maintain its leadership. It looks like Washington wants to respond: OK, show us your brakes first.