Shortly after Trump reversed Biden's order on his first day in office, Elizabeth Kelly, then head of the US AISI -- and now overseeing Beneficial Deployment at Anthropic -- stepped down in late February. Her departure appeared to stem from Trump's dismissal of anything Biden-related, including AI safety and responsibility efforts.
The Trump administration notably did not invite AISI members to join Vice President JD Vance at France's AI Action Summit in February, where he urged the international community to do away with safety precautions.
On June 3, the US Department of Commerce announced that the AISI would become the "pro-innovation, pro-science US Center for AI Standards and Innovation (CAISI)." The release stated that the center would serve as the AI industry's primary point of government contact -- a role it already played under its previous name, though with a shift in outlook that appears largely semantic.
"For far too long, censorship and regulations have been used under the guise of national security. Innovators will no longer be limited by these standards," Secretary of Commerce Howard Lutnick wrote in the release. "CAISI will evaluate and enhance US innovation of these rapidly developing commercial AI systems while ensuring they remain secure to our national security standards."
CAISI will develop model standards, conduct testing, and "represent US interests internationally to guard against burdensome and unnecessary regulation of American technologies by foreign governments," the release states. There is no mention of establishing a culture of reporting model red-teaming results, for example, or of requiring companies to publish the results of certain deployment tests -- measures that state laws like New York's RAISE Act treat as safety requirements.
Safety may no longer be a top policy priority, which leaves the AI community to police itself. Just last week, researchers from several major AI companies came together to advocate for preserving chain of thought (CoT) monitoring: the practice of observing a reasoning model's CoT output as a way to catch harmful intentions and other safety issues.
While it's encouraging to see AI companies agree on a safety measure, voluntary consensus isn't the same as government-enforced regulation. An AI policy built around advancing progress with adequate safety and civil rights protections in place, for example, could turn that recommendation into a requirement for companies releasing new models.