Google, Microsoft to Give US Agency Early Access to AI Models
Alphabet Inc.’s Google, Microsoft Corp. and xAI have agreed to give the US government early access to their artificial intelligence models to assess the systems’ capabilities and help improve their security before the technology is released to the public.
With the agreements, the AI developers join OpenAI and Anthropic PBC in allowing pre-release reviews of their models by the US Commerce Department’s Center for AI Standards and Innovation, according to a statement from the agency on Tuesday. OpenAI and Anthropic have renegotiated their existing partnerships with the center to better align with priorities in President Donald Trump’s AI Action Plan, the agency said.
Related: Australia Regulator Threatens Enforcement for Poor AI Controls
Since 2024, the center has had pre-release access to models from OpenAI and Anthropic and has completed more than 40 evaluations of AI models, including state-of-the-art models that remain unreleased, according to the statement.
The agreements are being unveiled as Anthropic’s Mythos system rattles US officials, signaling a wider mandate for the relatively new center, which was established under President Joe Biden as the AI Safety Institute in 2023 and re-established under a new name by the Trump administration last year.
The center calls itself the “industry’s primary point of contact within the US government” for testing, collaborative research and best-practice development. Its existence hasn’t yet been codified into law, though some US lawmakers have introduced draft legislation to give the center more permanent footing.
Related: ‘The Arms Race Is On’: Chubb’s Greenberg on Mythos, Middle East
“These expanded industry collaborations help us scale our work in the public interest at a critical moment,” Chris Fall, the center’s director, said of the new agreements. Fall has taken the reins following the abrupt departure of Collin Burns, a former AI researcher at Anthropic who was chosen for the role but forced out just days after starting the job, according to news reports last month.
The new evaluation agreements follow reports from the New York Times and the Wall Street Journal that the Trump administration is considering an executive order that would create a government review process, a form of oversight, for AI tools. A White House official said any announcement would come directly from Trump and cast discussion of potential executive orders as speculation.
Trump’s AI Action Plan, released in July, directs that the Center for AI Standards and Innovation be part of a so-called AI evaluations ecosystem and lead national security-related AI model assessments. It adds that regulators should “explore the use of evaluations in their applications of existing law to AI systems.” It’s possible the center’s wider evaluation of models could pave the way for new enforcement of laws that are already on the books.
The administration’s efforts to shape AI policy have accelerated since Anthropic announced last month that its breakthrough Mythos model was adept at finding weak points in cybersecurity defenses. White House Chief of Staff Susie Wiles, Treasury Secretary Scott Bessent and National Cyber Director Sean Cairncross have become more involved in efforts related to Mythos, and the White House has already opposed a plan by Anthropic to expand access to its Mythos model.
Any Commerce Department evaluation agreement with Anthropic may be complicated by the company’s dispute with the Pentagon, which continues to play out in two lawsuits over whether the Defense Department can legally declare the AI company a supply-chain risk.
Both Defense Secretary Pete Hegseth and Trump have outlined a six-month phase-out period for the government to stop using Anthropic’s tools, while a forthcoming White House memo on agencies’ AI use addresses points of tension in contract negotiations, with nods to both sides’ perspectives.
Top photo: AI developer Anthropic PBC. Photographer: Gabby Jones/Bloomberg.