U.S. to assess new AI models before their release

SAN FRANCISCO, CALIFORNIA - APRIL 07: (L-R) Francis deSouza and Sharon Goldman speak onstage during the "How Google Is Scaling Enterprise AI Adoption with Trust" panel at the HumanX Conference 2026 at Moscone Center South on April 07, 2026 in San Francisco, California. (Photo by Big Event Media / Getty Images via AFP)

New York, United States – In a policy shift, the U.S. government announced Tuesday that it will have access to tech giants’ new AI models to evaluate them before they are released.

The agreements with Google DeepMind, Microsoft and xAI come after the Trump administration earlier adopted a hands-off approach to regulation as Silicon Valley rolled out artificial intelligence technology that is changing modern life at a breakneck pace.

The partnerships are based on agreements reached when Joe Biden was in power, and have been renegotiated under Donald Trump, officials said.

The New York Times reported Monday the White House is discussing an executive order that would establish a working group of tech executives and government officials to examine potential review procedures for new AI models.

The Center for AI Standards and Innovation (CAISI), which is part of the Commerce Department, said it “will conduct pre-deployment evaluations and targeted research to better assess frontier AI capabilities and advance the state of AI security.”

It was not immediately clear if the agreements announced Tuesday are linked to the working group that the Times says is being discussed.

The CAISI replaced the U.S. Artificial Intelligence Safety Institute created by the Biden administration in 2023.

President Trump had positioned himself as a champion of unfettered AI development, rolling back Biden-era safety evaluation requirements and casting regulation as a threat to U.S. competitiveness with China.

The Biden administration had issued an executive order in 2023 that required AI developers to share safety test results with the government and directed federal agencies to set standards for the technology. But Trump rescinded these measures shortly after taking office.

The immediate catalyst for the shift was the emergence of a powerful new AI model called Mythos, built by the San Francisco start-up Anthropic.

The company has said the model’s ability to identify software security vulnerabilities could lead to a cybersecurity reckoning, and it has declined to release the model publicly.

“Independent, rigorous measurement science is essential to understanding frontier AI and its national security implications,” CAISI Director Chris Fall said Tuesday.

U.S. news outlets have reported that the National Security Agency has gained access to Mythos and is carrying out tests on it.