A cross-party group of 60 UK parliamentarians has accused Google DeepMind of violating its international pledge to develop artificial intelligence safely, in an open letter shared exclusively with TIME ahead of its publication. The letter, published on August 29 by the activist group PauseAI UK, says Google's March release of Gemini 2.5 Pro without accompanying details of safety testing "sets a dangerous precedent." Its signatories, who include digital rights campaigner Baroness Beeban Kidron and former Defence Secretary Des Browne, call on Google to clarify its commitments.
For years, experts in artificial intelligence, including Google DeepMind CEO Demis Hassabis, have warned that AI could pose catastrophic risks to public safety and national security, for example by helping would-be bioterrorists engineer a new pathogen or hackers take down critical infrastructure. In an effort to manage those risks, Google, OpenAI, and other companies signed the Frontier AI Safety Commitments at an international AI summit co-hosted by the UK and South Korean governments in May 2024. Signatories pledged to "publicly communicate" their systems' capabilities and risk assessments, and to explain whether and how external actors, such as government AI safety institutes, were involved in testing. Absent binding regulation, the public and lawmakers have relied largely on information disclosed under these voluntary commitments to understand emerging AI risks.
Yet when Google released Gemini 2.5 Pro on March 25, claiming it outperformed rival AI systems on industry benchmarks by "meaningful margins," the company published no detailed safety-testing information for more than a month. The letter argues this reflects not only a "failure to comply" with its international safety commitments but also a threat to the fragile norms that underpin safer AI development. "If leading companies like Google treat these commitments as optional, we risk a dangerous race to deploy increasingly powerful AI without proper safeguards," Browne said in a statement accompanying the letter.
"We are fulfilling our public commitments, including the Seoul Frontier AI Safety Commitments," a Google DeepMind spokesperson told TIME in an emailed statement. "As part of our development process, our models undergo rigorous safety checks, including by the UK AISI and other third-party testers, and Gemini 2.5 is no exception."
The open letter calls on Google to set a specific timeline for publishing safety evaluation reports for future releases. Google first published a model card for Gemini 2.5 Pro, the document in which it typically shares safety-testing information, 22 days after the model's release; the eight-page document contained only a brief section on safety tests. Not until April 28, more than a month after the model became publicly available, was the model card replaced with a 17-page document giving specific evaluation results, which concluded that Gemini 2.5 Pro showed "significant" though not yet dangerous improvements in domains including hacking. The update also refers to the use of "third-party external testers" without disclosing who they were or whether the UK AI Security Institute was among them, which the letter likewise cites as a breach of Google's commitments.
Having previously declined media requests to say whether it had provided Gemini 2.5 Pro to governments for safety testing, a Google DeepMind spokesperson told TIME that the company did share the model with the UK AI Security Institute, as well as with a "diverse group of external experts" including Apollo Research, Dreadnode, and Vaultis. However, Google says it shared the model with the UK AI Security Institute only after Gemini 2.5 Pro's release on March 25.
On April 3, shortly after Gemini 2.5 Pro's release, Google senior director and Gemini product lead Tulsee Doshi told TechCrunch that the safety report was missing because the model was an "experimental" release, adding that it had already undergone safety testing. She said the goal of such experimental rollouts is to release a model in a limited way, gather user feedback, and improve it ahead of a production launch, at which point the company publishes a model card detailing the safety tests already completed. Days earlier, however, Google had made the model available to all of its hundreds of millions of free users, announcing in a post on X: "We want to get our smartest model into the hands of more people as quickly as possible."
The open letter argues that "labeling a publicly accessible model as 'experimental' does not absolve Google of its safety obligations" and calls on Google to adopt a more robust definition of deployment. "Companies bear a significant public responsibility to test new technology thoroughly rather than involve the public in the experiment," says Steven Croft, the Bishop of Oxford, who signed the letter. "Imagine a car manufacturer releasing a vehicle and saying, 'We want the public to experiment and [give] feedback when they have an accident, or hit pedestrians, or when the brakes don't work,'" he adds.
Croft questions how burdensome publishing safety reports at release could really be, framing the issue as one of priorities: "How much of [Google's] vast investment in AI is directed toward public safety and assurance, and how much toward massive computing power?"
Google is hardly the only industry giant that appears to be skirting safety commitments. Elon Musk's xAI has yet to publish a safety report for Grok 4, an AI model released in July. And although OpenAI published same-day safety reports for GPT-5 and other recent launches, its February release of the Deep Research tool shipped without one; the company said it had conducted "extensive safety testing," but did not publish the report until 22 days later.
Joseph Miller, director of PauseAI UK, says the organization is concerned about other instances of apparent non-compliance, and that its focus on Google was a matter of proximity: DeepMind, the AI lab Google acquired in 2014, remains headquartered in London. The current UK Secretary of State for Science, Innovation and Technology, Peter Kyle, said during the 2024 election campaign that he would "demand" that leading AI companies share their safety tests, but in February it was reported that the UK's plans to regulate AI had been shelved as the government sought to align with the Trump administration's hands-off approach. Miller says it is time to replace corporate promises with "real regulation," adding that "voluntary commitments just don't work."