Are artificial intelligence companies saving humanity from the potential harms of AI? “Don’t bet on it,” says the new report card.
As AI plays an increasingly important role in how people interact with technology, the potential harms are becoming increasingly clear: people are turning to AI-powered chatbots for advice and then dying by suicide, or using AI to carry out cyberattacks. There are also future risks: artificial intelligence could be used to build weapons or overthrow governments.
However, artificial intelligence companies have little incentive to prioritize human safety, and that is reflected in the AI Safety Index published Wednesday by the Silicon Valley-based nonprofit Future of Life Institute, which aims to steer artificial intelligence in safer directions and limit existential risks to humanity.
“This is the only industry in the U.S. that produces powerful technologies that are completely unregulated, which puts them in a race against each other where they simply have no incentive to prioritize safety,” institute president and MIT professor Max Tegmark said in an interview.
The highest overall grades were only C+, awarded to two San Francisco-based AI companies: OpenAI, which makes ChatGPT, and Anthropic, known for its AI chatbot Claude. Google's artificial intelligence division, Google DeepMind, received a C.
Even lower were Menlo Park-based Facebook parent Meta and Elon Musk's Palo Alto-based xAI, each of which received a D. Chinese firms Z.ai and DeepSeek also received Ds. The lowest grade went to Alibaba Cloud, which received a D-.
The companies' overall scores were based on 35 indicators across six categories, including existential safety, risk assessment and information sharing. The index compiled evidence from publicly available materials and from company survey responses. The assessment was conducted by eight artificial intelligence experts, a group that included scientists and executives from AI-related organizations.
All companies in the index were rated below average in the existential safety category, which takes into account internal monitoring and control measures as well as each company's existential safety strategy.
“While companies are ramping up their ambitions around AGI and superintelligence, none has demonstrated a credible plan to prevent catastrophic misuse or loss of control,” the institute said in its AI Safety Index report, using the acronym for artificial general intelligence.
Both Google DeepMind and OpenAI have said they are investing in safety efforts.
“Safety is at the core of how we build and deploy AI,” OpenAI said in a statement. “We invest heavily in cutting-edge safety research, build robust safeguards into our systems, and rigorously test our models both internally and with independent experts. We share our safety frameworks, evaluations and research to help raise industry standards, and we continually strengthen our defenses to prepare for future capabilities.”
In a statement, Google DeepMind said it takes a “rigorous, science-based approach to AI safety.”
“Our Frontier Safety Framework outlines specific protocols for identifying and mitigating severe risks from powerful, advanced artificial intelligence models before they materialize,” Google DeepMind said. “As our models become more advanced, we continue to innovate on safety and governance to keep pace with their capabilities.”
The Future of Life Institute report said xAI and Meta “do not have any monitoring and control commitments, despite having risk management mechanisms, and have not provided evidence that they are investing more than minimally in safety research.” Other companies, such as DeepSeek, Z.ai and Alibaba Cloud, do not have publicly available existential safety strategy documents, according to the institute.
Meta, Z.ai, DeepSeek, Alibaba and Anthropic did not respond to requests for comment.
“Legacy media lies,” xAI said in response. An attorney representing Musk did not immediately respond to a request for further comment.
Musk is also an adviser to the Future of Life Institute and has provided funding to the nonprofit in the past, but was not involved in the AI Safety Index, Tegmark said.
Tegmark said he is concerned that, without sufficient regulation of the artificial intelligence industry, AI could enable terrorists to create biological weapons, manipulate people more effectively than is possible today and, in some scenarios, even threaten the stability of governments.
“Yes, we have big problems and things are going in a bad direction, but I want to emphasize how easy it is to fix it,” Tegmark said. “We just need to have mandatory security standards for AI companies.”
Lawmakers have made attempts to increase oversight of artificial intelligence companies, but some bills have faced pushback from technology lobbying groups that argue increased regulation could slow innovation and push companies to move elsewhere.
But some laws aimed at better monitoring safety standards at artificial intelligence companies have passed, including SB 53, which Gov. Gavin Newsom signed into law in September. It requires businesses to share their safety protocols and report incidents such as cyberattacks to the government. Tegmark called the new law a step in the right direction but said much more is needed.
Rob Enderle, principal analyst at the consulting firm Enderle Group, said he thinks the AI Safety Index is an interesting way to approach the core problem of under-regulated AI in the US. But there are problems.
“It’s not clear to me that the United States and the current administration are capable of having well-designed rules at this point, which means the rules could end up doing more harm than good,” Enderle said. “It’s also unclear whether anyone has figured out how to implement the rules to ensure compliance.”