A new study has found that artificial intelligence can design DNA for a wide range of dangerous proteins, and can do so in ways that DNA manufacturers' biosecurity screening cannot reliably detect.
Malte Muller/fStoap/Getty Images
Large biotech companies that produce custom DNA for scientists have safeguards in place to keep dangerous biological material out of the hands of potential attackers. They check their orders to catch anyone trying to buy, say, smallpox or anthrax genes.
But now a new study in the journal Science demonstrates how AI can be used to easily bypass these biosecurity checks.
A team of artificial intelligence researchers has discovered that protein engineering tools can be used to "rehash" the DNA codes of toxic proteins, "rewriting them in a way that preserves their structure and perhaps their function," says Eric Horvitz, chief scientific officer at Microsoft.
The computer scientists used artificial intelligence software to generate DNA codes for more than 75,000 variants of dangerous proteins, and the screening systems used by DNA manufacturers were not always able to flag them.
"To our concern," says Horvitz, "these reformulated sequences escaped the biosecurity screening systems that DNA synthesis companies around the world use to flag hazardous orders."
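The study's actual rewriting method is being withheld, but a toy sketch can show why naive, exact-match DNA screening is fragile: because several codons encode the same amino acid, a DNA sequence can be rewritten so that the string of letters changes while the encoded protein does not. Everything below, including the miniature codon table and the `naive_screen` check, is a simplified illustration for this article, not the screening logic any real vendor uses:

```python
# Simplified illustration (not the study's method): synonymous codons
# encode the same amino acid with different DNA, so a screen that only
# matches known DNA strings can miss a sequence whose protein is unchanged.

# Tiny slice of the standard genetic code, enough for this example.
CODON_TABLE = {
    "TTA": "L", "TTG": "L", "CTT": "L", "CTG": "L",  # leucine
    "GCT": "A", "GCC": "A", "GCA": "A", "GCG": "A",  # alanine
    "AAA": "K", "AAG": "K",                          # lysine
}

def translate(dna: str) -> str:
    """Translate DNA into a peptide, three letters (one codon) at a time."""
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, len(dna), 3))

def naive_screen(dna: str, flagged: set) -> bool:
    """Flag an order only if its DNA exactly matches a known sequence."""
    return dna in flagged

original = "TTAGCTAAA"   # encodes the peptide "LAK"
rewritten = "CTGGCCAAG"  # different DNA letters, same peptide "LAK"

flagged_db = {original}  # hypothetical database of known dangerous DNA

assert translate(original) == translate(rewritten) == "LAK"
print(naive_screen(original, flagged_db))   # True  -- caught
print(naive_screen(rewritten, flagged_db))  # False -- slips through
```

Real screening tools compare at the protein level and use fuzzy similarity matching rather than exact DNA matches, which is why the study's protein-level "paraphrasing" of whole sequences, rather than simple codon swaps, was needed to evade them.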
A patch was quickly written and added to the screening software. But it is not perfect: it still failed to detect a small fraction of the variants.
And this is just the latest episode to show how AI is exacerbating long-standing concerns about the potential misuse of powerful biological tools.
The Dangers of Open Science
"Protein engineering using artificial intelligence is one of the most exciting areas in science. We are already seeing advances in medicine and public health," Horvitz says. "However, like many powerful technologies, these same tools can be misused."
For years, biologists have worried that their ever-improving DNA tools could be used to create powerful biological threats, such as more virulent viruses or more potent toxins. They have even debated whether it is wise to openly publish certain experimental results, even though open discussion and independent replication are the lifeblood of science.
The researchers and the journal that published this new study decided to withhold some of their information and limit access to their data and software. They engaged a third party, a non-profit organization called the International Biosecurity and Biosafety Initiative for Science, to decide who has a legitimate need to know.
"This is the first time such a model has been used to manage the risk of disseminating dangerous information in a scientific publication," says Horvitz.
Scientists who have long been concerned about biosecurity threats praised the work.
"My overall reaction was positive," says Arturo Casadevall, a microbiologist and immunologist at Johns Hopkins University. "We have a system in which we identify vulnerabilities. And what you see is an attempt to fix known vulnerabilities."
The problem, says Casadevall, is "what vulnerabilities we don't know about that will require patching in the future."
He notes that the team did no lab work to actually synthesize any of the AI-designed proteins and test whether they truly mimic the activity of the original biothreats.
Such work would be an important reality check as society grapples with this emerging class of AI threats, but it would be difficult to carry out, since it could run afoul of international treaties banning the development of biological weapons, Casadevall says.
Get ahead of the AI freight train
This is not the first time scientists have explored the potential for malicious use of AI in biology.
For example, a few years ago another team wondered if AI could be used to create new molecules that had the same properties as nerve agents. In less than six hours, the AI tool obediently generated 40,000 molecules that met the requested criteria.
The AI not only regenerated known chemical warfare agents, such as the notorious nerve agent VX, but also produced many previously unknown molecules that looked plausible and were predicted to be even more toxic. "We have transformed our innocuous generative model from a useful medical tool into a generator of potentially lethal molecules," the researchers wrote.
That team also chose not to publish the chemical structures the AI tool generated, or to make them in the lab, "because they considered them too dangerous," notes David Relman, a researcher at Stanford University. "They simply said: We are telling you all this as a warning."
Relman says this latest research, which shows how AI can be used to evade security checks and then works to close that gap, is commendable. At the same time, he says, it illustrates a much larger problem that is brewing.
"I think it makes us pause and ask, 'Well, what exactly should we do?'" he says. "How do we get ahead of a freight train that is picking up speed and racing down the tracks, at risk of derailing?"
Despite such concerns, some biosecurity experts see reason to be reassured.
Twist Bioscience, a major supplier of custom DNA, has had to refer orders to law enforcement fewer than five times in the past ten years, says James Diggans, head of policy and biosecurity at Twist Bioscience and chairman of the board of the International Gene Synthesis Consortium, an industry group.
"This is an incredibly rare thing," he says. "In the cybersecurity world, there are many actors trying to gain access to systems. In biotechnology, the situation is different. The actual number of people trying to abuse these systems may be very close to zero. And so I think these systems are an important bulwark, but we should all take solace in the fact that this is not a common scenario."