- CodeMender automatically generates verified security fixes for open source projects
- Google DeepMind claims CodeMender eases the vulnerability workload by checking code
- DeepMind plans a wider release to developers once CodeMender's reliability is confirmed
Google DeepMind has introduced CodeMender, an AI agent that it says can automatically detect and fix software vulnerabilities before hackers can exploit them.
Google's AI research arm says the new tool can protect open source projects by generating patches that can be applied once researchers have reviewed them.
CodeMender is built on DeepMind's Gemini Deep Think model and uses several analysis techniques, including fuzzing, static analysis, and differential testing, to identify the root causes of bugs and prevent regressions.
Helping, not replacing, people
Raluca Ada Popa, a senior research scientist at DeepMind, and John "Four" Flynn, vice president of security, said the system has already produced dozens of fixes.
"Over the past six months, while we have been building CodeMender, we have already upstreamed 72 security fixes to open source projects, including some as large as 4.5 million lines of code," Popa and Flynn wrote in a post on the DeepMind blog.
The company claims CodeMender can act both reactively and proactively, fixing flaws as they are detected and rewriting code to eliminate entire classes of vulnerabilities.
Ultimately, the system should be able to reduce the security workload on maintainers by checking its own fixes before submitting them for review.
Google is keen to emphasize the review stage, noting that CodeMender is not designed to replace people, but rather to act as a helpful agent and keep pace with the growing volume of vulnerabilities that automated systems can detect.
In one case, the team says, CodeMender automatically applied -fbounds-safety annotations to parts of the libwebp image compression library, a step DeepMind says would have prevented past exploits.
The annotations force the compiler to check buffer bounds, reducing the risk of overflow attacks.
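To give a sense of what such an annotation looks like, here is a minimal sketch in C. It is not taken from the actual libwebp patch; the CopyRow helper and the fallback macro are hypothetical, included only so the example compiles on compilers without the extension.

```c
/* Minimal sketch of a -fbounds-safety style annotation (not the real
   libwebp change). __counted_by(len) tells the compiler that a pointer
   refers to exactly `len` elements, so accesses through it can be
   bounds-checked and out-of-range indexing traps instead of overflowing. */
#include <stddef.h>
#include <stdint.h>

#if defined(__has_include) && __has_include(<ptrcheck.h>)
#include <ptrcheck.h>        /* provides the __counted_by annotation */
#else
#define __counted_by(len)    /* no-op fallback for ordinary compilers */
#endif

/* Hypothetical decoder helper: each pointer is bound to its length. */
void CopyRow(uint8_t* __counted_by(len) dst,
             const uint8_t* __counted_by(len) src, size_t len) {
  for (size_t i = 0; i < len; ++i) {
    dst[i] = src[i];         /* with bounds safety on, i >= len would trap */
  }
}
```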
The developers also recognize the growing use of AI by attackers and claim that defenders need similar tools.
DeepMind plans to expand testing with the open source community and, once CodeMender's reliability has been properly proven, hopes to release it for wider use by developers.
Google has also revised its Secure AI Framework and launched a new reward program for vulnerabilities related to artificial intelligence.