AI Is Exposing a Security Gap Companies Aren’t Staffed for: Researcher

Companies may have cybersecurity teams, but many are still unprepared for how AI systems actually go down, says an AI security researcher..

Sander Schulhoff, who wrote one of the first operational engineering guides and specializes in AI system vulnerabilities, said on Sunday's episode of “Lenny's Podcast” that many organizations lack the talent needed to understand and address AI security risks.

Traditional cybersecurity teams are trained to fix bugs and address known vulnerabilities, but AI behaves differently.

“You can fix the bug, but you can't fix the brain,” Schulhoff said, describing what he sees as a disconnect between the way security teams think and the way large language models fail.

“There is a disparity in how AI works compared to classic cybersecurity,” he added.

This gap is evident in actual deployments. Cybersecurity experts can test an AI system for technical flaws without asking, “What if someone tricks the AI ​​into doing something it shouldn’t do?” said Schulhoff, who runs an operational design platform and artificial intelligence hackathon.

Unlike traditional software, AI systems can be manipulated through language and indirect indications, he added.

Schulhoff said people with experience in both AI security and cybersecurity will know what to do if an AI model is tricked into generating malicious code. For example, they run code in a container and ensure that the AI ​​output does not affect the rest of the system.

At the intersection of AI security and traditional cybersecurity are “the security jobs of the future,” he added.

The Rise of AI Security Startups

Schulhoff also said that many AI security startups are installing fences that don't provide real protection. Because artificial intelligence systems can be manipulated in countless ways, claims that these tools can “catch everything” are misleading.

“This is completely false,” he said, adding that there will be a market correction in which “the revenues of these security fences and automated red-team companies will just dry up completely.”

AI Security Startups were on the wave of investor interest. Big tech and venture capital firms are pouring money into the space as companies rush to secure artificial intelligence systems.

In March, Google bought cybersecurity startup Wiz for $32 billion, a deal aimed at strengthening its cloud security business.

Google CEO Sundar Pichai said AI poses “new risks” at a time when multi-cloud and hybrid setups are becoming more common.

“Against this backdrop, organizations are looking for cybersecurity solutions that improve cloud security and span multiple clouds,” he added.

Last year, Business Insider reported that growing concerns about the safety of artificial intelligence models had helped fueling a wave of startups providing tools for monitoring, testing and protecting artificial intelligence systems.

Leave a Comment