AI agents are everywhere right now. In fact, a recent study found that 96% of European enterprises reported using, or planning to use, AI agents by 2026.
By their nature, AI agents need a range of permissions to act on a user's behalf – everything from your calendar to payments, and even potentially confidential company information.
This leaves the potential for that access to fall into the wrong hands, or even for an AI agent to "go rogue" and carry out tasks you never approved. There is a lot of pressure across the industry to adopt AI agents as companies worldwide chase productivity gains, but many are doing so before putting proper guardrails in place.
What are AI agents?
At the recent Oktane 2025 event, we heard all about the importance of securing AI – but what exactly does that mean, and how can you do it?
We spoke with Okta's EMEA CISO Stephen McDermid and Auth0 President Shiv Ramji to find out how important it is to secure these non-human identities (NHIs).
AI agents are largely independent: they are software systems that autonomously perform tasks on behalf of a user. They are built on generative AI models and can process several kinds of information, whether voice, text, video, or code.
It's a shiny new type of technology that many people are using in both their personal and professional lives. But, as with all technology, cybersecurity experts warn that these agents should be used with caution.
To complete tasks on your behalf, an AI agent needs access to your systems – your calendar, email, loyalty schemes, and in many cases even your credit card information.
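The principle at stake can be sketched in code. Below is a minimal, hypothetical illustration of granting an agent only the permissions it actually needs; the scope names and function are invented for illustration and do not represent any vendor's API.

```python
# Hypothetical sketch: a user approves a narrow allowlist of scopes,
# and any scope the agent requests beyond that list is refused.
ALLOWED_SCOPES = {"calendar:read", "email:send", "payments:authorise"}

def grant_agent_scopes(requested: set[str]) -> set[str]:
    """Return only the scopes the user has explicitly allowed; refuse the rest."""
    refused = requested - ALLOWED_SCOPES
    if refused:
        print(f"Refusing unapproved scopes: {sorted(refused)}")
    return requested & ALLOWED_SCOPES

# The agent asks for more than it should; it only receives what was approved.
granted = grant_agent_scopes({"calendar:read", "payments:authorise", "contacts:export"})
print(sorted(granted))
```

The point of the sketch: access is opt-in per capability, not a blanket handover of the account.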
According to McDermid, it is inevitable that people will use this technology, but it should be done carefully.
"That's why this is so important, because the reality is that people are going to run with this," he notes.
"They will go and try to be innovative. As you heard, they're all under pressure to do more with less – AI is a very fast way to do that, but it's also a very quick way to open up some risks: exposing data, potentially exposing your users' identities. I think that's why the anxiety is growing."
What are the risks?
An improperly secured agent is, of course, a significant risk. AI models are gullible, and it is easy to manipulate them.
That's great when you want one to prioritise your work, but it means cybercriminals are equally able to trick the models into working for them.
If your AI is compromised, it can leave you exposed on a number of levels.
"It's confidential data leaking, your personal information being disclosed, and the risk runs from legal to financial to falling foul of regulations in different countries," warns Ramji.
Nor is this merely theoretical. The risks were well illustrated by the recent problems with McDonald's AI recruitment platform, which was tasked with hiring staff. Although the weak link in that case was an astonishingly simple password (123456), the AI agent had access to all the data it had collected, including personal information, and some 64 million records were exposed – underscoring the danger of asking an AI agent to handle sensitive information.
How can you fix this?
Unfortunately, securing these NHIs is no simple task. "There is more pressure with AI to implement this change. I think it's probably harder because AI is moving so fast that it doesn't have the [same level of] governance," says McDermid.
Okta is on a mission to secure AI by bringing agents into your identity security fabric. Its platform helps identify risky configurations and manage an agent's permissions, ensuring your agent has access only to what it needs – and for no longer than necessary.
"Everyone should start building in security before they start playing with AI because, unfortunately, as you've seen in the headlines, there have already been some examples [of breaches] happening," says McDermid.
Okta also helps maintain continuous security for active agents, detecting and responding to anomalous or high-risk behaviour – meaning you'll be alerted if your agent looks like it might go rogue.
Least-privilege access is standardised to secure the authentication process for agents, and a clear audit trail helps track each agent, ensuring you stay compliant and in control of every agent acting on your behalf.
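One way to picture how least privilege, time-boxed access, and an audit trail fit together is the sketch below. This is an illustrative toy under assumed names, not Okta's actual implementation: every access decision is checked against a narrow, expiring grant and recorded so the agent's actions can be traced afterwards.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AgentGrant:
    """A time-boxed, least-privilege grant for one agent, with an audit trail."""
    agent_id: str
    scopes: frozenset
    expires_at: float
    audit_log: list = field(default_factory=list)

    def check(self, scope: str) -> bool:
        # Allow only if the scope was granted AND the grant has not expired.
        allowed = scope in self.scopes and time.time() < self.expires_at
        # Record every decision, allowed or not, so each action is traceable.
        self.audit_log.append((time.time(), self.agent_id, scope, allowed))
        return allowed

# A booking agent gets read-only calendar access for 15 minutes.
grant = AgentGrant("booking-agent", frozenset({"calendar:read"}), time.time() + 900)
grant.check("calendar:read")    # permitted: in scope, not expired
grant.check("payments:charge")  # denied: never granted, but still logged
```

Even the denied attempt ends up in the audit log, which is the property that lets you stay "over" every agent acting on your behalf.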
A gap in the rules?
Okta has also introduced a new standard, Cross App Access (XAA). It aims to help the whole industry protect itself by establishing protocols that help security teams stay a step ahead of threat actors.
"It's not going to be a silver bullet," McDermid warns. "It's not going to stop all these attacks in future, but it certainly gives us the best chance of making sure we get the technical capabilities out there and within the products that we offer."
Adopted across the industry, these standards are part of a wider effort for security teams to work together to protect themselves and their peers. According to McDermid, threat actors already collaborate, which gives them an advantage with new techniques and attacks:
"Threat actors actually share methods; they share platforms. They work together like a cohort, but customers and organisations don't. I think that's where we need to improve."
McDermid argues that learning from each other is a vital part of defending the industry. It would be easy to be intimidated or overwhelmed by the constant stream of attacks we see in the headlines, but to genuinely address the risks, teams must learn from these incidents and measure their own security tooling against such attacks.
"You have to keep trying to maintain governance over these controls and keep up that cyber hygiene, because I think if you don't know what the attacks look like, you're not assessing your own exposure to them."
He isn't warning people off the technology – quite the opposite. AI agents will be used whether or not a company approves, so it is important to establish policies to ensure safe use – sooner rather than later.