Designing digital resilience in the agentic AI era

Although global investment in AI is projected to reach $1.5 trillion in 2025, fewer than half of business leaders are confident in their organization's ability to maintain service continuity, security, and cost control during unexpected events. That uncertainty, combined with the complexity introduced when autonomous AI agents make decisions and interact with critical infrastructure, calls for a rethink of digital resilience.

Organizations are turning to the concept of a data fabric: an integrated architecture that connects and manages information across all business layers. By breaking down silos and providing real-time access to data across the enterprise, a data fabric can empower both human teams and agent-based AI systems to identify risks, prevent problems before they occur, recover quickly when they do, and keep operations running.

Machine Data: The Cornerstone of Agent-Based AI and Digital Resilience

Earlier AI models relied heavily on human-generated data such as text, audio, and video, but agent-based AI requires a deep understanding of an organization's machine data: logs, metrics, and other telemetry generated by devices, servers, systems, and applications.

For agent-based AI to improve digital resilience, it must have seamless, real-time access to this data stream. Without end-to-end machine data integration, organizations risk limiting AI capabilities, missing critical anomalies, or introducing errors. As Kamal Hathi, senior vice president and general manager of Splunk, a Cisco company, points out, agent-based AI systems rely on machine data to understand context, model outcomes, and continuously adapt. That makes control of machine data a cornerstone of digital resilience.
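To make "real-time access to machine data" concrete, here is a minimal sketch, in Python, of the kind of loop an agent might sit behind: it consumes a stream of metric readings and flags values that deviate sharply from a rolling baseline. Every name here (MetricEvent, telemetry_stream, agent.investigate) is a hypothetical illustration, not Splunk's or anyone's actual API, and a real agent would use a far more robust detector than a rolling z-score.

```python
from collections import deque
from dataclasses import dataclass
from statistics import mean, stdev
from typing import Iterable, Iterator


@dataclass
class MetricEvent:
    source: str   # e.g. a server, device, or service name
    name: str     # e.g. "cpu_util"
    value: float


def detect_anomalies(events: Iterable[MetricEvent],
                     window: int = 60,
                     threshold: float = 3.0) -> Iterator[MetricEvent]:
    """Yield events whose value deviates more than `threshold`
    standard deviations from a rolling window of recent values."""
    history: deque[float] = deque(maxlen=window)
    for event in events:
        if len(history) >= 2:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(event.value - mu) / sigma > threshold:
                yield event  # hand off to the agent for triage
        history.append(event.value)


# Hypothetical usage: telemetry_stream() would be backed by the
# organization's log/metric pipeline, not a toy generator.
# for anomaly in detect_anomalies(telemetry_stream()):
#     agent.investigate(anomaly)
```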

“We often refer to machine data as the heartbeat of the modern enterprise,” says Hathi. “Agent-based AI systems are powered by that vital pulse, and they need to access it in real time. It is critical that these intelligent agents work directly with the raw stream of machine data, and that the AI itself learns from that same stream.”

Few organizations currently achieve the level of machine data integration required to fully deploy agent systems. Not only does this narrow the range of viable use cases for agent-based AI; worse, it can lead to data anomalies and errors in outputs or actions. Natural language processing (NLP) models that predate generative pre-trained transformers (GPTs) suffered from linguistic ambiguity, bias, and inconsistency. Similar misfires can occur with agent-based AI if organizations rush ahead without grounding their models in machine data.

For many companies, keeping up with the breakneck pace of AI development has become a major challenge. “In some ways, the speed of these innovations is starting to hurt us, because it creates risks we are not prepared for,” Hathi says. “The problem is that with the rise of agent-based AI, relying on traditional LLMs trained on human text, audio, video, or printed data does not work when you need your system to be secure, resilient, and always available.”

Designing a Data Fabric for Resilience

To address these shortcomings and improve digital resilience, technology leaders should turn to what Hathi calls a data fabric, an approach better suited to the requirements of agent-based AI. This means bringing together fragmented security, IT, business operations, and network resources to create an integrated architecture that connects disparate data sources, breaks down silos, and enables real-time analytics and risk management.
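At the code level, "connecting disparate data sources" might look something like the following minimal sketch: a data-fabric-style access layer that registers per-domain sources (security, IT operations, network) behind one query interface, so an agent or analyst never needs to know where each record lives. The interface, adapter names, and wiring are illustrative assumptions, not any particular vendor's product.

```python
from typing import Iterable, Optional, Protocol

Record = dict[str, object]


class DataSource(Protocol):
    """Anything that can be queried for records in a time range."""
    def query(self, start: float, end: float) -> Iterable[Record]: ...


class DataFabric:
    """Unified access layer over fragmented domain sources."""

    def __init__(self) -> None:
        self._sources: dict[str, DataSource] = {}

    def register(self, domain: str, source: DataSource) -> None:
        self._sources[domain] = source

    def query(self, start: float, end: float,
              domains: Optional[list[str]] = None) -> Iterable[Record]:
        # Fan out across registered domains, tagging each record
        # with its provenance so downstream consumers keep context.
        for domain, source in self._sources.items():
            if domains is None or domain in domains:
                for record in source.query(start, end):
                    yield {**record, "domain": domain}


# Hypothetical wiring: each adapter would wrap an existing system
# of record (SIEM, metrics store, flow collector, and so on).
# fabric = DataFabric()
# fabric.register("security", SiemAdapter())
# fabric.register("it_ops", MetricsAdapter())
# fabric.register("network", NetFlowAdapter())
# risks = fabric.query(start, end, domains=["security", "network"])
```

The design choice worth noting is that the fabric does not copy or centralize the data; it routes queries to where the data already lives, which is what lets it break down silos without yet another duplicated store.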
