If 2024 was the year AI crashed into cybersecurity, 2025 was the year interdependence became impossible to ignore.
Looking back over the past 12 months, the most important lesson I've learned is an inconvenient one for security professionals: you don't actually “control” your risk; you share it. You share it with suppliers, operators, cloud and AI platforms, and with people on your own teams whose resilience is wearing thin.
In our research at Forescout we've seen the number of attacks continue to skyrocket. Across numerous reports, the total volume of attacks has more than doubled compared with last year, and incidents affecting critical infrastructure have increased several times over. In the first half of 2025 alone, we tracked thousands of ransomware events around the world, with services, manufacturing, technology, retail and healthcare consistently among the most targeted sectors. This is no longer an IT hygiene issue; it has become a continuity problem for the real economy.
Operational technology has moved from footnote to main story. Our threat intelligence work on critical infrastructure and state-sponsored hacktivism has documented repeated attempts to disrupt water utilities, healthcare providers, energy companies and manufacturers by targeting the industrial systems that run them. In parallel, our The Riskiest Connected Devices research shows that routers and other network equipment are overtaking traditional endpoints as the highest-risk assets in many environments, with risk concentrated in sectors that combine IT, operational technology (OT), internet of things (IoT) and, in some cases, medical devices. The systems that keep things moving, and the devices that seamlessly connect them, are now prime targets.
The same interdependence is evident when you look at the devices and components on which everyone depends. In the same Riskiest Connected Devices report, we saw average device risk rise 15% year-over-year, with routers alone accounting for more than half of the riskiest devices, and risk concentrated in retail, financial services, government, healthcare and manufacturing. At the same time, our research into router and OT/IoT vulnerabilities has shown how a single family of widely deployed network or industrial devices with remotely exploitable flaws can simultaneously compromise hospitals, factories, power generators and government agencies. This is not a theoretical ecosystem risk; it's a design feature of how we now build technology and deliver services. When one link is weak, the consequences spread.
Working with organizations through real-life incidents this year, one pattern keeps emerging: resilience has become an ecosystem property. You can have well-managed endpoints, a competent SOC and a decent incident response framework, and still be taken offline because a third-party vendor was compromised, a “non-critical” OT asset became a bridge into IT (or vice versa), or the people running your program are simply exhausted. Burnout is increasingly recognized as a security risk, not just an HR issue.
So what does this mean for 2026?
One trend I expect to crystallize is extortion moving up the supply chain. Traditionally, extortion targets the organization that has been breached. We believe attackers will increasingly reverse this logic: compromise a smaller manufacturer, logistics firm or service provider whose defenses are weaker, then put pressure on the larger brands and operators that depend on them to pay on behalf of the entire chain. The party that pays will no longer always be the party that was breached. For defenders, this means supplier visibility, joint detection and collaborative response should be treated as core competencies rather than procurement paperwork.
The second shift concerns artificial intelligence and social engineering. AI-written phishing and voice cloning will stop being news; they will simply be how social engineering is done. In our 2026 predictions, we talk about “social engineering as a service”: ready-made infrastructure, scripts, cloned voices, convincing pretexts, and even real human operators, available to anyone with a Bitcoin wallet. At the same time, I expect to see more serious, less hyped adoption of AI in defense: correlating weak signals across IT, OT, cloud and identity, continuously mapping and prioritizing assets and vulnerabilities, and reducing the cognitive load on analysts through triage automation. Done correctly, it's not about replacing people; it's about giving them the space to think and to dig into the work that matters.
The third trend is regulatory. Between NIS2 in Europe, evolving resilience requirements in the UK and similar moves in other countries, boards will find that ecosystem security is becoming not only an operational responsibility, but a legal one. Regulators are increasingly interested in how you manage third-party risk, how you protect critical processes, and how you prove that your controls actually work under stress.
If 2025 taught me that total control is largely an illusion, then I hope 2026 is the year we respond with humility and cooperation rather than fear. That means investing in continuous visibility across IT, OT, IoT and cloud, building true partnerships with suppliers and peers rather than throwing questionnaires over the fence, and taking far better care of the well-being of the people we rely on to make good decisions under pressure.
We will never return to a simpler threat picture. But we can build a fairer system that recognizes interdependence, accounts for it, and distributes the load more intelligently.
Rik Ferguson is Vice President of Security Intelligence at Forescout, as well as a Special Advisor to Europol and co-founder of the Respect in Security initiative. A seasoned cyber security professional and renowned industry commentator, he is making his first contribution to the CW Security Think Tank.