At the Black Hat Europe conference in December, I met with one of our senior security analysts, Paul Stringfellow. In this first part of our conversation, we discuss the complexity of cybersecurity tooling and the challenge of identifying appropriate metrics to measure ROI and risk.
John: Paul, how does an end-user organization make sense of all this? We're here at Black Hat, and there are many different technologies, options, themes, and categories. Our research covers 30-50 different security topics: state management, service management, asset management, SIEM, SOAR, EDR, XDR, and so on. From an end-user organization's perspective, though, they don't want to think about 40-50 different things. They want to think about 10, 5, or maybe even 3. Your job is implementing these technologies. How do they want to think about it, and how do you help them turn the complexity we see here into the simplicity they're looking for?
Paul: I attend events like this because the problem is very complex and rapidly evolving. I don't think you can be a modern CIO or security leader without spending time with your suppliers and the industry at large. Not necessarily at Black Hat Europe, but you need to interact with your suppliers to get your job done.
Going back to your point about 40 or 50 suppliers, you're right. The average number of cybersecurity tools in an organization is between 40 and 60, depending on which study you read. So how do you deal with that? There are two things I like to do when I go to events like this, and since I started working with GigaOm I've added a third. The first is to meet with vendors who have asked to see me. The second is to sit in on a few presentations. The third is to walk the expo floor, talk to vendors, especially ones I've never met, and see what they do.
Yesterday I attended a session whose title caught my eye: “How to Determine the Cybersecurity Metrics That Will Benefit You.” It interested me from an analyst's perspective, because part of what we do at GigaOm is create metrics to measure how well a solution performs in a specific area. But if you're implementing technology as part of a security function or IT operations, you're also collecting a lot of metrics to try to make decisions. One of the things discussed in the session was the challenge this creates: we have so many tools generating so many metrics that there is enormous noise. How do you start to recognize value?
The long answer to your question is that they suggested a very sensible approach: take a step back and think, as an organization, about which metrics matter. What does the business need to know? This lets you reduce the noise and potentially reduce the number of tools you use to produce those metrics. If you decide a certain metric no longer matters, why keep the tool that provides it? If it does nothing other than provide that metric, remove it. I thought this was a really interesting approach. It's almost like, “We've done it all. Now let's think about what actually matters.”
This is an evolving space, and how we deal with it must evolve too. You can't assume that something you bought five years ago still has value; by now you probably have three other tools that do the same thing. Our approach to threats has changed, and so has our approach to security. We need to go back to some of these tools and ask, “Do we really need this anymore?”
John: So it's about measuring your success by that and, in turn, deciding what to change.
Paul: Yes, and I think that's extremely important. I was recently talking to someone about the importance of automation. If we're going to invest in automation, are we better off now than we were 12 months ago, after implementing it? We've spent money on automation tools, and none of them come for free. We were convinced these tools would solve our problems. One thing I do as a CTO, apart from my work with GigaOm, is turn vendors' dreams and visions into reality for customers.
Vendors hope their products will change the world for you, but the reality is that it's the customer on the other end who has to live with them. Part of that is consolidation and understanding – the ability to measure what happened before we implemented something and what happened after. Can we demonstrate improvement, and are these investments delivering real value?
John: Ultimately, here's my hypothesis: risk is the only measure that matters. You can break it down into reputational risk, business risk, or technical risk. For example, are you going to lose data? Are you going to compromise your data and thereby damage your business? Or will you expose data and upset your customers, which could hit you like a ton of bricks? But there's a flip side: are you spending more money than necessary to reduce those risks?
So you're concerned with cost, efficiency, and so on, but is that how organizations think about it? Because that's my old-school take on it. Perhaps things have moved on.
Paul: I think you're on the right track. As an industry, we live in a small echo chamber – when I say “industry,” I mean the part of it that I actually see. But within that slice, I think we're seeing a shift. There is much more talk about risk in conversations with clients. They're beginning to understand the balance between cost and risk, and trying to figure out what level of risk they're comfortable with. You can never eliminate all risk. No matter how many security tools you deploy, there is always a risk that someone will do something stupid that exposes the business to vulnerabilities. And that's before we even get to AI agents trying to befriend other AI agents to perform malicious actions – that's a whole other conversation.
John: Like social engineering?
Paul: Yes, very much so. That's a whole different story. However, awareness of risk is becoming more widespread. The people I talk to are starting to understand that this is about risk management. You can't eliminate every security threat, and you can't handle every incident. You need to focus on identifying the real risks to your business. For example, one criticism of CVE scores is that people see a CVE rated 9.8 and assume it's a huge risk, but there's no context around it. They don't consider whether the CVE has been seen in the wild. If it hasn't, what is the chance you'll be the first to encounter it? And if the exploit is so complex that no one has used it, how likely is it that someone will?
An exploit may be so difficult to operate that no one will ever use it, yet it scores a 9.8 and your vulnerability scanner says, “You really need to look at this.” The reality is that we're now seeing a shift toward applying context – whether it has actually been seen in the wild.
John: Risk equals probability times exposure. So you talk about probability, and then: will it impact your business? Does it affect a system used for maintenance every six months, or does it affect your customer-facing website? But I'm curious, because back in the '90s, when we were doing this in practice, we went through a wave of risk avoidance – “we have to stop everything” – and then moved on to risk mitigation, risk prioritization, and so on.
But with the rise of cloud technology and new cultures like agile emerging in the digital world, it feels like we're back to, “Well, you need to prevent it, lock all the doors, and implement zero trust.” And now we're seeing a wave of, “Maybe we need to think about this a little smarter.”
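The risk calculus John describes (risk as probability times exposure) and Paul's point about CVE context can be sketched as a simple prioritization function. This is a minimal illustration only, assuming made-up weights and field names; it is not any real scanner's or vendor's scoring model.

```python
# Illustrative sketch: context-aware vulnerability prioritization.
# risk = likelihood * exposure, following the formula discussed above.
# All weights and field names here are hypothetical, for illustration only.

def contextual_risk(cvss_base: float,
                    seen_in_wild: bool,
                    exploit_complexity: str,   # "low" or "high"
                    asset_exposure: float) -> float:
    """Scale a raw CVSS base score by real-world likelihood and business exposure."""
    likelihood = cvss_base / 10.0
    if not seen_in_wild:
        likelihood *= 0.3   # no exploitation observed in the wild
    if exploit_complexity == "high":
        likelihood *= 0.5   # hard to operate, so less likely to be used
    return likelihood * asset_exposure

# A 9.8 CVE on an internal maintenance box, never seen in the wild:
internal = contextual_risk(9.8, seen_in_wild=False,
                           exploit_complexity="high", asset_exposure=0.2)

# A 7.5 CVE actively exploited against a customer-facing website:
customer = contextual_risk(7.5, seen_in_wild=True,
                           exploit_complexity="low", asset_exposure=1.0)

# Once context is applied, the nominally "lower" CVE ranks higher.
print(internal < customer)  # True
```

The point of the sketch is the one Paul makes: a raw 9.8 is not automatically the top priority once likelihood and business exposure are taken into account.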
Paul: That's a really good point, and you've drawn an interesting parallel. Let's debate this a little while we're on record. Do you mind if I push back? I'll question your framing of zero trust for a moment. Zero trust is often seen as an attempt to stop everything, but that's not really what it is. Zero trust is an approach, and technology can help enforce that approach. Anyway, that's a pet peeve of mine. But zero trust…
Now I'm just going to sit here and argue with myself. So, zero trust… Take that example. Previously, we relied on implicit trust: you logged in, I accepted your username and password, and everything you did afterward inside the secure bubble was considered valid and free of malicious activity. The problem is that when your account is compromised, logging in may be the only non-malicious thing it does. Once logged in, everything your compromised account attempts is malicious. If we rely on implicit trust, we're not being very smart.
John: So the opposite of this would be to completely block access?
Paul: That's not realistic. We can't just stop people from logging in. Zero trust lets you log in, but it doesn't blindly trust everything that follows. We trust you for now and continually evaluate your actions. If you do something that makes us no longer trust you, we act on it. It's about constantly assessing whether your actions are appropriate or potentially harmful, and then responding accordingly.
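The continuous-evaluation idea Paul describes can be sketched as a toy session model. This is an illustrative assumption, not a real product API: the action names, the trust score, and the decay values are all invented for the example, and real zero-trust systems evaluate far richer signals.

```python
# Illustrative sketch of continuous trust evaluation (not a real product API).
# Implicit trust: authenticate once, then allow everything.
# Zero trust: re-evaluate every action against current risk signals.
# Action names and trust values below are hypothetical.

RISKY_ACTIONS = {"mass_download", "disable_logging", "export_credentials"}

class Session:
    def __init__(self, user: str):
        self.user = user
        self.trust = 1.0               # granted at login, not permanent

    def request(self, action: str) -> bool:
        # Every request is evaluated; trust decays on suspicious behavior.
        if action in RISKY_ACTIONS:
            self.trust -= 0.5
        if self.trust <= 0.0:
            return False               # trust exhausted: block and re-verify
        return True

s = Session("alice")
print(s.request("read_report"))       # True:  normal activity
print(s.request("mass_download"))     # True:  suspicious, trust reduced
print(s.request("disable_logging"))   # False: trust exhausted, blocked
```

The contrast with implicit trust is that the login succeeds in both models; the difference is that here the third request is blocked even though the credentials were valid.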
John: This is going to be a very disappointing argument, because I agree with everything you say. You've argued with yourself better than I could. But I think, as you said, the castle-and-moat defense model means once you're in, you're in.
I'm mixing two things up, but the idea is that once you're inside the castle, you can do whatever you want. That has changed.
So what can be done about it? Read Part 2 on how to deliver a cost-effective response.
The post Understanding Cybersecurity, Part 1: Looking Through Complexity first appeared on GigaOm.