What’s Your Risk Appetite? A Q&A with Palo Alto Networks’ Paul Calatayud
In both his work at Early Adopter Research and his Forbes column, Early Adopter’s Dan Woods has extensively covered the need for enterprises to create cybersecurity portfolios. As with a financial portfolio, a well-managed cybersecurity portfolio diversifies risk across a number of assets that align with an individual’s or company’s goals. But whether in finance or cybersecurity, properly assessing, weighing, and managing risk is easier said than done.
In this Q&A, Woods delves deep into the topic of balancing risk with Palo Alto Networks CISO Paul Calatayud. Calatayud offers plenty of advice for companies in general, and CISOs in particular, on how to determine their ideal risk appetite in the ever-changing world of cybersecurity.
Dan Woods: Paul, I wanted to talk with you about the idea of creating a balanced cybersecurity portfolio. What do you look for when you’re examining how to make a portfolio of products work together?
Paul Calatayud: It starts with business alignment and an understanding of what drives the business. You have to understand the business on its own terms in order to balance security against it. The goal is not to bankrupt, slow, shut down, or inhibit innovation within the organization. And if the CISO approaches security with the mindset that their role is to own risk, to be the one saying yes or no, or to act as a gatekeeper to innovation, it becomes very challenging. I advocate that a CISO’s job is to be a trusted advisor, with the goal of effectively communicating risk to the business so other decision makers can make an informed decision. Once they make an informed decision, the CISO’s job is to execute against that risk tolerance.
To do this, there are two key components. First, companies must define their risk tolerance and what that looks like for the organization. Second, CISOs must take on the operational role of managing decisions around risk in the organization.
This framework solves a lot of the problems by clarifying who owns the risk. When somebody feels that they own the risk, do they want to say no to everything? It’s a double-edged sword because the minute you tell a leader that they own the risk, it’s like telling a child that they’re responsible for their own decisions.
So, the first thing a CISO must do is document decisions that are being made. The CISO has a fundamental role as an objective observer of decisions and enabling the board of directors and executive management to see what’s occurring when it comes to risk decisions. In other words, you can’t be effective simply by saying to a business, you’re in charge of risk and my job is to advise, if you don’t also document decisions and create accountability. It’s like when a patient at an ER refuses treatment or won’t wait any longer and then leaves against medical advice. The CISO says, I want to make sure you know what my advice is, and you can either accept it or modify it but there’s no doubt in anybody’s mind that I’ve explained this clearly.
And once you’ve documented those decisions, accountability is key. CISOs have to be very effective in communicating that to the board. If you just document for the sake of documentation, but there’s no chain of authority assessing the impact of decisions, things break down very quickly. A lot of my success has been in building the relationship with the board of directors because they know there is a transparent objective lens into decisions that are being made and they have an opportunity to change them. The most important thing you can do in a risk-driven organization is to allow people to review decisions and give them the opportunity to disagree and potentially reverse the decision. It becomes an unemotional conversation about decisions that are occurring within the business.
How do you prepare to understand the landscape and review or confirm the selections that have been made in a cybersecurity portfolio?
First you must identify the key decision-makers. Who in the organization can accept risk and what types of risks can they accept? I categorize risk by type, whether it’s legal, compliance, technology, or operations, and that allows me to build a matrix of people who need to make a decision. It’s almost never just one individual who weighs in on risk acceptance.
I then build a risk committee and a charter that states its mission. The risk committee needs to be able to determine outcomes and define scope. Organizations don’t have to take on every type of risk on day one. They can focus on the types of risk that are most impactful to their business. I do a business impact assessment, survey executives, and try to determine their biggest concerns. That allows you to have conversations about risk allocation. You don’t want to misallocate or miscategorize risk that the business doesn’t deem impactful.
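Calatayud’s matrix of risk types and decision-makers can be pictured as a simple lookup structure. This is a minimal sketch under assumptions of my own: the role names and categories here are illustrative, not his actual matrix.

```python
# Hypothetical mapping of risk categories to the roles empowered to accept
# that type of risk. Both the categories and the role names are illustrative.
ACCEPTANCE_MATRIX = {
    "legal":      ["General Counsel", "CISO"],
    "compliance": ["Chief Compliance Officer", "CISO"],
    "technology": ["CIO", "CISO"],
    "operations": ["COO", "CISO"],
}

def approvers_for(risk_type: str) -> list[str]:
    """Return the decision-makers who must weigh in on accepting this risk."""
    try:
        return ACCEPTANCE_MATRIX[risk_type]
    except KeyError:
        raise ValueError(f"Unscoped risk type: {risk_type!r}")

# It's almost never one individual who weighs in on risk acceptance:
print(approvers_for("legal"))
```

Note that every category lists more than one approver, reflecting the point that risk acceptance is almost never a single individual’s call.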
This is how you start to design your portfolio. Now I’m aligning to the business, and I’m going to execute on investments and develop metrics around the health of those investments against each key risk. The goal is to reduce that risk to an acceptable tolerance. My job isn’t to make the risk go away but to reduce it to an acceptable level. That’s where you start to balance the risk appetite of the organization. I say, I can spend $1 million or $50 million to protect this data. What is your tolerance? And then I operationalize the program.
Choosing the right mitigation strategies is also important because it’s a combination of products, people and technology. In some cases it’s all three. If I’m reducing the risk of data loss, the first thing in choosing the right mitigation strategy is to make sure that the business is engaging in the minimal, acceptable use of that data. And then you start to calibrate continuous improvement.
Then I look at service level agreements, which define what it takes to maintain that risk tolerance. If I tell the organization we can only go 50 miles an hour, I’m going to start measuring that. I’ll use metrics like the number of databases that are encrypted or unencrypted, the time it takes to respond to an incident, or the number of times employees have violated policy. All of these performance metrics help us see whether we are successful or unsuccessful with the SLA.
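The SLA check described here amounts to comparing observed metrics against agreed targets. A minimal sketch, with invented target numbers and metric names, might look like this:

```python
# Illustrative SLA targets -- the thresholds are invented, not from the interview.
SLA_TARGETS = {
    "pct_databases_encrypted": 95.0,   # at least this percentage encrypted
    "incident_response_hours": 4.0,    # respond within this many hours
    "policy_violations_month": 10,     # no more than this many violations
}

def sla_report(observed: dict) -> dict:
    """Compare observed metrics to targets; True means the SLA is being met."""
    return {
        "pct_databases_encrypted":
            observed["pct_databases_encrypted"] >= SLA_TARGETS["pct_databases_encrypted"],
        "incident_response_hours":
            observed["incident_response_hours"] <= SLA_TARGETS["incident_response_hours"],
        "policy_violations_month":
            observed["policy_violations_month"] <= SLA_TARGETS["policy_violations_month"],
    }

report = sla_report({"pct_databases_encrypted": 91.0,
                     "incident_response_hours": 3.5,
                     "policy_violations_month": 12})
print(report)
```

A report like this is what turns the budget conversation unemotional: each failing metric points at a specific deficiency rather than a vague sense of risk.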
I love positioning a cybersecurity program in that capacity because it’s entirely unemotional when I need budget. My entire program becomes about measuring whether or not I’m delivering on that commitment. And if I’m not, I should communicate to my executive staff where my program is deficient. Do I have enough people to respond? Do I have technology giving me insight into whether incidents are getting a response?
A lot of times executives think they can’t define success for the million dollars they invested in a cybersecurity program. It shouldn’t be that ambiguous. It should be: these are the 14 KPIs that are going to improve by 10%, 5%, or 30% when you give me a million dollars.
How do you categorize risks?
Once you’ve identified risks, you can classify them as high, medium, or low and decide how to measure them. You create a system that reduces each risk to the desired level as measured by certain KPIs, and you explain how much it’s going to cost and how you’re going to measure it. You can also measure what you’re currently doing.
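One common way to get the high/medium/low classification is a likelihood-times-impact score. The 1–5 scales, thresholds, risk entries, and KPI names below are assumptions of mine for illustration, not the interview’s own scheme:

```python
def risk_level(likelihood: int, impact: int) -> str:
    """Classify a risk as high/medium/low from 1-5 likelihood and impact
    scores. The scales and cutoffs here are illustrative assumptions."""
    score = likelihood * impact
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# Each register entry ties a risk to the KPI that measures it and a
# mitigation cost, so the cost/tolerance conversation stays concrete.
register = [
    {"risk": "unencrypted customer data", "likelihood": 4, "impact": 5,
     "kpi": "pct_databases_encrypted", "mitigation_cost_usd": 250_000},
    {"risk": "stale workstation patches", "likelihood": 3, "impact": 2,
     "kpi": "median_patch_age_days", "mitigation_cost_usd": 40_000},
]
for r in register:
    r["level"] = risk_level(r["likelihood"], r["impact"])
print([(r["risk"], r["level"]) for r in register])
```

Revisiting the register periodically, as the next answer describes, is just re-scoring these entries and comparing the KPIs against what was promised.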
This process must be revisited periodically to assess progress and whether things have changed. You also need a mitigation strategy to see whether you’re delivering the metrics you promised. Once you’ve done that, you can do the rebalancing and optimizing. Again, the addition of the metrics makes it more concrete and dispassionate.
There are many point solutions in cybersecurity, like Splunk and many others, and certain vendors have created integrations. But usually the integrations are low-fidelity. It doesn’t mean they’re bad but more powerful integrations are possible in a portfolio where somebody has control over all capabilities. Do you think we’re ever going to escape this low-fidelity integration and create a better way for cybersecurity capabilities to work together?
I do believe we will get there because we need to. We are at a point in the journey now where we realize that APIs and integration are important. But it’s very hard for the vendor and the customer to create a meaningful integration. We’re at an inflection point where people are saying, I don’t want best of breed, I don’t want complexity, I want platforms that build ecosystems to allow me to take advantage of the integration that you as a vendor or a group of vendors are committing to. There’s a big, aggressive appetite in the customers I talk to. They say, I want three vendors to solve my portfolio. They’re exhausted with integration and it’s not working. They want deep integration. They want platforms.
The way it’s going to be successful is if we break the platforms up so that they’re not closed loops, and we move toward a democratization of data and a democratization of cyberthreat intelligence. These tenets will allow ecosystems to thrive without requiring deep integration. If I can build a centralized data lake where my technology generates the right level of fidelity without knowing how the data will be consumed, that is innovation that lets me build a sustainable strategy, because what I’m stating is that I have data that needs to be enriched. And I’m going to build a marketplace and invite vendors to participate in that consumption. That can be done if it’s built right, it’s cloud-designed, and it’s an open architecture.
Let’s take a real-world example: the Apple iPhone is a platform. It’s an integration of phone, web, compute, media, and music; it’s multi-tenant and multi-purpose. But if I want to consume a new app, I’m not downloading an API, building an MSI file, and trying to figure out how to install the thing. The architecture is such that developers work with the platform: Apple publishes an SDK, the developers publish their app, and consumption is me deciding whether I want a product from Apple or from a third party. Once you’ve made that determination, you can quickly expose your internal data, your apps, your phone, everything you’ve collected over time on that platform, to a new service and see if it enriches and adds value.
I hear you saying that cybersecurity has been under-productized. Do you think that cybersecurity will achieve these levels of integration when it reaches this higher level of productization because customers are going to flock to it and are demanding it?
Correct. They’re demanding it right now. They don’t know what it looks like per se, but they ask for it in different terms. They’re getting frustrated with integration and the lack of value returned on it. They get to a point of complacency or frustration because it took so long to determine that they actually didn’t want it in the first place. To some degree you can’t go back to the business and say that six months of work needs to be thrown away; let’s try again. So the tendency has been to say, let’s make it work to the best of our ability.
If I can decouple infrastructure and make platform decisions where data is being generated and accessed independent of application, I can build an architecture that can be aggressive in transition. That is the ecosystem and vision of how security will look in the future.
Let’s move on to the cyber data warehouse. I’ve talked to a lot of people about whether we can take all cybersecurity data and collect it in a data warehouse. Some cybersecurity solutions essentially work that way, like the AI-based solutions and the user behavioral analytics solutions. But then the question is: who’s going to create that repository? Who’s going to manage it? Who’s going to operationalize it? I think it’ll be too hard for companies to do it on their own. What’s your view of the likelihood that we’re going to get to a cybersecurity data warehouse?
I’m going to give you the short view and a quick response and then I’m going to dive into the realization. I think organizations, small or big, need to be thinking about how they build a data lake in the cloud and here’s why. The reason it needs to be in the cloud isn’t because they don’t have the means to do it internally. It’s because if they do it in the cloud and with a vendor or a set of vendors capable of providing that service, you get what I call a community effect.
I don’t think it should be on-prem. If it’s on-prem and I have the capability of doing it, I get the advantages of cloud and all the ability to do the integration, but I’m missing one of the most critical components when it comes to analytics, and that is population size. Even at large organizations it’s going to be unsuccessful, because now it’s on the cybersecurity team to build up that competency. You end up bringing in a large data warehouse and data scientists only to find out that having good confidence in security data doesn’t drive increased top-line revenue. If there’s no compelling reason outside of cyber to drive a bigger business intelligence data warehouse strategy within a large organization, my advice is not to do it.
The reason why my short answer is that it should be in the cloud, regardless of size and capability of business line, is because of the community effect. Even if I work in an insurance company and they say, ‘Paul, you can do as much as you want in cyber analytics,’ my data is still going to be very weak because I’m only able to capture my network, my data, my sensors and the algorithms are only going to be able to see what I can give it.
Regardless of size, the one thing that I’m looking to achieve with analytics is high fidelity and confidence. And the only way that’s actually achievable is with a community. The data needs to be your data, other data, and data from as many communities as possible so that when you’re integrating or turning on the algorithm it’s looking at as many organizations as possible.
At Palo Alto Networks, for example, we’re starting to build some of these strategies. The idea is that we have 55,000 customers that we’re going to start to send observables to. So when we see an adversary and we determine it’s bad, that update goes out to 55,000 customers. It’s almost like a neighborhood watch. That is why I’m a strong advocate of cloud because it removes dependencies on business case and you get the community.
How can cybersecurity information analytics help business operations? Why have we made so little use of this great data about business activity to find business signals?
I think it’s not happening because of the cybersecurity economic model we talked about initially: without the right representation of the business in setting risk appetite and objectives, there’s a lack of communication and awareness. If I told my CIO, who I may or may not report to, that I want to do analytics, I’m going to build a business case that addresses one risk, and that is cybersecurity risk. Now, if I have a risk committee and I’m communicating very openly about some of the initiatives going on within the cybersecurity team, I might have the opportunity to hear from the marketing department, who will say, “We want Google Analytics because we want to understand click analytics and the use of features and functionality on top of your product, and, Paul, I need you to do a security review.” I’m going to say, “Do you realize that I need the same data stream as you? Let’s partner and build a bigger, stronger business case, because I need click analytics as well.” Those conversations need to be happening.
The second part is that too often, cybersecurity organizations build business cases in a silo. Their goal is to address cyber risk because that’s their mission. I think their mission should be more about how to accelerate the business.
What do you think are the most harmful myths related to cybersecurity?
The most harmful myth is that you can have a single goal: to prevent a breach. That’s a scary myth because it sets companies up for failure when a breach occurs. People need a mindset that focuses on mitigating breaches and creating programs that can absorb a breach as quickly as possible. That’s what makes or breaks organizations. Otherwise, it’s like saying I’m never going to have a fire in my home, and when I do, I panic and the house burns down quickly because I never prepared for it.
What questions should CEOs be asking their security teams?
What are the incentive structures for your CISO, and how do you measure success? How do you enable the organization to have effective leadership and motivation? Looking at incentives also helps settle the debate about where a CISO should report. If the CIO’s incentives are uptime, availability, agility, and speed, and the CISO’s objective is to manage risk, those incentives are misaligned. You might have to move the CISO to a different organization so that incentives align.
How can you tell when an AI claim is a good fit for cybersecurity?
The challenge is that a lot of times with AI and analytics, you have to be very patient. It’s like wine; it ages over time. That’s an unfortunate element of AI because you can’t measure success day one.
However, there’s one thing that I advocate to CISOs when they think about AI. They should ask the vendor the population size that the algorithm is currently operating on.
If I’m an antivirus or endpoint company and I claim that my endpoint product does AI, the one question you need to ask upfront is how many endpoint agents are deployed, because that AI is only going to be as effective as the size of the population. It’s like self-driving cars. If someone claims that AI is going to fly an airplane or drive a vehicle, we should be determining efficacy, fidelity, and whether or not it’s accurate. The way you answer those questions is with what’s observable to date, such as how many miles the car has driven; that tells you how effective the algorithm is. We need to ask these questions about AI because it’s very hard to measure claims without understanding the population size of the product.
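Why population size matters can be made concrete with basic statistics: the uncertainty around an observed detection rate shrinks as the number of deployed agents grows. This sketch uses the normal approximation to a proportion’s confidence interval; the numbers are illustrative, not vendor benchmarks.

```python
import math

def detection_rate_margin(successes: int, trials: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error on an observed detection rate
    (normal approximation to the binomial; illustrative only)."""
    p = successes / trials
    return z * math.sqrt(p * (1 - p) / trials)

# The same 90% observed detection rate carries very different confidence
# depending on how many agents produced it:
small = detection_rate_margin(90, 100)              # 100 deployed agents
large = detection_rate_margin(900_000, 1_000_000)   # a million agents
print(f"n=100: +/-{small:.3f}   n=1,000,000: +/-{large:.5f}")
```

With 100 agents the margin is around six percentage points; with a million it is a few hundredths of a point, which is the statistical version of asking the vendor about deployed population before trusting an efficacy claim.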
I’ve talked to several CISOs about deception technology and some are eager for it because when a deception trap is tripped, it’s high-signal information, generally not a false positive. I’ve also talked to CISOs who said, I get the point of deception but I am so disciplined in my operational and configuration practices and I carefully segment my whole environment. I don’t need deception. What do you think about deception?
Most people think that the goal of deception is to gather intelligence on the adversary. That is an element of it but it’s also about validating the controls and investments you’ve made. When you add complexity to your infrastructure like segmentation, you need validation and you need it continuously.
What I like about deception is that you know where to target, so you can beat up on those security controls. If I add deception in my network and put a firewall in front of it, I’m not very likely to go beat up my production network because I might bring it down. But if I know there’s a sensor out there that I should not be able to get to, and it’s listening on SQL, and I can get to SQL on that deception sensor, chances are my segmentation controls failed. It’s a solid way of validating that your technology and your security controls are working, and you have a soft target.
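The validation check Calatayud describes boils down to: from a segment that should be walled off, try to reach the decoy’s SQL port; success means the segmentation control failed. A minimal sketch, with a hypothetical decoy address and the standard SQL Server port assumed for illustration:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection to a decoy service. Reaching it from a
    segment that should be walled off means segmentation failed."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical decoy: a deception sensor listening on the SQL Server
# port (1433) in a segment this host should NOT be able to reach.
DECOY = ("10.0.99.5", 1433)  # illustrative address, not a real sensor
if can_reach(*DECOY):
    print("ALERT: segmentation failed -- decoy SQL port is reachable")
else:
    print("Segmentation held: decoy unreachable from this segment")
```

Because the decoy is a soft target with no production traffic, this probe can run continuously without the risk of bringing anything down.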