What Cybersecurity Belongs in the Cloud: A Q&A with Signal Sciences’ Andrew Peterson

As part of our continuing series on podcast conversations that Early Adopter Research's Dan Woods conducted while at RSA 2019, Woods spoke with Andrew Peterson, the CEO of Signal Sciences. Woods asked him his three key questions for cybersecurity in 2019 and also explored Signal Sciences' technology. This is an edited version of their conversation, which can be listened to in full on the podcast.

Woods: Can you tell us about Signal Sciences?

Peterson: At Signal Sciences, we have a next-generation web application firewall product and a RASP product. And what that actually means in non-analyst speak is, if you have a website, a mobile application, or API systems that you're building in your company, and you want to protect the data that's behind that, we have software that sits inline to identify where attackers are actually attacking you in the first place. We block those attacks and then we give that data back to the technology and security teams to help them better protect the application in the future.
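Peterson doesn't describe the product's internals here, so as a purely illustrative sketch (hypothetical names and a deliberately crude detection rule, not Signal Sciences code), the general idea of software that "sits inline," blocks an attack, and reports the signal back to the team could look something like this Python WSGI middleware:

```python
import re
from urllib.parse import parse_qs

# A crude pattern standing in for real detection logic.
SUSPICIOUS = re.compile(r"(<script|union\s+select|\.\./)", re.IGNORECASE)

class InlineInspector:
    """Hypothetical inline layer: inspect, block, and report each request."""

    def __init__(self, app, report):
        self.app = app        # the wrapped WSGI application
        self.report = report  # callback that ships the signal to the security team

    def __call__(self, environ, start_response):
        params = parse_qs(environ.get("QUERY_STRING", ""))
        for field, values in params.items():
            if any(SUSPICIOUS.search(v) for v in values):
                # Record which path and field were attacked, then block the request.
                self.report({"path": environ.get("PATH_INFO", "/"), "field": field})
                start_response("406 Not Acceptable", [("Content-Type", "text/plain")])
                return [b"Request blocked"]
        return self.app(environ, start_response)
```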

Can you explain what you mean by a RASP product?

Gartner came up with this acronym. It doesn't roll off the tongue well, but it stands for runtime application self-protection. It's basically an evolution of where the WAF, the web application firewall, has gone in the past. It's a new way to get a little bit closer to the actual code execution on the application side, to understand some context around what the application is doing and better protect it.
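How that plays out in code varies by product; as a rough, hypothetical Python sketch of what "getting closer to the actual code execution" can mean, a RASP-style layer might wrap a database call at runtime rather than sitting in front of the network:

```python
# Hypothetical illustration of the RASP idea, not how any particular product works:
# wrap the application's own database call at runtime so the protection layer sees
# the exact query the code is about to execute, not just the incoming HTTP request.
import functools

def protect_execute(execute):
    @functools.wraps(execute)
    def wrapper(sql, params=None):
        # With application context, detection can be more precise: a query that
        # embeds quotes or comment markers without bound parameters is suspicious.
        if params is None and ("'" in sql or "--" in sql):
            raise PermissionError(f"Blocked suspicious query: {sql!r}")
        return execute(sql, params) if params is not None else execute(sql)
    return wrapper

# Usage sketch: patch the query helper once at startup, no per-route changes needed.
# db_connection.execute = protect_execute(db_connection.execute)
```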

The way I understand it is that it doesn't have to be programmed in as much as other approaches do, so you don't have to rely on developers to integrate it. You can actually lay a fabric over the surface of the application, the way people do A/B testing, and you can detect a lot of the attacks that way.

That’s correct. One of the big challenges in our space specifically is not just what security value you’re giving or what type of visibility you’re able to give in terms of types of attacks that you’re covering, but it’s how do you even get installed in the first place. And a lot of the techniques in the other tools that are in the space have been so heavyweight that it creates so much pain in the process of trying to get something installed that it just ends up being non-scalable. Our experience before starting the company was being in house and building security technology at a company called Etsy in the past, and so we got a hands-on understanding of the tradeoffs you need to make between the practical realities of getting something installed versus the pie in the sky ideal of what you would want in a perfect world.

Now let’s move to the three questions I’m asking everyone I speak to about RSA. The first is about zero trust. There are so many people talking about zero trust now. We can I think trace the idea back to Google’s new philosophy of BeyondCore and the way that they’ve built that out inside of Google using all the proprietary stack that they’ve created. And the bottom line is that it zero trust, in that vision, envisions a perimeter-less world, or at least a world where the perimeter is not that important. And every asset, every person has to establish who they are, and then when they ask for ability to access something, they are granted or denied based on what is known about them. It’s a clean and nice way of thinking of security because if the person is inside the company, they are protected the same way as when they’re in a Starbucks. The problem is, though, that we’re living in this world in which the perimeters still exist and there’s no one product that delivers what Google is delivering in its massively custom designed stack. What do you think zero trust means in practice, and is it just becoming another additive responsibility for better authentication, or for better authentication aligned with SD-WAN routing?

It’s funny because Google is the poster child for this stuff. But I’d imagine even a company like Google is starting to feel what it’s like, as they start doing acquisitions, as they acquire a company that’s of any size, they’re bringing in new tech stacks, they’re starting to realize whatever they built that was homegrown doesn’t necessarily apply immediately to something that’s new that you’re bringing in. So even the poster child is going to be struggling with the complexities that exist in reality when you go out and try to install philosophical ideas into the real world.

When we talk to companies and customers about what zero trust means to them and this model around it, it comes down to three different areas: there are users, there are devices, and then there are applications. Our focus has always been on the application piece, so that's really where we've interacted with companies. But it's really flipping the model on its head for people. It's people within a company who might be using their personal phones to access company information. In the past, that never happened, so now we need to figure out, if you are beyond this perimeter, how you are actually solving for the security of that device and of that user.

The application piece means companies are developing internal applications that may hold all of their customer information, which the people inside these companies can access to service their customers and help solve customer service problems. Historically, because those applications have always been inside the perimeter, they've never thought to secure them. Now, people are starting to think about it in a zero trust way, which I actually think is smart, because when it comes down to it, are you more worried about your consumer-facing application getting compromised or your internal one, where you have all these controls for actually changing different user settings for those users? In some ways, the internal applications are the area where they've invested the least from a security perspective and are actually the most valuable from a hacker's perspective. Regardless of whether they're wholesale adopting the philosophy around zero trust or not, I think it really is smart to raise awareness of the fact that you can't just assume that your perimeter is completely unbreakable, and you should be starting to understand what people are doing on both internal applications and external applications.

From your perspective, a zero trust-style use of your product would be to put a web application firewall and your RASP technology on an internal application?

Absolutely. We have customers that are using it equally on external applications as they are on internal ones. And again, the internal ones are new to them. They’ve never bought a WAF or some type of protective technology for an internal application because they just assumed it was protected, but now that the world is changing around this, I think they’re really starting to see this is actually a more vulnerable and more important application to provide protection for than any of the rest of the apps that we’ve historically protected.

Have you had anybody get some serious surprises once they put your technology up?

Absolutely. The difference in this would be the use cases. If you have an app that’s sitting on the Internet that’s a consumer-facing application, you’re kind of assuming that you’re being attacked all the time, so you’re looking at that data very differently. If you have an internal app, you’re not assuming it’s being attacked on a regular basis, so if you see any type of attack signal, that immediately triggers some type of alert that somebody should look at, right?
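To make that distinction concrete, here is a small, hypothetical sketch (the names and thresholds are invented for illustration, not Signal Sciences configuration) of how alerting policy might differ between internet-facing and internal applications:

```python
# Hypothetical alerting policy illustrating the internal/external distinction:
# an internet-facing app alerts only when attack volume spikes above its noisy
# baseline, while a single attack signal on an internal app pages someone.
from dataclasses import dataclass

@dataclass
class AlertPolicy:
    description: str
    threshold: int       # attack signals per window before alerting
    window_minutes: int

POLICIES = {
    "external": AlertPolicy("consumer-facing site", threshold=500, window_minutes=5),
    "internal": AlertPolicy("internal admin tool", threshold=1, window_minutes=5),
}

def should_alert(app_kind: str, signals_in_window: int) -> bool:
    return signals_in_window >= POLICIES[app_kind].threshold

print(should_alert("external", 12))  # False: background noise on a public app
print(should_alert("internal", 1))   # True: any signal on an internal app is unusual
```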

What could zero trust mean short of the full implementation of a BeyondCorp-type Google stack?

I think that security teams inside companies need to be thinking about this, but vendors need to be thinking about it too, about how they can apply their tooling not only to the external-facing assets that somebody might have but also to internal ones.

I imagine there’s not a lot of difference in the way your product works when you’re applying it internally or externally.

There is not; it’s more about how you can configure the alerting on it or how you actually set it up in the first place.

So the way you use the product is a little different.

Yeah, but a lot of security folks can get caught up in what we call security nihilism, where if you don't have the full gamut of the solution to this stuff, then you might as well not do it at all. Well, actually, making some incremental improvements in your security posture, even just getting some visibility into whether attacks are happening on some of the internal applications that you're running, is a step forward. It's important that people don't get bogged down by perfection, and that they make progress.

It seems like we’re in a situation in which cybersecurity, for most of the history of the industry, has always been additive. Every generation has new componentry for new threats, new problems. There is no new component that then takes away the need for previous capabilities. It’s very difficult for people to imagine taking away some of their existing capabilities because then they’re worried about that vulnerability still existing, or they’re worried about losing the visibility that they get from that component. But on the other hand, there’s no other realm of computing or technology where you don’t have pruning. When are we going to start pruning the cybersecurity portfolio by having new solutions replace or make obsolete old solutions, or old capabilities?

I agree that it's a hard problem. But at least in our specific space, let's use endpoint and AV as an example…

That’s a great example because I think every single RSA for the last ten years, somebody’s going around saying AV is dead. But then, the MacAfee and Symantec booths don’t seem to get any smaller.

There certainly are customers who will do full rip-and-replaces. But for us, there are a lot of people who have bought legacy web application firewall technology in the past, and they're buying our product, they're seeing the differences, and then over time they're actually able to completely replace their existing solutions. And by the way, they might have had five or six different point solutions for all the different types of technology platforms that they're trying to protect via their WAFs. They might literally be managing four different WAFs at the same time, and the premise of our technology that's really helpful for them is that they actually get to replace all four of those with one tool. I'm not saying that that's super common in the industry, but I think there actually is the premise that if you have a true next-generation technology that's trying to solve an existing problem that's been in this space for a long time, you can have some consolidation around vendors.

As that relates to our specific space, examples would be container security and serverless security, and then service mesh is the new area in security that's popping up around applications. The way we've approached that is that there are certainly aspects of those things that you need to think about securing and providing access to and visibility into, but a lot of that comes down to the fact that those are just technologies to run your application, and really what you're worried about is the application behind them. So as long as we can provide the means to get installed and protect the applications that are running in those different architectures, the architecture itself isn't inherently something brand new to attack.

One of the most powerful ideas that's come out of the discussions I've had at RSA is the idea of refactoring your environment, your surface area, so that you can either use simpler cybersecurity solutions or avoid needing them at all. And so the pruning doesn't come through one technology making something obsolete. It comes through rethinking the way you run your business and the way you expose assets.

I’d agree with that.

But to your point, that isn't an example of pruning. When you take a web application firewall and replace four other web application firewalls, you did simplify your environment, but you didn't make that whole capability unnecessary. At some point it seems like there should be a way to make certain things unnecessary.

I would caution you against thinking that that is a common thing that would happen. When we look back and think about scientific and technology innovations, we like to tell the story as though one day it was a steam engine car and the next day it was a gas engine car, but there were a bunch of small things along the way that led to that. So in our world, when we're looking at technology innovations, we've seen so many happen so quickly that I think we're really looking for stuff to go faster and faster. But I just view the reality as being that it's always going to be a step-function approach, and those steps may be faster now than they ever were in the past.

Is there a future where we don't need to have a WAF at all? I think that's really going to come down to what other technology innovations are happening around applications. One example that's actually really common right now is that when people move to the cloud, they're no longer managing hardware infrastructure. So you no longer have a hardware email server that you need to protect. That does eliminate some pieces of what you need to be responsible for from a security perspective, and there are a lot of things at the network layer that the cloud providers are actually taking responsibility for as well. It doesn't mean that the technology goes away completely just because the cloud provider is taking the responsibility, but from the company's perspective and from the security perspective, it kind of is taken away because they don't have to be responsible for it anymore.

What sort of cybersecurity belongs in the cloud? How long is it going to take us to have fewer on-premises assets, and how is this migration going to take place?

Simultaneously, you hear that people are moving really fast to the cloud, faster than people expect, and then you also hear people say they're moving slower than we expect. So the question is, which one is it? And I think it's both. The reason people have moved to the cloud so fast is new projects: the moment people saw how quickly they could spin up a new project and create a new application or new types of functionality and software in the cloud, every project after that went to the cloud immediately, because it was so much faster.

An example of that that I always love to share is from some friends of mine who work at a big bank. They were talking about how, when they wanted to build a new software application internally, they had to get a physical box provisioned for them in their data center. You put a ticket in to get that box provisioned, and it literally took nine months. When they moved to the cloud, they could get something up and running in minutes. That's why the adoption has been so fast on new projects.

The backend of our product runs in the cloud. There's an aspect of the product that gets installed in our customers' environments and then talks to our cloud backend asynchronously. And what's interesting is, we kind of assumed that the majority of the installs were going into cloud infrastructure for our customers. Actually, it's about 50/50. And we work with forward-thinking companies, a lot of times startups and early-stage companies, and still a lot of them have hardware that they install us on. So the key for us has always been flexibility. As a security vendor in this space, you can't just think about how to move people off the on-prem stuff and into the cloud faster. You've got to understand how to actually support both of those things indefinitely. Because if you don't, you're just going to become another point solution for a particular architecture, and it might be ten years before that stuff actually moves into the cloud.
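The interview doesn't spell out the mechanics, but the architecture Peterson describes, a component installed in the customer's environment talking to a cloud backend asynchronously, generally looks something like this hypothetical Python sketch (the batching scheme and endpoint are assumptions, not Signal Sciences' actual protocol):

```python
# Hypothetical sketch of a local agent reporting to a cloud backend asynchronously.
# Detection decisions stay local; signals are queued and shipped in batches so the
# protected application never blocks on a network call.
import json, queue, threading, urllib.request

class TelemetryAgent:
    def __init__(self, backend_url, batch_size=50):
        self.backend_url = backend_url   # assumed endpoint, not a real API
        self.batch_size = batch_size
        self.signals = queue.Queue()
        threading.Thread(target=self._ship_forever, daemon=True).start()

    def record(self, signal: dict):
        # Called inline by the protection layer; returns immediately.
        self.signals.put(signal)

    def _ship_forever(self):
        while True:
            batch = [self.signals.get()]  # block until there is at least one signal
            while len(batch) < self.batch_size and not self.signals.empty():
                batch.append(self.signals.get())
            request = urllib.request.Request(
                self.backend_url,
                data=json.dumps(batch).encode(),
                headers={"Content-Type": "application/json"},
            )
            try:
                urllib.request.urlopen(request, timeout=5)
            except OSError:
                pass  # on failure, drop or retry later; the app is never affected
```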

I have three bonus questions. One is about ops discipline. I'm asking why this doesn't happen, because everybody agrees that it should happen. Instead of buying the next cybersecurity solution, why not improve your ops discipline, meaning better configuration management, patch management, asset inventory, and automation, as we were just discussing? Why is it not happening?

Developers like building things. The hope that I have for the future is that dev ops is starting to incorporate security more than ever in the past. There's more interest from developers, dev ops groups, and operations people in managing, maintaining, understanding, and actually taking responsibility for the security posture than I've ever seen in the past. The problem for us, and this comes from our experience before we started the company, is that what we really learned in talking to our dev ops counterparts was that if you're a security person, you're like the tinfoil hat guy who's super paranoid, right? And you're always telling them, hey, there are all these vulnerabilities and they could be exploited anytime. And yeah, that's true, but the response back from your engineering groups is by and large going to be: you paid somebody to do a pen test, or you paid a bug bounty person to go find that thing in the code. These are all theoretical risks. They're not actual risks to them.

The thing that security really needs to understand to be able to change this conversation is that you've got to take that theoretical risk and move it into a practical conversation around what's actually happening. And for us, the way we did that was to show real-time visibility into where attackers were attacking our applications, and get that information directly into the hands of the development groups and the operations teams to say: this theoretical area where we found all these bugs is where 80% of all of our attacks are happening right now, and you can see them. These attackers are desperately looking for the problems that we already know are in these applications. And that made it very easy for them to see that this is not a theoretical risk, this is an actual attack that's happening right now. We can see the attacks that are happening in these various form fields on these pages. We need to go fix these things now.
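As a concrete, hypothetical illustration of that argument (the data and function are invented for this article, not Signal Sciences output), a report that joins live attack signals with findings from a pen test or bug bounty makes the priority obvious:

```python
# Hypothetical illustration of turning theoretical risk into a practical one:
# join live attack telemetry with known findings from a pen test or bug bounty,
# so developers see which known weaknesses attackers are actively probing.
from collections import Counter

def prioritize(attack_events, known_findings):
    """attack_events: iterable of (path, field) tuples observed inline;
    known_findings: set of (path, field) locations already reported as vulnerable."""
    hits = Counter(event for event in attack_events if event in known_findings)
    return hits.most_common()  # most-attacked known weaknesses first

events = [("/account", "email"), ("/account", "email"), ("/search", "q")]
findings = {("/account", "email"), ("/export", "format")}
print(prioritize(events, findings))  # [(('/account', 'email'), 2)]
```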

The idea is that if you could somehow show the cost of lack of operational discipline, that would be motivating.

It’s the cost, but then it’s also the real threat. If you’re sitting here saying if we just patched everything and we patched it all the time and we got the basics down, like that needs to happen regardless. There’s so many benefits that we’re going to get out of doing that, and the more we can automate those things, the better. But the question is then going to be how do you change behavior? If we’re all sitting here saying we agree that that’s the right thing, then the real problem is behavior is not changing based on what we agree to be the right thing. And the best thing we saw to be able to change behavior is to show the real threat versus the theoretical one.

How do you get cybersecurity education and training to be made a part of everyday life in a company? You don’t want only the security auditor to be the one complaining about the Post-It note with passwords on the computer. You want everybody to be complaining because that’s a bad thing, it’s going to hurt us all.

You’ve got to make it visible. So we learned this lesson at Etsy because Etsy was going through the lessons of dev ops. They were asking the same question, which was our developers want to move really fast with how they’re shipping code. How do we get them to take some responsibility over their operational components of the code that they’re launching so that we can actually do this stuff together and work hand in hand and have the development groups and the operations groups work well together? The way that they did it were two things. One, provide visibility into what the operational impact was when the developer wanted to launch that code. If they had the visibility, then they also could go and fix those things. But to create that visibility in the first place, you needed the ops teams to make tools to make it super easy for the developers. So our philosophy around this stuff is, one, if you want other people in the organization to actually take responsibility for security, you’ve got to invest a bit in tooling, but you have to make it so easy that they’re almost embarrassed that they don’t use it, right, that they don’t utilize that in the first place. But then you have to make that information that you’re giving them useful so that they can take action on it.

My last question is about cyber insurance. A lot of CIOs, CTOs, and CISOs are being forced to buy cyber insurance for various reasons. A lot of them don't like it. It's often got a lot of escape hatches; there are lots of ways not to pay. And what it insures is not necessarily the loss itself but ancillary costs like forensic costs or legal costs or other costs. It doesn't necessarily insure against the loss from whatever the attack was. On the other hand, few people think you're going to be victorious arguing your way out of this one. So what would you recommend a CISO do to make the best of it?

The thing that is critical for CISOs to be thinking about is that security has always been in the position of being the ones responsible for risk within an organization, and so they want to say no to everything to reduce risk as much as possible. Getting away from the culture of saying no and instead saying, "Hey, we can enable you to do the things that you need to do on the business side, and here are the ways that we can help you reduce risk in the context of that," I think that's an important way to think about it and approach it in the first place. The second thing I'd say is something that I've heard from a couple of different folks, but the former CISO and current head of risk for Goldman Sachs, Phil Venables, says it a lot. He talks about how, if you're bringing security to the business and you're trying to say it's important to add these things, you've got to start thinking about how security can actually help the business do their jobs better and how it adds to the rest of the business's ability to achieve its goals. Security can actually be a driver of value instead of just a risk reducer within the organization.