Defining Your Risk: A Podcast with Tim Mackey of Synopsys

At RSA 2019, Dan Woods of the Early Adopter Research podcast spoke with Tim Mackey, senior technology evangelist in the Software Integrity Group, a division of Synopsys. As he did with the other tech leaders he interviewed at the conference, Woods asked Mackey his three key cybersecurity questions for the year. Their conversation covered:

* 2:30 – What Synopsys does
* 4:30 – The reality of zero trust
* 8:45 – Should companies be focusing on pruning their cybersecurity portfolios?
* 12:25 – How fast will the migration of cybersecurity components to the cloud occur?


Q&A

Woods: Explain Software Integrity’s product and its capabilities in relation to the NIST framework for cybersecurity, the one that includes identify, protect, detect, respond and recover.

Mackey: The Software Integrity Group isn’t a single product; it’s not a single service. It’s something that we created through acquisition five years ago, when Synopsys acquired Coverity, a static analysis tool. Over the intervening years we’ve acquired a number of tools that fill out the portfolio of tooling you would expect to have if you’re on the development side of the equation, and some of those tools have applicability on the operations side as well. In addition to that, we have a series of services that allow organizations to recognize when they are a little behind where they want to be and to move themselves forward.

We’re in a position where we can identify the types of vulnerabilities that might be created within the development stream, and we provide e-learning, training, and tutorials to assist developers in doing a better job of recognizing their coding patterns and how they can be improved from a security standpoint. We build in dynamic and protocol fuzzing capabilities, so if you’re down the path of an IoT solution, you can look at the full stack of what’s being delivered, not just the code that you’re writing. We’re doing interactive application analysis, so we can embed ourselves in profilable languages and recognize whether the code being created is in fact vulnerable. We then put a product overlay on that: what Gartner likes to call software composition analysis, which is really an open source risk management perspective, covering the risks that might come in as a result of a developer making certain choices.
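
To make the fuzzing piece concrete, here is a minimal sketch of mutation-based protocol fuzzing in Python. Everything in it is hypothetical: `parse_message` stands in for a real parser, and its bounds bug is planted so the harness has something to find.

```python
import random

def parse_message(data: bytes) -> None:
    """Hypothetical stand-in for the protocol parser under test."""
    if len(data) < 4 or data[0] != 0x7E:
        raise ValueError("bad framing")           # expected, handled rejection
    length = data[1]
    payload = data[2:2 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")     # expected, handled rejection
    _checksum = data[2 + length]                  # planted bug: no bounds check

def mutate(seed: bytes, max_flips: int = 4) -> bytes:
    """Flip a few random bytes in a known-good message."""
    buf = bytearray(seed)
    for _ in range(random.randint(1, max_flips)):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seed = bytes([0x7E, 0x04]) + b"ping" + b"\x00"    # frame byte, length, payload, checksum
for _ in range(10_000):
    sample = mutate(seed)
    try:
        parse_message(sample)
    except ValueError:
        pass                                      # parser rejected the input cleanly
    except IndexError as exc:                     # unhandled crash: a finding to triage
        print(f"crash on {sample!r}: {exc!r}")
```

Real fuzzers add coverage feedback and corpus management, but the loop above is the core idea: mutate valid inputs and watch for failures the code doesn’t handle.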

There are all sorts of other products that provide related capabilities; it’s just that you’ve collected them all. If the movie pitch is “Die Hard on a boat,” this is more like “Signal Sciences and Black Duck on a boat.”

Correct. And Black Duck is a perfect example, because Black Duck is a company that Synopsys acquired a little over a year ago to solve the open source governance problem. It is quite literally Black Duck on a boat. And where we stop is on the network security side. We’re not going to be the perimeter edge defense solution, but we complement those solutions very nicely by being able to say: this is what you actually have in terms of the vulnerabilities, the composition, the structure, and the architecture of that software stack. By extension, if you’re not doing continuous monitoring for issues, then you’re going to be expending defensive energy protecting against things that aren’t vulnerable in your environment.

The idea is you can understand the quality and nature of your attack surface based on a deep application analysis.

Exactly. And then from a services perspective, we can bring in pen testing, threat modeling, and red teams to facilitate that.

I have three questions that I want to get your reaction to. People are hearing a lot about the idea of zero trust. When you hear about it, it sounds like the perimeter is no longer there and that zero trust means every entity in the system has an umbrella of protection around it. The firewall that formerly created this trusted space is no longer needed because all the points in the network are protected. But it turns out that’s not really the case: in a hybrid world, people are going to implement zero trust capabilities inside of perimeters, and those same trust mechanisms won’t be used when people move outside of that perimeter. What do you think zero trust is going to mean for most companies? Is it going to be better individual authentication, and do you think it creates better adaptive protection?

The way I look at it is that for decades, we’ve been trying to build bigger, better, bolder, stronger, faster perimeters. And yet the breaches keep happening and the attacks keep coming in. What’s fundamentally missing from the entire equation is: what’s the value of the asset that someone’s trying to compromise? Are they trying to compromise the corporate entity, or are they trying to exfiltrate some data from within that corporation, and what is the vector around it? From a zero trust perspective, I liken it to making certain that you have the authorization to access this type of information at this point in time from this location. If we look back at the impetus behind bring-your-own-device in corporate America, you had these lovely iPads and Android devices being brought in, and they didn’t really have a security model consistent with how a laptop or a desktop might have been secured. That, for me, was really the genesis of the question: how do I actually create a zero trust model within an organization? When you start getting all of the IoT devices that everyone likes, all the sensors, all the connected cameras and conference room monitors and so forth, there effectively needs to be a clear understanding of what that data is.

The clear understanding of the data on the device?

The data that the device is accessing. So if I’ve got a door camera, or I’ve got a web-enabled, cloud-enabled service that’s going to print out a visitor badge, some amount of information about that corporate structure is going to be there. Maybe the email address of the person that you’re visiting, or the name of the conference room and its attendees. What that device needs to have beyond that is rather limited. If you’re not segmenting around the expected limitations and capabilities of the device, then you’re probably either over-restricting yourself or leaving more of the barn door open to the outside world.
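
A zero trust check of the kind Mackey describes, where authorization depends on who is asking, for which data, from where, and at what time, can be sketched in a few lines of Python. The policy table, principal name, and field names here are all hypothetical:

```python
from datetime import datetime, timezone

# Hypothetical policy: each device is allowed specific data fields,
# from specific network zones, during specific hours (UTC).
POLICY = {
    "badge-printer": {
        "fields": {"visitor_name", "host_email", "conference_room"},
        "zones": {"lobby-vlan"},
        "hours": range(7, 19),
    },
}

def authorize(principal: str, field: str, zone: str, when: datetime) -> bool:
    """Grant access only if who, what, where, and when all match policy."""
    rule = POLICY.get(principal)
    if rule is None:
        return False  # unknown principals get nothing by default
    return (field in rule["fields"]
            and zone in rule["zones"]
            and when.hour in rule["hours"])

ts = datetime(2019, 3, 6, 10, 0, tzinfo=timezone.utc)
print(authorize("badge-printer", "host_email", "lobby-vlan", ts))   # True
print(authorize("badge-printer", "payroll", "lobby-vlan", ts))      # False
print(authorize("badge-printer", "host_email", "server-vlan", ts))  # False
```

The default-deny branch is the essence of the model: access is granted only when every contextual condition matches, never because a request originates inside the perimeter.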

The next question is about portfolio pruning. My research mission on earlyadopter.com, related to cybersecurity, is all about how to prune and create a balanced cybersecurity portfolio. What I’ve been searching for, and haven’t found, are the mechanisms and ideas you would use to actually identify when you can prune a cybersecurity capability. When does a new capability replace an old capability? So far in the history of cybersecurity, it’s only been additive.

That’s a fantastic question, and it gets at the heart of a lot of customer conversations that I have. One of the key problems that I keep running up against is a legacy definition of how things are patched. I hear them say, “We patch virtual machines this way and physical servers that way.” But from a container perspective, being able to shell into a running container is effectively opening up a security hole that doesn’t need to exist. For me, the question then becomes: what do you get that’s additive, from a security standpoint, out of transforming your existing software development and deployment paradigm into this new thing you might be trying to get to, whether that’s containers or microservices or even a unikernel strategy? Those new technologies should be giving you a greater security posture, not just faster development or greater visibility, as important as those things might be.
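
The container patching point reduces to a simple rule: replace, don’t repair. A minimal sketch of what that check might look like, assuming the digests would really come from an orchestrator and an image registry rather than the hard-coded inventory used here:

```python
# Hypothetical inventory; in practice these digests would come from the
# orchestrator's API and the image registry, not hard-coded dicts.
running_containers = {
    "payments-api": "sha256:aaa111",
    "badge-service": "sha256:bbb222",
}
latest_patched_digest = {
    "payments-api": "sha256:aaa111",   # already running the rebuilt image
    "badge-service": "sha256:ccc333",  # a patched rebuild is available
}

def needs_redeploy(name: str, running: str) -> bool:
    """Patching means replacing a container with a rebuilt image,
    never shelling in and mutating it in place."""
    return latest_patched_digest.get(name, running) != running

for name, digest in running_containers.items():
    if needs_redeploy(name, digest):
        print(f"{name}: redeploy from the rebuilt image (running {digest})")
    else:
        print(f"{name}: up to date")
```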

What you’re saying is that if you can move to a landscape in which security is more built in, you might need fewer ways of monitoring for security problems?

Correct. Instead of a blacklist model, you can get to a whitelist model: this microservice only ever acts on this data, which is only ever going to come from this location. Why don’t I, within the container orchestration solution, define a network that only allows that to happen? Anything else, by definition, is considered malicious and damaging in some capacity.
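
That whitelist model is what, for example, a Kubernetes NetworkPolicy expresses declaratively. Reduced to a Python sketch with hypothetical service names and ports, the logic is just an allowlist of flows, with everything else treated as hostile:

```python
# Hypothetical allowlist: the only flows these microservices should
# ever produce. Anything else is, by definition, suspect.
ALLOWED_FLOWS = {
    ("orders-frontend", "payments-svc", 8443),
    ("payments-svc", "ledger-db", 5432),
}

def evaluate_flow(src: str, dst: str, port: int) -> str:
    """Default-deny: unknown traffic is not merely dropped but alerted on."""
    return "allow" if (src, dst, port) in ALLOWED_FLOWS else "deny-and-alert"

print(evaluate_flow("orders-frontend", "payments-svc", 8443))  # allow
print(evaluate_flow("compromised-pod", "ledger-db", 5432))     # deny-and-alert
```

The enforcement would live in the orchestrator’s network layer; the point of the sketch is that a whitelist turns every unexpected flow into a signal rather than background noise.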

But in order to do that, you have to understand your environment much better than most people do?

Correct. You truly have to go down the path of having the level of feedback and communication between development requirements and operational requirements that is the holy grail of the DevOps movement, and you have to have that both within the organization and with vendor-supplied components as well.

The third question I have is about how fast the migration of cybersecurity componentry to the cloud will happen. How is that migration going to play out, and will it be equivalent to the migration of computing infrastructure to the cloud?

It’s actually more a question of risk. On prem, I can define my risk. I may not like the result of that definition, but I can define it. I know the performance levels of my cybersecurity teams; I know the performance levels of my development, application, and monitoring teams, and so forth. I know their ability to react, so if a new vulnerability is disclosed in something today, I know roughly how long it’s going to take them to identify it. As that infrastructure migrates into a cloud service, effectively what you’re doing is transferring risk. If, from the inside, I can create the moral equivalent of an outage within a cloud-based security solution, what does that actually mean for my overall visibility? Do I effectively end up in a scenario that’s the equivalent of someone cutting the hard line to my old-school security system and then throwing a rock through the window? The net result is that we should always be asking questions around “what happens when,” because the whens are going to become realities at some point.

You’re saying that with the migration of cybersecurity to the cloud, the important thing to think about is how the risk changes as you move those assets?

Exactly.

I have one bonus question, about ops discipline. Most of the CISOs and vendors I talk to agree that they would be better off spending more on operational discipline, improving their configuration management, patch management, asset inventories, and levels of automation, than on yet another piece of cybersecurity componentry. But it seems really hard for people to act that way. Do you think that operational discipline would be a good investment for most environments?

I would definitely agree. If I go back to patch management as an example: if I don’t know exactly what I have within that data center operation, on prem or cloud, I can’t possibly patch it. If I don’t know what my dependencies are, I can’t possibly monitor them. A modern application today is effectively an amalgam of custom code, some open source libraries, some third-party APIs, and a bunch of configuration metadata. If that third-party API goes down for some reason, what’s the impact, and how would I actually manage and monitor for it? Does it have a phone-home mechanism in it somewhere, so that I now end up in a regulatory morass I don’t want to be in, because I’ve deployed that component in a health care environment or under a GDPR-managed scenario and now there’s the potential for data going out that we just didn’t know about? Having a level of visibility that says “this is what I have” then feeds into being able to increase that operational discipline, and that’s beneficial.
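
A minimal sketch of that visibility in Python, using a hypothetical manifest and advisory feed. The requests advisory cited is a real CVE; the internal library and its phone-home flag are invented for illustration:

```python
# Hypothetical inventory; real data would come from a lockfile or SBOM
# plus a vulnerability database.
manifest = {
    "requests": "2.18.0",
    "flask": "1.0.2",
    "internal-billing-lib": "0.9.1",
}
advisories = {
    ("requests", "2.18.0"): "CVE-2018-18074 (credentials leaked on redirect)",
}
phones_home = {"internal-billing-lib"}  # components with call-home telemetry

for name, version in manifest.items():
    issue = advisories.get((name, version))
    if issue:
        print(f"{name}=={version}: known vulnerability: {issue}")
    if name in phones_home:
        print(f"{name}: phones home; review data egress against health-care or GDPR obligations")
```

Only once this inventory exists can the operational questions Mackey raises, such as what breaks when a third-party API disappears, be answered systematically.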