Part 2: Quantitative Methods for Assessing Cyber Risk

By Marshall Kuypers, Senior Director, Cyber Risk | 01.29.2019

Accurately model risk to up-level cyber discussions and evolve security postures

Most businesses are very comfortable assessing risk, whether it comes from a failing project, market uncertainty, workplace injury, or any number of other causes. But when it comes to cybersecurity, rigor disappears, hand-waving commences, and analysts pick a color (red, yellow, or green).

Quantification is one of the most valuable endeavors in business, but most organizations are still guessing about cybersecurity. Return on investment isn't known, and analysts rely on media hype instead of hard data. But it doesn't have to be this way. The data and tools exist for organizations to make better investment decisions, and with a little practice anyone can become an expert.

In part one of this three-part series, we explained why we need data: plenty of evidence shows that without it, we make really bad decisions. In this post, part two, we focus on the data itself and where to get it. In part three, we'll cover the tools we need to calculate cyber risk. Putting these together, combined with some practice, will have you making data-driven decisions in no time.


Part 2: Data Dreaming

Many organizations want to make better data-driven decisions, but don't have access to the data that they need. A common belief in cybersecurity is that we need more data sharing. While this is true, a surprising amount of data already exists within organizations and in public sources that can be used to inform decisions.

The data exist

Organizations actually have huge reservoirs of data at their fingertips if they know where to look. Many organizations keep records of cyber incidents in one form or another, ranging from well-structured incident databases to ticketing systems to an email inbox. If you walk into most organizations and ask for the number of website compromises last year, chances are someone can get you that information.

During my PhD, I was granted access to 60,000 incidents that had been recorded at a large US-based organization. The graph above shows these incidents over six years, with time on the x-axis and impact in hours on the y-axis, on a log scale. One thing that is immediately evident is that the vast majority of incidents take less than 10 hours to investigate and remediate. Every so often, however, an incident consumes thousands of person-hours.
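As a minimal sketch of how you might reproduce this view from your own records, assuming your ticketing system can export a CSV with (hypothetical) date and hours_to_resolve columns:

    # Sketch: plot incident impact over time on a log scale.
    # "incidents.csv", "date", and "hours_to_resolve" are assumed names;
    # substitute whatever your ticketing export actually uses.
    import pandas as pd
    import matplotlib.pyplot as plt

    incidents = pd.read_csv("incidents.csv", parse_dates=["date"])

    fig, ax = plt.subplots()
    ax.scatter(incidents["date"], incidents["hours_to_resolve"], s=4, alpha=0.3)
    ax.set_yscale("log")  # most incidents cost <10 hours; a handful cost thousands
    ax.set_xlabel("Date")
    ax.set_ylabel("Impact (hours to investigate and remediate, log scale)")
    plt.show()

The log scale matters here: on a linear axis, the handful of catastrophic incidents would flatten everything else into invisibility.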

Once an organization is collecting the right data, it's possible to answer some fascinating questions. For example, looking at Shellshock attacks at this organization (time on the x-axis and the number of incidents per day on the y-axis), we can see that within five hours of the vulnerability being announced, this organization was already under attack. This is a great demonstration of how quickly malicious actors can take a new vulnerability and start to leverage it.
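A sketch of that calculation, assuming the same hypothetical export plus timestamp and category columns, and using Shellshock's public disclosure date of September 24, 2014:

    # Sketch: measure time-to-first-exploitation for a new vulnerability.
    # "timestamp" and "category" are assumed column names.
    import pandas as pd

    incidents = pd.read_csv("incidents.csv", parse_dates=["timestamp"])
    shellshock = incidents[incidents["category"] == "shellshock"]

    disclosure = pd.Timestamp("2014-09-24")  # Shellshock public disclosure
    first_attack = shellshock["timestamp"].min()
    hours = (first_attack - disclosure).total_seconds() / 3600
    print(f"First attack observed {hours:.1f} hours after disclosure")

    # Daily counts show how long attacks persist against unpatched systems.
    daily = shellshock.set_index("timestamp").resample("D").size()
    print(daily)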

We can also see that Thursdays and Fridays are the most common days for attacks (which suggests that perhaps hackers like to take long weekends and sleep in on Mondays and Tuesdays). We also found that attacks did not correlate with US working hours, so we could infer that the attacks were likely coming from Eastern Europe. And we found that the incidents continued to occur for several months, which illustrates the importance of getting to a 100% patch level: if you leave a couple of systems out there, it's really only a matter of time before the criminals find them.
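The same export supports that kind of timing analysis. A sketch, again with assumed column names and UTC timestamps:

    # Sketch: infer attacker work patterns from incident timestamps.
    import pandas as pd

    incidents = pd.read_csv("incidents.csv", parse_dates=["timestamp"])

    # Day-of-week volume (e.g., peaks on Thursday/Friday).
    print(incidents["timestamp"].dt.day_name().value_counts())

    # Hour-of-day volume: if it doesn't track your local business hours,
    # try shifting by candidate UTC offsets to see whose working day fits.
    for offset in (0, -5, 2, 3):  # UTC, US East, Central/Eastern Europe
        shifted = (incidents["timestamp"] + pd.Timedelta(hours=offset)).dt.hour
        busy = shifted.between(9, 17).mean()
        print(f"UTC{offset:+d}: {busy:.0%} of incidents fall in 09:00-17:00")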

It’s also possible to extend this approach to specific incident types, like website attacks, phishing emails, or denial-of-service incidents. Often there are surprises: certain incident types are not becoming more frequent, but are stable or actually decreasing!

This leads us to conclude that things are changing in cyber, although not nearly as quickly as everyone thinks: there are increases in new types of attacks, like ransomware, and changes in the rate of attacks, but it is all far more predictable than you’ve been led to believe. You just need to graph it in a way that makes sense, which is great news, since it means that you can model cyber attacks pretty easily.
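A minimal sketch of what "model it easily" can mean in practice, assuming monthly counts from the same hypothetical export: treat a stable incident type as a simple Poisson process and check whether a trend is even present.

    # Sketch: estimate a baseline rate and test for a real trend.
    import numpy as np
    import pandas as pd
    from scipy import stats

    incidents = pd.read_csv("incidents.csv", parse_dates=["timestamp"])
    monthly = incidents.set_index("timestamp").resample("M").size()

    # If the process is stable (homogeneous Poisson), the historical mean
    # rate is the forecast for next month.
    print(f"Baseline rate: {monthly.mean():.1f} incidents/month")

    # Regress counts on time; a slope indistinguishable from zero supports
    # the "stable, predictable" conclusion.
    t = np.arange(len(monthly))
    slope, intercept, r, p, se = stats.linregress(t, monthly.values)
    print(f"Trend slope: {slope:.2f}/month (p = {p:.2f})")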

When we look at cyber in a rigorous way, lots of interesting insights emerge. Attack trends change, but much more slowly than we typically think. This conclusion has been reached by a number of independent researchers using a variety of public and private data sources. Often, the apparent volatility in attack trends is an artifact of the wrong analysis tools, not an underlying trend.

At Expanse, we’re constantly monitoring the global Internet for devices. Our outside-in approach discovers and monitors high-confidence devices based on multiple attribution indicators. We can watch the Fortune 500, or any other group, to see whether network perimeters are becoming more or less secure. With the increased attention and budget devoted to cybersecurity, you would expect the network perimeter to be getting more secure, but that’s not the case. Looking across a wide sample of companies, we see perimeter security actually getting worse at many organizations.

Organizations need to take a data-driven approach to risk analysis. A CISO I worked with once said that supply chain attacks (like the alleged SuperMicro incident) were their number one budget priority. We consulted the data, and over six years, not one supply chain attack had occurred at their organization. During the same time period, however, 375 website compromises had led to website defacements, SQL injections, and a nation-state infiltrating their organization. Confronted with these data, it was clear that focusing on supply chain security was akin to reinforcing the walls of a bank vault while leaving the front door unlocked. The perimeter is a similar story: organizations are still getting compromised via insecure devices on their network edge. There is still much work to be done.
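That kind of prioritization falls out of a simple tabulation. A sketch, reusing the hypothetical category and hours_to_resolve columns from above:

    # Sketch: rank incident categories by historical frequency and cost.
    import pandas as pd

    incidents = pd.read_csv("incidents.csv", parse_dates=["timestamp"])
    span_years = (incidents["timestamp"].max()
                  - incidents["timestamp"].min()).days / 365.25

    summary = incidents.groupby("category")["hours_to_resolve"].agg(["count", "sum"])
    summary["incidents_per_year"] = summary["count"] / span_years
    summary["hours_per_year"] = summary["sum"] / span_years

    # Categories with zero recorded incidents (e.g., supply chain) never
    # even appear, while frequent ones (e.g., website compromise) dominate.
    print(summary.sort_values("hours_per_year", ascending=False))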

In the next post, I’ll discuss the tools we need to quantify cyber risk once we have the data.


Dr. Marshall Kuypers, Director of Cyber Risk, is passionate about quantitative risk and cyber systems. He wrote his PhD thesis at Stanford on using data-driven methods to improve risk analysis at large organizations. He was a fellow at the Center for International Security and Cooperation, and he has modeled cyber risk for the NASA Jet Propulsion Lab and assessed supply chain risk and cyber systems with Sandia National Labs.