Part 3: Quantitative Methods for Assessing Cyber Risk
Accurately model risk to up-level cyber discussions and evolve security postures
Most businesses are very comfortable assessing risk, whether from a failing project, market uncertainty, workplace injury, or any number of other causes. But when it comes to cyber security, the rigor disappears, the hand-waving commences, and analysts simply pick a color (red, yellow, or green).
Quantification is one of the most valuable endeavors in business, yet most organizations are still guessing about cyber security. Return on investment goes unmeasured, and analysts rely on media hype instead of hard data. But it doesn’t have to be this way. The data and tools exist for organizations to make better investment decisions, and with a little practice anyone can put them to use.
In part one of this three-part series, we explained why we need data: ample evidence shows that without it, we make very bad decisions. In part two, we focused on the data itself and where to get it. In this third and final part, we focus on the tools we need to calculate cyber risk. Put the two together, add some practice, and you’ll be making data-driven decisions in no time.
Part 3: Intelligent Investing
Part 1 of this series focused on why we need data to make better decisions in cyber security. Part 2 discussed where organizations can find data and how that data can be insightful. Part 3 pulls the data and the tools together to calculate return on investment for security products.
Quantitative Cyber Risk Assessment in Practice
Probabilistic risk analysis has been in wide use for decades and works well for cyber security. It is a classic method for calculating engineering risk; anyone who studies industrial engineering at university is likely to encounter some form of it. Probabilistic risk analysis helps determine the potential outcomes of various security investments so that we can decide on the best course of action.
Let’s look at an example that quantifies perimeter risk. All the evidence points to perimeter risk being remarkably consistent over time: bad actors have been running scans for years, and they aren’t stopping any time soon. With a little digging, most organizations can also estimate an impact distribution from their existing understanding of the potential consequences of attacks, in terms of monetary, reputational, and intellectual property damage or loss. There is a great deal of uncertainty, but we also know that most incidents will be small and that occasionally one will be large. (A note to readers: we are skipping over a lot of detail here. If you want to learn more, resources on implementing this type of analysis are listed at the bottom of this post.)
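A few lines of Python make that “mostly small, occasionally huge” shape concrete. The lognormal parameters below are invented purely for illustration and are not drawn from any real incident data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative heavy-tailed impact distribution (lognormal):
# median incident cost of ~$10k, with a tail that allows rare
# multi-million-dollar events. These parameters are made up.
impacts = rng.lognormal(mean=np.log(10_000), sigma=2.0, size=100_000)

print(f"median impact: ${np.median(impacts):,.0f}")
print(f"mean impact:   ${impacts.mean():,.0f}")   # dragged up by the tail
print(f"99.9th pctile: ${np.percentile(impacts, 99.9):,.0f}")
```

The mean landing far above the median is the signature of a heavy tail: a handful of extreme incidents dominate the total.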
We now have the two key components needed to calculate risk: the rate of incidents and the impact distribution. A simple Monte Carlo simulation can then play out year after year of incidents, producing a probability distribution of losses due to perimeter cyber attacks. The x-axis shows the yearly cost in dollars due to perimeter risk. The y-axis is the complementary cumulative distribution function: the probability that yearly losses exceed the value on the x-axis. Just think of the y-axis as probability; the blue line shows what the actual risk is.
To read this graph, focus on the blue line and the shaded blue areas. There is a 50% chance per year that perimeter attacks will cost more than $3 million, and a 0.1% chance that they will cost more than $200 million. You can see that the chance of a very large loss is relatively small, and you now have the information you need for an intelligent, informed discussion with analysts and decision makers.
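A minimal sketch of this kind of Monte Carlo simulation follows. The incident rate and impact parameters are invented stand-ins for the real inputs, not the values behind the graph above:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented inputs (for illustration only): incidents arrive at a mean
# rate of 40 per year (Poisson), and each incident's cost is lognormal,
# so most losses are small but a few are enormous.
RATE = 40             # mean incidents per year
MEDIAN_COST = 20_000  # median cost of a single incident, in dollars
SIGMA = 2.5           # lognormal shape parameter (tail heaviness)
N_YEARS = 50_000      # number of simulated years

counts = rng.poisson(RATE, size=N_YEARS)  # incident count in each year
annual_loss = np.array([
    rng.lognormal(np.log(MEDIAN_COST), SIGMA, size=n).sum()
    for n in counts
])

# Complementary CDF: the chance that a year's losses exceed a threshold.
for threshold in (1e6, 10e6, 100e6):
    prob = (annual_loss > threshold).mean()
    print(f"P(annual loss > ${threshold / 1e6:.0f}M) = {prob:.4f}")
```

Plotting the exceedance probability against the loss threshold yields exactly the kind of curve discussed above.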
You can also start to frame a conversation about risk around the application of different security safeguards, to help decide on next steps. In the graph below, the x-axis is the yearly loss in millions of dollars and the y-axis is probability, so we can compare risk curves across the attack vectors of interest. The farther up and to the right a curve sits, the higher the risk it represents. For example, the red curve for data spillage sits in the lower left-hand corner, which tells us that data spillage is a low risk that should be deprioritized relative to the others. The purple curve tells us that losses from perimeter compromises are usually quite small, but every once in a while are huge: the vast majority of perimeter incidents are low-level events like website defacement, yet every so often somebody finds that one exposed device, with an enormous impact on the organization.
The same approach can be used to compare risk before and after a security investment is made. Looking at the brown curve for laptop theft, we can see that implementing full disk encryption dramatically lowered the risk curve, whereas an expensive DLP solution failed to do so.
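A before/after comparison like this can be run with the same machinery. The toy simulation below assumes, purely for illustration, that full disk encryption shrinks both the typical size and the tail of the per-theft loss; none of these parameters come from the case study:

```python
import numpy as np

rng = np.random.default_rng(1)

N_YEARS = 50_000
THEFTS_PER_YEAR = 12  # invented Poisson rate of laptop thefts


def simulate_annual_loss(impact_median, impact_sigma):
    """Total yearly loss when per-theft cost is lognormal."""
    counts = rng.poisson(THEFTS_PER_YEAR, size=N_YEARS)
    return np.array([
        rng.lognormal(np.log(impact_median), impact_sigma, size=n).sum()
        for n in counts
    ])


# Before: a stolen laptop can mean a data breach (heavy tail).
before = simulate_annual_loss(impact_median=5_000, impact_sigma=2.0)
# After full disk encryption: loss is mostly just the hardware.
after = simulate_annual_loss(impact_median=1_500, impact_sigma=0.5)

print(f"median annual loss before: ${np.median(before):,.0f}")
print(f"median annual loss after:  ${np.median(after):,.0f}")
print(f"P(loss > $1M) before: {(before > 1e6).mean():.4f}")
print(f"P(loss > $1M) after:  {(after > 1e6).mean():.4f}")
```

Overlaying the two empirical loss curves shows the control's effect directly: the "after" curve shifts down and to the left, and its tail collapses.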
Cyber Risk is Quantifiable
It’s difficult to do accurately, but cyber risk is quantifiable, including factors that may seem hard to assess, such as reputation damage and business interruption. Using numbers instead of hand-waving arguments results in a higher-quality conversation in which assumptions can be challenged, sensitivity can be studied, and impact can be measured. In this case study, the organization had been persistently underestimating its cyber risk from perimeter attacks; when the data were gathered, it turned out that perimeter issues were the major driver of risk, even under a wide range of input assumptions. Such data can help decision makers get the highest return on security investments and align analysts and board members alike on problem areas. Data-driven analysis may seem complicated and confusing, but it can be incredibly effective. Give it a try and you may be surprised at the results!
For Further Study
- Edwards, Benjamin, Steven Hofmeyr, and Stephanie Forrest. “Hype and Heavy Tails: A Closer Look at Data Breaches.” Workshop on the Economics of Information Security (WEIS).
- Kuypers, M. A., T. Maillart, and E. Paté-Cornell. “An Empirical Analysis of Cyber Security Incidents at a Large Organization.” Technical report, Stanford University, 2016.
- Maillart, T., and D. Sornette. “Heavy-Tailed Distribution of Cyber-Risks.” The European Physical Journal B 75.3 (2010): 357–364.
- Wheatley, Spencer, Thomas Maillart, and Didier Sornette. “The Extreme Risk of Personal Data Breaches and the Erosion of Privacy.” The European Physical Journal B 89.1 (2016).
- Herrmann, A. “The Quantitative Estimation of IT-Related Risk Probabilities.” Risk Analysis 33.8 (2013): 1510–1531.
- Mersinas, Konstantinos, et al. “Experimental Elicitation of Risk Behaviour amongst Information Security Professionals.” WEIS 2015.
- Florêncio, Dinei, and Cormac Herley. “Sex, Lies and Cyber-Crime Surveys.” Economics of Information Security and Privacy III. Springer New York, 2013. 35–53.
- Hubbard, Douglas W., and Richard Seiersen. How to Measure Anything in Cybersecurity Risk.
- Freund, Jack, and Jack Jones. Measuring and Managing Information Risk: A FAIR Approach.
- McGrayne, Sharon Bertsch. The Theory That Would Not Die.
- Kahneman, Daniel. Thinking, Fast and Slow.
Dr. Marshall Kuypers, Director of Cyber Risk, is passionate about quantitative risk and cyber systems. He wrote his PhD thesis at Stanford on using data-driven methods to improve risk analysis at large organizations. He was a fellow at the Center for International Security and Cooperation, has modeled cyber risk for the NASA Jet Propulsion Lab, and has assessed supply chain risk and cyber systems with Sandia National Labs.