Introducing the Expander Cloud Module

While businesses have been occupied with digital transformation, attackers are taking advantage of the security gaps that result from the speed at which assets are created and the lack of visibility organizations have into those assets.

In an effort to curtail some of the issues that have come with rapid cloud development, organizations have begun to implement cloud governance strategies that establish policies and procedures. Some of these strategies might include sanctioning specific cloud providers, establishing authorized cloud accounts, creating security policy requirements, and implementing monitoring tools for policy enforcement.

Form a solid cloud foundation

Before you can apply policies and control cloud usage, you must have a current, continuously updated inventory of every cloud asset. This is a foundational component of any cloud governance strategy: you need to know what you own in order to know what you need to protect.

Starting without knowledge of all of your Internet-connected assets means that you are leaving gaps that create risk. This is especially true in the cloud, where any developer, or anyone else in your organization, with an email address and a credit card can spin up assets and infrastructure with the click of a button.

Introducing the new Expander Cloud module

Our flagship product, Edge Expander, provides organizations visibility into every Internet-connected asset that exists on the public Internet. Our new Cloud module provides even deeper visibility, with a lens on our data that helps organizations focus on their cloud assets and infrastructure across all cloud providers.

Some of the benefits that Expander users will realize while using the new Cloud module include:

  • Discovery and tracking of all cloud assets across all cloud providers, not just the big three
  • The ability to quickly uncover unknown and rogue assets that are not part of sanctioned cloud accounts
  • Continuous monitoring of global cloud providers for newly created assets that tie back to your organization
  • Analysis of your cloud footprint to better understand and consolidate cloud asset management into sanctioned IaaS accounts

Figure: A view of cloud domains across multiple providers that tie back to an organization in the Expander Cloud module

Unmatched visibility, from cloud to on-prem

Our customers have already seen results from using the Expander Cloud module, including uncovering previously unknown cloud assets and gaining visibility across all cloud providers. An information security leader at Experian told us,

“Expanse’s Cloud module is helping to innovate the way in which we’ve developed cloud security controls. The Cloud module helps to validate external exposures across our global canvas and has provided a degree of data and visibility which we feel is currently unmatched by other tool providers.”

The new capabilities provide the continuously up-to-date view of all Internet- and cloud-connected assets needed to close the gaps created by fast-moving digital transformation. This gives IT operations, DevOps, and security teams confidence that cloud governance and digital transformation projects are pursued and implemented securely and according to policy, and that they stay that way over time.

Want to learn more?

To learn more and see the new Cloud module in action, register for our upcoming Cloud webinar.

The Expander Cloud module is in beta to all customers starting today and will be generally available in March 2019. To request a custom demo, reach out to us here.


Haley Sayres is a product manager at Expanse. During her tenure, she’s been responsible for Expanse’s pioneering product and early efforts, leading the development of Expanse’s APIs and SIEM integrations, general customer workflows within the UI, and, most recently, the cloud security module. Haley worked closely with the security teams at dozens of Fortune 500 customers to create, test, and release the cloud module. She holds a bachelor’s and master’s from Stanford University, where she focused on technological trends in society, computer science, and human-computer interaction. She began her career on a data insights and consulting team in customer experience management.

Machine-speed attacks create new security risks for remote workforce technologies

RDP and other productivity-enhancing tools leave organizations exposed to attacks on their ever-changing attack surface

In a previous post, we discussed advances in technology that have made it possible to scan the entire public Internet much faster than ever before. Because of these advances, the notion that exposures can simply hide on the Internet is no longer true. You may think that your organization isn’t a target for cybercriminals, but the ease with which an exposure can be found opportunistically means that you may end up a victim anyway.

Every asset you connect to the Internet can put your organization at risk: it is easy to find and can provide a high-value foothold into your network for hackers to exploit. This post details just one of many types of exposures that attackers look for.

RDP Exposures and Remote Workers

In the past decade, the rise of the remote workforce has led to increased numbers of exposed assets, as non-IT employees attempt to access sensitive internal systems remotely. One of the technologies that enables this is Microsoft’s Remote Desktop Protocol (RDP). RDP lets remote workers log into their corporate office desktop from anywhere using a graphical interface and work as if they were sitting at their desk. All they need is their username and password. Through RDP, the remote employee has the same access to important files and network resources as they would have working in the office itself.

But by exploiting this technology, attackers can gain the same power over the corporate office desktop that the employee has: they can access networked machines, steal credentials or data, or inject malware or ransomware such as SamSam into the remote system.

One aspect that makes RDP exposures difficult to remediate is how and where they persist. While some RDP exposures can show up on a specific workstation at the same IP address day after day, others may show up in unregistered IP space or IP space belonging to a coffee shop or hotel where a remote employee is working. It’s like playing Whack-A-Mole for an IT organization because it’s nearly impossible to detect these sorts of exposures quickly enough to track them down and take remediation actions. And most IT teams don’t have the tools required to detect these exposures outside of their corporate IT space.

Regardless of the employee’s intentions, RDP exposures create a real security risk. It’s essentially like leaving a laptop with access to your corporate network out on the street, where anyone can try entering credentials to get into the device. On top of that, credential dumps happen every day, and most people don’t practice good password hygiene, meaning that just about any hacker can easily compromise the system.

So easily, in fact, that the FBI issued a public service announcement in September 2018 recommending that “businesses and private citizens review and understand what remote accesses their networks allow and take steps to reduce the likelihood of compromise, which may include disabling RDP if it is not needed.”

Do you know if you have RDP exposures?

One might think that only an inexperienced administrator would fail to limit routing of such protocols onto the public Internet, but the reality is that these exposures are quite widespread across Fortune 500 organizations. In fact, Expanse observed that 70% of the Fortune 500 had at least one machine with RDP accessible in the last month.

These lapses can occur easily and may seem innocuous. But any time an employee fails to correctly initiate a VPN connection, they may be sending up a flare, beaconing to attackers that their machine is ripe for the taking.

And where there’s an exposure, there’s a marketplace. Cybercriminals are capitalizing on these types of exposures, recognizing that they’re likely an easy target for any opportunistic hacker. Anyone can go to the dark web and purchase a list of recently observed RDP exposures for as little as 0.00088 Bitcoin (about $3).

Protecting yourself and securing your exposures

The FBI’s announcement on RDP includes eleven suggestions to minimize your exposure to RDP exploits, noting that “because RDP has the ability to remotely control a system entirely, usage should be closely regulated, monitored, and controlled.” Some key recommendations include:

  • Audit your network for systems using RDP for remote communication (a minimal sketch of such a check follows this list).
  • Enable strong passwords and account lockout policies to defend against brute-force attacks.
  • Verify all cloud-based virtual machine instances with a public IP do not have open RDP ports, specifically port 3389, unless there is a valid business reason to do so.
  • Minimize network exposure for all control system devices. Where possible, critical devices should not have RDP enabled.
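
As a starting point for the first recommendation, here is a minimal sketch in Python of what an audit check might look like. This is an illustration only, not Expanse’s product: the host list is a placeholder, and a real audit would sweep your full address ranges.

```python
# Minimal sketch: check whether hosts answer on TCP 3389, the default
# RDP port. The addresses below are documentation placeholders;
# substitute the ranges you are responsible for.
import socket

HOSTS = ["203.0.113.10", "203.0.113.11"]  # placeholder addresses
RDP_PORT = 3389

for host in HOSTS:
    try:
        # A successful TCP connection means the port is reachable.
        with socket.create_connection((host, RDP_PORT), timeout=2):
            print(f"{host}: RDP port {RDP_PORT} is open -- investigate")
    except OSError:
        print(f"{host}: no response on port {RDP_PORT}")
```

Note that a check like this only sees hosts you already know about; as discussed below, many RDP exposures appear on networks outside your corporate IT space.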

These are good recommendations, but they are difficult, if not impossible, to implement. Because of the nature of RDP and its use by employees (who tend to be off premises, using home, hotel, or client networks), organizations must be able to see machines from the same perspective as the would-be hackers looking for exposures to exploit. Traditional tools simply can’t give organizations this visibility.

See yourself as an attacker sees you

Expanse continuously collects data about every public-facing device connected to the Internet. The data is then correlated with other information sources to attribute devices and infrastructure to organizations. This results in a comprehensive, global view of all of your assets, not just the ones that you know about. In short, your security and IT operations teams have the visibility and context needed to protect your organization from hackers performing reconnaissance.

Expanse discovers and tracks your global Internet attack surface and can provide a constantly up-to-date picture of your RDP exposures, as well as hundreds of other ways your organization may be exposed to attackers.

To learn more about how Expanse can help your organization, request a demo here.

Part 3: Quantitative Methods for Assessing Cyber Risk

Accurately model risk to up-level cyber discussions and evolve security postures

Most businesses are very comfortable assessing risk, whether it be from a project failing, market uncertainty, workplace injury, or any other number of causes. But when it comes to cyber security, rigor disappears, hand-waving commences, and analysts pick a color (red, yellow, or green).

Quantification is one of the most valuable endeavors in business, but most organizations are still guessing about cyber security. Return on investment isn’t known and analysts rely on media hype instead of hard data. But it doesn’t have to be this way. The data and tools exist for organizations to make better investment decisions, and with a little practice anyone can become an expert.

In part one of this three-part series, we’ll explain why we need data. Lots of evidence shows that without data, we make really bad decisions. In part two, we focus on the data itself, and where to get it. In part three, we focus on the tools we need to calculate cyber risk. Putting these together, combined with some practice, will have you making data-driven decisions in no time.


Part 3: Intelligent Investing

Part 1 of this series focused on why we need data to make better decisions in cyber security. Part 2 discussed where organizations can find data, and how it can be insightful. Part 3 discusses pulling the data and the tools together to calculate return on investment for security products.

Quantitative Cyber Risk Assessment in Practice

Probabilistic risk analysis has been widely used for decades and works very well for cyber. It is a classic method for calculating engineering risk; anyone who studies industrial engineering at university is likely to encounter some form of it. Probabilistic risk analysis helps determine the potential outcomes of various security investments so that we can decide on the best course of action.

Let’s look at an example that quantifies perimeter risk. All the evidence points to perimeter risk being remarkably consistent over time: bad actors have been running scans for years and aren’t stopping any time soon. With a little digging, most organizations can also estimate an impact distribution based on their existing understanding of the potential consequences of attacks in terms of monetary, reputation, and intellectual property damage or loss. We know there’s a great deal of uncertainty, and we also know that most incidents will be small while the occasional one will be large. (A note to readers: we’re skipping over a lot of detail here. Detailed resources on implementing this type of analysis are listed under “For Further Study” at the bottom of this post.)

We now have the two key components we need to calculate risk: the rate of incidents and the impact distribution. A simple Monte Carlo simulation can then play out year after year of incidents, resulting in a probability distribution of losses due to perimeter cyber attacks. In the resulting graph, the x-axis shows the yearly cost in dollars due to perimeter risk, and the y-axis is the complementary cumulative distribution function. Just think of the y-axis as probability: for any dollar amount, the blue line shows the chance that yearly losses exceed it.

To read this graph, focus on the blue line and the shaded blue areas. There is a 50% chance per year that perimeter attacks will cost more than $3 million, and a 0.1% chance that they will cost more than $200 million. Now you have the information you need to have an intelligent, informed discussion with analysts and decision makers, and you can see that the chance of a very large loss is relatively small.
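
For readers who want to experiment, here is a minimal sketch of the Monte Carlo simulation just described. The incident rate and lognormal severity parameters are invented for illustration; they are not the figures from this case study, so the printed probabilities will not match the numbers above.

```python
# Minimal Monte Carlo sketch: incidents arrive at an assumed Poisson
# rate, and each incident's loss is drawn from an assumed lognormal
# severity distribution. Both parameter sets are illustrative.
import numpy as np

rng = np.random.default_rng(42)

N_YEARS = 100_000          # number of simulated years
INCIDENTS_PER_YEAR = 12    # assumed mean rate of perimeter incidents
MU, SIGMA = 11.5, 2.2      # assumed lognormal loss parameters (dollars)

yearly_loss = np.array([
    rng.lognormal(MU, SIGMA, size=rng.poisson(INCIDENTS_PER_YEAR)).sum()
    for _ in range(N_YEARS)
])

# Points on the exceedance (CCDF) curve: P(yearly loss > threshold).
for threshold in (3e6, 200e6):
    print(f"P(yearly loss > ${threshold:,.0f}) = "
          f"{(yearly_loss > threshold).mean():.4f}")
```

Sorting yearly_loss and plotting the fraction of simulated years above each value reproduces the blue exceedance curve described above.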

You can also start to frame a conversation about risk based on the application of different security safeguards to help decide on action steps. In the graph below, the x-axis is the yearly loss in millions of dollars and the y-axis is probability, and we can compare risk curves for the attack vectors of interest. The farther up and to the right a curve sits, the higher the risk it represents. For example, the red curve for data spillage sits in the lower left-hand corner, which tells us that data spillage is a low risk that can be deprioritized relative to the others. The purple curve tells us that losses from perimeter compromises are usually pretty small but occasionally huge: the vast majority of perimeter incidents are low-level events like website defacement, while every so often somebody finds that one exposed device, with enormous impact on the organization.
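
Extending the sketch above, overlaying the exceedance curves of several vectors on one log-log plot reproduces this kind of comparison. The rates and severities below are again invented for illustration:

```python
# Sketch: exceedance curves for two hypothetical attack vectors on one
# plot. Parameters are illustrative assumptions, not measured data.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

def yearly_losses(rate, mu, sigma, n_years=50_000):
    """Simulated total loss per year for one attack vector."""
    counts = rng.poisson(rate, size=n_years)
    return np.array([rng.lognormal(mu, sigma, size=n).sum() for n in counts])

vectors = {
    "data spillage":        (20, 9.0, 1.0),   # frequent, consistently small
    "perimeter compromise": (8, 10.0, 2.5),   # usually small, occasionally huge
}

for name, params in vectors.items():
    losses = np.sort(yearly_losses(*params))
    # Fraction of simulated years whose loss exceeds each value.
    exceedance = 1.0 - np.arange(1, losses.size + 1) / losses.size
    plt.loglog(losses, exceedance, label=name)

plt.xlabel("Yearly loss (dollars)")
plt.ylabel("P(yearly loss > x)")
plt.legend()
plt.show()
```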

This approach can also be used to compare risk before and after a security investment. Looking at the brown curve for laptop theft, we can see that implementing full-disk encryption dramatically reduced the risk curve, whereas an expensive DLP solution failed to do so.

Cyber Risk is Quantifiable

Cyber risk is difficult to quantify accurately, but it is quantifiable, including things that may seem hard to assess such as reputational damage and business interruption. Using numbers instead of hand-waving arguments results in a higher-quality conversation in which assumptions can be challenged, sensitivity can be studied, and impact can be measured. In this case study, the organization had been persistently underestimating its cyber risk from perimeter attacks. When the data were gathered, it turned out that perimeter issues were the major driver of risk, even under a wide range of input assumptions. These data can help decision makers get the highest return on security investments and align analysts and board members alike on problem areas. Data-driven analysis may seem complicated and confusing, but it can be incredibly effective. Give it a try and you may be surprised at the results!


For Further Study


Dr. Marshall Kuypers, Director of Cyber Risk, is passionate about quantitative risk and cyber systems. He wrote his PhD thesis at Stanford on using data driven methods to improve risk analysis at large organizations. He was a fellow at the Center for International Security and Cooperation, and he has modeled cyber risk for the NASA Jet Propulsion Lab and assessed supply chain risk and cyber systems with Sandia National Labs.

Part 2: Quantitative Methods for Assessing Cyber Risk

Accurately model risk to up-level cyber discussions and evolve security postures

Most businesses are very comfortable assessing risk, whether it be from a project failing, market uncertainty, workplace injury, or any other number of causes. But when it comes to cyber security, rigor disappears, hand-waving commences, and analysts pick a color (red, yellow, or green).

Quantification is one of the most valuable endeavors in business, but most organizations are still guessing about cyber security. Return on investment isn’t known and analysts rely on media hype instead of hard data. But it doesn’t have to be this way. The data and tools exist for organizations to make better investment decisions, and with a little practice anyone can become an expert.

In part one of this three-part series, we’ll explain why we need data. Lots of evidence shows that without data, we make really bad decisions. In part two, we focus on the data itself, and where to get it. In part three, we focus on the tools we need to calculate cyber risk. Putting these together, combined with some practice, will have you making data-driven decisions in no time.


Part 2: Data Dreaming

Many organizations want to make better data-driven decisions, but don’t have access to the data that they need. A common belief in cyber security is that we need more data sharing. While this is true, a surprising amount of data that can inform decisions already exists within organizations and in public sources.

The data exist

Organizations actually have huge reservoirs of data at their fingertips if they know where to look. Many organizations keep records of cyber incident data in some form or another. This could range from well-structured incident databases to ticketing systems to an email inbox. If you walk into most organizations and ask for the number of website compromises last year, chances are someone can get you this information.

Figure: Cyber incidents at a large US-based organization

During my PhD, I was granted access to 60,000 incidents recorded at a large US-based organization. The graph above shows these incidents over six years, with time on the x-axis and impact in hours on the y-axis (log scale). One thing that is immediately evident is that the vast majority of incidents take less than 10 hours to investigate and remediate. Every so often, however, an incident costs thousands of man-hours.
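
The same heavy-tailed shape is easy to summarize once incident impacts are in a table. The sketch below uses a synthetic stand-in sample, since the real incident records are not public:

```python
# Sketch: percentile summary of a heavy-tailed incident-impact
# distribution. The lognormal sample is a synthetic stand-in for real
# per-incident investigation and remediation hours.
import numpy as np

rng = np.random.default_rng(1)
hours = rng.lognormal(mean=0.0, sigma=2.5, size=60_000)  # synthetic data

for q in (50, 90, 99, 99.9):
    print(f"{q}th percentile: {np.percentile(hours, q):,.1f} hours")
print(f"Share of incidents under 10 hours: {(hours < 10).mean():.0%}")
```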

Once an organization is collecting the right data, it’s possible to answer some fascinating questions. For example, looking at Shellshock attacks at this organization (time on the x-axis, incidents per day on the y-axis), we can see that within five hours of the vulnerability’s announcement, this organization was already being attacked. This is a great demonstration of how quickly malevolent actors can take a new vulnerability and start to leverage it.

Figure: Prevalence of attacks by weekday and time

We can also see that Thursdays and Fridays are the most common days for attacks (which tells us that perhaps hackers like to take long weekends and sleep in on Mondays and Tuesdays). We also found that attacks did not correlate with US work day hours, so we could infer that the attacks were likely coming from Eastern Europe. We also found that the incidents continued to occur for several months, which illustrates the importance of getting to 100% patch level, because if you leave a couple of systems out there, it’s really only a matter of time before the criminals find them.
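
Analyses like these are straightforward to reproduce on your own incident log. Here is a sketch with pandas; the file name and timestamp column are hypothetical placeholders:

```python
# Sketch: tabulating incidents by weekday and by hour of day from a
# generic incident log. "incidents.csv" and its "timestamp" column are
# hypothetical placeholders for your own records.
import pandas as pd

incidents = pd.read_csv("incidents.csv", parse_dates=["timestamp"])

# Attacks by day of week (Monday=0 ... Sunday=6).
print(incidents["timestamp"].dt.dayofweek.value_counts().sort_index())

# Attacks by hour of day; a peak outside local business hours can hint
# at the attackers' time zone, as described above.
print(incidents["timestamp"].dt.hour.value_counts().sort_index())
```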

It’s also possible to extend this approach to specific incidents, like website attacks, phishing emails, or denial of service incidents. Often, there are surprises. Certain incident types are not becoming more frequent, but are stable or actually decreasing!

This leads us to conclude that things are changing in cyber, although not nearly as quickly as everybody else thinks: there are increases in new types of attacks, like ransomware, and changes in the rate of attacks, but the landscape is far more predictable than you’ve been led to believe. You just need to graph it in a way that makes sense, which is great news: it means you can model cyber attacks pretty easily.

When we look at cyber in a rigorous way, lots of interesting insights come out. Attack trends change, but much more slowly than we typically think. This conclusion has been reached by a number of independent researchers using a variety of public and private data sources. Often, the volatility in attack trends is due to using the incorrect analysis tools, not an underlying trend.

At Expanse, we’re constantly monitoring the global Internet for devices. Our outside-in approach discovers and monitors high-confidence devices based on multiple attribution indicators. We can watch the Fortune 500 or any other group to see whether network perimeters are becoming more or less secure. Given the increased attention and budget devoted to cyber security, you would expect the network perimeter to be getting more secure, but that’s not the case: looking across a wide sample of companies, we see perimeter security actually becoming worse at many organizations.

Organizations need to take a data-driven approach to risk analysis. A CISO I worked with once said that supply chain attacks (like the alleged SuperMicro incident) were their number one budget priority. We consulted the data: over six years, not one supply chain attack had occurred at their organization. During the same period, however, 375 website compromises led to website defacements, SQL injections, and a nation-state infiltrating their org. Confronted with these data, it was clear that focusing on supply chain security was akin to reinforcing the walls of a bank vault while leaving the front door unlocked. The perimeter is a similar story: organizations are still getting compromised via insecure devices on their network edge. There is still much work to be done.

In the next post, I’ll discuss the tools we need to quantify cyber risk once we have the data.


Dr. Marshall Kuypers, Director of Cyber Risk, is passionate about quantitative risk and cyber systems. He wrote his PhD thesis at Stanford on using data driven methods to improve risk analysis at large organizations. He was a fellow at the Center for International Security and Cooperation, and he has modeled cyber risk for the NASA Jet Propulsion Lab and assessed supply chain risk and cyber systems with Sandia National Labs.

Quantitative Methods for Assessing Cyber Risk

Accurately model risk to up-level cyber discussions and evolve security postures

Most businesses are very comfortable assessing risk, whether it be from a project failing, market uncertainty, workplace injury, or any other number of causes. But when it comes to cyber security, rigor disappears, hand-waving commences, and analysts pick a color (red, yellow, or green).

Quantification is one of the most valuable endeavors in business, but most organizations are still guessing about cyber security. Return on investment isn’t known and analysts rely on media hype instead of hard data. But it doesn’t have to be this way. The data and tools exist for organizations to make better investment decisions, and with a little practice anyone can become an expert.

In part one of this three-part series, we’ll explain why we need data. Lots of evidence shows that without data, we make really bad decisions. In part two, we focus on the data itself, and where to get it. In part three, we focus on the tools we need to calculate cyber risk. Putting these together, combined with some practice, will have you making data-driven decisions in no time.


Part 1: Cyber Confusion

Making good decisions in cyber security is hard. Misinformation abounds, the public doesn’t understand root causes of data breaches, and a million security vendors want to sell you something. Without the aid of hard data, decision makers can quickly be led astray. In this post, we’ll cover a basic overview of how organizations make investment decisions, and talk about the pitfalls of poor risk management programs and misinformation.

Quantitative Cyber Risk Assessment Is an Immature Field

The bar for assessing cyber risk is pretty low. Many organizations make decisions by sitting around a table, or using a lot of hand-waving (often by a salesperson). Security vendors like to tell customers that all they have to do is buy their product and it will fix all of their problems.

When formal cyber risk programs at organizations do exist, they are often qualitative, meaning that risks are labeled as likely or unlikely and their potential impact as minor, moderate, or major. And while this methodology gets people to sit down and discuss cyber risk, it’s limiting because of the subjectivity of language. Something as simple as the word “unlikely” might mean different things to different people.

People may interpret things in radically different ways with even small changes in phrasing. Here’s an example from “Thinking, Fast and Slow” by Daniel Kahneman: a group of surgeons is asked whether they would recommend a surgery that has a one-month survival rate of 90%, and 84% of the doctors recommend the operation. Another group of surgeons is then asked whether they would recommend a surgery that has a one-month mortality rate of 10%. This is the same question, just phrased differently, but this time only 50% of the doctors recommend surgery! It’s a bit terrifying to think that if you’re going in for a life-saving surgery or procedure, you can influence the surgeon’s recommendation by how you phrase the question.

Now imagine that you want to ask your CISO a question like “how secure are we these days?” How do you know that the CISO understands what “secure” means in the same way you do?

Confusion Abounds

Confusion also permeates discussions about cyber security. For example, several media outlets ran stories stating that cybercrime costs the global economy $1 trillion per year. This number is nonsense: it was based on a survey of companies that asked about data breach costs, which was then taken out of context and extrapolated, reasoning that if 100 companies lost $500 million, then all 20,000 companies in the US must have lost $100 billion. This huge extrapolation doesn’t fit other data or rigorous studies, but the $1 trillion headline was quoted repeatedly at very high levels in government and in the media. No one ever asked to see the data behind the claim.

Vendors have generated some of the most misleading cyber reports ever published. One example comes from a vendor report that refers to “the precision interval for the mean value of annualized total cost.” Doesn’t that sound sophisticated? The report makes it look like a detailed analysis of how much cyber is costing, but there is no such term as a precision interval; it’s a completely made-up phrase. Confidence intervals and credible intervals exist, but precision intervals do not.

As we can see, the bar is really quite low when it comes to quantitative cyber risk assessment: if you simply use the right mathematical tools, you’re ahead of the game. But you’ve got to be careful about how you ingest information. Another great paper, “Sex, Lies, and Cybercrime Surveys,” covers the issues with relying on vendor reports or cybercrime surveys as evidence for different problems in the cyber landscape.

Countless other examples of bad studies exist, but the result is the same: they create inaccurate perceptions about what is important, and lead to poor decision making. IT professionals should be skeptical of headlines, and demand transparency into where data came from.

At Expanse, we’ve integrated data-driven decision making into the entire customer lifecycle. Customers can make purchase decisions based on facts – not fear, uncertainty, and doubt. We track key metrics about an organization’s perimeter over time, and arm the IT organization with data that guide their security program.

In the next two posts, we’ll dive into examples of data (where to get it), and the tools needed to make better investment decisions (how to use it).

Read part two here.


Dr. Marshall Kuypers, Director of Cyber Risk, is passionate about quantitative risk and cyber systems. He wrote his PhD thesis at Stanford on using data driven methods to improve risk analysis at large organizations. He was a fellow at the Center for International Security and Cooperation, and he has modeled cyber risk for the NASA Jet Propulsion Lab and assessed supply chain risk and cyber systems with Sandia National Labs.