The Internet Is Small
Technology breakthroughs have made it possible to scan the entire public Internet far faster than ever before, and as a result, the Internet has become small.
Fifteen years ago, it took a team of researchers with state-of-the-art equipment, world-class skills, and months of time to map the Internet. Today, anyone can download open source tools and, within hours, identify 99% of the world’s servers that are vulnerable to a given exploit.
As recently as 2012, you still needed massive compute power to see what the Internet looked like. That year, researchers published a paper describing how they took over a botnet to scan the global Internet (the paper was published under a pseudonym because, however creative the approach, using a botnet for research is illegal). The Carna botnet used embedded devices such as web cameras and printers to map the entire global Internet. This massive undertaking produced a picture of the global IPv4 space, something that was rare at the time.
ZMap Changed Everything
Everything changed in 2013 when a new scanner, ZMap, was released by a research team at the University of Michigan. ZMap fundamentally changed the way we’re able to view the global Internet, increasing the speed at which IPv4 space could be scanned by 1,500 times.
The improvements ZMap made are intuitive. The first is randomizing the order in which IPs are scanned. If you scan the Internet with a tool like Nmap, starting with the first IP address, then the second, then the third, you end up sending a large burst of packets into a contiguous range, which acts like a denial-of-service attack against that range. If you instead randomize the order, you never flood any one owner’s network, so you can send probes much faster.
ZMap applied a clever mathematical algorithm: take all of the IPs you want to scan, arrange them on a circle, and bounce around that circle in pseudorandom jumps. When the walk is done, it is guaranteed to have visited every IP once, and only once. This is also useful because you don’t have to keep a record of which addresses you’ve scanned and which you haven’t; follow the algorithm and you’re guaranteed to hit every IP.
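The circle walk above can be sketched in a few lines. This is a minimal illustration, not ZMap’s actual implementation: the prime and generator below are toy values chosen for a ten-address “Internet,” whereas a real scanner would use parameters sized to the full 32-bit address space.

```python
# Sketch of a full-cycle scan-order permutation. Repeatedly multiplying
# by a generator g of the multiplicative group modulo a prime p visits
# every value in [1, p-1] exactly once before the cycle repeats, so no
# per-address bookkeeping is needed.

def scan_order(start, g, p, count):
    """Yield `count` addresses by walking the cycle x -> (g * x) mod p."""
    x = start
    for _ in range(count):
        yield x
        x = (x * g) % p

# Toy demonstration: p = 11 with generator g = 2 covers the "address
# space" 1..10 in a shuffled order, each address exactly once.
order = list(scan_order(1, 2, 11, 10))
print(order)  # every value 1..10 appears once, in jumbled order
assert sorted(order) == list(range(1, 11))
```

Because the next address is computed from the current one, the scanner’s entire position in the permutation is a single integer, which is what makes the no-bookkeeping property practical at Internet scale.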
The second improvement is around handshakes, which usually complete in a couple of milliseconds. Every so often, however, network latency introduces a delay, and if the scanner has to wait for each delayed handshake to wrap up – five seconds here, five seconds there – the waits add up and slow the whole scan. ZMap instead fires off its probe packets as quickly as it can, without waiting on any individual handshake, and processes responses whenever they come back. Removing that latency dependency lets it scan much faster.
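Sending probes without waiting for replies raises a question: when a response arrives much later, how does the scanner know it belongs to one of its own probes without keeping per-target state? One way, in the spirit of ZMap’s design, is to derive the probe’s sequence number from a secret key and the target address, then check that the reply acknowledges it. The key, helper names, and field layout below are illustrative assumptions, not ZMap’s exact implementation.

```python
# Sketch of stateless response validation: encode a secret-keyed token
# in the outgoing SYN's sequence number, and accept a SYN-ACK only if
# it acknowledges token + 1. No table of outstanding probes is needed.
import hashlib
import hmac

SECRET = b"fresh random key, generated per scan"  # illustrative

def probe_token(dst_ip: str, dst_port: int) -> int:
    """Derive the 32-bit sequence number for the SYN sent to this target."""
    msg = f"{dst_ip}:{dst_port}".encode()
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")

def is_valid_synack(src_ip: str, src_port: int, ack_number: int) -> bool:
    """A SYN-ACK is ours only if it acknowledges our token + 1."""
    expected = (probe_token(src_ip, src_port) + 1) & 0xFFFFFFFF
    return ack_number == expected

# A genuine reply echoes token + 1 and validates; stray or spoofed
# packets with any other acknowledgment number are simply discarded.
token = probe_token("192.0.2.10", 443)
assert is_valid_synack("192.0.2.10", 443, (token + 1) & 0xFFFFFFFF)
assert not is_valid_synack("192.0.2.10", 443, token)
```

The point of the construction is that the response itself carries everything needed to validate it, which is what lets the scanner forget about each probe the moment it is sent.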
The result is that with ZMap, you can scan all 4.3 billion addresses in IPv4 space on a single port/protocol pair in less than 45 minutes over a 1 Gigabit connection. This has fundamentally changed how attackers and adversaries can view the Internet. Organizations can no longer rely on security by obscurity; instead, within an hour, the bad guys can ask, for example, “show me every exposed database on the global Internet.” When they find one, they don’t care who it belongs to – they can start attacking it, stealing the information inside, or installing ransomware. This has significantly changed how organizations need to behave and how they need to secure their network perimeter.
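A quick back-of-the-envelope calculation shows why a gigabit link is enough. The 84-byte figure below is an assumption for a minimum-size TCP SYN once Ethernet framing, preamble, and inter-frame gap are counted; the exact on-wire size varies by setup.

```python
# Rough feasibility check: how long does one probe per IPv4 address
# take at gigabit line rate?
ADDRESSES = 2**32            # ~4.3 billion IPv4 addresses
LINK_BPS = 1_000_000_000     # 1 Gigabit per second
BYTES_PER_PROBE = 84         # assumed on-wire size of a minimal TCP SYN

probes_per_sec = LINK_BPS / (BYTES_PER_PROBE * 8)  # ~1.49 million/s
minutes = ADDRESSES / probes_per_sec / 60
print(round(minutes))  # on the order of 48 minutes at full line rate
```

That lands in the same ballpark as the quoted sub-45-minute figure; real scans come in slightly under it by sustaining near-line-rate throughput and skipping reserved address ranges.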
The Internet is now small, and attackers have a rapid way of identifying devices. On the defenders’ side, meanwhile, networks have never been more complex than they are right now, for many different reasons. Go back about 15 years, and you might have had a master IP list of all the assets that made up your network boundary. You could plug those into your favorite perimeter vulnerability scanner and scan them once a week; if everything came back patched, you could sleep well at night.
Today, networks have become so complicated that a static list of IPs is no longer enough; organizations need Internet-scale monitoring to verify that they are secure.
There are several reasons why networks have become more complex over time:
- Networks grow as new offices are built and the organization expands
- Shifts in work norms, like employees working from home, create new risk
- Mergers and acquisitions create inherited risk when the acquired company doesn’t know its own network space well
- Cloud deployments cannot be tracked with static IP lists, and employees may spin up shadow IT outside of approved cloud deployments
- Strategic supply chain risk grows as organizations and their networks have become more interdependent with their suppliers
Today, organizations can’t rely on traditional tools to keep track of their network; they need to scan the entire global Internet to make sure that none of these edge cases is out there waiting to result in an attack.
This is why Expanse exists: to give defenders the same view of their networks that attackers already have.
At Expanse, we continually index the IPv4 space, collecting data about every public-facing device connected to the Internet. We then correlate these data with other information sources to attribute devices to organizations. This results in a comprehensive, global view of all of your assets, not just the ones that you know about.
The Internet has gone through a phase shift over the past several years. Many IT professionals are unaware that attackers can find their exposed devices in minutes. Expanse levels the playing field by providing global visibility across the entire Internet. Expanse discovers and tracks your Global Internet Edge, helping your organization know your unknowns.