The Flaw That Led to a Lot of Heartbleed

“I was working on improving OpenSSL and submitted numerous bug fixes and added new features. In one of the new features, unfortunately, I missed validating a variable containing a length.”
– Robin Seggelmann, OpenSSL contributor

In November 1988, a Cornell University student named Robert Tappan Morris launched the first modern worm onto the then-nascent Internet. Morris’ stated purpose was to try to map the Internet. While the experiment went catastrophically awry, it taught us a couple of lessons: First, ensure that production software does not have the developers’ debugging code enabled. Second, make sure that the code a programmer has written operates the way that they intended.

The servers that Morris’ code attacked violated the first rule, with debugging features left enabled in production, and bugs in his own code violated the second, allowing the worm to run rampant. Whether deliberate or not, the so-called “Morris Worm” crippled the young Internet for several days. Today, Robert Morris is following in his father’s footsteps in cybersecurity, teaching at MIT.

Let’s fast-forward a little more than 25 years. A bug in one of the most commonly used services on the now-mature Internet is generating a lot of press. For historical reasons, the ability to encrypt information traversing the early Internet (and before that, the ARPANET) was extremely limited. As a result, much of the communication between individuals, or between clients and servers, depended on protocols that sent their data “in the clear.”

Introduced in February 1995, Secure Sockets Layer (SSL) provides a way of encrypting data in flight so that nobody can eavesdrop on the conversation. The Internet equivalent of a Peeping Tom would see only the encrypted information. While the Edward Snowden leaks suggest that the National Security Agency can “crack” SSL, probably nothing short of the NSA’s technical know-how and infrastructure could actually break the encryption.

Whitfield Diffie and Martin Hellman, two computer scientists working on the problem of key exchange, developed a mechanism in which each participant needed to know only part of the solution to determine the secret key for a set of encrypted communications. Once the key was “exchanged” between the parties, they could communicate cheaply, securely and efficiently without worrying that someone could eavesdrop on the communications channel.
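To make that concrete, here is a toy Diffie-Hellman exchange in C. The numbers are deliberately tiny (real deployments use primes hundreds of digits long), and the specific values of p, g and the secret exponents are arbitrary choices for illustration:

```c
/* Toy Diffie-Hellman key exchange with deliberately tiny numbers.
 * Real implementations use primes hundreds of digits long. */
#include <stdio.h>

/* Modular exponentiation: (base^exp) mod m, by repeated squaring. */
static unsigned long modpow(unsigned long base, unsigned long exp, unsigned long m) {
    unsigned long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    const unsigned long p = 23, g = 5;  /* public: prime modulus and generator */
    const unsigned long a = 6, b = 15;  /* private: each side's secret exponent */

    unsigned long A = modpow(g, a, p);  /* Alice sends A = g^a mod p */
    unsigned long B = modpow(g, b, p);  /* Bob sends   B = g^b mod p */

    /* Each side combines its own secret with the other's public value;
     * both arrive at the same shared key, g^(ab) mod p, while an
     * eavesdropper sees only p, g, A and B. */
    printf("Alice computes %lu\n", modpow(B, a, p));
    printf("Bob computes   %lu\n", modpow(A, b, p));
    return 0;
}
```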

Ron Rivest, Adi Shamir and Leonard Adleman, the founders of RSA, which is now part of EMC Corporation, extended this to practical and commercial use. While it appears that GCHQ, the British counterpart to the U.S. NSA, had developed similar techniques earlier, its work remained classified for many years.

Under so-called “public-key cryptography,” one member of a conversation generates a pair of keys. Unlike symmetric cryptography, where the same key is used to both encrypt and decrypt, an asymmetric (public-key) system uses two mathematically related keys. Even if one has the eponymous public key, the secrecy of the private key protects the information. While there are various algorithms for generating public/private key pairs, the idea is simple: deducing the private key from the public key requires so much computation that it is infeasible in practice.
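As a concrete (and deliberately insecure) illustration of a key pair, here is a toy RSA round-trip in C using the textbook primes 61 and 53; the exponents come straight from the usual worked example, and nothing here is strong enough for real use:

```c
/* Toy RSA round-trip with the textbook primes p = 61 and q = 53.
 * Illustration only: real keys use primes of 1024 bits or more. */
#include <stdio.h>

static unsigned long modpow(unsigned long base, unsigned long exp, unsigned long m) {
    unsigned long result = 1;
    base %= m;
    while (exp > 0) {
        if (exp & 1)
            result = (result * base) % m;
        base = (base * base) % m;
        exp >>= 1;
    }
    return result;
}

int main(void) {
    const unsigned long n = 61UL * 53UL; /* 3233: the public modulus        */
    const unsigned long e = 17;          /* public exponent                 */
    const unsigned long d = 2753;        /* private exponent, chosen so that
                                            e*d = 1 (mod lcm(60, 52))       */

    unsigned long message = 65;
    unsigned long cipher  = modpow(message, e, n); /* encrypt with public key  */
    unsigned long plain   = modpow(cipher,  d, n); /* decrypt with private key */

    printf("message=%lu cipher=%lu decrypted=%lu\n", message, cipher, plain);
    return 0;
}
```

With keys this small, anyone could factor 3233 and recover d; with real key sizes, that factoring step is what becomes computationally infeasible.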

When a client, such as a Web browser, connects securely to a service on the Internet, such as a Web server, a “handshake” takes place to exchange critical cryptographic information. The Web server sends the browser a digital certificate, which contains the server’s public key. Provided that the certificate passes various integrity checks, the client uses the server’s public key to encrypt a one-time session key and sends it back to the Web server. The Web server then uses its private key to decrypt the session key, and both sides use that key for fast symmetric encryption. As long as the browser and server maintain the secure https:// connection, data is encrypted between the user and the network service.
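For readers who want to see the handshake from code, the sketch below drives it with the OpenSSL library itself. It is a minimal example under several assumptions: it targets the OpenSSL 1.1.0+ API, the host example.com is just a placeholder, and the error handling a production client needs is reduced to bare checks.

```c
/* Minimal TLS client sketch using OpenSSL's connect BIO (1.1.0+ API).
 * Build with: cc tls_client.c -lssl -lcrypto
 * "example.com" is a placeholder host. */
#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>
#include <openssl/x509.h>

int main(void) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    SSL_CTX_set_default_verify_paths(ctx);          /* trust the system CA store */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL); /* reject bad certificates   */

    BIO *bio = BIO_new_ssl_connect(ctx);
    SSL *ssl = NULL;
    BIO_get_ssl(bio, &ssl);
    SSL_set_tlsext_host_name(ssl, "example.com");   /* SNI for virtual hosting */
    SSL_set1_host(ssl, "example.com");              /* verify the host name    */
    BIO_set_conn_hostname(bio, "example.com:443");

    /* This drives the whole handshake: certificate exchange, key exchange,
     * and the switch to symmetric encryption. */
    if (BIO_do_handshake(bio) <= 0) {
        ERR_print_errors_fp(stderr);
        return 1;
    }

    /* The certificate the server presented, carrying its public key. */
    X509 *cert = SSL_get_peer_certificate(ssl);
    if (cert != NULL) {
        char subject[256];
        X509_NAME_oneline(X509_get_subject_name(cert), subject, sizeof subject);
        printf("server certificate subject: %s\n", subject);
        X509_free(cert);
    }

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}
```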

Roughly speaking, key length is tied to the strength of encryption. While it isn’t universally true, one can assume that within a given algorithm, the longer the key, the stronger the encryption. That’s why Web standards are moving to 256-bit symmetric session keys.

Now, my young Padawan, if it were only that easy.

Unfortunately, there are other ways to attack encryption besides attacking the encrypted data. One of the most common is to attack the implementation rather than the mathematics: attackers find bugs in the program code that implements the encryption. This is where Heartbleed comes in.

Open-source software gets its name because the code of a program, subsystem or even operating system is available for anyone to inspect, modify or use. Richard Stallman, the father of the GNU system, said that software should be free.

One of the tenets of the open-source movement is that because anyone can inspect the code of an application, bugs, security holes and other problems can be easily identified. Unfortunately, people need to look in the right place.

In the case of Heartbleed, the bug sat in plain sight for at least two years. OpenSSL, a widely used open-source cryptographic library and toolkit, contained a fundamental programming error. In fact, the error is at the heart of the lab exercise in the Buffer Overflows module of the Certified Ethical Hacker (CEH) class I teach.

In short, the Heartbleed bug allows an attacker to “read” up to 64 kilobytes of a program’s memory per request. If the process, such as a Web server using OpenSSL, holds privileged information in its memory, then the attacker can scavenge critical data, albeit a small amount at a time. At 64 KB per request, it would take 131,072 requests to read a program’s entire 8 GB memory space.

Programs written in the C or C++ programming languages rely on functions from code libraries for primitive operations, such as sending data to the screen or accepting input from the keyboard. Other primitives serve internal purposes, such as moving data around in the application’s memory.

In the Buffer Overflow module’s lab in the CEH v8 class, we write code to exploit the strcpy function. Heartbleed involves a related function called memcpy. Both copy bytes of memory from one location to another, and neither checks that the buffers involved are actually big enough for the requested copy. In the classic strcpy exploit, the hacker’s data overflows the RAM following the destination buffer, overwriting memory for nefarious purposes; the end result could be crashing the program to produce a denial-of-service (DoS) attack, or injecting code so that a hacker could usurp the running program for remote control. Heartbleed turns the mistake around: the vulnerable OpenSSL code trusted an attacker-supplied length and let memcpy read past the end of the source buffer, leaking adjacent memory back to the attacker instead of overwriting it.
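The sketch below illustrates that over-read pattern in plain C. It is a simplified, self-contained demo, not the actual OpenSSL code: the record layout, the heartbeat_reply function and the “secret” string are all invented for illustration.

```c
/* Simplified sketch of the Heartbleed over-read pattern (illustrative only;
 * not the real OpenSSL code). The reply logic trusts the length field in the
 * request instead of the number of bytes actually received. */
#include <stdio.h>
#include <string.h>

struct record {
    unsigned short claimed_len;   /* length field taken from the request */
    unsigned char  payload[8];    /* the bytes actually received */
};

/* Something private that happens to sit next to the record in memory. */
static struct {
    struct record rec;
    char secret[32];
} server = {
    { 4, "PING" },
    "private-key-material-here"
};

/* Echo the payload back the way the vulnerable code did: memcpy as many
 * bytes as the request *claims* to contain, with no sanity check. */
static void heartbeat_reply(const struct record *r) {
    unsigned char reply[64];
    memcpy(reply, r->payload, r->claimed_len);  /* over-read if claimed_len > 8 */
    fwrite(reply, 1, r->claimed_len, stdout);
    putchar('\n');
}

int main(void) {
    heartbeat_reply(&server.rec);   /* honest request: echoes "PING" */

    server.rec.claimed_len = 40;    /* malicious request: claims 40 bytes */
    heartbeat_reply(&server.rec);   /* reply now includes bytes of 'secret' */
    return 0;
}
```

The actual fix in OpenSSL amounted to the missing sanity check: if the claimed payload length exceeds the size of the record actually received, discard the message.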

The question is, then, what information could the attacker collect? Simply put, anything in the application’s memory.

For a Web server, this could be usernames and passwords, or the website’s private key. If an attacker obtains the private key, they can decrypt supposedly secure transactions between Web browsers and that website. What’s at stake? Your username and password to your bank’s website, for example.

The good news is that this is hard to exploit. The bad news is that it has been.

Once attackers have a website’s private key, they can decrypt traffic between Web browsers and that website. Intercepting that traffic in the first place is difficult and would probably require the resources of a nation-state or a major organized-crime group; even with the private key in hand, attackers would still have to capture the encrypted network conversations in order to decrypt them.

The problem with the Heartbleed attack is that it uses common, almost required, communications. To maintain communication between the Web browser and the website, the two periodically exchange heartbeat messages (hence the name of the attack). The heartbeat is needed because either end of the connection may crash, and Web applications need to know when communication has broken. Because heartbeats between Web servers and Web browsers are so routine, most organizations don’t record heartbeat traffic in their security logs. As a result, organizations using OpenSSL don’t know whether their systems have been attacked.
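For reference, RFC 6520 lays a heartbeat message out as a type byte, a two-byte payload length, the payload itself, and padding. The short C sketch below builds the kind of malformed request the attack sends, claiming a huge payload while supplying none (the byte values follow the RFC’s layout; the program itself is only illustrative).

```c
/* The three bytes at the heart of a Heartbleed probe, per RFC 6520's
 * message layout: type, then a two-byte big-endian payload length.
 * A legitimate request would follow these with the payload and padding. */
#include <stdio.h>

int main(void) {
    const unsigned char request[] = {
        0x01,       /* type: heartbeat_request                 */
        0xFF, 0xFF  /* claimed payload_length: 65535 bytes...  */
    };              /* ...followed by no actual payload at all */

    /* A patched server sees that the record is too short for the claimed
     * length and silently discards it; a vulnerable one echoes back
     * 65,535 bytes (nearly 64 KB) of whatever sat in memory after the
     * three-byte request. */
    for (size_t i = 0; i < sizeof request; i++)
        printf("%02x ", request[i]);
    putchar('\n');
    return 0;
}
```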

Recently, when I logged into a major electronics retailer’s website, the site made me change my password. I could investigate whether the site ran a vulnerable version of OpenSSL, but it isn’t worth the effort. The change in security policy is enough to imply that they suffered the problem.

A few weeks ago, I was teaching a CEH class (my 80th since August 2005). One of the lessons is on software flaws and buffer overflows. After giving a rather theoretical explanation of how security exploits work, I brought the Heartbleed code up on the projector. Even though the Heartbleed bug was due to a “rookie mistake,” it provided a great example for a complex section of the course.

Related Course
Certified Ethical Hacker v8
