DEVELOPING NEW EXPLOITS

Attackers can craft new exploits by reverse engineering patches

by Rik Farrow

For many years, a debate has raged between security professionals and application vendors concerning how much detail about security vulnerabilities should be included with patches. Vendors, and organizations such as CERT that announce vulnerabilities, have stuck with providing the minimal amount of information they consider necessary to encourage administrators to install patches or workarounds. But does this policy really serve the people responsible for installing patches?

An announcement with too few details may fail to motivate system administrators to install the patch, but will also not help attackers craft exploits. At least, that's the theory. The reality is very different.

Security researchers, competent penetration testers, and attackers have been examining security patches to uncover the location of vulnerable functions. By comparing the patched version to the unpatched version of code, individuals can develop tests for vulnerabilities or new exploits. Past obsessions with providing only vague details about security patches are no longer warranted.

History of Disclosure

The aftermath of the Internet Worm in November 1988 left both the US government and the fledgling Internet reeling. Determined not to be taken by surprise again, the US government backed the founding of the Computer Emergency Response Team (www.cert.org) at Carnegie Mellon University to collect information about security vulnerabilities. CERT's mandate also included distributing information about vulnerabilities, so these problems could be addressed in a timely fashion.

Before CERT, most attempts at publishing information about security bugs were haphazard at best, and more commonly quite secretive. Security vulnerabilities were kept hidden both by the handful of security researchers and by the hacking community. Both groups found vulnerabilities, but neither shared them with anyone else. Administrators charged with patching systems had little or no information to work with.

CERT initially broadcast security advisories so detailed that it was sometimes possible to create an exploit from the information provided. But a backlash soon pushed CERT into an era of vagueness, something the organization has largely recovered from, though the same vagueness is still common in the security bulletins of major software vendors today.

Recently, programmers, especially those working for large, prominent vendors and within the open-source arena, have become better at avoiding the easily exploitable coding mistakes that were so common in the past. For the most part, writing new exploits has become a task that requires coding experts. These same people have no trouble reverse engineering patches and using that information to pinpoint the locations of vulnerable code in unpatched software.

In March 2003, a CERT advisory about sendmail provoked widespread patching, but also a closer examination of the vulnerable code. Finding the vulnerable section of code was very easy, as the patch distribution contained the differences in the sendmail source code between the buggy and the patched versions.
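As a rough illustration of how little work that takes, the following Python sketch lists the files and enclosing functions that a source patch touches. It assumes a unified diff (diff -u) whose hunk headers end with the name of the enclosing C function, as GNU diff produces for C sources; the patch filename on the command line is whatever copy of the patch you have saved locally.

#!/usr/bin/env python
# Sketch: list the files and enclosing functions a source patch touches.
# Assumes a unified diff whose hunk headers carry the enclosing C
# function name after the second "@@", as GNU "diff -u" emits.

import re
import sys

HUNK = re.compile(r'^@@ [^@]+ @@\s*(.*)$')   # text after the second @@ names the function

def changed_functions(patch_path):
    current_file = None
    with open(patch_path) as patch:
        for line in patch:
            if line.startswith('+++ '):
                current_file = line.split()[1]        # file the following hunks modify
            else:
                match = HUNK.match(line)
                if match and current_file:
                    yield current_file, match.group(1) or '(unknown)'

if __name__ == '__main__':
    for filename, function in changed_functions(sys.argv[1]):
        print('%s: %s' % (filename, function))

Run against a saved copy of a security patch, this prints one line per changed hunk, handing a reverse engineer a short list of functions to inspect.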

Actually crafting an exploit from the bug, a bit of code where the counting of angle brackets (<>) failed, took considerable skill ("Getting the Most out of Firewalls", www.spirit.com/Network/net0403.html). August 2003 brought MSBlaster, along with variants such as Welchia that are still circulating. Microsoft does not release source code differences with its patches. When the Microsoft Security Bulletin for MS RPC/DCOM (see Resources) first came out in July, anyone curious about the details of the vulnerability had two choices: probing the MS RPC service, or reverse engineering the code that was patched. Through fuzzing (sending many varied versions of RPC requests), some security researchers discovered that the RPC service was not only vulnerable, but that it remained vulnerable after the first round of patching.
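The Python sketch below shows the general shape of such a brute-force fuzzing run: connect repeatedly to TCP port 135 and send randomly mutated data, treating a sudden lack of response as a hint that the service may have crashed. The target address is a placeholder lab host and the payload is random filler rather than a valid DCE-RPC request, so this is an outline of the technique, not a working RPC fuzzer.

#!/usr/bin/env python
# Sketch: brute-force fuzzing of a TCP service (here, the MS RPC port).
# The payload is random filler, not a valid DCE-RPC request.

import random
import socket

TARGET = ('192.0.2.10', 135)     # placeholder lab host, never a production box

def fuzz_once(size):
    payload = bytes(random.randrange(256) for _ in range(size))
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.settimeout(5)
    try:
        sock.connect(TARGET)
        sock.sendall(payload)
        sock.recv(1024)          # wait for any reply, or a clean close
        return True              # connection behaved normally; service still up
    except (socket.timeout, socket.error):
        return False             # timeout or reset: the service may have died
    finally:
        sock.close()

if __name__ == '__main__':
    for trial in range(1000):
        size = random.randrange(16, 4096)
        if not fuzz_once(size):
            print('no response after %d-byte probe #%d' % (size, trial))
            break

A real fuzzer varies the requests far more intelligently, mutating valid RPC packets field by field, but even this crude loop conveys why the technique turned up further problems after the first patch.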

But the vulnerability on which MSBlaster is based required actually locating the section of code, disassembling it, and finding the exploitable portions. The same tools, such as IDA Pro, that make life easier for Windows programmers than for their open-source counterparts also help with the disassembly of Microsoft code.

You might think that simply rearranging the code sections that make up a program like MS-RPC would make the comparison of patched and unpatched versions more difficult. However, most patches result in changes to the sizes of individual functions, so code comparisons are not made by comparing bytes appearing at identical offsets in the two versions, but by comparing the same functions in each version for differences. Simply rearranging code is not going to make the reverse-engineer's task more difficult than it already is.
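The Python sketch below illustrates why the function-by-function approach works. It assumes the functions of each binary have already been extracted, for example with a disassembler such as IDA Pro, into dictionaries that map function names to their raw bytes; the names and byte strings shown are hypothetical stand-ins. Comparing those entries isolates the few routines a patch changed, no matter how the sections of the program were rearranged.

#!/usr/bin/env python
# Sketch: function-by-function comparison of two versions of a binary.
# Assumes each version's functions were already extracted (e.g., from a
# disassembler) into a dict mapping function name -> raw bytes.

def changed_functions(unpatched, patched):
    """Return the names of functions whose bytes differ between versions."""
    changed = []
    for name, old_bytes in unpatched.items():
        new_bytes = patched.get(name)
        if new_bytes is None or new_bytes != old_bytes:
            changed.append(name)
    return sorted(changed)

if __name__ == '__main__':
    # Toy stand-ins for real extracted functions (hypothetical names and bytes).
    unpatched = {'ParseRequest': b'\x55\x8b\xec\x83\xec\x20',
                 'CopyHostname': b'\x55\x8b\xec\x8b\x45\x08'}
    patched   = {'ParseRequest': b'\x55\x8b\xec\x83\xec\x20',
                 'CopyHostname': b'\x55\x8b\xec\x83\x7d\x08\x00'}
    print(changed_functions(unpatched, patched))   # prints ['CopyHostname']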

I don't mean to imply that interpreting disassembled code is easy, or that it's simple to create exploits based on this code. But, based on conversations with security researchers, reverse engineering has become the premier method for uncovering vulnerabilities that have been fixed with patches.

The Bottom Line

The underlying message is quite simple. Even if a vendor declines to announce that a security vulnerability has been patched in a new release of software, it is quite possible--even likely--that reverse engineering practices will uncover patched vulnerabilities.

For managers of networks, the message should be clear: Vendor patches should be applied as soon as feasible. Most system administrators have already discovered that the most common side effect of installing patches is user complaints about broken applications, so in many cases installing a patch appears to be more trouble than not installing it. The only technique that offers a partial solution to this conundrum is to test all patches in an environment similar to the production environment before pushing them out.

For software vendors, the message should be just as clear: Hiding details about vulnerable software may provide some protection from script-kiddies, but they are not the true threat. Encouraging good patch management techniques is a better strategy. And patch management does not mean forcing customers to install patches at the software vendor's whim; patches must be tested before installation.

Software will continue to have vulnerabilities. What can change is how vendors handle the process of releasing patches and advisories. Once patching becomes both simple and reliable, system managers will be more willing to install patches, and break the cycle of worms.

Resources:

For a SecurityFocus article (from the BlackHat 2003 conference) that discusses patching issues, full disclosure and reverse engineering, go to: www.securityfocus.com/news/6568

For an example of an open-source source code patch (for the March 2003 sendmail vulnerability), see:
ftp://ftp.sendmail.org/pub/sendmail/sendmail.8.12.security.cr.patch

To see the first security bulletin for the MS-RPC service, go to: www.microsoft.com/technet/treeview/default.asp?url=/technet/security/bulletin/MS03-026.asp