NETWORK DEFENSE

by Rik Farrow

USING FIREWALLS TO DETECT AND BLOCK WEB SERVER ATTACKS

A simple trick in configuring firewalls makes the net more secure for everyone.

When people permit me to attach my notebook to their networks, I am always polite. I don't scan, although I may run a sniffer to make a point during a demonstration. And the one thing that impresses me the most when visiting a network is a firewall configuration that prevents me from using SSH to visit my own network.

Most people view firewalls as devices charged with keeping attackers outside. Network and security administrators configure firewalls to block scanning, attacks, and other hostile activity from external networks, while permitting only restricted access to designated internal servers. But with just a little more work, those same firewalls can prevent additional attacks from succeeding--simply by blocking arbitrary outgoing connections. For some servers, a firewall that blocks all outgoing connections will slow the spread of worms, and may even prevent an attack from succeeding.

Let's take a look at why blocking outgoing connections from servers can help your organization.

THE CASTLE GATES

Firewalls function as castle gates, controlling what network traffic gets allowed through. For most organizations, firewall configuration means permitting through only traffic destined for a select list of IP addresses and ports. But firewalls can do more, by also controlling what traffic is permitted to leave your network.

Imagine, for a moment, a real castle, before the days of gunpowder. Castles provided a tremendous defensive advantage, and even fabled Troy did not fall to a direct assault, but to the soldiers hidden within the Trojan horse. These soldiers climbed out of the horse at night, killed the guards at the gate, and flung open those same gates. Many other successful attacks against castles proceeded the same way, by getting a group of attackers inside the castle walls who would later open the gates.

But, with most firewall configurations, the attackers' work has already been done. The gates have been left open for traffic from the inside and that is all that is necessary for many attacks to be completely successful. Let's take a look at several examples of why permitting all outbound traffic makes matters worse.

The Honeynet Project (see Resources) has collected a lot of information about how opportunistic attackers work. In most of their examples, the 'victims' are UNIX systems with default configurations (no security patches installed) set up on a network segment that appears open to attackers. But that network segment is not as open as it appears. While attackers can scan, access arbitrary ports, and launch successful attacks, there is a firewall and ID system in place. Once the ID system has detected five outgoing connections from the victim (where there would normally be none at all), the firewall gets reconfigured to block outgoing connections--the gate gets slammed shut.
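
To make the gate-slamming concrete, here is a minimal sketch, in Python, of the kind of watcher this setup implies: it tails a firewall log, counts outgoing connections from the honeypot, and blocks all further traffic once the threshold is reached. This is not the Honeynet Project's actual tooling; the log prefix, log path, and honeypot address are assumptions, and the blocking rule assumes a Linux netfilter firewall.

    #!/usr/bin/env python
    # Sketch only: count outgoing connections logged for the honeypot
    # and slam the gate after five. The "OUTBOUND: " prefix must match
    # a LOG rule on your own firewall; the addresses are hypothetical.
    import re, subprocess, time

    HONEYPOT = "192.168.100.10"     # hypothetical honeypot address
    THRESHOLD = 5                   # the Honeynet trigger: five connections
    LOGFILE = "/var/log/kern.log"   # wherever netfilter LOG lines end up

    pattern = re.compile(r"OUTBOUND: .*SRC=" + re.escape(HONEYPOT))

    seen = 0
    with open(LOGFILE) as log:
        log.seek(0, 2)              # start at the end, like tail -f
        while seen < THRESHOLD:
            line = log.readline()
            if not line:
                time.sleep(1)       # wait for the log to grow
                continue
            if pattern.search(line):
                seen += 1

    # threshold reached: block all further traffic from the victim
    subprocess.run(["iptables", "-I", "FORWARD", "1",
                    "-s", HONEYPOT, "-j", "DROP"], check=True)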

The participants in the Honeynet Project discovered that most attacks on UNIX systems proceed in several phases. The first phase consists of a scan, sometimes just for an open port, but sometimes to check for a vulnerability. The actual attack comes next, usually opening up the victim to a remote login or network connection. Then, the attacker connects to the victim, cleans up after the attack, and changes the system's configuration to permit further access. The attacker then logs back in and begins to download patches, rootkits, and attack tools. The victim gets set up to begin scanning and attacking other sites, and as soon as the scanning begins, the Honeynet firewall shuts off access. Or, if the attacker begins using the victim as a relay, the firewall will also shut off outgoing access.

The Honeynet Project's goal is to collect information about how attacks typically proceed; an unpatched system usually lasts only about two days before being attacked. The ID sensor (Snort, www.snort.org) also collects all traffic, permitting a complete analysis of what the attacker has done (see the May 2001 Network Defense column about analyzing an attack using just the evidence left behind on a hard disk). Across the many attacks analyzed, one common pattern of activity stands out: a successful attack is followed by either outgoing connections or scanning of external systems.

WORM WORLD

In May 2002, we had a short visit from yet another worm. The Spida worm scans for MS SQL Servers listening on the default port (1433/tcp) and attempts to access them using the default account, sa. If the sa account has no password (the default), the Spida worm installs several files, then begins scanning for more vulnerable systems.

The last year has been replete with worms. There were no fewer than three variants of Code Red, all prospecting for vulnerable Windows 2000 servers running IIS 5. Nimda attacked using both email and vulnerabilities in IIS. The Li0n and Sadmind worms attacked UNIX systems (although the Sadmind worm would also attack IIS). Worms have been the most prolific attackers in terms of the number of attacks launched.

These worms all share a single behavior--scanning. Once a worm has successfully attacked a server and installed itself, it begins scanning for other victims. In the case of Web servers, the port used by public Web servers (80/tcp) must be open so that people can visit the Web server--which means that people (or worms) can also attack it.

What Web servers normally do not do is scan. In fact, most Web servers have no need to make any outgoing connections at all. Web servers function by receiving connections on port 80, and possibly port 443 (HTTP over SSL/TLS, Transport Layer Security). All the work performed by the Web server is in response to these connections--the Web server never makes an outgoing connection to external networks. One exception would be a Web server configured to perform DNS lookups so that the IP addresses of visitors can be converted into names and stored in logfiles. Most Web administrators avoid such lookups because of the performance hit; instead, DNS lookups get processed later, on separate systems dedicated to processing logfiles.

In practice, you protect your Web servers by blocking __outgoing__ access from the Web server. Because any outgoing connection made by a Web server means that something out-of-the-ordinary has happened, you not only block outgoing connections, but set up your firewall so that it alerts you (or the appropriate persons) that the Web server is misbehaving. I don't normally recommend that people set up automatic systems that will page them, but a compromised Web server is as good a justification for a page as I can think of.

Of course, if you are not set up to receive a page, you can still use email notification, or some other extraordinary means of getting notified. Your Web server should NEVER make an outgoing connection while it is working as a Web server. So when this does occur, you don't want the log message buried deep in your firewall logs (which in most cases are rarely examined, if ever). I am not advocating ignoring firewall logfiles, just pointing out that in many organizations, they are largely ignored. In a well-administered site, the security group routinely monitors firewall logs, typically through the use of scripts that summarize the logs and filter out routine events--so that unusual events stand out. But, you don't want to wait for a routine check to uncover that your public Web server has been owned. You want to know long before the New York Times does.
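
As a sketch of what such a notification script might look like, the following Python fragment scans a netfilter-style log for blocked outbound attempts from the Web server and mails a summary to the security group. The log path, the "WWW-OUT: " prefix (set by a LOG rule like the ones shown below), and the addresses are all assumptions you would replace with your own.

    #!/usr/bin/env python
    # Minimal alerting sketch: mail any "WWW-OUT: " log lines to the
    # security group. Prefix, paths, and addresses are assumptions.
    import smtplib
    from email.message import EmailMessage

    LOGFILE = "/var/log/kern.log"   # hypothetical log location
    PREFIX = "WWW-OUT: "            # must match your firewall LOG prefix

    with open(LOGFILE) as log:
        hits = [line.strip() for line in log if PREFIX in line]

    if hits:
        msg = EmailMessage()
        msg["Subject"] = "ALERT: Web server made %d outbound attempts" % len(hits)
        msg["From"] = "firewall@example.com"    # hypothetical addresses
        msg["To"] = "security@example.com"
        msg.set_content("\n".join(hits[:50]))   # include the first 50 lines
        with smtplib.SMTP("localhost") as s:    # assumes a local mailer
            s.send_message(msg)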

Let's think about how this works. The public Web server sits on a DMZ network, protected by a firewall from the Internet. The firewall also controls access from the Web server to any internal networks, as Web servers are commonly used in attacks that attempt to slip past the firewall. The firewall has been configured to block all outgoing traffic other than packets belonging to HTTP connections to the Web server. The firewall has also been configured to send a page or other form of extraordinary alert whenever the Web server attempts to scan or connect either to the Internet or to disallowed internal addresses. In essence, you have just turned your Web server into a honeypot, in that any unusual activity will trigger an alarm. If you ever get that page or email, something bad will already have happened, but at least you will be warned.
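
Here is one way the policy just described might look as Linux netfilter rules, applied from Python. This is a sketch under assumptions: the Web server address is made up, a commercial firewall would express the same logic in its own rule language, and the "WWW-OUT: " log prefix is simply the hook the alerting script above keys on.

    #!/usr/bin/env python
    # Sketch of the DMZ policy: let HTTP/HTTPS replies out, log and
    # drop everything else the Web server originates.
    import subprocess

    WWW = "192.0.2.80"              # hypothetical DMZ Web server address

    RULES = [
        # replies to established HTTP and HTTPS connections may leave
        ["-A", "FORWARD", "-s", WWW, "-p", "tcp", "--sport", "80",
         "-m", "state", "--state", "ESTABLISHED", "-j", "ACCEPT"],
        ["-A", "FORWARD", "-s", WWW, "-p", "tcp", "--sport", "443",
         "-m", "state", "--state", "ESTABLISHED", "-j", "ACCEPT"],
        # anything else gets logged (triggering the alert) and dropped
        ["-A", "FORWARD", "-s", WWW, "-j", "LOG",
         "--log-prefix", "WWW-OUT: "],
        ["-A", "FORWARD", "-s", WWW, "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables"] + rule, check=True)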

Blocking outgoing connections not only warns you, but may also prevent the attack from succeeding. Many, but not all, attacks rely on making outgoing connections to fetch the attack tool. Nimda did this, downloading README.eml or README.exe, and so did the Li0n and Sadmind worms. But the various Code Red variants did not require a separate download, as the attack code gets included in the attack itself. In the case of Code Red, once the worm starts scanning, your alarms go off; even though the infection wasn't prevented, the secondary effects (thousands of sites on the Internet noticing that your site has scanned and/or attacked their Web servers) have been. And you have improved the odds that you will have repaired your Web server before anyone notices that it has been hacked.

OTHER SERVERS

Blocking outgoing connections works great for Web servers. You can use variants of this approach to protect other public services, but only to a limited extent. FTP servers cannot be defended this way at all, as FTP servers will make arbitrary outgoing TCP connections every time a client arranges to list a directory, download, or upload a file. Email (SMTP) and DNS servers have a somewhat more constrained role, as each will only contact other servers of the same type, that is, on port 25/tcp (SMTP) and port 53 (both TCP and UDP for DNS). Thus, seeing your Exchange server make an FTP connection would be noteworthy, but most of the time it will be connecting to other SMTP servers. And if an Exchange worm shows up, the blocking approach will not help you, as an email worm will attack other email servers, something that appears similar to normal SMTP server behavior. It would take an ID system to detect unusual behavior by public SMTP or DNS servers (for example, the type of scanning used by Code Red, where a single byte gets sent to port 80).
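
For an SMTP server, the equivalent sketch constrains outbound connections to port 25/tcp and flags everything else. Again the address and log prefix are my own assumptions; as noted above, this catches oddities like an FTP connection, but not an email worm talking ordinary SMTP to other mail servers.

    #!/usr/bin/env python
    # Sketch: a mail server may initiate connections only to 25/tcp;
    # anything else is logged and dropped. Address is hypothetical.
    # (A real mail server would also need DNS lookups, typically
    # allowed to a nearby resolver on port 53.)
    import subprocess

    MAIL = "192.0.2.25"             # hypothetical DMZ mail server address

    RULES = [
        # replies to inbound SMTP connections
        ["-A", "FORWARD", "-s", MAIL, "-p", "tcp", "--sport", "25",
         "-m", "state", "--state", "ESTABLISHED", "-j", "ACCEPT"],
        # the server may itself contact other mail servers on 25/tcp
        ["-A", "FORWARD", "-s", MAIL, "-p", "tcp", "--dport", "25",
         "-j", "ACCEPT"],
        # everything else is noteworthy: log it, then drop it
        ["-A", "FORWARD", "-s", MAIL, "-j", "LOG",
         "--log-prefix", "SMTP-OUT: "],
        ["-A", "FORWARD", "-s", MAIL, "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables"] + rule, check=True)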

You can even use the blocking approach for internal Web servers. Linux systems now come with firewall software, and Solaris 9 also includes a built-in firewall. The firewall software included with newer versions of Windows XP will not work, as it is not flexible enough. You can configure server-based firewalls to prevent outgoing connections from the Web server itself, and use this to protect against internal attacks, like those launched so successfully by Nimda.
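
On a Linux Web server itself, the host-based variant uses the OUTPUT chain instead of FORWARD, so the machine cannot initiate connections anywhere even when traffic never crosses the network firewall. A minimal sketch, assuming iptables and the same log-and-drop pattern as above:

    #!/usr/bin/env python
    # Host-based sketch for a Linux Web server: the OUTPUT chain
    # permits only replies on 80/443 and loopback traffic.
    import subprocess

    RULES = [
        ["-A", "OUTPUT", "-p", "tcp", "--sport", "80",
         "-m", "state", "--state", "ESTABLISHED", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-p", "tcp", "--sport", "443",
         "-m", "state", "--state", "ESTABLISHED", "-j", "ACCEPT"],
        # keep loopback open so local tools still work
        ["-A", "OUTPUT", "-o", "lo", "-j", "ACCEPT"],
        # log and drop anything the server tries to originate
        ["-A", "OUTPUT", "-j", "LOG", "--log-prefix", "HOST-OUT: "],
        ["-A", "OUTPUT", "-j", "DROP"],
    ]

    for rule in RULES:
        subprocess.run(["iptables"] + rule, check=True)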

I strongly suggest that you take advantage of your firewall. The time and effort required is small, and the advantages should be obvious. You will at least be notified in the case of a disaster (your Web site gets exploited), and you might even prevent the attack from completing. You can go further and configure your firewall so that it permits only expected outgoing traffic. That will certainly involve a lot more work, but your site will be much more secure if you do it.

RESOURCES:

The Honeynet Project, with dissections of example attacks: http://project.honeynet.org

Short description of the Spida worm: http://vil.nai.com/vil/content/v_99499.htm

Paper about the potential for future worms that make Code Red look slow: http://www.icir.org/vern/papers/cdc-usenix-sec02/index.html

Solaris 9 includes the SunScreen 3.2 integrated stateful packet-filtering firewall: http://wwws.sun.com/software/solaris/solaris9_features_security.html

Information about configuring the Linux firewall: http://netfilter.samba.org