History of Firewall Technologies
by Rik Farrow and Richard Power

Not all firewalls are created equal, and this quick review of firewalls explains how they differ.

Not too long ago, the idea of buying a networking device that blocked network traffic seemed ridiculous. What people wanted were networks that ran efficiently and permitted transparent access to as many protocols as possible. After all, just getting computers from different vendors to interoperate was enough of a challenge. A network device that prevented protocols from passing was unthinkable.

What changed things was the Internet. Research organizations had been connecting to the Internet, albeit slowly, throughout the eighties. Robert Morris' Internet Worm (November 1988) served as a wake-up call that the Internet was no longer a safe playground, but a means of connectivity that could bring both good and evil.

Today there are dozens of firewall vendors, all extolling the wonders of their solutions, from stopping viruses to intuiting an attack in progress. As in any product category, some of these claims are inflated, and not all products are equal. More important, different types of firewalls are better suited to particular circumstances, and you need to know which product will best suit the requirements of your networks.

In the Beginning

The earliest firewalls were based on routers. Routers are very flexible directors of network traffic, and adding commands that permitted filtering out packets based on information in the packets' headers seemed a natural direction. Cisco had added filtering capability to its routers very early on; Ben Segal, of CERN, writes of buying Cisco routers for this very purpose in 1987 (http://wwwcn.cern.ch/pdp/ns/ben/TCPHIST.html). Routers function by examining the destination address of every packet received and forwarding each packet to the next hop toward its destination.
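To make this forwarding step concrete, destination-based routing can be sketched as a longest-prefix match against a routing table. This is a simplified illustration only; the prefixes and next-hop names below are invented, and real routers use far more efficient data structures:

```python
import ipaddress

# A toy routing table: (network prefix, next hop). Entries are invented
# for illustration; the last entry is the default route.
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"), "internal-gw"),
    (ipaddress.ip_network("10.1.0.0/16"), "lab-router"),
    (ipaddress.ip_network("0.0.0.0/0"), "isp-uplink"),
]

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route for a destination."""
    addr = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop
```

Note that the lookup considers only the destination address and nothing else about the packet; that narrow focus is exactly what makes pure routing fast, and what packet filtering complicates.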
Routers can do more than merely forward packets, and this is where packet filtering comes in. When packet filtering rules, called access control lists in Cisco routers, are applied to a network interface, the headers are copied into memory accessible to the router's CPU and checked against the access control list rules. If a packet is permitted, it is routed on to the next hop. If a packet is denied by the rules, it is dropped, and an ICMP message may be sent back to the sending system so that more packets will not be sent.

Routers have serious limitations when used for packet filtering. The router's CPU handles the filtering operation instead of dedicated hardware, which means filtering is much slower than simply routing packets. Early routers would lose as much as 75% of their peak performance when an access control list containing a single rule was applied. Cisco's current documentation states that adding an access control list results in a 30% performance hit on packets using that interface.

Routers manifest another problem based on their very design. Routers maintain state information, used to make decisions about how to deal with packets, in the form of routing tables. The router deals with each packet that arrives as if the packet were unique, not part of a stream, or flow, of packets, as is generally the case. Filtering decisions are likewise made as if each arriving packet were unique, and no other packets with similar headers had ever arrived or ever would.

The way routers deal with packets affects their security capabilities in several ways. Router performance is reduced because the same filtering decision must be made over and over again. Logging information is less detailed, because there is no notion of a transaction, for example, a telnet session or a transfer of Web files.
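The filtering operation itself can be sketched as a first-match walk through an ordered rule list, applied independently to every single packet. This is a minimal sketch, not any vendor's actual ACL syntax; the rule fields and example rules are invented:

```python
import ipaddress

# Each rule matches on header fields only; None means "match anything".
# A stateless filter re-evaluates this entire list for every packet.
RULES = [
    # (src network, dst address, protocol, dst port, action)
    ("192.168.1.0/24", None, "tcp", 25, "permit"),   # outbound mail
    (None, "192.168.1.10", "tcp", 80, "permit"),     # inbound web
    (None, None, None, None, "deny"),                # explicit final deny
]

def filter_packet(src, dst, proto, dst_port):
    """Return the action of the first rule matching the packet headers."""
    for r_src, r_dst, r_proto, r_port, action in RULES:
        if r_src and ipaddress.ip_address(src) not in ipaddress.ip_network(r_src):
            continue
        if r_dst and dst != r_dst:
            continue
        if r_proto and proto != r_proto:
            continue
        if r_port and dst_port != r_port:
            continue
        return action
    return "deny"
```

Because no decision is ever remembered, the full rule walk repeats for every packet in a flow, which is the performance and logging weakness described above.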
Instead, a count of packets between a source and destination can be recorded, an operation which adds another performance hit (30%, according to Cisco documentation). Finally, there is no memory of past events which could be used to influence how future events, the arrival of other packets, will be dealt with.

Another Approach

System vendors took a different approach to TCP/IP security. IBM and DEC, for example, designed application gateways to control traffic entering or leaving their networks. You can imagine how natural this was for organizations whose stock in trade was operating systems and supporting software. An application gateway, also called a proxy server, acts as a server to the client, and as a client to the remote server. To do this, the programmers writing the application gateway had to implement the application protocol within their code. For example, an FTP client communicates with an FTP server using simple English commands, such as USER or PASS. The server communicates back to the client by sending three-digit result codes, for example, 200, indicating a successful operation. Programmers wrote application gateways so they would recognize the client's commands and the server's result codes, that is, the application protocol, and behave appropriately.

This approach has both pluses and minuses. On the plus side, application gateways have very fine-grained control over what a client, working on behalf of a user, can do. DEC's early, in-house firewall permitted internal users to fetch files from the Internet unfettered. But when a user attempted to send a file, transmission speed was limited to 9600 bits per second, the same speed as the modems of the day. The configuration file for an application gateway could also be designed to give fine-grained control over which users had access, which applications they could use, and when the applications could be used.
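The heart of such a gateway is protocol-aware mediation: it parses each client command and each server reply before relaying anything, and can apply policy per command. Here is a minimal sketch of the FTP control-channel logic; the command verbs and three-digit codes are real FTP, but the gateway's policy, logging, and function names are invented for illustration:

```python
# Commands this hypothetical gateway allows the client to issue.
# Note the absence of STOR: uploads are simply not relayed.
ALLOWED = {"USER", "PASS", "RETR", "LIST", "QUIT"}

LOG = []  # per-command audit trail, something a packet filter cannot produce

def relay_command(line: str) -> str:
    """Inspect one client command; relay it, or reject it with an
    FTP-style 500-series reply of the gateway's own."""
    parts = line.split()
    verb = parts[0].upper() if parts else ""
    if verb in ALLOWED:
        LOG.append(("relayed", verb))   # fine-grained logging per command
        return line                     # forward to the real server
    LOG.append(("blocked", verb))
    return "502 Command not permitted by gateway.\r\n"

def parse_reply(reply: str) -> int:
    """Extract the three-digit result code from a server reply,
    e.g. '200 Command okay.' yields 200."""
    return int(reply[:3])
```

Because the gateway understands the protocol's verbs and reply codes, both the fine-grained control and the detailed logging described in this section fall out naturally.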
Application gateways are excellent for logging, as they recognize the activities of the user and can log commands, filenames, and the number of bytes transferred in or out. Unlike the router, which recognizes patterns in headers, application gateways understand the application protocols and can easily capture detailed information.

Application gateways are programs running outside of the operating system, which is both positive and negative. The operating system provides security, preventing the application gateway from affecting other programs or accessing other parts of the system. Application gateways open a new network connection to the remote server, so to an external viewer it appears that a single system represents all clients. No internal source addresses are seen, as the firewall system appears as the source for all client requests.

There are two downsides to application gateways. First, because they are programs, each involves the overhead of starting and running an independent program. Network data must be copied from operating system memory to program memory, then back again. Firewall vendors can deal with the first problem by prestarting copies of the application gateway, much the way multi-threaded Web servers function. Copying data between operating system and program memory cannot be dealt with as easily, short of using a high-performance operating system or rewriting those portions of the operating system which deal with TCP/IP. Some vendors have replaced the TCP/IP code used by NT 4.0 with optimized versions to improve the performance of their solutions.

The other problem with application gateways can be considered a plus. Each application gateway firewall product comes with a limited selection of supported applications (see http://www.gocsi.com/firewall.htm for a matrix of firewall products). As long as the product you have chosen supports the applications you require, there is no problem.
But if you need other applications, getting them through the firewall becomes problematic. Many application gateway firewalls provide a configurable "application gateway," sometimes called a "plug gateway." These are not application gateways at all, but circuit relays, programs which provide access control based on header information but no fine-grained application protocol control or logging. The SOCKS program is a circuit relay, and SOCKS or SOCKS-like programs are common in firewall products.

The plus side of a limited list of application gateways is that it is difficult to support dangerous protocols with the firewall. No application gateway vendor wastes time developing gateways for dangerous protocols. You cannot accidentally enable a dangerous protocol, or be forced into enabling one because of user or management pressure. It is simply impossible to enable an application gateway which does not exist.

Some of the firewall products on the market today are descendants of the internal firewalls used by system vendors. The firewall developed by IBM's Watson Research labs was licensed to ANS and Livermore Labs. Raptor's firewall was licensed from Dupont (which had developed it for internal use). DEC sold its early designs as a consulting service. And several employees left DEC for Trusted Information Systems, where they created new application gateway designs based on what they had learned at DEC.

The Middle Road

The firewall vendor with the greatest market share today, Check Point Technologies, based its firewall on a different approach. A pair of Israeli programmers developed a technique for packet filtering which overcomes many of the weaknesses inherent in using routers. They called their technique stateful inspection, and based the first version of it on a feature of Sun UNIX systems which permitted adding new modules to the operating system.
The stateful inspection module gets loaded into the operating system at a point in the IP stack where it can preview packets before they reach the Internet layer, where routing takes place. At this point the product resembles packet filtering, but this is where one big difference comes in. Instead of examining each packet independently of previous traffic, stateful inspection keeps track of past history. The rule base is consulted when the first packet of a new transaction is received, and the decision, to permit or deny, is stored as state information. Subsequent packets can then be permitted or denied based upon the stored state, instead of the rule-based decision-making process.

Check Point added another twist by permitting rules to examine the data within packets, peeking at the application protocol. The best example of this is FTP, which in active mode requires a connection back from the server to the client. Permitting these server-to-client connections opens gaping holes in router-based packet filters. But this technology can generate new state based on the contents of packets, for example, a request from a client to a server to open an FTP data connection. In other words, the state for each transaction can change dynamically, affecting that transaction's or other transactions' state.

Stateful inspection added increased performance, security, and flexibility to packet filtering, and it forms its own category of firewall technologies. Stateful inspection still shares some of the problems of packet filtering, though. For example, logging is based on changes to the state information, and detailed logging, such as filenames or the volume of data exchanged, is not included in stateful inspection logfiles. Stateful inspection does not perform address translation by default, although this capability has been added to both routers and stateful inspection firewalls.
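The difference from the stateless filter can be sketched in a few lines: the rule base is consulted only for the first packet of a connection, later packets and replies match a state table, and inspecting the FTP control channel can create new state dynamically. This is an invented toy model, not Check Point's implementation; real state tables track far more (sequence numbers, timeouts, full port pairs):

```python
# Rule base consulted only for the first packet of a connection.
RULES = [("tcp", 21, "permit")]           # e.g. allow outbound FTP control

state = set()                             # permitted (src, dst, proto, port)

def check(src, dst, proto, dport):
    key = (src, dst, proto, dport)
    reply_key = (dst, src, proto, dport)  # simplified: same port both ways
    if key in state or reply_key in state:
        return "permit"                   # fast path: known connection
    for r_proto, r_port, action in RULES:
        if proto == r_proto and dport == r_port:
            if action == "permit":
                state.add(key)            # remember the decision
            return action
    return "deny"

def saw_ftp_port_command(client, server, data_port):
    """Peeking at the application protocol: when the client's PORT
    command is seen, dynamically permit the server's data connection
    back in, without any matching rule in the rule base."""
    state.add((server, client, "tcp", data_port))
```

The dynamic-state function is exactly why active-mode FTP works through a stateful filter without the gaping "permit all server-to-client connections" rule a router would need.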
Also like packet filters, stateful inspection firewalls can be fooled into permitting protocol tunneling. This penetration technique requires cooperation, either intentional or subverted, from an internal user to exchange data through the firewall by using a permitted protocol to transmit another, not permitted, protocol. While this technique is not impossible with an application gateway, an application gateway will at least log the volume of data exchanged, which could point to abuse of the protocol, for example, using the DNS port to transfer file data.

Stateful inspection firewalls offer great flexibility. Most of these firewalls come with a long list of applications which can easily be supported. Again, this is both positive and negative, as dangerous protocols, those with weak or non-existent authentication, are trivial to enable. This flexibility means the correct configuration of a stateful inspection firewall is critical, and misconfiguration can open a network to attack.

Check Point was also a market leader in configuration when its product first appeared. Firewall products of the day (the early nineties) had abysmal configuration interfaces, involving entering long and arcane command sequences (routers) or editing multiple configuration files using a UNIX-style editor. Check Point's approach was to provide a point-and-click configuration interface, an application program based originally on Sun's Openview, and later ported to Motif and Windows. This application program communicates with the operating system module to modify the rule base and control operation. Other application programs handle logging and user authentication.

Choosing Firewalls

Using routers as firewalls has become rare today. Routers can still function as part of internal security, for example, by permitting limited access between different portions of a wide area network.
But the weak performance and logging of routers make them generally undesirable for Internet or intranet firewalls. The decision between application gateways and stateful inspection firewalls must be based on policy and requirements. Application gateways provide better control and logging, while stateful inspection provides an edge in performance and much greater flexibility, at the risk of misconfiguration. You must perform your own risk analysis and policy review, and decide which approach is best for you.

Resources:

http://www.gocsi.com/firewall.htm -- the CSI firewall matrix, with the latest firewall vendor survey and pointers to each vendor's own Web site.

http://www.greatcircle.com/ -- home of the Firewalls mailing list, and another list of firewall vendors.