Network Defense
by Rik Farrow

An indispensable directory service may be giving away your secrets

A quick quiz: what is the Internet service, designed in the late eighties, that makes the Internet usable by mere mortals? Don't answer too quickly, because the answer is not the Web (which was not developed until years later). If you answered the Domain Name System (DNS), you are correct. And yet DNS has been, and will continue to be for some years, a security problem.

The job of DNS is to convert names, preferred by humans, into IP addresses, the numbers used by computers and network equipment. Until DNS was developed, this task was accomplished with an ungainly file-based lookup. Today, people just expect DNS to work. And, most of the time, DNS works quite well. But it can be leveraged by interested parties (say, an attacker) to learn more about your network than you might believe possible. A flaw in one popular version of the DNS server software has even been used to break into UNIX systems. DNS has also been abused in other creative ways, leading to newer, more secure versions--though not yet truly secure ones, as we shall see.

Without DNS

Up until the late eighties, host names and their corresponding IP addresses were distributed as a single file, the hosts file. With the slow wide area links of that time, and an ever-increasing number of connected systems, the hosts file was quickly becoming unwieldy. It could take over an hour to download a hosts file that would be obsolete before the download finished. Without an up-to-date hosts file, you could still reach any connected site on the Internet--but you would have to use the IP address, one of those difficult-to-remember 32-bit numbers.

This situation led to the creation of DNS, with support first from the US Department of Defense (ARPA) and the University of California, Berkeley, then from Digital Equipment, and most recently from the Internet Software Consortium. DNS works well because it is designed so that each organization manages its own information. The top two levels of the DNS name space are controlled by Network Solutions, and include the familiar suffixes, such as .com and .edu, and second-level domains such as ibm.com, osu.edu, or whitehouse.gov. Each organization then creates databases of names, IP addresses, preferred mail servers, nicknames, and other information. Subdivisions (subdomains) can be created within large organizations, for example research.att.com, and managed by smaller groups. The hierarchical nature of DNS makes it scale very well.

So what's the problem, you might ask? If DNS works so well, shouldn't we leave it alone and fix other things that are seriously broken? The answer is that while DNS is not broken, it does have serious problems, like other Internet applications designed in the era before security became an issue. And poorly configured DNS servers and firewalls, things that you may be responsible for, are part of the problem.

Revealing

One of the most easily accessed sources of information about any Internet-connected network is DNS. On the one hand, this is as it should be--the whole purpose of DNS is to disseminate information. On the other hand, DNS often gives out information useful to would-be attackers.

You can easily access DNS servers by using nslookup, a program found on both UNIX and Win 9x/NT systems. There are other UNIX programs for accessing DNS, such as dig and host, but we will talk about nslookup, as all three have similar capabilities. When you enter nslookup at a command prompt, it enters interactive mode.
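Here is a minimal sketch of what such an interactive session looks like; the server and host names and addresses are hypothetical, but the prompts and output are typical of nslookup:

    % nslookup
    Default Server:  ns.example.com
    Address:  192.0.2.53

    > www.example.com
    Server:  ns.example.com
    Address:  192.0.2.53

    Name:    www.example.com
    Address:  192.0.2.80
    > exit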
Any domain name that you enter at the '>' prompt will be taken as a request for the A (address) record for that name, and nslookup will send a UDP packet to the default name server asking for that information. nslookup resolves the name, if possible, and displays the results. Looking up individual names is mostly harmless.

Mscan, a hacker program designed for massive scanning, uses gethostbyaddr(), a resolver function, in its rdns() routine to test whether an address has a corresponding host name. By trying each IP address in a range, it 'walks' DNS databases checking for valid host names, and thus for useful addresses to probe. There is really nothing you can do about this, although split DNS helps.

DNS lookups typically use UDP packets sent to name servers listening on port 53. However, there is a second mechanism, designed for use by secondary (backup) DNS servers, called a zone transfer. During a zone transfer, a TCP connection is made to the primary DNS server, and an entire database is downloaded en masse. You can do this with the command line tools as well. At your nslookup prompt, enter 'ls -d some.domain > filename' to capture all the zone information from some.domain in the file filename.

Using a zone transfer is much quicker than walking DNS. You get everything in the zone requested, from the start of authority (SOA) record to aliases for host names. Clever system administrators configure their name servers so zone transfers are permitted only to secondary servers, so that outsiders' attempts to capture zone information fail. You can also block access to TCP port 53 at your firewall for all hosts other than secondary servers (traditionally you will have a secondary outside of your domain). There is a problem with this, as resolver software will fall back to TCP to collect the results of any query whose response is larger than 512 bytes. While this is rare, it is not impossible. Still, I often recommend blocking TCP to port 53.

The Zone

An attacker gets a list of all the information in your zone (one domain or subdomain) with a zone transfer. With this information, the attacker can look for interesting hosts. I suggest you try this yourself, although it may be better to show the results to an administrator not familiar with your network and ask him or her to identify your important servers just from their names. While some sites use inscrutable names, like dt0023, I have seen names like winnt-server, hpux, and sun-enterprise.

Another source of information is the host information (HINFO) record that may be included in the zone. Host information can be any two strings, but it is normally the CPU type and the operating system, nice things to know whether you are a sysadmin or an attacker. I suggest that you avoid providing HINFO records on public DNS servers.

With the advent of firewalls came the notion of split DNS. In split DNS, either a single special name server is used, or two name servers. Either way, two different views of the DNS database are provided, one for public consumption and a second for internal users. On the public side, only servers whose interface addresses are exposed by the firewall, such as Web, email, and FTP servers, and the firewall's external interface, appear in the DNS database. The internal database contains the same information as before, and includes the hosts in the public view as well.
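To make the difference between the two views concrete, here is a minimal sketch of zone file fragments for a hypothetical domain (all names and addresses are invented). The public file lists only the exposed servers; the internal file carries everything, including the kind of HINFO record you would not want an outsider to see:

    ; public view of example.com (served to the outside world)
    www     IN  A   192.0.2.80
    mail    IN  A   192.0.2.25
    fw      IN  A   192.0.2.1

    ; internal view of example.com (served only to inside hosts)
    www           IN  A      192.0.2.80
    mail          IN  A      192.0.2.25
    fw            IN  A      192.0.2.1
    payroll       IN  A      10.1.1.10
    winnt-server  IN  A      10.1.1.30
    winnt-server  IN  HINFO  "Pentium" "Windows NT 4.0"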
Some firewalls include split DNS, and most will support it by running the public DNS server on the firewall host itself, or on an exposed server, and using this server as a relay host for lookups from the internal servers (use the forwarders directive in named.boot). The newest version of the UNIX named software, BIND 8.1.2, features access control lists for zones, making it possible to use one server and present a split view.

You can use split DNS if you have a firewall that hides internal network addresses. Application gateway firewalls, proxy servers, and firewalls that support NAT all replace internal network addresses with external (public) addresses. In these cases, it makes sense to use split DNS, as most internal hosts should not be accessible from the outside, and so no DNS information is required for internal addresses. Networks protected by packet filtering without NAT should still present DNS information for every system that can access the Internet. But these networks can still use split DNS, with dummy names for all public addresses except those of public servers (Web, email, and so on).

Firewall vendors do not always handle DNS correctly. Several firewall vendors have told me that their number one support problem with newly installed firewalls is DNS. One popular vendor neatly (but insecurely) sidestepped the issue by permitting all traffic to and from port 53, both UDP and TCP, to pass. This configuration opens up the firewall to scanning for targets via port 53, and it permits any traffic destined for a service started on port 53 to pass. Although port 53 is traditionally reserved for DNS, any system that is not already running a DNS server can run other services on port 53. Several hacker tools suggest using port 53 as a technique for bypassing firewalls.

Version 3.0 of Check Point Software Technologies' FireWall-1 permits all packets to or from port 53 to pass (as part of the Properties menu, Access Lists tab). Disable this, and set up specific rules to control DNS access. DNS clients should talk to internal DNS servers, and only DNS servers should be allowed to send and receive UDP packets from port 53 on the outside. The DNS servers should be permitted to receive TCP connections to port 53 from secondary DNS servers only. Even if you are not using this firewall, check your configuration.

Corruption

Some older versions of DNS servers would accept any DNS reply they received. All DNS servers cache replies, and if a phony reply was sent, it would be cached as well. When a client later requested a lookup of the phony entry, it would get the cached result. If an attacker tried this against older NT servers, it would crash the DNS service. Newer versions of the BIND software always reject unrequested responses seeking to corrupt the cache.

Another way of corrupting the cache is to create a legitimate request for the phony entry, and then provide a response before the authoritative server does. The attack tool does this by making a legitimate request of the targeted server, getting a response, then sending a recursive request to the targeted server for the entry to be faked. Before the authoritative server can respond, the tool sends a spoofed response, which will be accepted if the request ID (the one the targeted server sent to the authoritative DNS server) matches. The attacker can then query the targeted DNS server to see if the corruption succeeded (one way to perform this check is sketched below).

In an interesting turn of events, some UNIX name servers (BIND 4.9.6) were actually used to gain root access.
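One way to see whether a particular name is already in a server's cache, without causing the server to go out and resolve it, is to ask a non-recursive question with nslookup; the server will then answer only from its cache or its own authoritative data. This is a sketch of the general check, not necessarily the exact method any attack tool uses, and the server and host names here are hypothetical:

    % nslookup
    > server ns.target.example
    > set norecurse
    > www.victim.example

If the server returns an address, the record is already in its cache (legitimate or not); if nslookup reports that it cannot find an answer, the record is not cached.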
A buffer overflow exploit against named servers running on Linux systems from before May 1998 (for example, Red Hat 5.0) gave a remote attacker a root shell. A CERT Advisory (CA-98.05) provides more information about vulnerable versions, techniques for disabling the attack, and instructions for updating to a newer version of named. Briefly, the exploit required that the fake-iquery keyword appear in the named.boot file, or that the feature be enabled at compile time. The mscan tool has a module that can remotely detect this.

The Future

DNS servers can lie, pose as other servers, and be attacked themselves. Cache corruption can trash the contents of your cache. There is a way to avoid accepting phony DNS responses: Secure DNS will include digital signatures to prove the authenticity of responses. But general use of Secure DNS must wait for the creation of a public key infrastructure (PKI) that can provide the public keys of DNS servers on demand.

For now, you can make certain that your DNS servers do not reveal too many of your network's secrets, and that your firewalls are configured to permit only limited access to port 53, both UDP and TCP. And in the future, we will have a DNS service that we can trust.

[end text]

[resources]

The Internet Software Consortium is the home of BIND, the name daemon (named) software: http://www.isc.org/bind.html. This page contains many links to more information about BIND, as well as a version for NT.

The most recent CERT Advisory concerning DNS: ftp://ftp.cert.org/pub/cert_advisories/CA-98.05.bind_problems. There have been other CERT advisories as well: http://www.cert.org/advisories/CA-97.22.bind.html, and CA-96.02 (superseded by CA-97.22).

John Gilmore has a Web page with links to information about Secure DNS: http://www.toad.com/~dnssec/.

[end resources]