Firewall Systems Considered Harmful

I think that firewall systems that attempt to protect networks (corporate, educational, or otherwise) from various external threats are, at best, a stop-gap solution, and at worst, a distraction that prevents focus and effort from being applied to the real problem: the implementation and deployment of real host security.

In other words, they are probably a bad idea, in the long run, because they can lull you into a false sense of security about your network: "If I have a firewall system to keep the bad people out of my network, isn't that enough?"

Nope.

The first false assumption implicit in a firewall system is that there is only one way into a network, so that you can focus all your security attention on that one point. The reality is that there is an entry into the network for every host on the network. Have you verified that the telephone on every employee's desk has not been connected to a modem, which is, in turn, connected to the computer on his or her desk? Do you have adequate access controls on public terminals/computers/network-drops? How's your physical security - can someone simply walk in the door and walk out with a disk from your server?

The second false assumption implicit in a firewall system is that insiders are trustworthy ("we don't need to secure our internal hosts; we have a firewall!" - Famous Last Words). According to surveys, the majority of computer crime is committed by insiders. This can be read as implying either that external computer security at most corporations is so good that, most of the time, the outside bad guys can't get in, or that internal security is so poor that it's easier for insiders to commit computer crime.

There is an old adage in the Internet community:

"Security is a Host problem, not a Network problem."
What this is intended to convey is an overall philosophy of how host computers should interact over the network, with specific reference to what security "features" they can assume are there to protect them in the network itself: none. You've got to protect each host, individually; that's the only way to real peace of mind w.r.t. computer security.

There are those who disagree with me, of course.


Real Computer Security

Real host security in a network context has these three elements:

  1. network protocol specification with appropriate security elements.
  2. correct implementation of the protocol specification.
  3. correct configuration & operation of the resultant software.

First, the protocol specification (or standard) should have whatever security features are required for the level of security demanded by the application that the protocol is intended to support. So, for example, if you're moving sensitive data, your protocol should include some kind of encryption to prevent effective interception of the data. If data integrity is particularly important, then the protocol should include a CRC on the data. This is part & parcel of good protocol design.
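
To make the integrity half of that concrete, here is a minimal sketch of the kind of per-message check a protocol specification might call for. This is my illustration, not anything from the original essay; the function name and sample payload are invented. Note also that a plain CRC like this catches accidental corruption in transit; defeating deliberate tampering requires a keyed cryptographic checksum, though the principle of verifying every message is the same.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise CRC-32 (IEEE polynomial, reflected form 0xEDB88320).  A
     * sender appends this value to each message; the receiver recomputes
     * it and discards any message whose CRC does not match. */
    static uint32_t crc32_of(const unsigned char *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;
        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++)
                crc = (crc & 1u) ? (crc >> 1) ^ 0xEDB88320u : crc >> 1;
        }
        return ~crc;
    }

    int main(void)
    {
        const char *msg = "example payload for a hypothetical protocol";
        printf("CRC-32: 0x%08lX\n",
               (unsigned long)crc32_of((const unsigned char *)msg, strlen(msg)));
        return 0;
    }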

Second, the implementation of the protocol must be correct (i.e., no "bugs" - no possibility that an attacker could send a packet to the host that would cause the host to perform some operation that the host's owner does not wish it to perform). This is principally a software engineering and quality assurance problem. An example of an egregious failure of software engineering that resulted in a serious security problem was the "fingerd" bug that Robert Tappan Morris, Jr. exploited in the Internet Worm he released in November 1988.
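
The fingerd hole is worth spelling out, because it is so mundane: the daemon read its one-line request with the C library routine gets(), which has no way of knowing how big the destination buffer is, so an over-long request simply kept writing past the end of the array and over the stack. The sketch below shows the broken pattern and the bounded fix; it is my illustration, not the original fingerd source.

    #include <stdio.h>
    #include <string.h>

    /* gets() was eventually removed from the C standard (C11) precisely
     * because of holes like this, so the broken pattern appears only as
     * a comment:
     *
     *     char line[512];
     *     gets(line);     no bounds check: an over-long request
     *                     overwrites the stack
     *
     * The correct, bounded version of the same read: */
    static void read_request(void)
    {
        char line[512];

        if (fgets(line, sizeof line, stdin) != NULL) {
            line[strcspn(line, "\r\n")] = '\0';   /* strip the line ending */
            printf("request: %s\n", line);
        }
    }

    int main(void)
    {
        read_request();
        return 0;
    }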

Third, the host and its software must be operated in a configuration that is consistent with their design constraints. That is, it does no good to have a password requirement, only to have users choose poor passwords.
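
One way to attack exactly that problem is proactive password checking: refuse the weak choices before they ever reach the password file. The sketch below is illustrative only (the rules and the deliberately tiny word list are mine, not drawn from any particular tool), but it shows how little code it takes to have the software enforce a configuration constraint rather than merely stating it in a policy document.

    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    /* Toy proactive password check: reject candidates that are too
     * short, that match a small list of common words, or that use too
     * few character classes.  Real checkers use far larger dictionaries
     * and better rules. */
    static const char *common_words[] = { "password", "letmein", "qwerty" };

    static int password_is_acceptable(const char *pw)
    {
        int has_lower = 0, has_upper = 0, has_digit = 0, has_other = 0;

        if (strlen(pw) < 8)
            return 0;                            /* too short */

        for (size_t i = 0; i < sizeof common_words / sizeof common_words[0]; i++)
            if (strcmp(pw, common_words[i]) == 0)
                return 0;                        /* common word */

        for (size_t i = 0; pw[i] != '\0'; i++) {
            if (islower((unsigned char)pw[i]))      has_lower = 1;
            else if (isupper((unsigned char)pw[i])) has_upper = 1;
            else if (isdigit((unsigned char)pw[i])) has_digit = 1;
            else                                    has_other = 1;
        }
        return has_lower + has_upper + has_digit + has_other >= 3;
    }

    int main(void)
    {
        printf("%d\n", password_is_acceptable("qwerty"));        /* 0: weak */
        printf("%d\n", password_is_acceptable("rope.Bicycle7"));  /* 1: acceptable */
        return 0;
    }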

The point is that if all these things are done, and done in a manner that results in confidence in the computer system, there is no need for a "firewall" to protect that computer from any external communication. Indeed, part of making a lumpen computer system into a firewall system involves precisely these steps of protocol, software, and configuration verification. The pressing question is, why don't computer system and software vendors do this for all systems and all software that they sell?

I believe that a big part of the problem is that software vendors accept essentially no legal liability for the software that they sell. If they did have legal liability, there would certainly be more money spent on quality assurance (squashing the bugs) and more time spent on good design, up front, because the risks of not doing so would be much higher.


Erik E. Fair <fair@clock.org>
October 18, 1996