North American Network Operators Group|
Re: Port blocking last resort in fight against virus
- From: Steven M. Bellovin
- Date: Wed Aug 13 15:06:15 2003
In message <Pine.GSO.firstname.lastname@example.org>, "Christopher L. Morrow" writes:
>This is the point, at least, I have been trying to make for 2 years... end
>systems, or as close to that as possible, need to police themselves, the
>granularity and filtering capabilities (content filtering even) are
>available at that level alone.
It's just not possible.
Believe it or not, I don't much like firewalls. But see slide 5 of a
talk I gave in May, 1994 (http://www.research.att.com/~smb/talks/firewalls.ps
or http://www.research.att.com/~smb/talks/firewalls.pdf) for why we
need them. We'll *always* have buggy code.
Yes, Microsoft has shipped buggy code. So has everyone else. For fun,
I tallied up the responsible vendor for every CERT advisory
issued last year and this. There have been 57 of them. 17 were
miscellaneous -- other vendors, incidents (like the current worm), etc.
9 were about Sun. 13 were about Microsoft. And 18 were about open
source software -- a couple of Apache bugs, some ssh issues, etc.
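As a sanity check, the per-vendor buckets in that tally do add up to the stated total. A quick sketch, with the numbers taken from the text above:

```python
# Advisory tally from the text above: 57 CERT advisories,
# split into miscellaneous, Sun, Microsoft, and open source.
tally = {"miscellaneous": 17, "Sun": 9, "Microsoft": 13, "open source": 18}

total = sum(tally.values())
print(total)  # 57, matching the stated advisory count
```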
Looking beyond CERT advisories, I recently tallied up how many times
someone could have penetrated my machines, and I run them about as
tightly as possible. Given problems in ssh, various PDF readers, gv,
etc., I counted about a dozen times I could have been had.
Now, that tally is the tip of the iceberg. Microsoft itself had 72
bulletins last year and 31 this year -- there's no doubt that they've
shipped awful code (and worse yet, they have an awful architecture from
a security perspective). But they're not the only villain, and the
open source software implicated runs on most of the open source Unix
systems (Linux in its many incarnations, *BSD, etc) and even on some of
the proprietary ones.
**The purpose of a firewall is to keep the hackers away from the buggy code.**
Should ISPs filter? Sometimes there's no choice.
Note that there are two distinct forms of "network security" problems.
The classic type -- the kind firewalls were designed to deal with --
aren't really network security problems. They're host security
problems where the network is the conduit. I suspect that ISPs should
not filter most such traffic, except with the consent of (and often
payment from) the site. (I'll also note that there are designs for
doing such filtering on the end systems, though it's easier to block
traffic there than to selectively permit it.)
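The point that blocking is easier than selectively permitting can be seen in a toy packet filter. This is only a sketch; the policy names and port lists are illustrative assumptions, not anyone's actual product:

```python
# Toy port filter contrasting the two policies discussed above.
# A default-allow block list only needs the known-bad entries;
# default-deny (selective permit) forces you to enumerate every
# legitimate service, which is what makes it harder on end systems.

BLOCKED_PORTS = {135, 137, 139, 445}      # illustrative block list
PERMITTED_PORTS = {22, 25, 53, 80, 443}   # illustrative permit list

def default_allow(dst_port: int) -> bool:
    """Pass everything except an explicit block list."""
    return dst_port not in BLOCKED_PORTS

def default_deny(dst_port: int) -> bool:
    """Pass only traffic that was explicitly permitted."""
    return dst_port in PERMITTED_PORTS

# Port 8080 sails through the block list but is dropped by the
# permit list, because nobody thought to enumerate it:
print(default_allow(8080))  # True
print(default_deny(8080))   # False
```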
The other kind of problem is one that affects the infrastructure. DDoS
is the classic example, but worms are another instance where some
filtering is often necessary -- not to protect the end users, but to
protect the network for everyone else. On this list, I don't have to
repeat the full litany of problems that worms can cause -- I'm sure many
of you remember the clogged ARP caches from Code Red-triggered scans of
cable modem routers, or the near-collapse of many networks from Slammer.
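For concreteness, the infrastructure-protecting worm filters mentioned here were often just a couple of router or firewall rules. A hedged sketch in iptables syntax, using the well-known Slammer and Blaster vectors; the exact rules and placement varied per network and are an assumption, not taken from this message:

```shell
# Illustrative emergency filters of the sort operators deployed
# during the 2003 worm outbreaks (config fragment, requires root).

# Slammer propagated over UDP port 1434 (MS-SQL monitor):
iptables -A FORWARD -p udp --dport 1434 -j DROP

# Blaster-style worms scanned TCP port 135 (MS RPC):
iptables -A FORWARD -p tcp --dport 135 -j DROP
```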
We're living in an exceedingly imperfect software world. Firewalls and
the like are just one form of defense -- but I'll take any defense I can get.
--Steve Bellovin, http://www.research.att.com/~smb