North American Network Operators Group|
Re: What could have been done differently?
- From: Scott Francis
- Date: Tue Jan 28 17:57:58 2003
On Tue, Jan 28, 2003 at 03:10:18AM -0500, email@example.com said:
> Many different companies were hit hard by the Slammer worm, some with
> better than average reputations for security awareness. They bought
> the finest firewalls, they had two-factor biometric locks on their data
> centers, they installed anti-virus software, they paid for SAS70
> audits by the premier auditors, they hired the best managed security
> consulting firms. Yet, they still were hit.
> It's not as simple as "don't use Microsoft", because worms have hit other
> popular platforms too.
True. But few platforms have as dismal a record in this regard as MS. Whether
that's due to number of bugs or market penetration is a matter for debate.
Personally, I think it's clear that the focus, from MS and many other
vendors, is on time-to-market and feature creep. Security is an afterthought,
at best (regardless of "Trustworthy Computing", which is looking to be just
another marketing initiative). The first step towards good security is
choosing vendors/software with a reputation for caring about security. I
realize that for many of us, this is not an option at this stage of the game.
And in some arenas, there just aren't any good choices - the best you can do
is to choose the lesser of multiple evils. Which leads me to the next point:
> Are there practical answers that actually work in the real world with
> real users and real business needs?
I think a good place to start is to have at least one person, if not more,
whose job description includes checking errata/patch lists daily for the
software in use on the network. This can be semi-automated by just
subscribing to the right mailing lists. Now, deciding whether or not a patch
is worth applying is another story, but there's no excuse for being ignorant
of published security updates for software on one's network. Yes, it's a
hassle wading through the voluminous cross-site scripting posts on BUGTRAQ,
but it's worth it when you do occasionally get that vital bit of information.
Sometimes vendors aren't as quick to release bug information, much less
patches, as forums like BUGTRAQ/VulnWatch/etc. are.
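The "semi-automated" part can be as simple as matching incoming advisory
subject lines against an inventory of what you actually run, so the daily
wade through BUGTRAQ noise only surfaces relevant posts. A minimal sketch
(the inventory entries and subject lines here are made-up examples, not
anything from this thread):

```python
# Hypothetical sketch: flag advisory subjects that mention software in our
# inventory. INVENTORY and the sample subjects are illustrative assumptions.

INVENTORY = ["openssh", "bind", "apache", "sql server"]

def relevant(subject: str, inventory=INVENTORY) -> bool:
    """Return True if an advisory subject mentions inventoried software."""
    s = subject.lower()
    return any(pkg in s for pkg in inventory)

# Pretend these came in from BUGTRAQ/VulnWatch subscriptions:
advisories = [
    "BUGTRAQ: XSS in someforum 1.2",
    "CERT Advisory: buffer overflow in OpenSSH",
    "VulnWatch: SQL Server Resolution Service overflow",
]

for subj in advisories:
    if relevant(subj):
        print(subj)  # only the subjects matching our inventory
```

In practice you'd feed this from a procmail rule or a mailbox parser, but a
plain keyword match already cuts most of the cross-site-scripting chaff.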
Stay on top of security releases, and patch anything that is a security
issue. I realize this is problematic for larger networks, in which case I
would add: start with the most critical machines and work your way down. If
this requires downtime, well, better to spend a few hours of rotating
downtime to patch holes in your machines than to end up compromised, or
contributing to the kind of chaos we saw this last weekend.
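"Most critical first" is just a sort over your host inventory. A toy sketch
of the idea (the host names and the 1-is-most-critical scale are my own
assumptions for illustration):

```python
# Hypothetical sketch: order hosts so the most critical get patched first.
# Host names and the criticality scale (1 = most critical) are assumptions.

hosts = [
    {"name": "devbox", "criticality": 3},
    {"name": "dns1",   "criticality": 1},
    {"name": "mail1",  "criticality": 2},
]

# Work the list from most critical down, as the post suggests.
patch_order = sorted(hosts, key=lambda h: h["criticality"])

for h in patch_order:
    print(h["name"])
```

Anything beyond a handful of machines probably wants this data in whatever
inventory system you already keep, but the ordering logic is the same.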
Simple answer, practical for some folks, maybe less so for others. I know
I've been guilty of not following my own advice in this area before, but that
doesn't make it any less pertinent.
-= Scott Francis || darkuncle (at) darkuncle (dot) net =-
GPG key CB33CCA7 has been revoked; I am now 5537F527
illum oportet crescere me autem minui