North American Network Operators Group
Re: BCP38 making it work, solving problems
- From: Fred Baker
- Date: Tue Oct 12 01:35:44 2004
At 08:39 AM 10/12/04 +0530, Suresh Ramasubramanian wrote:
> Yes I know that multihoming customers must make sure packets going out to
> the internet over a link match the route advertised out that link .. but
> stupid multihoming implementations do tend to ensure that lots of people
> will yell loudly, and yell loudly enough for several tickets to be
> escalated well beyond tier 1 NOC support desks, for ISPs to kind of think
> twice before they put uRPF filters in ..
You might want to take a glance at RFC 3704, which looks at a number of the
issues that have been raised in this thread, including the routing of
traffic to appropriate enterprise egress points.
In my heart of hearts, I would like enterprises to (as a default) match
layer 2 and layer 3 addresses on the originating LAN, and
quarantine-as-busted any machine that sends an address other than assigned
on an interface. It seems that the few cases where a device legitimately
sends multiple addresses are exception cases that can be handled
separately. Handling it that close to the source solves the problem for
everyone else in the network.
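As a sketch of the matching idea above: a first-hop switch could keep a table of the L3 address assigned to each L2 address on the LAN, forward only frames whose source IP matches the binding, and quarantine everything else. The binding table and the function name here are illustrative assumptions, not a real switch feature.

```python
import ipaddress

# Assumed L2 -> L3 bindings, as assigned on the originating LAN
# (example MAC and documentation addresses).
ASSIGNED = {
    "aa:bb:cc:00:00:01": ipaddress.ip_address("192.0.2.10"),
    "aa:bb:cc:00:00:02": ipaddress.ip_address("192.0.2.11"),
}

def check_frame(src_mac, src_ip):
    """Forward only if the L3 source matches the L2 binding;
    otherwise quarantine the sender as busted."""
    expected = ASSIGNED.get(src_mac)
    if expected is None or ipaddress.ip_address(src_ip) != expected:
        return "quarantine"
    return "forward"
```

A host that legitimately sources multiple addresses would need an explicit exception entry, which is the "handled separately" case in the text.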
Practically, that is difficult. If you think getting all of the service
providers (who wind up having to fix ddos attacks, and pay for bandwidth
and services related to ddos attacks) to manage networks well is difficult,
consider the prospect of getting all the edge networks to do so...
A simple solution is, as someone suggested, to impose an idiot tax and bill
the customers for doing stupid things. Egress traffic filtering in the
enterprise is relatively simple for the average enterprise - it has at most
a few prefixes and can write a simple ACL on its upstream router. It can
use the ACL either to discard offending packets or to route them to the
right egress. It is also relatively simple for the average enterprises'
ISP: it knows what prefix(es) it agreed to accept traffic from and can
write an ACL.
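The enterprise egress ACL described above amounts to a membership check against the site's few prefixes. A minimal sketch, using documentation prefixes as stand-ins for the site's real allocations:

```python
import ipaddress

# The enterprise's assigned prefixes (example values); these become
# a short ACL on the upstream router.
SITE_PREFIXES = [
    ipaddress.ip_network("198.51.100.0/24"),
    ipaddress.ip_network("203.0.113.0/26"),
]

def egress_action(src_ip):
    """Permit packets sourced from the site's own prefixes;
    deny (or re-route to the correct egress) everything else."""
    addr = ipaddress.ip_address(src_ip)
    if any(addr in prefix for prefix in SITE_PREFIXES):
        return "permit"
    return "deny"
```

The ISP-side check is the mirror image: the same prefix list, applied inbound on the customer-facing interface.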
It gets a little dicier when the customer is a lower tier ISP. In that
case, there are potentially many prefixes, and they change more frequently.
That is the argument for something like uRPF. No, it is not a "sure fix",
but it handles that case more readily, both in the sense of being a fast
lookup and in the sense of maintaining the table. The problem is, of
course, in the asymmetry of routing - it has to be used with the brain
engaged.
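The uRPF lookup described above can be sketched as a reverse-path check against the FIB: in strict mode, accept a packet only if the best route back to its source points out the interface it arrived on. The toy longest-prefix-match table here uses assumed routes; the routing-asymmetry problem shows up as a legitimate packet arriving on an interface other than the one the FIB would pick.

```python
import ipaddress

# Toy FIB: prefix -> outgoing interface (assumed example routes).
FIB = {
    ipaddress.ip_network("203.0.113.0/24"): "eth0",
    ipaddress.ip_network("198.51.100.0/24"): "eth1",
    ipaddress.ip_network("0.0.0.0/0"): "uplink",
}

def best_route(src_ip):
    """Longest-prefix-match lookup on the source address."""
    addr = ipaddress.ip_address(src_ip)
    matches = [net for net in FIB if addr in net]
    return FIB[max(matches, key=lambda net: net.prefixlen)]

def strict_urpf(src_ip, in_iface):
    """Strict mode: the reverse path must use the arrival interface."""
    return best_route(src_ip) == in_iface
```

Because the check is just a FIB lookup, it stays fast and the "ACL" maintains itself as routes change - which is the argument made above for preferring it over hand-written filters on lower-tier ISP links.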
From an ISP perspective, I would think that it would be of value to offer
*not* being ingress filtered (whether by ACL or by uRPF) as a service that a
customer pays for. Steve Bellovin wrote an April Fool's note suggesting an
"Evil Bit" (ftp://ftp.rfc-editor.org/in-notes/rfc3514.txt); I actually
think that's not such a dumb idea if implemented as a "Not Evil" flag,
using a DSCP or extending the RFC 3168 codes to include such, as Steve
Crocker has been suggesting. Basically, a customer gets ingress filtered
(by whatever means) and certain DSCP settings are treated as "someone not
proven to have their act together". Should a ddos happen, such traffic is
dumped first. But if the customer pays extra, their traffic is marked "not
evil", protected by the above, and ingress filtering may be on or off
according to the relevant agreement. The agreement would need to include a
provision to the effect that once a ddos is traced in part to the customer,
their traffic is marked as "evil" for a period of time afterwards. What the
customer is paying for, if you will, is the ability to do their thing
during a ddos in a remote part of the network, such as delivering a service
to a remote peer.
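The drop policy sketched above can be stated in a few lines: under ddos pressure, traffic not carrying the "not evil" marking is dumped first. The DSCP values below are illustrative assumptions, not assigned codepoints, and the function name is hypothetical.

```python
# Assumed codepoints for the sketch: one marking for customers who
# pay and/or are ingress filtered ("not evil"), default for everyone
# not proven to have their act together.
DSCP_NOT_EVIL = 0x2E
DSCP_UNVERIFIED = 0x00

def drop_first(dscp, under_attack):
    """During a ddos, shed unverified traffic before 'not evil'
    traffic; in normal operation, drop nothing on this basis."""
    if not under_attack:
        return False
    return dscp != DSCP_NOT_EVIL
```

The penalty clause in the agreement would amount to remarking a customer's traffic from DSCP_NOT_EVIL back to DSCP_UNVERIFIED for some period after a traced ddos.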
Address spoofing is just one part of the ddos problem; to nail ddos, we
also need to police a variety of application patterns. One reason I like
the above is that it gives us a handle on what traffic might possibly be
"not evil" - someone has done something that demonstrates that it is from a
better managed source.