North American Network Operators Group|
Re: Additions to the NSFNET policy-based routing database
- From: Daniel Karrenberg
- Date: Tue May 10 15:39:53 1994
> firstname.lastname@example.org (William Manning) writes:
> Scale better from the point of view that the closer you get to the
> actual source, the better. With the delegation of CIDR blocks to
> NSPs comes the responsibility to keep accurate data on assignment and
I agree with the statement per se. Maintenance should always be as
close to the source of the data as possible. However, this does not mean
storage has to be!
> We can all send in updates to a centralized service (Internic/Merit/RIPE/
> APNIC) and then compete for cycles (yesterday the InterNic was not available
> due to load for about 2 hours, I have had problems with PRDB access due
> to merit.edu availability) or we can distribute like DNS does.
> I think the rWhois is a win.
The problem with storing data locally is keeping it consistent. With the
routing registry this is crucial: only all routing policy descriptions
taken together represent the global routing graph, which is needed to
answer general questions of policy and consistency. Many checks can be
done locally, but that may not be enough.
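As an illustration of a check that only works on the combined data, here is a hypothetical sketch that scans a registry for prefixes registered with more than one origin AS. The object format and field names are illustrative, not the actual PRDB or RIPE database schema:

```python
def find_conflicting_origins(route_objects):
    """Group route objects by prefix and flag any prefix that is
    registered with more than one origin AS."""
    origins = {}
    for obj in route_objects:
        origins.setdefault(obj["route"], set()).add(obj["origin"])
    return {prefix: ases for prefix, ases in origins.items()
            if len(ases) > 1}

# Illustrative registry contents (made-up objects):
registry = [
    {"route": "192.87.45.0/24", "origin": "AS1104"},
    {"route": "192.87.45.0/24", "origin": "AS2600"},  # second claim on same prefix
    {"route": "193.0.0.0/21",   "origin": "AS3333"},
]

conflicts = find_conflicting_origins(registry)
# conflicts == {'192.87.45.0/24': {'AS1104', 'AS2600'}}
```

No single local registry fragment could detect this conflict if the two route objects live in different fragments, which is the consistency problem described above.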
Another difficult issue is accessing the information in different ways.
We have found that it is desirable to notify those with particular
routing policies in case something affecting them happens elsewhere.
Think of routes someone bases decisions on being aggregated, or of
someone starting to announce the same route as someone else. This
requires accessing the information based on many - sometimes complex -
criteria.
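The aggregation case just mentioned can be sketched as a prefix-containment query: given a newly registered aggregate, find the registered routes it covers so their maintainers can be notified. This uses the modern Python ipaddress module purely for illustration; the route lists are made up:

```python
import ipaddress

def affected_by_aggregate(aggregate, registered_routes):
    """Return registered routes that fall inside the new aggregate."""
    agg = ipaddress.ip_network(aggregate)
    return [r for r in registered_routes
            if ipaddress.ip_network(r).subnet_of(agg) and r != aggregate]

routes = ["193.0.0.0/24", "193.0.1.0/24", "194.1.0.0/24"]
affected_by_aggregate("193.0.0.0/22", routes)
# → ['193.0.0.0/24', '193.0.1.0/24']
```

Answering this efficiently for arbitrary aggregates is exactly the kind of access pattern that needs the information organised in more than one way.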
A third issue also affecting the allocation registry is that of keeping
a trusted record of changes in case of conflicts. Audit trails kept by
a reliable and neutral registrar can be very helpful there.
To summarise: we will have to move to distributed models eventually;
overly centralised ones will not scale. But the issue is not as clear
cut as Bill describes it. At RIPE I expect that we will keep a central
registry for some time to come, until we have learned enough about how
we use the information to make a decentralised design.
There are two reasons we can do this. First, we can easily create
distributed copies for faster and redundant access. As a matter of fact
we have a couple of servers now. One is even in the US near MAE-East
(us-whois.ripe.net). Currently this is updated daily. It is easy to
make this almost real time if needed.
Secondly, our update process is fully automated, and response times are
as good as e-mail connectivity to the NCC. Our users are happy enough
with this that plans for a TCP-based server are on the back burner.
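A mail-driven update path of the kind described above boils down to parsing whois-style "attribute: value" objects out of a message body. This is a hypothetical sketch, not the NCC's actual software; the attribute names are illustrative RIPE-181-era style:

```python
def parse_objects(mail_body):
    """Split a message body into objects (separated by blank lines)
    and parse each into a list of (attribute, value) pairs."""
    objects = []
    for block in mail_body.strip().split("\n\n"):
        attrs = []
        for line in block.splitlines():
            if ":" in line:
                key, _, value = line.partition(":")
                attrs.append((key.strip(), value.strip()))
        if attrs:
            objects.append(attrs)
    return objects

# Example message body (made-up objects):
body = """\
route:  193.0.0.0/21
origin: AS3333

route:  192.87.45.0/24
origin: AS1104
"""
parse_objects(body)
# → [[('route', '193.0.0.0/21'), ('origin', 'AS3333')],
#    [('route', '192.87.45.0/24'), ('origin', 'AS1104')]]
```

In a real pipeline the parsed objects would then be syntax-checked, authorised against the maintainer, and committed to the registry, with the result mailed back to the submitter.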
Merit is running the same stuff for their new generic routing registry.
- - - - - - - - - - - - - - - - -