Merit Network

Discussion Communities: Merit Network Email List Archives

North American Network Operators Group


Re: links on the blink (fwd)

  • From: Paul A Vixie
  • Date: Fri Nov 10 00:45:13 1995

> By "load sharing" I presume you mean some sort of TDM where you have n 
> real lines and every n'th packet goes out on any particular line. I 
> suppose this would be even simpler to do at the byte level if we assume 
> that all n lines go to the same endpoint.
> 
> Or do you mean something different?
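The scheme the quoted poster describes — n real lines, every n'th packet out on a particular line — is plain round-robin striping. A minimal sketch (the link names are illustrative, not any real configuration):

```python
from itertools import cycle

def round_robin_sender(links):
    """Return a send function that cycles through the links,
    one packet per line in turn (the TDM-style scheme quoted above)."""
    next_link = cycle(links)
    def send(packet):
        link = next(next_link)
        return link, packet   # in practice: transmit packet on this line
    return send

send = round_robin_sender(["line0", "line1", "line2"])
assignments = [send(f"pkt{i}")[0] for i in range(6)]
# successive packets alternate across the three lines
```

Note the catch this thread goes on to discuss: packets of one conversation land on different lines, so something downstream has to put the flow back together in order.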

There's one nice thing about the brick wall we're headed for, and that's
that it'll nail every part of our system simultaneously.  We're running
out of bus bandwidth, link capacity, route memory, and human brain power
for keeping it all straight -- and we'll probably hit all the walls in the
same week.

But no, to answer your question, I mean something different.  Ganging links
and doing reverse aggregation would still require unifying the flows inside
each switching center.  (ATM avoids this by only putting the flows back
together at endpoints.)  One important wall we're about to hit is in the
capacity of the switching centers (routers).  It's not just DS3 that's not
enough; if I ask a Cisco 75xx to route among four OC12*'s it'll up and die.
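The "unifying the flows" problem above is why later practice pinned each flow to one line by hashing its endpoint addresses instead of striping per packet — a sketch of the idea, not any vendor's implementation (names and link labels are assumptions):

```python
import zlib

def pick_link(src, dst, links):
    """Hash the flow's endpoints so every packet between the same pair of
    hosts takes the same line; no switching center downstream has to
    re-order or re-unify the flow."""
    h = zlib.crc32(f"{src}->{dst}".encode())
    return links[h % len(links)]

links = ["ds3-a", "ds3-b", "ds3-c", "ds3-d"]
# Deterministic: the same flow always maps to the same link.
first = pick_link("10.0.0.1", "192.0.2.9", links)
```

The cost is coarser balancing — a heavy flow cannot be spread across lines — which is exactly the trade against the per-packet scheme quoted at the top.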

The boxes who are doing well at higher bit rates are the ones who don't do
anything complicated.  Thus the GIGAswitch and the Cascade, which each do
their (limited) jobs well even though they have many more bits flowing when
they're full and busy than a 75xx can handle without getting dizzy.

So, what I think I mean by static load balancing would look (at the BGP level)
like a bazillion NAPs and a bazillion**2 MED's to keep local traffic local.
It means doing star topologies with DS3's rather than buying a virtual star
via SMDS or FR or ATM from some company who doesn't have a physical star to
implement it with.
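The MED mechanics assumed above: among routes for the same prefix from the same neighboring AS, BGP prefers the lowest MED, so a provider advertising low MEDs at the exchange nearest a prefix pulls traffic off the long-haul paths. A toy illustration (NAP names and metric values are hypothetical):

```python
def best_exit(routes):
    """BGP's MED comparison: among advertisements of one prefix from the
    same neighboring AS, prefer the lowest multi-exit discriminator."""
    return min(routes, key=lambda r: r["med"])

# Hypothetical advertisements for one prefix heard at three exchange points:
routes = [
    {"nap": "MAE-East", "med": 200},
    {"nap": "MAE-West", "med": 50},   # prefix homed near this exchange
    {"nap": "Chicago",  "med": 120},
]
exit_point = best_exit(routes)["nap"]
# traffic leaves at the advertiser's nearest exchange, keeping it local
```

With "a bazillion NAPs", each prefix needs a MED per exchange point — hence the bazillion-squared remark.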

This assumes that we can handle 100 or 500 views of 30,000 routes inside a
commonly occurring switching center.  Right now we can't.  If pressured, I
think Sean would admit that this is at the root of his desire for "CIDR uber
alles" and a six-entry routing table in his core routers.  I don't consider
that goal achievable for IPv4 and the allocation plans I've seen for IPv6
do not give me cause for hope.  So we are going to see Ncubed enter the
routing business with 1GB routers and 256 RP's and an SSE in every IP.  It
will cost $1M to provision a superhub and a lot of the smaller folks will 
just go under or become second tier customers of the few who can afford to
run a defaultless core.
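Back-of-envelope arithmetic for the "100 or 500 views of 30,000 routes" figure above — the per-entry size is an assumption for illustration only:

```python
# Rough memory needed to hold many full views in one switching center.
# ~100 bytes per path entry (prefix, next hop, attributes) is assumed.
routes_per_view = 30_000
bytes_per_entry = 100
totals = {}
for views in (100, 500):
    total = views * routes_per_view * bytes_per_entry
    totals[views] = total
    print(f"{views} views: {total / 2**20:.0f} MB")
```

Under those assumptions, 500 views lands around 1.5 GB — in the neighborhood of the "1GB routers" provisioning this message predicts.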
