North American Network Operators Group
Re: Peering with a big web farm (was Re: BBN Peering Issues)
- From: Sean M. Doran
- Date: Thu Aug 13 03:43:32 1998
| If BBN wants to sell connectivity to a big web farm provider, how does
| BBN's forcing all hits through a cache help BBN? The data all still
| crosses BBN's backbone, and the web farm provider won't need as big a
| pipe. Maybe I'm missing something, but if BBN starts charging former
| peers, I'd think caching at these edges would be a bad thing for BBN.
Again, keeping this strictly in the realm of "how to build this",
rather than "is this a good idea, is this moral bla bla bla",
what one wants is a hierarchy of caches, with the one cache you're
thinking of at the apex and the others scattered throughout the network.
Alternatively, one might have a few duplicate caches, with
content distributed by, for example, reliable multicast, or
some other technique with the explicit goal of sending no
more than one copy of the content per link.
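The hierarchy above can be sketched in a few lines. This is a toy model, not a real cache implementation: the class and method names (`CacheNode`, `fetch_from_origin`) and the URLs are invented for illustration. The point it demonstrates is that a miss propagates up toward the apex, and the origin web farm is consulted at most once per object regardless of how many children request it.

```python
# Toy cache hierarchy: children answer locally when they can; misses
# climb toward the apex, which fetches from the origin at most once
# per object. All names and URLs here are hypothetical.

class CacheNode:
    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent        # None for the apex cache
        self.store = {}             # url -> content
        self.origin_fetches = 0     # trips across the peering link

    def get(self, url):
        if url in self.store:
            return self.store[url]
        if self.parent is not None:
            content = self.parent.get(url)         # miss: ask upward
        else:
            content = self.fetch_from_origin(url)  # apex: one copy only
        self.store[url] = content                  # populate on the way down
        return content

    def fetch_from_origin(self, url):
        self.origin_fetches += 1
        return "content-for:" + url

apex = CacheNode("apex")
child_a = CacheNode("pop-a", parent=apex)
child_b = CacheNode("pop-b", parent=apex)

child_a.get("http://farm.example/page")
child_b.get("http://farm.example/page")  # served from apex, not origin
assert apex.origin_fetches == 1
```

The same structure would hold if the copies were pushed down by reliable multicast instead of pulled up on demand; either way, at most one copy of each object crosses any given link.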
The thing that you are forcing traffic through is, indeed,
an intercepting device, which lives directly in front of
the peering line. It could follow one of a number of strategies,
for example accepting the first hit and redirecting all subsequent
transfers to one of its hierarchical children or distributed copies.
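A minimal sketch of that first-hit strategy, with everything (class names, cache hostnames, the round-robin placement policy) made up for illustration: the interceptor proxies the first request for an object across the peering line, records which child cache got the copy, and answers later requests with an HTTP redirect to that child.

```python
# Hypothetical "accept the first hit, redirect the rest" interceptor.
# The first request for a URL crosses the peering line once; later
# requests get a 302 to a child cache that now holds the copy.
from urllib.parse import urlparse

def path_of(url):
    return urlparse(url).path or "/"

class Interceptor:
    def __init__(self, child_caches):
        self.child_caches = child_caches  # e.g. ["cache-a.example", ...]
        self.placement = {}               # url -> child cache holding a copy
        self.next_child = 0

    def handle(self, url):
        if url in self.placement:
            # Subsequent transfer: redirect to the child with the copy.
            return 302, "http://%s%s" % (self.placement[url], path_of(url))
        # First hit: fetch once across the peering line, then place a
        # copy on a child cache (round-robin, purely for illustration).
        child = self.child_caches[self.next_child % len(self.child_caches)]
        self.next_child += 1
        self.placement[url] = child
        return 200, "fetched-from-peer:" + url

icpt = Interceptor(["cache-a.example", "cache-b.example"])
status, _ = icpt.handle("http://farm.example/big.iso")
assert status == 200   # first hit crosses the peering line
status, location = icpt.handle("http://farm.example/big.iso")
assert status == 302   # later hits bounce to a child cache
```

A real device would of course pick the child by topology and client locality rather than round-robin.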
It could also intercept and rewrite all the DNS queries that
happened to arrive at it bound for the peer in question,
and answer back with the addresses of the appropriate
more-local child/copy caches, avoiding taking the first
TCP connection in the first place.
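The DNS variant could look something like the following, with the domain list, cache addresses, and client-to-cache selection all invented for illustration: queries for the peer's names are answered locally with a nearby cache's address, so the client's first TCP connection never heads across the peering line at all.

```python
# Hedged sketch of DNS interception: answer queries for the peer's
# names with a local child/copy cache address; pass everything else
# through. Domains and addresses below are hypothetical.

PEER_DOMAINS = {"www.farm.example", "ftp.farm.example"}
LOCAL_CACHES = ["10.1.1.10", "10.2.2.10"]  # child/copy cache addresses

def answer_query(qname, client_ip):
    if qname in PEER_DOMAINS:
        # Pick a cache per client; a real device would use topology,
        # not this toy hash.
        cache = LOCAL_CACHES[hash(client_ip) % len(LOCAL_CACHES)]
        return cache
    return None  # not bound for the peer: pass the query through

assert answer_query("www.farm.example", "192.0.2.7") in LOCAL_CACHES
assert answer_query("www.other.example", "192.0.2.7") is None
```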
A neat project would be to think about the application layer
"non-"gateways one would want beyond WWW. FTP comes to mind,
of course, as does NNTP. Real Video etc. are of course other candidates.
Basically, I suppose the thought is that one could soundly engineer
and operate a peering with a "content-hosting ISP" (big web farm/
ftp archive) which reduces the bandwidth requirement to one copy
per new piece of static content, plus whatever level of properly
dynamic content there might be, at the cost of some disk space per
peering (the interceptor border device and the distributed child/copy
caches can be shared among multiple peering connections).
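The bandwidth claim is easy to quantify with back-of-the-envelope numbers (all of them invented here): without caching, each static object crosses the peering link once per request; with the scheme above, once per object.

```python
# Illustrative arithmetic only; every figure below is made up.
requests_per_object = 1000   # hypothetical hits per static object
object_size_mb = 1.0         # hypothetical average object size
num_objects = 500

without_cache_mb = num_objects * requests_per_object * object_size_mb
with_cache_mb = num_objects * 1 * object_size_mb  # one copy per object

assert without_cache_mb == 500000.0
assert with_cache_mb == 500.0
```

The peering-link load drops by a factor equal to the average hit count per object, and the residual is the properly dynamic traffic plus the first copy of each new object.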
Again, issues of morality, how one could do a contract to do this,
and the like are more appropriate to com-priv.
Anyone who has actually experimented with things vaguely like this (Ed?
Curtis?) and who has operational advice is on-topic for NANOG, however.