North American Network Operators Group
RE: Is anyone actually USING IP QoS?
- From: Jamie Scheinblum
- Date: Tue Jun 15 17:17:43 1999
While this thread is slowly drifting, I disagree with your assertion that so
much of the web traffic is cacheable (NLANR's caching effort, if I remember
correctly, only got around a 60% hit rate in the cache, pooled over a large
number of clients; that is probably close to the true percentage of cacheable
content on the net). If anything, the net is moving to be *more* dynamic.
The problem is that web sites are putting unrealistically short expires on
images and html files because they're being driven by ad revenues. I doubt
that any of the US-based commercial websites are interested in losing the
entries in their hit logs. Caching is also the type of thing that is totally
broken by session-IDs (on sites like amazon.com and cdnow).
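To make the session-ID point concrete, here is a hypothetical sketch (the URLs, parameter name, and helper functions are illustrative, not any real proxy's behavior): a naive shared cache keyed on the full request URL gets zero shared hits once each user carries a unique session ID, and sharing only comes back if the cache strips that parameter.

```python
# Hypothetical sketch: a naive shared cache keyed on the raw request URL.
# When a site embeds per-user session IDs in URLs, every user gets a
# distinct key, so shared cache hits drop to zero.

from urllib.parse import urlparse, parse_qs

cache = {}

def lookup(url: str) -> bool:
    """Return True on a cache hit, False on a miss (and fill the cache)."""
    hit = url in cache
    cache[url] = "body"
    return hit

# Two users requesting the *same* page never share a cached copy:
lookup("http://cdnow.example/album/42?session-id=abc123")   # miss
lookup("http://cdnow.example/album/42?session-id=xyz789")   # miss again

# Stripping the session parameter restores sharing -- at the cost of
# breaking the site's per-user hit accounting:
def normalized_key(url: str) -> str:
    p = urlparse(url)
    q = {k: v for k, v in parse_qs(p.query).items() if k != "session-id"}
    new_query = "&".join(f"{k}={v[0]}" for k, v in sorted(q.items()))
    return p._replace(query=new_query).geturl()
```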
The only way caching is going to be truly viable in the next 5 years is
either for a commercial company to step in and work with commercial
content providers (which is happening now), or for webserver software
vendors to work with content companies on truly embracing a hit-reporting
protocol.
So basically, my assertion is that L4 caching of any protocol will not work
if the content provider is given any control over TTLs and metrics. The only
way web caching *really* works is when people get aggressive and ignore the
expire tags from a network administrator's point of view, not a content
company's. From what I remember, that was the only way some Australian
ISPs were able to make very aggressive caching work for them. Further, the
more you rely on L4 implementations for caching, the more open you seem
to be to broken implementations... although that is a broad statement...
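A minimal sketch of what "getting aggressive" means in practice (the policy name, numbers, and functions here are made up for illustration, not any real proxy's configuration): the operator overrides origin-supplied freshness with a local minimum TTL, so an object the site marked immediately stale is still served from cache.

```python
# Hypothetical sketch of "aggressive" caching: the cache operator
# overrides the origin's freshness lifetime (Expires / max-age) with a
# local minimum TTL, roughly the policy some Australian ISPs reportedly
# ran. All names and numbers are illustrative.

import time

ADMIN_MIN_TTL = 3600  # operator policy: hold everything at least an hour

def effective_ttl(origin_ttl: int) -> int:
    """Freshness lifetime the cache actually uses.

    The origin may send origin_ttl == 0 (or a past Expires date) purely
    to force re-fetches for hit counting; the operator overrides it.
    """
    return max(origin_ttl, ADMIN_MIN_TTL)

def is_fresh(stored_at: float, origin_ttl: int, now: float) -> bool:
    return (now - stored_at) < effective_ttl(origin_ttl)

# An object the origin marked immediately stale is still served from
# cache for an hour under the operator's policy:
t0 = time.time()
print(is_fresh(t0, 0, t0 + 600))    # True: within the override window
print(is_fresh(t0, 0, t0 + 7200))   # False: past the override window
```

The trade-off is exactly the one described above: hit rates go up, but the content provider loses both freshness control and log entries.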
> -----Original Message-----
> From: Vadim Antonov [SMTP:firstname.lastname@example.org]
> Sent: Tuesday, June 15, 1999 4:23 PM
> To: Brett_Watson@enron.net; email@example.com
> Subject: Re: Is anyone actually USING IP QoS?
> 99% of Web content is write-once. It does not need any fancy management.
> The remaining 1% can be delivered end-to-end.
> (BTW, i do consider intelligent cache-synchronization development efforts
> seriously misguided; there's a much simpler and much more scalable solution
> to the cache performance problem. If someone wants to invest, i'd like
> to talk about it :)
> >even if i assume caching is as efficient, or
> >more so, than multicast, i'm still just trading one set of
> >security/scalability concerns for others. caching is no more a silver
> >bullet than multicast.
> It is not that caching is a silver bullet, it is rather that multicasting
> is unusable at a large scale.
> >i won't deny the potential scalability problems but i think you're
> >generalizing/oversimplifying to say caching just works and has no security
> >or scalability concerns.
> Well, philosophical note: science is _all_ about generalizing. To an
> inventor of perpetuum mobile, the flat refusal of a modern physicist to
> look into the details and the assertion that it will not work sure look
> like oversimplifying. After all, the details of the actual construction
> sure are a lot more complex than the second law of thermodynamics.
> In this case, i just do not care to go into details of implementations.
> L2/L3 mcasting is not scalable and _cannot be made_ scalable, for reasons
> that have nothing to do with deficiencies of the protocols.
> Caching algorithms do not have similar limitations, solely because they do
> not rely on distributed computations. So they have a chance of working.
> Of course, nothing "just works".
> PS To those who point out that provider ABC already sells mcast service:
> there's an old saying at NASA that with enough thrust even pigs can fly.
> However, no reactively propulsed hog is likely to make it to an orbit all
> on its own.