North American Network Operators Group
Re: Is anyone actually USING IP QoS?
- From: Vadim Antonov
- Date: Tue Jun 15 13:18:10 1999
Danny McPherson <email@example.com> wrote:
>Several providers have deployed/are deploying NATIVE multicast today on their
>"production" IP networks today (many have had intra-domain enabled for years),
Am I missing something, or has the problem of letting end-users inject
routing information w/o opening the backbone to very "interesting" attacks
been solved?
>and deploying inter-domain mulicast via existing direct interconnects and the
>MIXs. Not only is there a b/w savings, there's a huge savings on the source
>side as well.
Please. Caching is _at least_ as efficient as multicasting (multicasting
_is_ caching, with zero retention time) - w/o the associated security and
scalability problems. Presenting L2/L3 multicasting as the best, the only,
or even a meaningful way to reduce transmission duplication is quite wrong.
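The caching-as-multicast equivalence above can be sketched in a few lines;
the function name and scenario here are illustrative, not from the original
post:

```python
# Toy comparison of origin-side transmissions needed to serve many receivers.
# Multicast: origin sends once, the network replicates (at the cost of
# per-tree routing state). Caching: origin sends once per cache miss;
# every later request for the same object is served from the cache.

def serve_with_cache(requests):
    """Return the number of origin fetches when an edge cache fronts the origin."""
    cache = set()
    origin_fetches = 0
    for item in requests:
        if item not in cache:
            origin_fetches += 1   # cache miss: one transmission from the source
            cache.add(item)
    return origin_fetches

# 10,000 receivers all requesting the same object: one origin transmission,
# just like a multicast send -- with no per-tree state in the core.
print(serve_with_cache(["object-A"] * 10_000))
```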
>A primary concern is the ability of existing and new router
>vendors' platforms to do this efficiently.
A primary concern is the absence (and, most likely, impossibility) of any
L2/L3 multicast routing scheme capable of supporting any significant number
of multicast trees.
Scalability on the Internet pretty much means that algorithms should run in
O((log N)^M), where N is the total number of end-points and M is a constant.
(Note that non-CIDR unicast routing doesn't fit this criterion, but CIDR does.)
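A rough numeric sketch of that criterion (the figures and function names are
my own, purely illustrative): CIDR-style aggregation keeps state roughly
polylogarithmic in the number of end-points, while per-tree multicast state
grows linearly with the number of trees.

```python
import math

def polylog_state(n, m=2):
    """O((log N)^M) growth -- the kind of scaling the argument says is required."""
    return math.log2(n) ** m

def per_tree_state(trees):
    """O(T) growth: one routing entry per multicast tree on a core router."""
    return trees

# Even at a billion end-points, polylog state stays tiny; per-tree state
# tracks the number of trees one-for-one.
for n in (10**3, 10**6, 10**9):
    print(f"N={n:>12}: polylog ~{polylog_state(n):8.0f}   per-tree ~{per_tree_state(n):>12}")
```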
>The benefits are obvious though and router vendors are definitely progressing,
>but as with any technology, debugging and getting the protocols to a usable
>state, one to which SLA/SLGs can be associated, takes time.
The benefits of mining cheap cheese on the Moon are quite obvious, too - if
you're willing to overlook the small fact that the Moon isn't made of cheese.
_No_ technological advances can help the fact that L2/L3 multicasts cannot
be routed in a scalable fashion. Think what happens when there are a million
multicast trees in the network.
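A back-of-the-envelope sketch of that state explosion (the per-entry size is
an assumed figure, chosen only for illustration):

```python
# Assumed size of one (source, group) forwarding entry on a core router.
ENTRY_BYTES = 64
trees = 1_000_000

state_mib = trees * ENTRY_BYTES / 2**20
print(f"~{state_mib:.0f} MiB of multicast forwarding state per core router")
# ...and, unlike aggregated unicast prefixes, every one of those entries
# can churn individually as receivers join and leave its tree.
```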
I think blaming vendors for their inability to build products that run faster
than the proven lower bounds for the required class of algorithms is,