North American Network Operators Group
Re: Reducing Usenet Bandwidth
- From: Robert E. Seastrom
- Date: Sat Feb 02 20:37:21 2002
Simon Lyall <firstname.lastname@example.org> writes:
> On Sun, 3 Feb 2002, Marco d'Itri wrote:
> > The major drawback of that protocol is that it limits articles size to
> > 64KB, so it does not reduce binary traffic, which is the largest part
> > of a newsfeed.
> In which case you just modify the protocol (or roll your own) to have
> articles spread across multiple packets. It's not that hard and our guys
> did this and I expect it's been done (at least) half a dozen times by
> other people.
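The multipacket scheme suggested above is straightforward to sketch. The chunk size and tuple layout here are illustrative assumptions, not anyone's actual wire format:

```python
import math

def fragment(article: bytes, max_payload: int = 1400):
    """Split an article into (part_number, total_parts, chunk) tuples
    so each chunk fits in one datagram payload.

    max_payload is a hypothetical per-packet budget chosen for
    illustration; a real implementation would derive it from the MTU
    minus its own header overhead."""
    total = max(1, math.ceil(len(article) / max_payload))
    return [
        (i, total, article[i * max_payload:(i + 1) * max_payload])
        for i in range(total)
    ]
```

Carrying (part_number, total_parts) in every packet lets the receiver reassemble out-of-order arrivals and notice when a part of an article never showed up.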
It was trivial when I did it as a testbed for my previous boss (sorry, …).
If you compress the article before sending (we used libz, and at the
time a 1 GHz PIII, the fastest machine we could get, kept up with the
28 Mbps full feed just fine), a truly startling percentage of articles
(I want to say 78%) fit in a single packet on the Ethernet: 1500
bytes minus the UDP encap and the non-compressed header data, which
includes stuff like the local article sequence number, the part number
for multipacket articles, the MD5 checksum of the article, and the
message-id. Of course, those little articles are not the ones that are
eating your bandwidth, but even so I found that to be quite interesting.
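A minimal sketch of that packet layout, assuming hypothetical field widths (the post names the header fields but not their sizes or ordering, so the struct format below is an invention for illustration):

```python
import hashlib
import struct
import zlib

ETH_MTU = 1500
IP_UDP_OVERHEAD = 28  # 20-byte IPv4 header + 8-byte UDP header

def build_packet(seq: int, part: int, total: int,
                 message_id: bytes, article: bytes) -> bytes:
    """Uncompressed header followed by a zlib-compressed body.

    Header fields, per the post: local article sequence number,
    part number (plus an assumed total-parts field), MD5 checksum
    of the whole article, and the message-id (length-prefixed here,
    which is an assumption)."""
    digest = hashlib.md5(article).digest()
    header = struct.pack("!IHH16sH", seq, part, total,
                         digest, len(message_id)) + message_id
    return header + zlib.compress(article)

def fits_one_packet(pkt: bytes) -> bool:
    """True if the datagram fits one Ethernet frame after IP/UDP encap."""
    return len(pkt) <= ETH_MTU - IP_UDP_OVERHEAD

# Repetitive text compresses well, so even a 750-byte article
# ends up comfortably inside a single frame.
pkt = build_packet(1, 0, 1, b"<example@nntp.invalid>",
                   b"Hello, Usenet!\n" * 50)
```

Keeping the checksum and message-id outside the compressed region, as described above, lets a receiver detect duplicates and corrupt reassemblies without inflating the body first.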
Our software was nowhere near as complex as the UUnet software (just
point-to-multipoint), and probably quite similar to Cidera's software.
Worked pretty well over satellite too.
Of course, the dirty little not-so-secret is that for all the hacks
that people have done over the years, inter-AS multicast that doesn't
get hosed at the drop of a hat remains an elusive goal(1). Thinking
about doing multicast Usenet feeds as a way to cut down the bandwidth
in and out of your ISP overlooks the fact that reliable transport for
such things doesn't exist. I'm sorry if I've offended anyone with
this (uncharitable) assessment.