North American Network Operators Group


Re: Comparing an old flow snapshot with some packet size data

  • From: Curtis Villamizar
  • Date: Wed Aug 07 20:16:42 1996

In message <199608072217.AA02662@interlock.ans.net>, "Daniel W. McRobb" writes:
> > 
> > Persistent connections are a prominent feature of HTTP 1.1, now in
> > draft.  Maybe someone who follows that WG can comment on its progress.
> > If on average there are 2-3 inline images per page (reasonable
> > estimate IMO, though I have no data to back this up), then the average
> > transfer size will increase.  I've heard (verbal at NANOG) that
> > Netscape has promised to support persistent connections, with the only
> > caveat that they will open one connection for the page itself and
> > another for all the inlines so they can start rendering the first
> > inline while a long page is being read.  They can probably avoid this
> > for short pages.  This could lead to a significant improvement in the
> > ability of the Internet traffic to respond to low levels of packet
> > drop and make good use of TCP congestion control, plus it will
> > significantly improve the speed of transfer on uncongested paths where
> > currently TCP never gets out of the initial slow start.
> > 
> > Curtis
> 
> I did some analysis of the FIX-West traces a while back and posted it to
> the nlanr mailing list.  It's been so long that I don't remember what I
> posted, but I seem to recall trying to make a judgement as to how many
> packets we'd eliminate with a mass migration to HTTP 1.1 and/or HTTP
> with T/TCP.  I recall a figure around 14%, but that's just from memory.

It is not a question of eliminating packets; it is a question of
whether TCP ever gets out of slow start and reaches a reasonable
window size.  It takes 4 RTTs to send 8 data segments (windows of
1+2+4+1), and then you still have to go through the FIN handshake and
a new SYN.  If you fetch three images over separate connections, each
one starts with a fresh window, so you get 1+2+4+1 three times over,
either in sequence or in parallel.  With a persistent connection
carrying all 3 * 8 = 24 segments, the window keeps growing: 1+2+4+8+9
in 5 RTTs.  If the pipe is overfull it is much easier for TCP to rate
limit the latter case, so you need to drop far fewer packets to keep
things under control.
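The window arithmetic above can be checked with a small sketch of
idealized slow start (my illustration, not part of the original post;
it ignores delayed ACKs and loss):

```python
def rtts_to_send(total_segments, cwnd=1):
    """Round trips under idealized slow start: the congestion
    window doubles each RTT until every segment is delivered."""
    rtts = 0
    sent = 0
    while sent < total_segments:
        sent += min(cwnd, total_segments - sent)  # send a window's worth
        rtts += 1
        cwnd *= 2                                 # slow-start doubling
    return rtts

# One 8-segment transfer: windows of 1+2+4+1 -> 4 RTTs
print(rtts_to_send(8))    # 4
# Three images on one persistent connection: 1+2+4+8+9 -> 5 RTTs,
# versus 3 separate connections paying 4 RTTs (plus setup) each.
print(rtts_to_send(24))   # 5
```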

At 552-byte packets, 8 segments is about 4K, which is about the
average image size.  Of course it is one segment at the FDDI MTU.
Then it really pays to have a persistent connection so you don't
degrade to UDP-like characteristics.
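A quick check of the segment counts (my sketch, assuming 552 and 4352
bytes as the packet sizes, with 40 bytes of IP+TCP header per packet):

```python
import math

def segments_needed(object_bytes, packet_size, header=40):
    """Data segments for an object, given the per-packet payload
    (packet size minus 40 bytes of IP+TCP headers)."""
    mss = packet_size - header
    return math.ceil(object_bytes / mss)

# A ~4 KB average image at 552-byte packets (512-byte payload): 8 segments
print(segments_needed(4096, 552))   # 8
# The same image fits in a single segment at the FDDI MTU (4352 bytes)
print(segments_needed(4096, 4352))  # 1
```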

> At any rate, in our case, a significant reduction in HTTP traffic (by
> caches, HTTP 1.1, T/TCP, whatever) would be nice since HTTP represents a
> large chunk of our backbone traffic.

Caches are a big win too.  If more clients pointed their HTTP proxy
somewhere we'd be a lot better off.  Persistent connections between
caches are the only thing that makes any sense at all.

Curtis