Merit Network

Discussion Communities: Merit Network Email List Archives

North American Network Operators Group


Re: Testing Bandwidth performance

  • From: todd glassey
  • Date: Wed Jun 26 09:24:04 2002

Hi All - Just to prove to the list's management that I am a techie too, I
submit the following -
----- Original Message -----
From: "Martin Hannigan" <hannigan@fugawi.net>
To: "Jared Mauch" <jared@puck.Nether.net>
Cc: "Wojtek Zlobicki" <wojtekz@idirect.com>; "Alan Sato" <asato@altrio.net>;
<nanog@trapdoor.merit.edu>
Sent: Tuesday, June 25, 2002 10:17 PM
Subject: Re: Testing Bandwidth performance


>
> At 01:02 AM 6/26/2002 -0400, Jared Mauch wrote:
>
> >         I think they are talking about generating an OC3s worth
> >of traffic.  while you could fill it all up w/ ntp packets
> >as one method, I do not believe it will create the desired result.
> >
> >         but yes, if you are wanting to measure
> >latency across your network or a circuit, ntp when properly
> >synchronized can be quite a useful tool.
>
>
>
> I'm not sure what you are saying regarding filling up an OC3 with NTP,

Unicast or Multicast/Broadcast? Either way it will be difficult, and will
take a number of computers to do.
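A little back-of-the-envelope arithmetic (my numbers, not from the thread) shows why several machines would be needed to fill an OC-3 with NTP datagrams - the packets are tiny, so the packet rate is enormous:

```python
# Rough sketch: minimal NTP packets per second needed to fill an OC-3's
# SONET payload. Rates and header sizes are standard figures, not from
# the original post.
OC3_PAYLOAD_BPS = 149_760_000    # OC-3 payload rate after SONET overhead
NTP_BYTES = 48 + 8 + 20          # NTP message + UDP header + IPv4 header

pps = OC3_PAYLOAD_BPS / (NTP_BYTES * 8)
# ~246,000 packets per second of 76-byte datagrams - a small-packet
# rate a single host of that era could not realistically source.
```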

> but
> I do know you can calculate simple latency, I believe - measurements from
> normal NTP, but strategically analyzed from remote sources and correctly
> configured, i.e. all s/1 or s/2 with drift.

This is not so true for you folks, though: NTP assumes that the outbound
trip is identical to the inbound trip, and with routing agreements (hot
potato and otherwise) and the Routing Arbiter, that simply is not true
anymore. So the half-round-trip algorithm doesn't quite work right these
days.
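To make the symmetric-path assumption concrete, here is the standard NTP offset/delay arithmetic over the four timestamps (client send, server receive, server send, client receive), with made-up path delays standing in for an asymmetric hot-potato route:

```python
# Sketch of the standard NTP offset/delay arithmetic, showing how an
# asymmetric path biases the result. The 30 ms / 10 ms split is a
# hypothetical example, not a measurement.
out_path = 0.030     # client -> server, seconds
back_path = 0.010    # server -> client takes a different, faster route

t1 = 0.000           # client transmit (the clocks actually agree)
t2 = t1 + out_path   # server receive
t3 = t2              # server transmit (instant turnaround)
t4 = t3 + back_path  # client receive

offset = ((t2 - t1) + (t3 - t4)) / 2   # assumes delay/2 each direction
delay = (t4 - t1) - (t3 - t2)          # round-trip delay

# The clocks agree perfectly, yet the computed offset is 10 ms:
# exactly half the path asymmetry, (30 ms - 10 ms) / 2.
```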

>
>
> I won't say I'm the expert at this, I'm the bb wiretap guy :), not the ntp
> latency guy.......but I've been around a bit.

Well - I am one of the NTP latency guys, and I personally operate three (3)
of NIST's stratum-1 public-access time servers... and I would not use NTP.
As it happens, almost all NTP users really don't understand the NTP
weighting or the physics of impulse time propagation. Besides, what you are
looking for is not "exact information" as to the time of day, but the
elapsed time between any two nodes of a network from the commencement of
your test. What you really want is a precision heartbeat, and there is no
better place to get one than from GPS, unless you have your own local
oscillator-backed clocks and a regimen to keep them properly tuned and
synched.

My favorite way to achieve this is to get two GPS-backed clocks together
and then count the bytes... Remember that packet sizes must vary too,
otherwise the routers get lazy. BTW - GPS offers lousy reliability as an
absolute timebase because of how easy it is to spoof or shut down in a
denial of service, and because it is physically impossible to prove
anything from a GPS source for what should be obvious reasons (i.e. there
are a number of passive beams of data that are correlated by the receiver
and so are never reproducible). But when it (GPS) is operating properly, it
provides the coolest 1PPS heartbeat - the heartbeat of the US Government,
so to speak. And as it happens, almost all of the GPS birds in orbit (24 of
them) carry Datum cesium-beam atomic clocks or Datum rubidium clocks.
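The two-GPS-clock approach boils down to simple arithmetic: with both endpoints on the same timebase, one-way delay is just receive time minus the embedded send timestamp. A toy simulation of that idea, with varied packet sizes as suggested above (the link rate and propagation delay are invented for illustration):

```python
import random

# Toy sketch: sender and receiver clocks both GPS-disciplined, so
# one-way delay = receive time - embedded send timestamp. Packet sizes
# vary so the routers can't settle into one lazy fast path.
LINK_BPS = 155_520_000      # OC-3 line rate, for the serialization model
PROP_DELAY = 0.004          # assumed fixed propagation delay, seconds

random.seed(1)
delays = []
for _ in range(100):
    size = random.randint(64, 1500)          # vary the packet size
    t_send = 0.0                             # GPS-synced sender timestamp
    t_recv = t_send + PROP_DELAY + size * 8 / LINK_BPS
    delays.append((size, t_recv - t_send))   # one-way delay per size

# Larger packets take measurably longer to serialize onto the wire.
```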

So if you need critically accurate and provable time of day, what you do is
take what is called an "Initialization Event" from ACTS or another reliable
time-setting source, jam-set it into the clock's control register, and run
from there. From that point on there are any number of methods of
disciplining the clock base.
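One of those disciplining methods can be sketched as a small phase/frequency steering loop - this is my own toy illustration of the jam-set-then-discipline idea, with invented loop gains, not Glassey's or NIST's actual procedure:

```python
# Toy sketch: jam-set a local clock from a reference once, then
# discipline it with a small PI-style loop. The 50 ppm drift and the
# 0.5 / 0.05 gains are arbitrary illustration values.
drift = 50e-6            # local oscillator runs 50 ppm fast
freq_corr = 0.0          # accumulated frequency correction
ref = local = 1000.0     # the "Initialization Event": jam-set once

for _ in range(200):                 # one comparison per simulated second
    ref += 1.0
    local += 1.0 * (1.0 + drift + freq_corr)
    offset = ref - local             # measured phase error
    local += 0.5 * offset            # slew out half the phase error
    offset = ref - local
    freq_corr += 0.05 * offset       # integrate residual into frequency
```

After a couple hundred iterations the loop has learned the oscillator's frequency error (freq_corr settles near -50 ppm) and the phase offset stays near zero without ever needing another jam-set.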

Oh, and use something like a sniffer to generate the traffic. Most of what
we know as commercial computers cannot generate more than 70% to 80% of
capacity on whatever network they are on, because of driver overhead, OS
latency, etc. It was funny, but I remember testing FDDI on a UnixWare-based
platform and watching the driver suck 79% of the system into the floor.

Yehaaaaaaaaah!
Todd Glassey

>
> -M



