Re: Going the distance
- To: "Simon L. Sabato" <simons@xxxxxxxxxx>
- Subject: Re: Going the distance
- From: Roy Bynum <rabynum@xxxxxxxxxxx>
- Date: Sat, 03 Jul 1999 12:59:43 -0500
- CC: "rmarsland@xxxxxxxxxxxx" <rmarsland@xxxxxxxxxxxx>, "stds-802-3-hssg@xxxxxxxx" <stds-802-3-hssg@xxxxxxxx>
- Organization: .
- References: <01BEC465.65CE4330.rmarsland@xxxxxxxxxxxx> <377D48D1.23E13DB7@xxxxxxxxxx>
- Reply-To: rabynum@xxxxxxxxxxx
- Sender: owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx
Simon,
As a large data network architect and implementer, I treat the minimum specification as a
hard boundary. If a vendor's equipment does not meet the minimum published
specification 100% of the time, it is considered unreliable. When building
large, high-bandwidth networks, the lost data, downtime, and support staff needed to
"baby-sit" a vendor's equipment are too expensive. This becomes even
more critical as equipment ages, because most of the installations for any one vendor
are made at the same time.
10GbE will be used almost exclusively in large, complex, high-bandwidth data
networks. Only a very few 10GbE implementations will be in real-time graphics, data
storage, or control environments confined to small installations.
This makes reliable operation at the minimum specification critical to the successful
deployment of 10GbE.
I have seen more than one instance of buildings being re-cabled with better grades of
fiber to gain greater distance or higher bandwidth. The savings in
common equipment and interfaces often outweigh the cost of re-cabling. This was
true of the Cat 3 to Cat 5 conversion that many businesses made when going
from 10BaseT to 100BaseT, and it will be true of implementing 10GbE.
I agree with you, Simon, that over-designing a specification is a necessity for the
long-term success of 10GbE. Statistical anomalies at specification boundaries are
fine for theoretical physics or experimental aircraft, but not for 802.3 data
networks.
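
To put rough numbers on the arithmetic being argued below (every figure here is an
illustrative guess, not a measurement): if each of k independent components lands in
its worst-case tail with probability p, then the "one-in-a-million combination of
worst-case devices" quoted below occurs with probability p^k, and what matters
operationally is that probability multiplied by the size of the deployment. A quick
Python sketch:

    # Back-of-the-envelope: how often the all-worst-case combination shows
    # up across a large deployment.  p, k, and the install count are
    # assumptions chosen only to show how sensitive the answer is.
    k = 5                                  # assumed independent parts per link
    installs = 10_000_000                  # assumed size of the deployment
    for p in (0.01, 0.05, 0.10):           # assumed chance one part sits in its worst-case tail
        p_all_worst = p ** k
        expected_links = installs * p_all_worst
        print(f"p={p:.2f}: P(all worst)={p_all_worst:.1e}, "
              f"expected affected links={expected_links:,.1f}")

The answer swings by several orders of magnitude with the assumed tails, and it
collapses entirely if temperature, age, or a shared manufacturing lot makes the parts
move together, as Simon notes below. That is exactly why I want the minimum
specification to hold on its own.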
Roy Bynum
MCI WorldCom
"Simon L. Sabato" wrote:
> Rob,
>
> While I understand that certain aspects of engineering are always
> statistical, including the very operation of the transistor and laser, I
> think that designing in this way should be avoided where possible and
> where it doesn't significantly lower cost. Why assign MAC addresses,
> for example? Why not just select random ones? The chance of two
> stations on a LAN having the same address is sufficiently remote (2 ^^
> 40). If it cost a huge amount to distribute those addresses, we
> probably wouldn't bother. It doesn't cost much, and the comfort factor
> of knowing that the MACs are unique is worth it.
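
A quick back-of-the-envelope on that collision risk, purely for illustration: with n
stations each drawing an address uniformly at random from a space of 2^bits values
(46 free bits in a 48-bit MAC once the group and local bits are pinned, though Simon's
2^40 makes the same point), the birthday approximation gives:

    import math

    def collision_probability(n, bits):
        # P(at least one pair of n random addresses collides), birthday
        # approximation: 1 - exp(-n*(n-1) / (2 * 2**bits)).
        space = 2.0 ** bits
        return 1.0 - math.exp(-n * (n - 1) / (2.0 * space))

    for n in (100, 1_000, 100_000):        # assumed LAN sizes, for illustration
        print(n, collision_probability(n, bits=46))

Remote for any single LAN, as Simon says, and only because the address space dwarfs
any plausible station count.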
>
> If a completely worst case situation resulted in 10G operating only to
> 10km, to determine whether it was "safe enough" to operate over 15km
> wouldn't we *at least* need to profile the quality distributions for
> each component along the way? How often do parts hit worst case, how
> often do they exceed it, and by how much... is this even something that
> vendors are willing to share? ("Sure! We throw away 95% of what we
> build and the rest is within 2% of the worst case") :)
>
> This also opens up a whole new dimension to manufacturing and test. You
> always check that your measurement is between A and B, and for process
> control you probably measure how often it falls in various places.
> Would you have to throw away parts that were within spec because your
> process wasn't following the "profile"? If your manufacturing guy says
> that he can make a part for cheaper because, due to better controls, he
> can hit closer to the limit, wouldn't you want to be able to do that?
>
> At the extreme, if everyone were to tune their processes such that they
> could build dirt cheap parts that 100% of the time hit the worst case
> specs, wouldn't you be back at 10km? How many places will it end up
> costing more because we don't have a simple answer to the question: "how
> good is good enough?". Perhaps this is a lot easier to do when you
> control all the pieces of a system, rather than bringing pieces from
> different vendors together (be it at the box or network level). In past
> systems I have worked on, for example, I've used the assumption that one
> chip cannot be at maximum voltage when the one next to it is at
> minimum. A sure bet within a box, probably not so safe across them.
>
> Finally, would we assume that everyone got equipment of "random"
> quality, or would we account for the fact that the more, ahem,
> "cost-sensitive" network manager will probably end up with more than
> his/her fair share of marginal equipment? What about other correlating
> factors like temperature, humidity, and age, that will affect whole
> systems together?
>
> If you want to use statistics, why not give them to your sales people?
> They can show a nice graph with a minuscule failure rate at 10km (and a
> red line that says "specified limit") which moves up to 0.01% at 15km,
> 10% at 20km, and so on. Then the network manager can pick his/her own
> cost/benefit point along this curve. This would also help in another
> way -- they can be more confident that the next technology (100G?) would
> have a good looking curve up to, but maybe not beyond, that little red
> line. Even if the curve looks great up to 300% of the little red line,
> as the mature Gig might, this would be an important datapoint for some
> to consider.
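
For what it is worth, the curve Simon describes is straightforward to sketch once you
assume distributions for the component margins. Every number below is a placeholder
chosen only to show the shape, not a measured figure:

    import random

    ATTEN_DB_PER_KM = 0.25    # assumed fiber loss per km beyond the design point
    DESIGN_KM = 10.0          # distance the worst-case budget is written for

    def failure_rate(km, trials=100_000):
        # Assumed: spare margin above worst case is normal (mean 3 dB, sigma
        # 1 dB), floored at zero because a part exactly at the spec limit
        # has no spare margin at all.
        extra_loss = max(km - DESIGN_KM, 0.0) * ATTEN_DB_PER_KM
        fails = 0
        for _ in range(trials):
            margin = max(random.gauss(3.0, 1.0), 0.0)
            if extra_loss > margin:
                fails += 1
        return fails / trials

    for km in (10, 15, 20, 25):
        print(km, "km:", failure_rate(km))

The hard part, of course, is getting honest distributions for each component, which is
Simon's earlier point about what vendors are willing to share.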
>
> Judging by the silence on this topic, most people either agree with one
> view or the other, though I honestly can't say which is preferred. I
> just hope that anyone working on the plane I fly next thinks the way I
> do.
>
> -Simon L. Sabato
> -Naive Engineer
> -Level One Communications
>
> Rob Marsland wrote:
> >
> > In reply to:
> > The latter is simply not a tolerable situation
> > to me. It's simple math, if you want 10,000,000 trouble free
> > installations, then you're going to have to ensure that the
> > one-in-a-million combination of worst-case devices still works.
> >
> > Is there really *any* other way to go about this?
> >
> > ------------
> > Yes, there is. It's called statistics. Instead of designing for worst case,
> > you design for 3 sigma, or 5 sigma, or whatever. It makes a lot more sense
> > even if you are designing atom bombs. Well, ok, maybe worst case is
> > necessary for that one.............
> >
> > Rob
> >
> > Robert A. Marsland
> > Focused Research, Inc. (a New Focus company)
> > 555 Science Dr.
> > Madison, WI 53711
> > (608) 238 2455
> > (608) 238 2656 FAX