Re: Wide Area Networking for the Rest of US - the debate on BER and other issues
- To: bill.st.arnaud@xxxxxxxxxx
- Subject: Re: Wide Area Networking for the Rest of US - the debate on BER and other issues
- From: Roy Bynum <rabynum@xxxxxxxxxxx>
- Date: Fri, 28 May 1999 20:29:28 -0500
- CC: bin.guo@xxxxxxx, rtaborek@xxxxxxxxxxxxxxxx, dwmartin@xxxxxxxxxxxxxxxxxx, stds-802-3-hssg@xxxxxxxx, sachs@xxxxxxxxxxxxxx, widmer@xxxxxxxxxx, widmer@xxxxxxxxxx, widmer@xxxxxxxxxx
- Organization: .
- References: <NBBBJIMEPHPGCNGAHPMFKEKPEMAA.bill.st.arnaud@xxxxxxxxxx>
- Reply-To: rabynum@xxxxxxxxxxx
- Sender: owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx
Bill,
I work with a lot of customers that have IPX networks. Quite a few use
trunked Frame Relay circuits with inverse muxing to provide "transparent
LAN extension" for their networks. Some of them will migrate to
TCP/IP. The limited available IP address space tends to limit the
numbers that will do so.
Because there are a lot of upper layer protocols that will run on 802.3,
do not assume that they will all be TCP/IP on 10GbE. It will limit the
customer base that can effectively use make use of 10GbE.
I agree that the majority of the routers that are currently deployed do
not have the same quality of service at the transmission system. The
largest router that is currently installed to support the Internet can
not support multiple 10GbE interfaces at wire speed. Most have a
problem supporting wire speed at 2.5 Gb with POS. There is a push on
improve the quality of future routers as well as the agregate wire speed
routing. Systems that will be able to handle the data through put of
multiple 10GbE will not be the ones that are currently installed.
Routers or L2/L3 data switches that will be deployed to support agregate
wire speed rates in excess of 100s of Gigabits should be sized to have
less data loss than current ones.
Thank you,
Roy Bynum
Bill St. Arnaud wrote:
>
> All:
> I have been following the interesting debate about BER. Let me bring some
> further issues into the debate.
>
> I am assuming that on WAN and long haul GbE the upper layer protocol will
> only be IP.
>
> On most IP links, even ones with BERs of 10^-15 there is about 1-3% packet
> loss and retransmission. This is due to a number of factors but most
> typically it relates to TCP flow control mechanism from server bound
> congestion (not network congestion) and the use of WRED in routers.
>
> So, on most IP links the packet loss due to BER is significantly less than
> that due to normal TCP congestion. As long as that ratio is maintained it
> is largely irrelevant what the absolute BER value is. There will be many
> more retransmissions from the IP layer than there will be at the physical
> layer due to BER.
>
> Other protocols like Frame Relay and SNA are a lot more sensitive to high
> BERs. IP ( in particular TCP/IP) is significantly more robust and can work
> quite effectively in high BER environments e.g. TCP/IP over barbed wire.
>
> I would like to suggest that the 802.3 HSSG group consider an 2 solutions
> for 10xGbE WAN:
> (1) native 10xGbE using 8b/10b; and
> (2)10xGbE mapped to a SONET STS OC-192 frame
>
> For extreme long haul solutions SONET makes a lot of sense as a transport
> technology. However for intermediate long haul (up to 1000 km) and WAN
> native 10xGbE is more attractive. Native GbE can be either transported on a
> transparent optical network or carried directly on a CWDM system with
> transceivers. In medium range networks coding efficiency is not as important
> as it is in long haul networks. If coding efficiency is important then in my
> opinion, it does not make sense to invent a new coding scheme for 10xGbE
> when it would be just as easy to map it to a SONET frame.
>
> The attraction of native 10xGbE for the WAN is that it is a "wide area
> networking solution for the rest of us". You don't need to hire specialized
> SONET engineers to run and manage your networks. The 18 year old kid who is
> running your LAN can now easily learn to operate and manage a WAN.
>
> In Canada and the US, there are several vendors who are willing to sell dark
> fiber at a very reasonable cost. Right now the cost of building a WAN with
> 10xGbE and CWDM is substantially less (for comparable data rates) than using
> SONET equipment.
>
> Bill
>
> -------------------------------------------
> Bill St Arnaud
> Director Network Projects
> CANARIE
> bill.st.arnaud@xxxxxxxxxx
> http://tweetie.canarie.ca/~bstarn
>
>
>
>
>
> > -----Original Message-----
> > From: owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx
> > [mailto:owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx]On Behalf Of
> > bin.guo@xxxxxxx
> > Sent: Thursday, May 27, 1999 7:28 PM
> > To: rtaborek@xxxxxxxxxxxxxxxx; dwmartin@xxxxxxxxxxxxxxxxxx
> > Cc: stds-802-3-hssg@xxxxxxxx; sachs@xxxxxxxxxxxxxx; "widmer@xxxxxxxxxx
> > widmer@xxxxxxxxxx widmer"@us.ibm.com
> > Subject: RE: 1000BASE-T PCS question
> >
> >
> >
> > Rich,
> >
> > The DC balance can be directly translated into jitter (when timing is
> > concerned) and offset (when threshold slicing is concerned). You
> > only need
> > to deal with the former if the signal is 2-level NRZI, while you need to
> > deal with both if multi-level signal modulation is used.
> >
> > For long term DC imbalance, it translates into low frequency jitter and if
> > it's low enough(<1 KHz ?), it's called baseline wonder. For
> > short term, it
> > relates to Data Dependent Jitter, which is more difficult for timing
> > recovery to handle since it's not from system or channel imparity, and
> > therefore it's harder to compensate.
> >
> > When you have a lot of jitter margin, for example in lower speed clocking,
> > the amount of jitter, translated from DC drift resulted from data
> > imbalance
> > coupled by AC circuit, percentage wise is a small portion of the clock
> > period and therefore does not contribute to much of the eye
> > closing. On the
> > other hand, for high speed clocking at 10G (100 ps?), the jitter
> > translated
> > from the same amount of DC drift can be a significant portion of the clock
> > period, so contributes to much large percentage wise jitter which
> > results in
> > reduced eye opening -- higher BER.
> >
> > Dave said in his mail that "The limiting factor is enough RX optical power
> > to provide a sufficiently open eye." but you still have to deal with the
> > data dependent jitter due to DC imbalance generated after O/E, that can
> > close the eye further again.
> >
> > Bin
> >
> > ADL, AMD
> >
> > > -----Original Message-----
> > > From: Rich Taborek [SMTP:rtaborek@xxxxxxxxxxxxxxxx]
> > > Sent: Thursday, May 27, 1999 3:23 PM
> > > To: David Martin
> > > Cc: HSSG_reflector; Sachs,Marty; Widmer,Albert_X
> > > Subject: Re: 1000BASE-T PCS question
> > >
> > >
> > > Dave,
> > >
> > > Do you know of any research or other proofs in this area? You say that
> > > lower speed SONET links regularly achieves BERs of < 10 E-15. I have
> > > substantial experience with mainframe serial links such as ESCON(tm)
> > > where the effective system BERs are in the same ballpark. SONET uses
> > > scrambling with long term DC balance and ESCON uses 8B/10B with short
> > > term DC balance. The following questions come to mind:
> > >
> > > - How important is DC balance?
> > > - How does this importance scale in going to 10 Gbps?
> > >
> > > I'll see if I can get some 8B/10B experts to chime in here if you can
> > > get scrambling experts to bear down on the same problem.
> > >
> > > --
> > >
> > > >(text deleted)
> > > >
> > > >The point here is that the SONET scrambler is not the limiting issue in
> > > >achieving low error rates. The issue is having enough photons/bit, or
> > > >optical SNR (eye-Q) to accurately recover the data.
> > > >
> > > >...Dave
> > > >
> > > >David W. Martin
> > > >Nortel Networks
> > > >+1 613 765-2901
> > > >+1 613 763-2388 (fax)
> > > >dwmartin@xxxxxxxxxxxxxxxxxx
> > > >========================
> >