10GbE in Long-haul vs. LAN
Judging from much of the recent reflector traffic, in which LAN and
long-haul applications of 10GbE are being discussed together, I think it is
important to remind people that while the 10GbE protocols for LAN and
long-haul may be very similar, the physical-layer solutions will be very
different in these two very different markets.
For long-distance applications, where EDFAs and DWDM will undoubtedly be used,
the optical transceivers will require narrow linewidths, isolation, temperature
control, and even wavelength locking for very dense WDM applications. Optical
components along the lines of today's OC-192 transmitters and receivers will be
required; in particular, externally modulated lasers will be needed.
Optical component cost is not the driving factor in this market. In
addition, to my knowledge, nobody has proposed a CWDM (or WWDM) solution
for this market. I am assuming that a 10-Gb/s serial 1550nm solution,
using a cooled, isolated, externally modulated laser will be the choice
here.
The LAN application space, in which Ethernet has always operated, is very
different. Gigabit Ethernet is defined for multimode fiber links up to
550m and for single-mode links up to 6 km. Cost is THE driving factor in
this market (as evidenced by the fact that 1000SX transceivers have to date
outsold the higher performance 1000LX transceivers by an overwhelming
margin). Optical amplification is irrelevant, and Class I eye-safety is
sacred. The optical solution for this market will be the one that delivers
the necessary performance (distance, fiber type, BER, reliability, package
size) at the lowest cost. Solutions proposed for this market (up to 10km)
include WWDM, 10-Gb/s serial, and multilevel modulation. Since each of
these solutions has the potential to offer identical electrical I/O (in
some cases with SERDES integrated in the transceiver), there are no network
management or protocol issues relating to the choice of transceiver. The
decision will be made based on which types of fibers are to be supported,
what distances are required, and which solution(s) offer(s) the best
performance at the lowest cost.
While it might be nice to think that one solution should be chosen to
satisfy both LAN and long-haul applications, there is no way that this can
happen, since an optical solution suitable for long-distance transmission
can never be cost-effective for the LAN. I realize that this is just
stating the obvious, but I think it is important that people not mix the
two markets when making arguments for or against a particular physical
solution.
- Brian Lemoff
lemoff@xxxxxxxxxx
______________________________ Reply Separator _________________________________
Subject: RE: Re[2]: 1310nm vs. 1550nm -> Eye Safety + Attenuation
Author: Non-HP-bill.st.arnaud (bill.st.arnaud@xxxxxxxxxx) at
HP-PaloAlto,mimegw2
Date: 5/6/99 9:46 AM
Bryan:
Regardless of whether a 1550 nm solution would meet the eye-safety
recommendations, I would strongly recommend that a protocol be established to
turn off the laser in the event of a fiber cut. Network operators routinely
add pre-/post-amplifiers to boost laser power and range, so although the
original laser may meet safety requirements, the output at the other end of
the link may not.
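The shutdown behavior suggested above can be sketched as a small control rule:
if the receive path disappears (a likely fiber cut), disable the transmitter so
that possibly amplified light does not escape from a broken fiber. This is an
illustrative sketch only; the class name, threshold, and return convention are
my own assumptions, not from any standard or from the proposal above.

```python
# Hypothetical sketch of automatic laser shutdown on loss of receive signal.
# The threshold value is an illustrative assumption, not a spec figure.

RX_LOS_THRESHOLD_DBM = -30.0  # assumed loss-of-signal threshold


class LaserSafetyController:
    def __init__(self):
        self.tx_enabled = True

    def on_rx_power(self, rx_power_dbm):
        """Disable the laser when the receive path is lost (fiber cut
        suspected); re-enable it once light is seen again."""
        if rx_power_dbm < RX_LOS_THRESHOLD_DBM:
            self.tx_enabled = False  # no received light: shut the laser off
        else:
            self.tx_enabled = True   # path restored: resume transmission
        return self.tx_enabled
```

Note that this simple rule assumes a working Rx path for recovery, which is
exactly the dependency the next paragraph warns against.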
However, as I suggested before, I think it is also important not to depend
on a signaling protocol that assumes no Rx path. In CA*net 3, we use a
proprietary out-of-band signaling system for laser safety and network
management. My humble suggestion is that we use the IP layer for out-of-band
management and signaling.
Bill
-------------------------------------------
Bill St Arnaud
Director Network Projects
CANARIE
bill.st.arnaud@xxxxxxxxxx
http://tweetie.canarie.ca/~bstarn
> -----Original Message-----
> From: bgregory@xxxxxxxxx [mailto:bgregory@xxxxxxxxx]
> Sent: Thursday, May 06, 1999 12:32 PM
> To: David W Dolfi; Bill St. Arnaud
> Cc: stds-802-3-hssg@xxxxxxxx; dolfi@xxxxxxxxxx; twhitlow@xxxxxxxxx
> Subject: Re[2]: 1310nm vs. 1550nm -> Eye Safety + Attenuation
>
>
> In response to Bill's email... regarding the EDFA issue, I'd imagine
> that this would only be used in a small number of cases with a serial
> 10GbE approach. I don't think it needs to be a core concern of the
> group, but in some dark fiber trunking applications it can be useful.
>
> I am most concerned about wavelengths vs. eye safety, and wavelengths
> vs. fiber attenuation. This could end up being a real killer. Four
> lasers at 850 nm or 1310 nm put out quite a bit of light in an
> eye-sensitive range. As I remember, four lasers at 1550 nm offer a lot
> more margin. A single source at 1550 nm could be very strong and still
> meet the eye-safety requirements. This increase in power combined with
> lower fiber attenuation would reduce some of the link distance
> problems that we're bound to run into.
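The argument above combines two effects, which a back-of-the-envelope link
budget makes concrete: at 1550 nm more power can be launched within Class 1
eye-safety limits, and less of it is lost per kilometer. The attenuation
figures below are typical textbook values for standard single-mode fiber, and
the launch powers are illustrative assumptions, not eye-safety limits from any
standard.

```python
# Back-of-the-envelope link budget: received power = launch power - fiber loss.
# Attenuation coefficients are typical values for standard single-mode fiber;
# launch powers in the example are illustrative assumptions.

ATTENUATION_DB_PER_KM = {1310: 0.35, 1550: 0.25}


def received_power_dbm(launch_dbm, wavelength_nm, distance_km):
    """Power arriving at the receiver after span loss, in dBm."""
    return launch_dbm - ATTENUATION_DB_PER_KM[wavelength_nm] * distance_km


# Same 40 km span, with a hypothetically higher eye-safe launch at 1550 nm:
p1310 = received_power_dbm(-1.0, 1310, 40)  # about -15 dBm at the receiver
p1550 = received_power_dbm(+3.0, 1550, 40)  # about -7 dBm at the receiver
```

Under these assumed numbers the 1550 nm link delivers roughly 8 dB more power
to the receiver, which is the link-distance relief described above.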
>
> Also, long term I can't see how [4 lasers and an optical mux] + [4
> photodiodes and an optical de-mux] would be better than a single
> source and photodiode. There is a lot of difficult packaging involved
> in the CWDM approach. I think the CWDM solution offers a quicker path
> to market because most of that technology is available today. But long
> term a single 10 Gb/s source (an uncooled DFB without an isolator) has
> a lot of advantages. It is intrinsically much simpler. I think the
> board layout and chip-sets will eventually support this as well. If
> the standard wanted to be able to scale beyond 10 Gb/s, even the
> serial 10 Gb/s solution could allow further CWDM scaling.
>
> Regards,
> Bryan Gregory
> bgregory@xxxxxxxxx
> 630/512-8520
>
>
> ______________________________ Reply Separator
> _________________________________
> Subject: RE: 1310nm vs. 1550nm window for 10GbE
> Author: "Bill St. Arnaud" <bill.st.arnaud@xxxxxxxxxx> at INTERNET
> Date: 5/6/99 10:38 AM
>
>
> Hmmm. I just assumed that 802.3 HSSG would be looking at 1550 nm
> solutions as well as 1310 nm and 850 nm.
>
> I agree with you that on longer-haul links it makes a lot more sense
> to operate at 1550 nm.
>
> I am not a big fan of EDFA pumping. It significantly raises the
> overall system cost. It only makes sense in the very dense-wavelength
> long-haul systems typically deployed by carriers.
>
> CWDM with 10xGbE transceivers should be significantly cheaper. That is
> another reason why I think there will be a big market for 10xGbE, with
> all those transceivers every 30-80 km on a CWDM system. However, there
> is a tradeoff: there is a greater probability of laser failure with
> many transceivers, and a need for many spares. I figure somewhere
> between 4 and 8 wavelengths on a CWDM system is the breakpoint where
> it is probably more economical to go to DWDM with an EDFA. Also, an
> EDFA is protocol- and bit-rate transparent.
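The failure-probability side of the tradeoff above scales in a simple way: the
chance that at least one laser in a span fails grows with the number of
independent transceivers. A minimal sketch, where the per-laser failure
probability is an illustrative assumption rather than a measured FIT rate:

```python
# Probability that at least one of n independent lasers fails, given a
# per-laser failure probability p. The 1% figure below is an assumption
# for illustration only.

def p_any_failure(per_laser_p, n_lasers):
    """P(at least one failure) = 1 - P(all survive)."""
    return 1.0 - (1.0 - per_laser_p) ** n_lasers


# Assumed 1% failure probability per laser over some service interval:
p4 = p_any_failure(0.01, 4)  # 4-wavelength CWDM span
p8 = p_any_failure(0.01, 8)  # 8-wavelength CWDM span: roughly double the risk
```

This is consistent with the point that more wavelengths mean more spares and
more maintenance visits, pushing the economics toward DWDM with an EDFA at
some channel count.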
>
> An EDFA will ..(edited)..... But the EDFA window is very small, so
> wavelength spacing is very tight, requiring expensive filters and very
> stable, temperature-compensated lasers at each repeater site. Also,
> laser power has to be carefully maintained within 1 dB, otherwise you
> will get gain tilt in the EDFAs. The loss of a signal laser can throw
> the whole system off; that is why you need SONET protection switching.
> But companies are developing feedback techniques to adjust the power
> on the remaining lasers to solve this problem.
>
> A single 10xGbE transceiver will .(edited)....??? Probably less. So 6
> 10xGbE transceivers will equal one EDFA. No problems with gain tilt.
> If you lose one laser, you only lose that channel, not the whole
> system. Protection switching is not as critical, etc.
>
> Bill
>
>
>
> -------------------------------------------
> Bill St Arnaud
> Director Network Projects
> CANARIE
> bill.st.arnaud@xxxxxxxxxx
> http://tweetie.canarie.ca/~bstarn