WWDM vs. 10Gb/s serial
- TO: bgregory@xxxxxxxxx
- Subject: WWDM vs. 10Gb/s serial
- From: BRIAN_LEMOFF@xxxxxxxxxxxxxxxxxxxxxxxxxx
- Date: Thu, 6 May 1999 18:08:34 -0700
- CC: dolfi@xxxxxxxxxxxxxxxx, bill.st.arnaud@xxxxxxxxxx, stds-802-3-hssg@xxxxxxxx, dolfi@xxxxxxxxxx, twhitlow@xxxxxxxxx
- In-Reply-To: <00101B8D.C21030@xxxxxxxxx>
- Sender: owner-stds-802-3-hssg@xxxxxxxxxxxxxxxxxx
I will try to respond to some of Bryan Gregory's remarks regarding
CWDM vs. 10-Gb/s serial. By the way, I will refer to it as WWDM
(SpectraLAN is HP's implementation of WWDM), since CWDM is apparently
used to refer to 400-GHz-spaced telecom systems, and this has caused
some confusion among people on this reflector.
First, let me say that I agree that long-term (say 6-10 years out) a
low-cost 10-Gb/s serial solution may be the simplest and cheapest
option. That having been said, I think that with today's technology
(and for several years out) WWDM will be the lowest-cost and most
useful technology for 10-GbE LAN applications.
Fiber: A 4 x 2.5-Gb/s WWDM module in the 1300-nm band should still
support useful distances of up to 300 m on the installed base of
62.5-micron-core fiber. The SpectraLAN approach, like 1000LX, will
simultaneously support multimode and single-mode applications (up to
10 km) with a single transceiver. All 10-Gb/s serial approaches that
have been proposed (excluding multilevel signaling) will require new
fiber to be installed in premises applications.
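As a rough sanity check on the 300-m figure, here is a quick
back-of-the-envelope in Python. The 500-MHz-km modal bandwidth and the
rule of thumb that a link needs about 0.7x the bit rate in channel
bandwidth are assumptions on my part, not numbers from this message:

# Hedged estimate of modal-bandwidth-limited reach on installed
# 62.5-micron fiber at 1300 nm. Both the 500 MHz-km figure and the
# 0.7*bit-rate bandwidth rule are illustrative assumptions.
def modal_reach_m(bit_rate_gbps, modal_bw_mhz_km=500.0, bw_factor=0.7):
    required_bw_mhz = bw_factor * bit_rate_gbps * 1000.0
    return 1000.0 * modal_bw_mhz_km / required_bw_mhz  # meters

print(f"2.5 Gb/s per WWDM lane: ~{modal_reach_m(2.5):.0f} m")  # ~286 m
print(f"10 Gb/s serial:         ~{modal_reach_m(10.0):.0f} m") # ~71 m

Under those assumptions, the per-lane rate lands near the 300-m figure,
while a 10-Gb/s serial stream falls well short of it on the same fiber.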
Laser Cost: At 2.5 Gb/s, low-cost uncooled, unisolated DFB lasers can
be used with no side-mode suppression requirement (double-moded lasers
are okay) up to 10 km. These lasers are readily available today in die
form at costs not much higher than the FP lasers used in 1000LX.
Linewidth, RIN, and jitter requirements at 2.5 Gb/s are MUCH easier to
realize with high yield and low-cost electrical packaging than they
are at 10 Gb/s (not to mention 12.5 Gbaud). Optical isolation will
probably be required to meet the noise and linewidth requirements for
a 10-km, 10-Gb/s serial link (Lucent presented an unisolated FP
solution for 1 km; the data they showed for a 10-km uncooled DFB link
required isolation). Given this, I believe that the 4 lasers required
for WWDM will together cost many times less than the single laser
required for serial.
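To put the jitter point in perspective, here is a small sketch of the
unit intervals involved; the 0.1-UI budget is an illustrative
assumption, not a figure from this thread:

# Unit interval (bit period) at each line rate, and the absolute
# jitter corresponding to an illustrative 0.1-UI budget.
for label, gbaud in [("2.5 Gb/s lane", 2.5), ("10 Gb/s serial", 10.0),
                     ("12.5 Gbaud", 12.5)]:
    ui_ps = 1000.0 / gbaud  # unit interval in picoseconds
    print(f"{label}: UI = {ui_ps:.0f} ps, 0.1 UI = {0.1 * ui_ps:.0f} ps")

# 2.5 Gb/s lane:  UI = 400 ps -> 40 ps budget
# 10 Gb/s serial: UI = 100 ps -> 10 ps budget
# 12.5 Gbaud:     UI =  80 ps ->  8 ps budget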
Optical Packaging Cost: The 1000LX standard has forced transceiver vendors
to develop low-cost automated alignment and precision die attach systems
for aligning edge-emitting lasers to single-mode fiber. In our WWDM
solution, we are leveraging such a system to robotically assemble and align
our 4 lasers and MUX in a fast, low-cost process. On the Rx side, only
multimode alignment tolerances are required to align the demux to the
detector array and glue it into place. The mux and demux optics themselves
are low-cost parts (many times lower cost than a micro-optical isolator).
The mux is a simple, unpolished, unpigtailed, silica waveguide chip
(several hundred devices on a standard 4" wafer). The demux is an
injection-molded plastic optical part, requiring minimal assembly. This
may sound complicated, but it is not expensive. As we get further into the
standards discussions, we'll provide more details that should help convince
the skeptics that this is a realistic and low-cost solution.
Electronics: WWDM at 2.5 Gb/s per channel works with existing low-cost
Si electronics. 10-Gb/s serial Tx and Rx ICs will require processes at
least 4 times faster. Add to this the tighter jitter and noise
requirements, the poorer performance of dielectric circuit boards, and
the higher laser current (needed to push relaxation oscillation
frequencies 4 times further out), and you have a difficult electrical
problem to solve. The cost associated with the electronics and
electrical packaging is likely to be much higher than that for
4-channel WWDM for several years.
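A quick sketch of why the transmit current grows so fast, assuming the
standard rate-equation result that the relaxation oscillation
frequency scales as the square root of the current above threshold
(the baseline current below is hypothetical):

# f_r ~ sqrt(I - I_th), so scaling f_r by k scales (I - I_th) by k**2.
def current_above_threshold_ratio(fr_ratio):
    return fr_ratio ** 2

print(current_above_threshold_ratio(4.0))  # 16.0: 4x f_r -> 16x (I - I_th)

# Hypothetical example: a bias 10 mA above threshold at 2.5 Gb/s would
# become ~160 mA above threshold for 4x the relaxation frequency.
print(16 * 10, "mA above threshold (hypothetical 10-mA baseline)")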
Scalability: Bryan made a good point that a 10-Gb/s serial solution
adopted now could be combined with WWDM later to provide even higher
capacity (e.g. 40 Gb/s). Why not adopt the WWDM (4 x 2.5 Gb/s)
solution now, when 10-Gb/s lasers and electronics are still very
expensive, and then, in a few years, increase the channel rate to
10 Gb/s? Either solution for 10-GbE is scalable to 40 Gb/s when it is
combined with the other.
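For concreteness, this sketch just restates the lane arithmetic from
the paragraph above; both upgrade paths land in the same place:

# Aggregate rate = lanes x per-lane rate, for both upgrade paths.
paths = {
    "4 x 2.5 Gb/s WWDM now, 10-Gb/s lanes later": (4, 2.5, 4, 10.0),
    "10-Gb/s serial now, 4-wavelength WDM later": (1, 10.0, 4, 10.0),
}
for name, (lanes, rate, lanes2, rate2) in paths.items():
    print(f"{name}: {lanes * rate:.0f} -> {lanes2 * rate2:.0f} Gb/s")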
Eye-safety: The proposed power budget for SpectraLAN meets the Class I
eye-safety requirement by a comfortable margin. At 1550nm it would be
even better, but increased fiber dispersion and the lack of
well-characterized fiber in the LAN make this a more difficult option.
It should be noted that 4 lasers mean 6 dB less eye-safe power
available per laser, but at 4 times the speed, for a given IC process,
a typical receiver will be at least 6 dB less sensitive, negating the
eye-safety advantage inherent in the serial approach.
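A small sketch of that dB bookkeeping; the 6-dB receiver penalty is
the "at least 6 dB" assumption named above, while the power split is
pure arithmetic:

import math

# Splitting a fixed eye-safe budget across N lasers costs 10*log10(N)
# per laser: 10*log10(4) ~ 6.02 dB.
n_lasers = 4
split_penalty_db = 10 * math.log10(n_lasers)
print(f"Per-laser penalty for {n_lasers} lasers: {split_penalty_db:.1f} dB")

# Claim from the text: at 4x the bit rate, a receiver in the same IC
# process loses at least ~6 dB of sensitivity, so the serial link's
# per-laser power advantage is spent back at the receiver.
rx_penalty_db = 6.0  # "at least 6 dB" per the text
print(f"Net serial advantage: {split_penalty_db - rx_penalty_db:+.1f} dB")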
"Inherent Simplicity": A serial approach is "inherently simple". The
question which we must answer over the coming year is which approach makes
the most practical sense from a performance and cost perspective, given the
technologies that are available today.
I hope I have at least provided a few reasons why 4 x 2.5-Gb/s WWDM
might be better than a 10-Gb/s serial approach in the near term. There
is still a lot to be learned, a lot to be demonstrated, and an awful
lot of discussion to be had before one solution is chosen over
another.
-Brian Lemoff
lemoff@xxxxxxxxxx
______________________________ Reply Separator _________________________________
Subject: Re[2]: 1310nm vs. 1550nm -> Eye Safety + Attenuation
Author: Non-HP-bgregory (bgregory@xxxxxxxxx) at HP-PaloAlto,mimegw2
Date: 5/6/99 9:32 AM
In response to Bill's email... regarding the EDFA issue, I'd imagine
that this would only be used in a small number of cases with a serial
10GbE approach. I don't think it needs to be a core concern of the
group, but in some dark fiber trunking applications it can be useful.
I am most concerned about wavelengths vs. eye safety, and wavelengths
vs. fiber attenuation. This could end up being a real killer. Four
lasers @ 850nm or 1310nm put out quite a bit of light in an
eye-sensitive range. As I remember, four lasers at 1550nm offer a lot
more margin. A single source at 1550nm could be very strong and still
meet the eye safe requirements. This increase in power combined with
lower fiber attenuation would reduce some of the link distance
problems that we're bound to run into.
Also, long term I can't see how [4 lasers and an optical mux] + [4
photodiodes and an optical de-mux] would be better than a single
source and photodiode. There is a lot of difficult packaging involved
in the CWDM approach. I think the CWDM solution offers a quicker path
to market because most of that technology is available today. But
long-term, a single 10-Gb/s source (an uncooled DFB without an
isolator) has a lot of advantages. It is intrinsically much simpler.
I think the board
layout and chip-sets will eventually support this as well. If the
standard wanted to be able to scale beyond 10 gigs, even the serial
10Gb solution could allow further CWDM scaling.
Regards,
Bryan Gregory
bgregory@xxxxxxxxx
630/512-8520
______________________________ Reply Separator _________________________________
Subject: RE: 1310nm vs. 1550nm window for 10GbE
Author: "Bill St. Arnaud" <bill.st.arnaud@xxxxxxxxxx> at INTERNET
Date: 5/6/99 10:38 AM
Hmmm. I just assumed that 802.3 HSSG would be looking at 1550
solutions as well as 1310 and 850.
I agree with you that on longer-haul links it makes a lot more sense
to operate at 1550.
I am not a big fan of EDFA pumping. It significantly raises the
overall system cost. It only makes sense in very dense-wavelength
long-haul systems, typically deployed by carriers.
CWDM with 10xGbE transceivers should be significantly cheaper. That is
another reason why I think there will be a big market for 10xGbE, with
all those transceivers every 30-80 km on a CWDM system. However, there
is a tradeoff: there is a greater probability of laser failure with
many transceivers, and a need for many spares. I figure somewhere
between 4 and 8 wavelengths (with their transceivers) on a CWDM system
is the breakpoint where it is probably more economical to go to DWDM
with EDFAs. Also, an EDFA is protocol- and bit-rate transparent.
An EDFA will ..(edited)..... But the EDFA window is very small, so
wavelength spacing is very tight, requiring expensive filters and very
stable, temperature-compensated lasers at each repeater site. Also,
laser power has to be carefully maintained within 1 dB, otherwise you
will get gain tilt in the EDFAs. The loss of a single laser can throw
the whole system off; that is why you need SONET protection switching.
But companies are developing feedback techniques to adjust power on
the remaining lasers to solve this problem.
A single 10xGbE transceiver will .(edited)....??? Probably less. So 6
10xGbE transceivers will equal one EDFA. No problems with gain tilt.
If you lose one laser, you only lose that channel, not the whole
system. Protection switching is not as critical, etc.
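Since the actual prices were edited out above, here is a rough sketch
of the crossover being described; the cost figures are purely
hypothetical placeholders, with only the 6:1 ratio taken from the
text:

# Break-even wavelength count between N CWDM transceivers per repeater
# site and one EDFA. Both cost values are hypothetical placeholders;
# the original message's numbers were edited out.
def breakeven_lanes(cwdm_txcvr_cost, edfa_cost):
    return edfa_cost / cwdm_txcvr_cost

# If one EDFA costs 6x a transceiver (the "6 transceivers will equal
# one EDFA" ratio above), CWDM wins below 6 wavelengths.
print(breakeven_lanes(cwdm_txcvr_cost=1.0, edfa_cost=6.0))  # 6.0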
Bill
-------------------------------------------
Bill St Arnaud
Director Network Projects
CANARIE
bill.st.arnaud@xxxxxxxxxx
http://tweetie.canarie.ca/~bstarn