Re: [802.3_100GNGOPTX] Emerging new reach space
Chris
I have the front seat reserved for you. Why wait a generation? Just hop on board and all your pains will go away. :)
Cheers,
Ali
On Nov 16, 2011, at 8:40 PM, Chris Cole wrote:
> Paul,
>
> The filter comparison results I summarized were for the cases studied in HSSG and 802.3ba TF, which involved either 4 or 10 wavelengths. I don't know what other implications come into play (other than a lot of loss) when a broader range of wavelengths is involved. Perhaps some of the optical filter experts on the reflector could chime in. In any case CWDM is only defined for 18 wavelengths, well short of the 64 that you are considering.
>
> With respect to >1T Ethernet, as much as all my fiber friends may be the sentimental favorites, I am afraid Ali will have the last laugh. >1T Ethernet optics will require high order modulation, large component count PICs, and dreadfully complex DSP ASICs. The best we can hope for is to hold off Ali for a couple more standards, after which we will be at the mercy of his chips. :)
>
> Chris
>
> -----Original Message-----
> From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
> Sent: Wednesday, November 16, 2011 6:51 PM
> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>
> Chris,
> Are your statements on the small differences between 20 vs 4.5 nm spacing applicable across the broader range of wavelengths (1270 - 1600 nm) that would be required in some of Jack's scenarios?
>
> Even if they do apply across the full range of wavelengths, it seems a certainty that managing 64 wavelengths would drive up costs significantly, just on the basis of dividing the volume of lasers into so many bins, not to mention the handling issues.
>
> My main take away from Jack's analysis is that parallel fiber technologies appear inevitable at some point in the evolution of single-mode solutions. So the question becomes a matter of when it is best to embrace them.
>
> On multimode we have tried using WDM for short distances with limited success. WDM is successfully used on single-mode because of the value of efficient usage of each fiber strand deployed over long distances. Data center distances do not support that same rationale, so parallel solutions can be more optimal, just as they are for multimode.
>
> Paul
>
>
> -----Original Message-----
> From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
> Sent: Wednesday, November 16, 2011 7:54 PM
> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>
> Hello Jack,
>
> You really are on a roll; lots of insightful perspectives.
>
> Let me clarify a few items so that they don't detract from your broader ideas.
>
> 1. CWDM leads to simpler optical filters versus "closer" WDM (LAN WDM)
>
> This claim may have had some validity in the past; however, it has not been the case for many years. It received a lot of attention in the 802.3ba TF during the 100GE-LR4 grid debate. An example presentation is http://www.ieee802.org/3/ba/public/mar08/cole_02_0308.pdf, where on pages 13 through 16 multiple companies showed there is no practical implementation difference between 20nm and 4.5nm spaced filters. Further, this has now been confirmed in practice: 4.5nm spaced LAN WDM 100GE-LR4 filters have been manufactured in TFF and Si technologies with no significant cost difference versus 20nm spaced CWDM 40GE-LR4 filters.
>
> If there is specific technical information to the contrary, it would be helpful to see it as a presentation in NG 100G SG.
>
> 2. CWDM leads to lower cost versus "closer" WDM because cooling is eliminated
>
> This claim has some validity at lower rates like 1G or 2.5G, but not at 100G. This has been discussed at multiple 802.3 optical track meetings, including as recently as the last NG 100G SG meeting, where we again agreed that the cost of cooling is a fraction of a percent of the total module cost. Even for a 40GE-LR4 module, the cost of cooling, if it had to be added for some reason, would be insignificant. Page 4 of the above cole_02_0308 presentation discusses why that is.
>
> This claim to some extent distracts from half a dozen other cost contributors which are far more significant; those should be at the top of the list instead of cooling. Further, if cooling happens to enable a technology that greatly reduces a significant cost contributor, then it becomes a big plus instead of an insignificant minus.
>
> If there is specific technical information to the contrary, a NG 100G SG presentation would be a great way to introduce it.
>
> 3. CWDM is lower power than "closer" WDM
>
> The real difference between CWDM and LAN WDM is that un-cooled operation is lower power. However, how much lower strongly depends on the specific transmit optics and operating conditions. In a 100G module context it can be 10% to 30%; for some situations the savings could be a lot more, and for others even less. No general quantification of the total power savings can be made; it has to be done on a case-by-case basis.
>
> Chris
>
> -----Original Message-----
> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
> Sent: Wednesday, November 16, 2011 3:20 PM
> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>
> Great inputs! :-)
> Yes, 40GBASE-LR4 is the first alternative to 100GBASE-LR4 that comes to
> mind for duplex SMF. Which raises the question: why are they different? I
> can see advantages to either (40G CWDM vs 100G closer-WDM): uncooled
> operation and simple optical filters vs better suitability for integration
> and "clipping off" the highest-temp performance requirement.
> It's constructive to look forward, and try to avoid unpleasant surprises
> of "future-proof" assumptions (think 802.3z and FDDI fiber - glad I wasn't
> there!). No one likes "forklift upgrades" except maybe forklift operators,
> who aren't well-represented here. Data centers are being built, so here's
> a chance to avoid short-sighted mistakes. How do we want 100GbE, 400GbE
> and 1.6TbE to look (rough guesses at the next generations)? Here are 3
> likely basic scenarios, assuming (hate to, but must) a 25G electrical
> interface and no electrical mux/demux, and considering duplex SMF, 4+4
> parallel SMF, and 16+16 parallel SMF:
> Generation   duplex-SMF   4+4 parallel SMF   16+16 parallel SMF
> 100GbE       4 WDM        no WDM             dark fibers
> 400GbE       16 WDM       4 WDM              no WDM
> 1.6TbE       64 WDM       16 WDM             4 WDM
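> A minimal sketch of the arithmetic behind the table, assuming 25G per
> wavelength per fiber pair as stated above (the code and its names are
> illustrative, not from any contribution):
>
>     # Wavelengths per fiber = total 25G lanes / fiber pairs.
>     RATES_GBPS = {"100GbE": 100, "400GbE": 400, "1.6TbE": 1600}
>     FIBER_PAIRS = {"duplex-SMF": 1, "4+4 parallel": 4, "16+16 parallel": 16}
>     LANE_GBPS = 25
>
>     for rate_name, rate_gbps in RATES_GBPS.items():
>         lanes = rate_gbps // LANE_GBPS      # total 25G lanes needed
>         for fiber_name, pairs in FIBER_PAIRS.items():
>             wdm = lanes // pairs            # wavelengths multiplexed per fiber
>             if wdm == 0:
>                 note = "dark fibers"        # more fiber pairs than lanes
>             elif wdm == 1:
>                 note = "no WDM"             # one wavelength per fiber
>             else:
>                 note = f"{wdm} WDM"
>             print(f"{rate_name:7} {fiber_name:16} {note}")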
> The above is independent of distances in the 300+ meter range we're
> considering. Yes, there are possibilities of PAM encoding and electrical
> interface speed increases. Historically we've avoided the former, and the
> latter is expected to bring a factor of 2, at most, for these generations.
> Together, they might buy us one additional factor-of-4 generation.
> For 40GbE or 100GbE, 20nm-spaced CWDM is nice for 4WDM (4 wavelengths). At
> 400GbE, 16WDM CWDM is a 1270-1590nm stretch, with 16 laser products
> (ouch!). 20nm spacing is out of the question for 64WDM (1.6TbE). CWDM does
> not look attractive on duplex SMF beyond 100GbE.
> OTOH, a 100GBASE-LR4-based evolution on duplex SMF, with ~4.5nm spacing,
> is already present at 100GbE. For 400GbE, it could include the same 4
> wavelengths, plus 4 below and 12 above - a 1277.5-1349.5nm wavelength span,
> which is realistic. The number of "laser products" is fuzzy, as the same
> epitaxial structure and process (except grating spacing) may be used for a
> few, but nowhere near all, of the wavelengths. For 1.6TbE 64WDM, LR4's
> 4.5nm spacing implies a 288nm wavelength span and a plethora of "laser
> products." Unattractive.
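> As a sanity check on the spans above (a minimal sketch; the first-to-last
> convention is my assumption, while the figures quoted above count n slots
> of the spacing):
>
>     # First-to-last span of an evenly spaced wavelength grid.
>     def span_nm(channels: int, spacing_nm: float) -> float:
>         return (channels - 1) * spacing_nm
>
>     print(span_nm(16, 20.0))  # 300.0 nm: 16-wavelength CWDM (cf. the 1270-1590nm stretch)
>     print(span_nm(16, 4.5))   # 67.5 nm: 16 wavelengths at LR4's 4.5nm spacing
>     print(span_nm(64, 4.5))   # 283.5 nm: 64 wavelengths (cf. the 288nm = 64 x 4.5nm above)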
> On a "4X / generational speed increase," 4+4parallel SMF gains one
> generation over duplex SMF and 16+16parallel SMF gains 2 generations over
> duplex SMF. Other implementations, e.g. channel rate increase and/or
> encoding, may provide another generation or two of "future accommodation."
> The larger the number of wavelengths that are multiplexed, the higher the
> mux/demux loss that must be absorbed in the laser-to-detector (TPlaser to
> TPdetector) link budget. More wavelengths per fiber means more power per
> channel, i.e. more power/Gbps and larger faceplate area. While duplex SMF
> looks attractive for systems implementations, it entails significant(!!)
> cost implications for laser/transceiver vendors, who may not be able to
> bear the "cost assumptions," and additional power requirements, which may
> not be tolerable for systems vendors.
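> To illustrate the loss-budget point, here is an illustrative sketch only:
> the binary-cascade topology and the ~0.8 dB per-stage figure are placeholder
> assumptions, not quoted specs.
>
>     import math
>
>     # Mux+demux insertion loss if each is a binary cascade of filters.
>     def cascade_loss_db(n_wavelengths: int, per_stage_db: float = 0.8) -> float:
>         stages = math.ceil(math.log2(n_wavelengths))  # filter stages traversed
>         return 2 * stages * per_stage_db              # mux at TX plus demux at RX
>
>     for n in (4, 16, 64):
>         print(f"{n:2d} wavelengths: ~{cascade_loss_db(n):.1f} dB mux+demux loss")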
> I don't claim to "have the answer"; rather, I attempt to frame the question
> pointedly: "How do we want to architect the next few generations of
> Structured Data Center interconnects?" Insistence on duplex SMF works for
> this and maybe the next generation, then may hit a wall. Installation of
> parallel SMF provides one or two generations of "future-proofing," with
> higher initial cost, but with lower power throughout, pushing back the need
> for those abominable "forklift upgrades."
> Jack
>
>
> On 11/16/11 1:00 PM, "Kolesar, Paul" <PKOLESAR@xxxxxxxxxxxxx> wrote:
>
>> Brad,
>> The fiber type mix in one of my contributions in September is all based
>> on cabling that is pre-terminated with MPO (MTP) array connectors. Recall
>> that single-mode fiber represents about 10 to 15% of those channels.
>> Such cabling infrastructure provides the ability to support either
>> multiple 2-fiber or parallel applications by applying or removing
>> fan-outs from the ends of the cables at the patch panels. The fan-outs
>> transition the MPO terminated cables to collections of LC or SC
>> connectors. If fan-outs are not present, the cabling is ready to support
>> parallel applications by using array equipment cords. As far as I am
>> aware this pre-terminated cabling approach is the primary way data
>> centers are built today, and has been in practice for many years. So
>> array terminations are commonly used on single-mode cabling
>> infrastructures. While that last statement is true, it could leave a
>> distorted impression if I did not also say that virtually the entire
>> existing infrastructure employs fan-outs today simply because parallel
>> applications have not been
>> deployed in significant numbers. But migration to parallel optic
>> interfaces is a matter of removing the existing fan-outs. This is what I
>> tried to describe at the microphone during November's meeting.
>>
>> Regards,
>> Paul
>>
>> -----Original Message-----
>> From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
>> Sent: Wednesday, November 16, 2011 11:34 AM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>
>> Anyone have any data on distribution of parallel vs duplex volume for
>> OM3/4 and OS1?
>>
>> Is most SMF duplex (or simplex) given the alignment requirements?
>>
>> It would be nice to have a MMF version of 100G that doesn't require
>> parallel fibers, but we'd need to understand relative cost differences.
>>
>> Thanks,
>> Brad
>>
>>
>>
>> -----Original Message-----
>> From: Ali Ghiasi [aghiasi@xxxxxxxxxxxx<mailto:aghiasi@xxxxxxxxxxxx>]
>> Sent: Wednesday, November 16, 2011 11:04 AM Central Standard Time
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>
>> Jack
>>
>> If there is to be another LR4 PMD out there, the best starting point would
>> be 40GBASE-LR4: look at its cost structure, and build a 40G/100G compatible
>> PMD.
>>
>> We also need to understand the cost difference between parallel MR4 and
>> 40GBASE-LR4 (CWDM). The 40GBASE-LR4 cost could, over time, be assumed
>> identical to that of the new 100G MR4 PMD. With this baseline cost, we can
>> then compare it against 100GBASE-LR4 and parallel MR4. The next step is to
>> take into account the higher cable and connector cost associated with a
>> parallel implementation, then identify at what reach it gets to parity with
>> 100G (CWDM) or 100G (LAN-WDM).
>>
>> In the meantime, we need to get more direct feedback from end users on
>> whether parallel SMF is even an acceptable solution for reaches of 500-1000 m.
>>
>> Thanks,
>> Ali
>>
>>
>>
>> On Nov 15, 2011, at 8:41 PM, Jack Jewell wrote:
>>
>> Thanks for this input Chris.
>> I'm not "proposing" anything here, rather trying to frame the challenge,
>> so that we become better aligned in how cost-aggressive we should be,
>> which guides the technical approach. As for names, "whatever works" :-)
>> It would be nice to have a (whatever)R4, be it nR4 or something else, and
>> an English name to go with it. The Structured Data Center (SDC) links you
>> describe in your Nov2011 presentation are what I am referencing, except
>> for the restriction to "duplex SMF." My input is based on use of any
>> interconnection medium that provides the overall lowest-cost,
>> lowest-power solution, including e.g. parallel SMF.
>> Cost comparisons are necessary but, I agree, tend to be dicey. Present
>> 10GbE costs are much better defined than projected 100GbE NextGen costs,
>> but there's no getting around having to estimate NextGen costs, and
>> specifying the comparison. Before the straw poll, I got explicit
>> clarification that "LR4" did NOT include mux/demux IC's, and therefore
>> did not refer to what is built today. My assumption was a "fair" cost
>> comparison between LR4 and (let's call it) nR4 - at a similar stage of
>> development and market maturity. A relevant stage is during delivery of
>> high volumes (prototype costs are of low relevance). This does NOT imply
>> same volumes. It wouldn't be fair to project ER costs based on SR or
>> copper volumes. I'm guessing these assumptions are mainstream in this
>> group. That would make the 25% cost target very aggressive, and a 50%
>> cost target probably sufficient to justify an optimized solution. Power
>> requirements are a part of the total cost of ownership, and should be
>> considered, but perhaps weren't.
>> The kernel of this discussion is whether to pursue "optimized solutions"
>> vs "restricted solutions." LR4 was specified through great scrutiny and
>> is expected to be a very successful solution for 10km reach over duplex
>> SMF. Interoperability with LR4 is obviously desirable, but would a
>> 1km-spec'd-down version of LR4 provide sufficient cost/power savings over
>> LR4 to justify a new PMD and product development? Is there another duplex
>> SMF solution that would provide sufficient cost/power savings over LR4 to
>> justify a new PMD and product development? If so, why wouldn't it be
>> essentially a 1km-spec'd-down version of LR4? There is wide perception
>> that SDC's will require costs/powers much lower than are expected from
>> LR4, so much lower that its solution is a major topic in HSSG. So far,
>> it looks to me like an optimized solution is probably warranted. But I'm
>> not yet convinced of that, and don't see consensus on the issue in the
>> group, hence the discussion.
>> Cheers, Jack
>>
>> From: Chris Cole <chris.cole@xxxxxxxxxxx<mailto:chris.cole@xxxxxxxxxxx>>
>> Reply-To: Chris Cole
>> <chris.cole@xxxxxxxxxxx<mailto:chris.cole@xxxxxxxxxxx>>
>> Date: Tue, 15 Nov 2011 17:33:17 -0800
>> To:
>> <STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx<mailto:STDS-802-3-100GNGOPTX@LIST
>> SERV.IEEE.ORG>>
>> Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
>>
>> Hello Jack,
>>
>> Nice historical perspective on the new reach space.
>>
>> Do I interpret your email as proposing to call the new 150m to 1000m
>> standard 100GE-MR4? ☺
>>
>> One of the problems in using today’s 100GE-LR4 cost as a comparison
>> metric for new optics is that there is at least an order of magnitude
>> variation in the perception of what that cost is. Given such a wide
>> disparity in perception, 25% can either be impressive or inadequate.
>>
>> What I had proposed as reference baselines for making comparisons are the
>> 10GE-SR (VCSEL based TX), 10GE-LR (DFB laser based TX) and 10GE-ER (EML
>> based TX) costs per bit/sec. This not only allows us to make objective
>> relative comparisons but also to decide if a technology is suitable for
>> widespread adoption by using rules of thumb like 10x the bandwidth
>> (i.e. 100G) at 4x the cost (i.e. 40% of the 10GE-nR cost per bit/sec) at
>> similar high volumes.
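>> As a worked example of that rule of thumb (a sketch; the function name is
>> mine, not an established metric):
>>
>>     # Relative cost per bit/sec of a new optic versus its 10GE baseline.
>>     def cost_per_bit_ratio(bw_multiple: float, cost_multiple: float) -> float:
>>         return cost_multiple / bw_multiple
>>
>>     # 10x the bandwidth at 4x the cost -> 40% of the 10GE-nR cost per bit/sec.
>>     print(cost_per_bit_ratio(10.0, 4.0))  # 0.4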
>>
>> Using these reference baselines, in order for the new reach space optics
>> to be compelling, they must have a cost structure that is referenced to a
>> fraction of 10GE-SR (VCSEL based) cost, NOT referenced to a fraction of
>> 10GE-LR (DFB laser based) cost. Otherwise, the argument can be made that
>> 100GE-LR4 will get to a fraction of 10GE-LR cost, at similar volumes, so
>> why propose something new.
>>
>> Chris
>>
>> From: Jack Jewell [mailto:jack@xxxxxxxxxxxxxx]
>> Sent: Tuesday, November 15, 2011 3:06 PM
>> To:
>> STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx<mailto:STDS-802-3-100GNGOPTX@LISTS
>> ERV.IEEE.ORG>
>> Subject: [802.3_100GNGOPTX] Emerging new reach space
>>
>> Following last week's meetings, I think the following is relevant to
>> frame our discussions of satisfying data center needs for low-cost
>> low-power interconnections over reaches in the roughly 150-1000m range.
>> This is a "30,000ft view,"without getting overly specific.
>> Throughout GbE, 10GbE, 100GbE and into our discussions of 100GbE
>> NextGenOptics, there have been 3 distinct spaces, with solutions
>> optimized for each: Copper, MMF, and SMF. With increasing data rates,
>> both copper and MMF specs focused on maintaining minimal cost, and their
>> reach lengths decreased. E.g. MMF reach was up to 550m in GbE, then 300m
>> in 10GbE (even shorter reach defined outside of IEEE), then 100-150m in
>> 100GbE. MMF reach for 100GbE NextGenOptics will be even shorter unless
>> electronics like EQ or FEC are included. Concurrently, MMF solutions have
>> become attractive over copper at shorter and shorter distances. Both
>> copper and MMF spaces have "literally" shrunk. In contrast, SMF solutions
>> have maintained a 10km reach (not worrying about the initial 5km spec in
>> GbE, or 40km solutions). To maintain the 10km reach, SMF solutions
>> evolved from FP lasers, to DFB lasers, to WDM with cooled DFB lasers. The
>> 10km solutions increasingly resemble longer-haul telecom solutions. There
>> is an increasing cost disparity between MMF and SMF solutions. This
>> is an observation, not a questioning of the reasons behind these trends.
>> The increasing cost disparity between MMF and SMF solutions is
>> accompanied by rapidly-growing data center needs for links longer than
>> MMF can accommodate, at costs less than 10km SMF can accommodate. This
>> has the appearance of the emergence of a new "reach space," which
>> warrants its own optimized solution. The emergence of the new reach space
>> is the crux of this discussion.
>> Last week, a straw poll showed heavy support for "a PMD supporting a 500m
>> reach at 25% the cost of 100GBASE-LR4" (heavily favored over targets of
>> 75% or 50% the cost of 100GBASE-LR4). By heavily favoring the most
>> aggressive low-cost target, this vote further supports the need for an
>> "optimized solution" for this reach space. By "optimized solution" I mean
>> one which is free from constraints, e.g. interoperability with other
>> solutions. Though interoperability is desirable, an interoperable
>> solution is unlikely to achieve the cost target. In the 3 reach spaces
>> discussed so far, there is NO interoperability between copper/MMF,
>> MMF/SMF, or copper/SMF. Copper, MMF and SMF are optimized solutions. It
>> will likely take an optimized solution to satisfy this "mid-reach" space
>> at the desired costs. To repeat: This has the appearance of the emergence
>> of a new "reach space," which warrants its own optimized solution. Since
>> the reach target lies between "short reach" and "long reach," "mid-reach"
>> is a reasonable term.
>> Without discussing specific technical solutions, it is noteworthy that
>> all 4 technical presentations last week for this "mid-reach" space
>> involved parallel SMF, which would not interoperate with 100GBASE-LR4,
>> MMF, or copper. They would be optimized solutions, and
>> interest in their further work received the highest support in straw
>> polls. Given the high-density environment of datacenters, a solution for
>> the mid-reach space would have the most impact if its operating power were
>> sufficiently low to be implemented in a form factor compatible with MMF
>> and copper sockets.
>> Cheers, Jack
>