To me your current (and prior) sentence suggests that the validity of “the assumption that optics cost scales linearly with number of wavelengths” won’t be determined until next year. And I’m not sure where you get your 3.5W for a 100G per wavelength solution… the actual number will be far less (around 1.25-1.5 W).
Brian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Ian,
Here are the closing sentences of my email: “So a credible PSM4 proposal has to show why it is significantly lower cost than what will flood the market next year. We already know that with current generation CMOS it is significantly higher module power. Projected 14nm CMOS ASIC power will at best result in comparable module power. That is not a compelling story.” My conclusion is that the story is not compelling because it does not offer lower cost and is higher power.
Today, the assumption that optics cost scales linearly with the number of wavelengths (as articulated in your comment that CAPEX is 4x or 2x) is no longer valid. This used to be the case when every wavelength was a separate, discrete OSA, for example as in Gen1 100G LR4, which used four discrete EMLs in gold boxes. It bears no relation to the reality of datacenter WDM optics shipping in volume today, like 40G LR4 and 100G LR4, and 100G CWDM4 next year. That is because the optics are integrated. And the cost of WDM integration is on a steep decline curve, as an inexplicably large number of vendors are jumping into the fray to offer WDM optics. It makes for a miserable existence for optics suppliers, and life in the easy lane for system OEMs.
Your comment that 1W doesn’t matter in client optics is not accurate. 50G/lambda at 1W per lane is a credible projection, given a similar architectural approach to 25G/lambda but at 2x the speed. (Next year, 4x25G optics will be 3W to 3.5W, which is ~0.8W per 25G lane.) This will eventually enable a 2x50G/lambda solution at ~2W, so we can pack two 100G channels into a 200G QSFP module (with 50G electrical I/O lanes enabling two CAUI-2 interfaces). A 100G/lambda solution is projected at 3.5W with a 14nm CMOS ASIC, which means it will remain one 100G optical interface per QSFP, exactly the same as we get with LR4 or CWDM4. So 50G/lambda will enable doubling 100G port density (and quadrupling 40G port density) over today’s 100G port density, while after a huge R&D investment 100G/lambda will at best duplicate today’s 100G port density.
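A quick tabulation of the power numbers quoted above, normalized to watts per 100G of capacity (the ~4W figure for a 200G QSFP is an extrapolation of 2 x ~2W; all values are the email's projections, not measurements):

```python
# Watts per 100G of capacity, from the projections quoted above.
modules = {
    # name: (module watts, number of 100G channels per QSFP)
    "4x25G (CWDM4/LR4, next year)":   (3.25, 1),  # 3.0-3.5 W quoted
    "4x50G (200G QSFP, two CAUI-2)":  (4.0,  2),  # 2 x ~2 W, extrapolated
    "1x100G serial (14nm CMOS ASIC)": (3.5,  1),  # projected
}
for name, (watts, channels) in modules.items():
    print(f"{name}: ~{watts / channels:.1f} W per 100G")
```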
Chris

From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]
Hi Chris,
Saying that comparable module power with a 14nm ASIC is not a compelling story only addresses the power issue, not the cost one. Regardless of the progress with photonic integration (assuming issues like yield, reliability, and manufacturing in volume can be dealt with), it’s difficult to see how the optics for a higher-channel-count solution (16x25G or 8x50G) can end up cheaper than a 4x100G solution, especially if the bandwidth requirements are similar, as they are with DMT. If optical integration brings down the cost of 16x25G or 8x50G, it will bring down the cost of 4x100G even more (and DMT more than PAM4, because of its lower bandwidth). This may be more in the interests of the end customers than the component suppliers, but that’s a problem the industry has to face up to.
There is exactly the same supplier/customer conflict regarding the number of wavelengths – several large customers are saying they really want 100G/lambda because
they perceive this as not only being the lowest cost solution with the longest lifetime, but also allowing denser multiplexing for longer reaches; so as long as the power is acceptable, they would much prefer this to 8x50G or 16x25G.
In the end cost matters; a 1W/ch power difference per 100G may sound like a lot, but I calculate this saves about $1/year on OPEX.
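A back-of-envelope check of that ~$1/year figure (the ~$0.10/kWh electricity price is an assumption, and cooling overhead is ignored):

```python
# OPEX of a 1 W power difference, run continuously for a year.
watts = 1.0
hours_per_year = 24 * 365          # 8760 h
usd_per_kwh = 0.10                 # assumed electricity price
opex = watts * hours_per_year / 1000 * usd_per_kwh
print(f"~${opex:.2f} per year per 100G channel")   # ~$0.88/year
```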
What do you think the additional CAPEX cost is for 2x or 4x the number of optical channels, even allowing for photonic integration? Many people might think that this *is* a compelling story.
Cheers,
Ian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Hi Brian,
The good news for the PSM4 Economic Feasibility baseline is that we don’t need to model or speculate. We have a real baseline: CWDM4. Next year we will see many CWDM4 QSFP28 products, using a variety of technical approaches, most of which will show that the preoccupation in .bs with the number of lanes or lasers inside a module is excessive. The ability to integrate WDM is becoming commonplace, as shown for example by many presenters at last week’s ECOC’14 conference, who reported 4x25G WDM (or greater lane count) optical chips as well as novel packaging approaches.
So a credible PSM4 proposal has to show why it is significantly lower cost than what will flood the market next year. We already know that with current generation CMOS it is significantly higher module power. Projected 14nm CMOS ASIC power will at best result in comparable module power. That is not a compelling story.
Chris
From: Brian Welch [mailto:bwelch@xxxxxxxxxxx]
Chris,
On the 500m economic feasibility question: how much cost increase are you modeling when you double the number of lanes, double the number of lasers, double the number of photodiodes, and add in mux & demux filters for the 50G per lane proposal?
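One way to make the question concrete is a toy linear cost model; every weight and name below is a hypothetical illustration, not anyone's actual cost data:

```python
# Hypothetical relative-cost model (arbitrary units, illustrative weights).
def relative_cost(lanes, lasers, photodiodes, has_mux_demux,
                  w_lane=1.0, w_laser=2.0, w_pd=0.5, w_mux=3.0):
    return (lanes * w_lane + lasers * w_laser + photodiodes * w_pd
            + (w_mux if has_mux_demux else 0.0))

# 4 lanes of 100G/lambda (parallel fiber, no mux) vs 8 lanes of 50G/lambda
# with mux/demux filters added:
print(relative_cost(4, 4, 4, False))   # 14.0
print(relative_cost(8, 8, 8, True))    # 31.0 -- over 2x in this toy model
```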
Brian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]

Hi Gary,
All good comments, and a great exercise in quantifying the transition to 100G LR4. Which existing PMD to compare against proposed PMDs depends on which of the 802.3 Five Criteria we are addressing.
For the 400G duplex SMF (2km and 10km reach) objectives, we are determining Technical Feasibility. We have a lot of experience with 100GBASE-LR4, so it’s the right technical benchmark. Significantly harder specifications than LR4 raise a Technical Feasibility concern.
For the 400G PSM4 (500m reach) objective, I had done the exercise of comparing against LR4, and all the proposed SMF PMDs are fine within the limitations of simple link budget analysis. (This assumed a high coding gain FEC like BCH for the 100G per lambda PMDs.) The next step is to determine Economic Feasibility. CWDM4 uses KR4 FEC to significantly relax the TX and RX optical specifications, thereby reducing cost, and it will determine the 100G SMF market dynamics starting next year. This makes it the right economic benchmark. Significantly harder specifications than CWDM4 raise an Economic Feasibility concern.
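A minimal sketch of the kind of simple link budget check referred to above; every number here is an illustrative assumption, not an 802.3bs spec value:

```python
# Simple per-lane link budget: does TX power minus losses and penalties
# still clear the receiver sensitivity at the FEC threshold?
tx_oma_dbm    = 2.0    # assumed launch OMA per lane
mux_demux_db  = 4.0    # assumed 2 dB mux + 2 dB demux (4-channel WDM)
fiber_conn_db = 2.0    # assumed fiber + connector loss over the reach
penalties_db  = 1.5    # assumed dispersion/crosstalk penalties
rx_sens_dbm   = -8.0   # assumed RX sensitivity at the FEC threshold

margin_db = (tx_oma_dbm - mux_demux_db - fiber_conn_db
             - penalties_db) - rx_sens_dbm
print(f"Link margin: {margin_db:.1f} dB")   # positive => link closes
# A higher-gain FEC (e.g. BCH vs KR4) lowers rx_sens_dbm, buying margin
# at the cost of codec complexity and power.
```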
When comparing Gen1 100GBASE-LR4 against a predecessor, the appropriate comparison is against 40G serial OC768 (which later became 40GBASE-FR), the highest speed optic existing at the time. The Gen1 100GBASE-LR4 EML-based TX specs used OC768 optics as the starting point.
http://www.ieee802.org/3/bs/public/14_07/cole_3bs_02a_0714.pdf#page=5
Subsequent generation 100GBASE-LR4 DML-based specs used 10GBASE-LR as the starting point.
For EML-based LR4 (Gen1), the relative receiver sensitivity is much harder than OC768. However, receiver sensitivity had improved dramatically in the decade since the OC768 spec was written (as can be seen in the receiver comparison column against LR). For DML-based LR4 (subsequent generations), the TX power is higher. This was not a major issue, because some optical power has to be thrown away in 10G LR TOSAs anyway to stay under the max power limit.
The main takeaway from the comparison is that the right use of DSP is to relax optical specifications, as was done in going from LR4 to CWDM4. When DSP significantly increases the difficulty of the optical specifications, for example as seen in going from LR4 or CWDM4 to the 100G/lambda proposals in .bs, that seems like the wrong direction.
Chris

From: Gary Nicholl (gnicholl) [mailto:gnicholl@xxxxxxxxx]
Chris,
First, thanks for pulling this presentation together. I may still be trying to get my head around all of the numbers, but it is the analysis that ultimately we all need (and need to agree upon).
I also like your comparison to existing 100G solutions. I think your intent here is to try and quantify the complexity/difficulty of the different 400GbE PMD proposals in relation to something that we are all familiar with, i.e. developing and delivering the first 100G solutions. I was about to make a point at the end of the meeting, but my phone died! The point I was going to make is that I think we should use the same basis for comparison for all of the 400G PMD objectives, i.e. 400G 500m PSM4, 400G 2km duplex, and 400G 10km duplex. Using a common basis makes it possible to compare not only between different options within a single PMD objective, but also between solutions for different PMD objectives, i.e. how much harder is 2km duplex versus PSM4, or how much harder is 10km duplex versus 2km duplex, etc. With this in mind I would propose using 100G-LR4 as the basis for all comparisons. What we are investigating here are 1st generation 400GbE PMD solutions, and it makes sense to me to compare against a similar stage in the 100G project, i.e. the first generation 100GbE SMF PMD. Personally, it doesn’t make a lot of sense to be comparing against a 3rd generation 100G-CWDM4 solution
(and especially not for a single PMD objective while using 100G-LR4 for the others).
We next need to agree on how to interpret the comparison data. In your presentation you show ‘total delta dB’ numbers ranging from 1.5dB to 7.9dB. How do we interpret these numbers? Is there a specific dB number (or range of numbers) we should be targeting for an optimal solution, below which we are not trying hard enough, and beyond which we are pushing the technology too hard? To provide some insight here, I went back and tried to carry out a similar analysis for the transition from 10G-LR to 100G-LR4, to see "how hard we were pushing" when we chose 100G-LR4 as part of the 802.3ba project. The numbers I came up with are as follows (I encourage others to run their own analysis in case I made an error somewhere!), summed in the quick check below:
- Tx OMA delta (pre-mux): 5.9dB
- Rx sensitivity OMA delta (post-demux): 3.9dB
- Total delta: 9.8dB
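Gary's metric, restated in a couple of lines (numbers taken from the list above and from the 1.5-7.9dB range quoted earlier):

```python
# Total "how hard are we pushing" delta = Tx delta + Rx sensitivity delta.
tx_delta_db = 5.9   # 10G-LR -> 100G-LR4, Tx OMA (pre-mux)
rx_delta_db = 3.9   # 10G-LR -> 100G-LR4, Rx sensitivity OMA (post-demux)
total_db = tx_delta_db + rx_delta_db
print(f"Total delta: {total_db:.1f} dB")  # 9.8 dB vs 1.5-7.9 dB for 400GbE
```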
A delta of 9.8dB is significantly larger than for any of the 400GbE options on the table. Does this mean we were pushing the technology much harder for 100GbE than we are now for any of the 400GbE options? I suspect not. Perhaps it means that as we move to higher and higher speeds we are starting to push against some fundamental limits, and extra dBs are harder to come by? The bottom line is likely that while a comparison may provide useful insight, the devil is in the interpretation of the numbers.
Gary

From: Ali Ghiasi <aghiasi@xxxxxxxxx>

Chris,
There were a couple of questions on your presentation today regarding what to use for optical mux/demux loss. Looking at a couple of suppliers (Cubo, AFOP, Oplink) indicates that what you are using is reasonable:
- 2 dB - 4 channel mux/demux
- 3 dB - 8 channel mux/demux
The AFOP CWDM 4 channel mux has a loss of 1.5 dB, and the 8 channel version only 2 dB. Other suppliers’ losses are a little higher, more in line with what you have: 2 dB for 4 channels and 3 dB for 8 channels.
On slide 5 you referenced http://www.ieee802.org/3/bs/public/14_07/bhatt_3bs_01a_0714.pdf, which has an error floor of 4E-4 for 106.25 Gb/s PAM4. Bhatt’s results were based on a to-be-published ECOC paper by M. Poulin. However, the published ECOC results are a little worse than what was presented in IEEE. Here are the BER results published at ECOC:
- 53 GBd PAM4: BER = 2.9E-3
- 40 GBd PAM4: BER = 2.4E-4
- 30 GBd PAM4: BER = 1E-6
It looks like if you give PAM4 enough bandwidth, as in the case of 30 GBd, then the BER improves and the error floor improves.
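For reference, PAM4 carries 2 bits per symbol, so each of the baud rates above maps directly to a line rate (pairing "53 GBd" with 106.25 Gb/s assumes the nominal rate is 53.125 GBd):

```python
import math

# PAM4: log2(4) = 2 bits per symbol, so line rate = 2 x baud rate.
bits_per_symbol = int(math.log2(4))
for gbd, ber in [(53.125, 2.9e-3), (40.0, 2.4e-4), (30.0, 1e-6)]:
    gbps = gbd * bits_per_symbol
    print(f"{gbd:g} GBd PAM4 -> {gbps:g} Gb/s, BER {ber:g}")
```

Lower baud at the same modulation needs proportionally less analog bandwidth, which is consistent with the error floor improving at 30 GBd.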
which has an error floor of 4E-4 for 106.25 Gb/s PAM4. Bhatt result were based on to be published ECOC paper M. Poulin. However the published ECOC results are little worse than what was published in IEEE. Here are the BER results published in ECOC: - 53 GBd PAM4 BER=2.9E-3 - 40 GBd PAM4 BER=2.4E-4 - 30 GBd PAM4 BER=1E-6 It looks like if you give PAM4 enough bandwidth as in the case of 30 GBd then BER improves and error floor improves. |