Linear optics also either remove the possibility of having flexible ports for other media (including 10km SMF, MMF, and copper), or require that the same chips on the host work with these other media at those media’s performance/cost points.
Mike Dudek
QLogic Corporation
Director Signal Integrity
26650 Aliso Viejo Parkway
Aliso Viejo CA 92656
949 389 6269 - office.
Mike.Dudek@xxxxxxxxxx
When we talk about linear optics and modules, we should keep in mind that linear optics carry cost and power implications, and that not all advanced modulation solutions require them. You can see in my presented work non-linear PAM4 transmitters (an optical DAC approach) that use a very low-headroom CMOS driver. Mandating a completely linear optical module would prohibit such techniques and restrict implementations to high-headroom, constant-current driver approaches.
And, as I believe Chris was referring to, the SI impact of the additional routing from the electronics to the optics (i.e., host trace to module connector, module connector, and module trace) would need to be accounted for in link/jitter budget planning; the impact could be large.
Thanks,
Brian
Hi Ian,
You deserve a lot of credit for providing us with credible power estimates for DSP ASICs in several CMOS nodes, even though these contribute to higher total PMD power estimates than one might wish for. This has enabled informed debate.
With respect to the linear module interface, I agree with you that in the long term this is an area worth investigating as an approach to lowering module power. However, it is not clear that the link budget for the proposed 400G DMT PMD can be made to work with a linear interface. Current analysis shows that with a re-timed interface and conventional optics, 400G DMT has difficulty closing the 2km link budget, and cannot close the 10km link budget, even with strong FEC. A linear interface will add significant impairments. Are you planning to bring in a detailed link budget showing how 400G DMT will work with a linear interface?
With respect to end users, Brad Booth has been listed as a supporter of 100G PAM-4, although not of 100G DMT. Since you are citing his support, it would be helpful to get clarification from Brad that he supports all 100G/lambda proposals, and that he will not deploy any non-100G/lambda PMD. My impression is that his company will deploy whatever is cheapest, regardless of what is standardized, and presumably regardless of what may or may not have been supported earlier.
Brad and Tom have done a great job representing the interests of their company, and driving the standards to provide solutions optimized for their data center needs. However, as we have seen, their needs are not necessarily the same as everyone else’s. Brad and Tom have been strong supporters of PSM4, but other data center and central office operators prefer not to use PSM4. Brad is satisfied with the link budget performance of the 100G/lambda proposals, yet many end users, including mega data center operators, require a higher loss budget, either because their reach is >2km, or because their reach is <2km but incurs high loss in other ways.
Chris
From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]
Sent: Monday, October 06, 2014 9:51 AM
To: Chris Cole; IEEE_listserver
Subject: RE: [STDS-802-3-400G] Cole Presentation
Hi Chris
I didn’t mean that 1W per channel didn’t matter in client optics from a power (heat) point of view; I meant that it didn’t matter from a cost point of view, because the OPEX adder is negligible compared to the CAPEX. What we have to remember is that we’re trying to define a standard for a market in 2017, not today, and cost will be critically important or the standard will never succeed in the market. Of course power consumption has to be low enough to fit in the required module sizes, but the numbers being put forward show that all the proposed systems (from 16x25G NRZ to 4x100G DMT) will be able to do this in the required timescales.
If your premise is that integrated photonics will allow a large number of optical channels at very low marginal cost -- which is a big and unproven assumption! -- then we don’t need a new standard: just use four 4x25G 100G links, with or without WDM (16, 4, or 1 fibers), avoid the need for any “complex DSP” (PAM4 or DMT), and integrate all the 25G NRZ optics. This is “today’s solution”, but if I understand correctly, this is not what the task force is being asked to come up with.
If the incremental cost of optical channels is not so small, then the cheapest solution by 2017 will be 4x100G using DSP, with DMT allowing cheaper optics and less fiber bandwidth for WDM applications than PAM4. Even if integrated photonics delivers on its promises, I believe this will be a cheaper solution in volume than an 8-channel or 16-channel one – and if it doesn’t deliver, this will certainly be the case. This is “tomorrow’s solution”, which is what the task force is being asked to come up with.
For really high density, 4x100G DMT also allows host-based solutions, which would enable 400G in QSFP28 (4x the bits per inch of front panel!) at much lower cost per bit than 100G (and the same bandwidth for 4x the data), since the 4-channel 25G-class optics are similar, only linear rather than NRZ. But there is resistance to host-based solutions for many reasons, even though moving the DSP out of the optics makes a lot of sense from the thermal point of view as well as for density.
The in-between 50G/lambda solutions are neither fish nor fowl: they’ll either be too late to beat 16x25G on price (which will have slid down the cost curve by then, especially once integrated photonics delivers!), or they won’t offer the density and cost leap forward that we need for 400G and that a 100G/lambda solution can deliver by then.
Another issue – which may not apply to all reaches, or applies more strongly to longer-reach WDM – is that some large customers have said they either strongly prefer a 100G/lambda solution, or have even gone as far as saying they will *only* accept 100G/lambda for some applications. For example, I believe this is what Brad Booth (Microsoft) said in Ottawa before the 50G/100G straw poll, and I have heard the same view privately from others.
DSP or not DSP, that is the question ☺
Cheers
Ian
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: 02 October 2014 19:16
To: Dedic, Ian; IEEE_listserver
Subject: RE: [STDS-802-3-400G] Cole Presentation
Ian,
Here are the last few sentences of my email:
“So a credible PSM4 proposal has to show why it is significantly lower cost than what will flood the market next year. We already know that with current generation CMOS it is significantly higher module power. Projected 14nm CMOS ASIC power will, best case, result in comparable module power. That is not a compelling story.”
My conclusion is that the story is not compelling because it does not offer lower cost and is higher power. Today, the assumption that optics cost scales linearly with the number of wavelengths (as articulated in your comment that CAPEX is 4x or 2x) is no longer valid. This used to be the case when every wavelength was a separate, discrete OSA, for example in Gen1 100G LR4, which used four discrete EMLs in gold boxes.
It bears no relation to the reality of datacenter WDM optics shipping in volume today, like 40G LR4 and 100G LR4, and 100G CWDM4 next year, because those optics are integrated. And the cost of WDM integration is on a steep decline curve, as an inexplicably large number of vendors jump into the fray to offer WDM optics. It makes for a miserable existence for optics suppliers, and life in the easy lane for system OEMs.
Your comment that 1W doesn’t matter in client optics is not accurate. 50G/lambda at 1W per lane is a credible projection, assuming an architectural approach similar to 25G/lambda but at 2x the speed. (Next year, 4x25G optics will be 3W to 3.5W, i.e. ~0.8W per 25G lane.) This will eventually enable a 2x50G/lambda solution at ~2W, so we can pack two 100G channels into a 200G QSFP module (with 50G electrical I/O lanes enabling two CAUI-2 interfaces). A 100G/lambda solution is projected at 3.5W with a 14nm CMOS ASIC, which means it will remain one 100G optical interface per QSFP, exactly what we get with LR4 or CWDM4 today. So 50G/lambda will enable doubling today’s 100G port density (and quadrupling 40G port density), while 100G/lambda, after a huge R&D investment, will at best duplicate today’s 100G port density.
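To make the arithmetic explicit, here is a back-of-the-envelope sketch in Python; the power numbers are the projections quoted above, not measurements:

    # Back-of-the-envelope QSFP port density arithmetic.
    # All power numbers are the projections quoted above, not measurements.
    W_PER_50G_LANE = 1.0        # projected, ~2x the ~0.8 W/lane of 4x25G optics
    W_100G_LAMBDA_MODULE = 3.5  # projected 100G/lambda module with 14nm CMOS DSP

    qsfp_2x50g_power = 2 * W_PER_50G_LANE  # 200G QSFP: two 100G channels, ~2 W
    qsfp_2x50g_100g_ports = 2              # two 100G interfaces per QSFP cage

    qsfp_100g_lambda_power = W_100G_LAMBDA_MODULE
    qsfp_100g_lambda_ports = 1             # one 100G per QSFP, same as LR4/CWDM4

    print(f"2x50G QSFP:  {qsfp_2x50g_power:.1f} W, {qsfp_2x50g_100g_ports}x100G per cage")
    print(f"100G/lambda: {qsfp_100g_lambda_power:.1f} W, {qsfp_100g_lambda_ports}x100G per cage")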
Chris
From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]
Sent: Thursday, October 02, 2014 9:25 AM
To: Chris Cole; IEEE_listserver
Subject: RE: [STDS-802-3-400G] Cole Presentation
Hi Chris
Saying that comparable module power with a 14nm ASIC is not a compelling story is only addressing the power issue, not the cost one.
Regardless of the progress with photonic integration (assuming issues like yield, reliability, and volume manufacturing can be dealt with), it’s difficult to see how the optics for a higher-channel-count solution (16x25G or 8x50G) can end up cheaper than a 4x100G solution, especially if the bandwidth requirements are similar, as they are with DMT.
If optical integration brings down the cost of 16x25G or 8x50G, it will bring down the cost of 4x100G even more (and DMT more than PAM4 because of lower bandwidth).
This may be more in the interests of the end customers than the component suppliers, but that’s a problem the industry has to face up to.
There is exactly the same supplier/customer conflict regarding the number of wavelengths – several large customers are saying they really want 100G/lambda because they perceive it as not only the lowest-cost solution with the longest lifetime, but also as allowing denser multiplexing for longer reaches; so as long as the power is acceptable, they would much prefer this to 8x50G or 16x25G.
In the end cost matters; a 1W/ch power difference per 100G may sound like a lot, but I calculate that it saves about $1/year in OPEX. What do you think the additional CAPEX is for 2x or 4x the number of optical channels, even allowing for photonic integration?
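For reference, a minimal sketch of that OPEX arithmetic in Python (the $0.10/kWh electricity price is my assumption; scale linearly for your tariff and PUE):

    # OPEX saving from a 1 W per 100G power difference over one year.
    # $0.10/kWh is an assumed tariff; scale for other tariffs or to include PUE.
    watts_saved = 1.0
    kwh_per_year = watts_saved * 24 * 365 / 1000.0  # 8.76 kWh
    usd_per_kwh = 0.10
    print(f"~${kwh_per_year * usd_per_kwh:.2f}/year")  # ~$0.88/year, i.e. about $1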
Many people might think that this *is* a compelling story.
Cheers
Ian
Hi Brian,
The good news for the PSM4 Economic Feasibility baseline is that we don’t need to model or speculate. We have a real baseline: CWDM4.
Next year we will see many CWDM4 QSFP28 products, using a variety of technical approaches, most of which will show that the preoccupation in .bs with the number of lanes or lasers inside a module is excessive. The ability to integrate WDM is becoming commonplace, as reported by many presenters at last week’s ECOC’14 conference, who showed 4x25G WDM (or greater lane count) optical chips as well as novel packaging approaches. So a credible PSM4 proposal has to show why it is significantly lower cost than what will flood the market next year. We already know that with current generation CMOS it is significantly higher module power. Projected 14nm CMOS ASIC power will, best case, result in comparable module power. That is not a compelling story.
Chris
Chris,
On the 500m economic feasibility question: how much cost increase are you modeling when you double the number of lanes, double the number of lasers, double the number of photodiodes, and add Mux & Demux filters for the 50G-per-lane proposal?
Brian
Hi Gary,
All good comments, and a great exercise in quantifying the transition to 100G LR4.
Which existing PMD to compare the proposed PMDs against depends on which of the 802.3 Five Criteria we are addressing.
For 400G duplex SMF (2km and 10km reach objectives) we are determining Technical Feasibility. We have a lot of experience with 100GBASE-LR4, so it’s the right technical benchmark. Significantly harder specifications than LR4 raise a Technical Feasibility concern.
For the 400G PSM4 (500m reach) objective, I had done the exercise of comparing against LR4, and all the proposed SMF PMDs are fine within the limitations of simple link budget analysis (this assumed a high-coding-gain FEC like BCH for the 100G-per-lambda PMDs). The next step is to determine Economic Feasibility. CWDM4 uses KR4 FEC to significantly relax the TX and RX optical specifications, thereby reducing cost, and it will determine the 100G SMF market dynamics starting next year. This makes it the right economic benchmark. Significantly harder specifications than CWDM4 raise an Economic Feasibility concern.
When comparing Gen1 100GBASE-LR4 against a predecessor, the appropriate comparison is against 40G serial OC768 (which later became 40GBASE-FR), the highest-speed optic in existence at the time. The Gen1 100GBASE-LR4 EML-based TX specs used OC768 optics as the starting point.
http://www.ieee802.org/3/bs/public/14_07/cole_3bs_02a_0714.pdf#page=5
Subsequent-generation 100GBASE-LR4 DML-based specs used 10GBASE-LR as the starting point.
|                                 | 100G LR4 (EML) vs OC768 (FR) specs | 100G LR4 (DML) vs 10G LR specs | 100G CWDM4 (DML) vs 10G LR specs |
| TX OMA delta (pre-Mux), dB      | -1.0 | 4.9  | 3.0 |
| RX Sens. delta (post-DeMux), dB | -4.5 | -1.2 | 0.0 |
| Total delta, dB                 | 3.5  | 6.1  | 3.0 |
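The Total delta row is simply the TX OMA delta minus the RX sensitivity delta; a quick check in Python:

    # Total spec delta (dB) = TX OMA delta - RX sensitivity delta.
    # A negative RX sensitivity delta means a harder (more sensitive) receiver spec.
    def total_delta(tx_oma_db, rx_sens_db):
        return tx_oma_db - rx_sens_db

    print(f"{total_delta(-1.0, -4.5):.1f} dB")  # 100G LR4 (EML) vs OC768 (FR): 3.5
    print(f"{total_delta(4.9, -1.2):.1f} dB")   # 100G LR4 (DML) vs 10G LR:     6.1
    print(f"{total_delta(3.0, 0.0):.1f} dB")    # 100G CWDM4 (DML) vs 10G LR:   3.0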
For the EML-based LR4 (Gen1), the receiver sensitivity is much harder than OC768’s. However, receiver technology had improved dramatically in the decade since the OC768 spec was written (as can be seen in the receiver comparison column against 10G LR). For the DML-based LR4 (subsequent generations), the TX power is higher. This was not a major issue, because some optical power has to be thrown away in 10G LR TOSAs anyway to stay under the max power limit.
The main takeaway from the comparison is that the right use of DSP is to relax optical specifications, as was done in going from LR4 to CWDM4. When DSP significantly increases the difficulty of the optical specifications, as seen for example in going from LR4 or CWDM4 to the 100G/lambda proposals in .bs, that seems like the wrong direction.
Chris
First, thanks for pulling this presentation together. I may still be trying to get my head around all of the numbers, but it is the analysis that we all ultimately need (and need to agree upon).
I also like your comparison to existing 100G solutions. I think your intent here is to try to quantify the complexity/difficulty of the different 400GbE PMD proposals in relation to something we are all familiar with, i.e. developing and delivering the first 100G solutions. I was about to make a point here at the end of the meeting but my phone died! The point I was going to make is that I think we should use the same basis of comparison for all of the 400G PMD objectives, i.e. 400G 500m PSM4, 400G 2km duplex, and 400G 10km duplex. Using a common basis makes it possible not only to compare different options within a single PMD objective, but also to compare solutions across PMD objectives, e.g. how much harder is 2km duplex versus PSM4, or 10km duplex versus 2km duplex, etc. With this in mind I would propose using 100G-LR4 as the basis for all comparisons. What we are investigating here are 1st generation 400GbE PMD solutions, and it makes sense to me to compare against a similar stage in the 100G project, i.e. the first generation 100GbE SMF PMD. Personally, it doesn’t make a lot of sense to me to be comparing against a 3rd generation 100G-CWDM4 solution (and especially not for a single PMD objective while using 100G-LR4 for the others).
We next need to agree on how to interpret the comparison data. In your presentation you show ‘total delta dB’ numbers ranging from 1.5dB to 7.9dB. How do we interpret these numbers? Is there a specific dB number (or range of numbers) we should be targeting for an optimal solution, below which we are not trying hard enough, and beyond which we are pushing the technology too hard?
To provide some insight, I went back and carried out a similar analysis for the transition from 10G-LR to 100G-LR4, to see "how hard we were pushing" when we chose 100G-LR4 as part of the 802.3ba project. The numbers I came up with are as follows (I encourage others to run their own analysis in case I made an error somewhere!):
Tx OMA delta (pre mux): 5.9dB
Rx Sen OMA delta (post demux): 3.9dB
A total delta of 9.8dB (5.9dB + 3.9dB) is significantly larger than for any of the 400GbE options on the table. Does this mean we were pushing the technology much harder for 100GbE than we are now for any of the 400GbE options? I suspect not. Perhaps it means that as we move to higher and higher speeds we are starting to push against fundamental limits, and extra dBs are harder to come by? The bottom line is that while the comparison may provide useful insight, the devil is in the interpretation of the numbers.
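Using the same convention as your presentation (a harder receiver spec enters as a negative delta), my numbers combine the same way; a quick check, assuming I have the signs right:

    # Same formula as the presentation: total = TX OMA delta - RX sensitivity delta.
    # My RX number above is quoted as a magnitude, so it enters here as -3.9 dB.
    print(f"{5.9 - (-3.9):.1f} dB")  # 10G-LR -> 100G-LR4: 9.8 dB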
Chris
There were a couple of questions on your presentation today regarding what to use for optical Mux/De-Mux loss. Looking at a couple of suppliers (Cubo, AFOP, Oplink) indicates that the values you are using are reasonable:
2 dB - 4 Channel Mux/De-Mux
3 dB - 8 Channel Mux/De-Mux
The AFOP CWDM 4-channel mux has a loss of 1.5dB, and the 8-channel version only 2dB. Other suppliers’ losses are a little higher, more in line with what you have: 2dB for 4 channels and 3dB for 8 channels.
However, the published ECOC results are a little worse than what was published in IEEE. Here are the BER results published at ECOC:
It looks like if you give PAM4 enough bandwidth, as in the 30 GBd case, then the BER improves and the error floor improves.