Ian,

I believe the approach I suggested will get to QSFP100, when the 4x100G serial electrical interface is ready, with cost/power per unit throughput lower than what is achieved with 100G solutions. The electrical interface is still the open question there, although I think extending to a 52 GBd PAM4 interface can work as well (from a likely 26 GBd PAM4 for 50 Gbps). You would likely give up some reach (i.e., maybe 5" reach vs. 10" for 50 Gbps links), although there we could take a page from the PCIe playbook and spec a repeater function for longer lengths. Doing so, you could maintain a high percentage of direct ASIC-to-transceiver links (i.e., server side and inner switch ports) and use the repeaters in limited instances (i.e., outer switch ports). Of course I leave it to the electrical I/O guys to determine what is best there, and will keep my focus on the optical I/O.

I agree with you on the flexibility vs. density argument, and when you limit density you invariably drive up cost (imagine how much copper would cost if it were limited to 1-4 I/Os per silicon IC). Going with higher-density integrated optics could drive down cost considerably; however, it's not for everyone (at least not yet).

I'm curious how much loss you feel is inherent in the laboratory test setups evaluated thus far? How much impact do you think this has on the measurement results shown to date?

Thanks,
Brian

From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]
Hi all,

Mike, I agree that this is one issue with a host-based solution. However, there is a basic question here: which is more important, flexibility or density? As in so many other applications, there is a price to be paid for flexibility, and in this case the price is lower density and probably higher cost. In applications where density is paramount and flexibility is not (because the life of the system is less than the TTM of the next standard), a host-based solution makes sense. In others, where density is less important and longevity and flexibility matter more, it doesn't. You pays yer money and you takes yer choice :-)

Brian, you're correct that going to linear optics forces the use of a linear driver. However, can you suggest any other way to get 400G into a QSFP28 with cost not so much higher (even with the host DSP) than 100G in a QSFP28? There is of course a loss budget needed in such a system; but bear in mind that it's almost certainly lower than in the working systems already being demonstrated using DMT (and PAM4), which use ADC and DAC chips each mounted on a PCB with significant loss (relatively long tracks), with connectors and cables across to another PCB with the optics. I can provide the measured loss figures for such demo systems; other people with more knowledge about interfaces to pluggable optical modules can presumably provide figures for comparison, but I'd be very surprised if they're worse.

I'm not suggesting this as a one-size-fits-all solution, but what I'm hearing is that it may fit some high-density applications very well indeed, and it can be realized with exactly the same components as a 100G/lambda DSP-in-module realization, assuming the modulation format and DSP can deal with channel imperfections, which has been demonstrated in hardware (see above). Ignoring the market demand for such a solution risks that customers who want it will go their own way with a proprietary one, thus reducing the market size for standardized 400GE and putting the cost up for everybody else. The same applies to the increasing demand for solutions up to 40km or above, driven by customers like China Telecom and backhaul/fronthaul applications, because these are telecom rather than data-driven markets and the IEEE is focused on data. (Maybe "ignoring" is too strong a word; perhaps "dismissing" is better, or saying "this isn't within the task force remit", which may be another way of saying "we don't want to think about this right now", or "not our market segment", or "who do these telecom guys think they are, telling the IEEE what to do?"…)

Chris, regarding link budgets, you already said that the link budgets are simplified, approximate and incomplete, so it would currently be unwise to take these as the final metric of whether a system will work or not, regardless of how much people like to see them. Since working systems up to 10km (or even 40km), with added optical attenuation to give link margin, have been demonstrated with 100G/lambda using both DMT and PAM4, might I suggest that if the link budgets say these don't work, the link budgets are wrong? In my experience, when real life and calculation/simulation disagree, real life is usually correct :-)

I think Brad already said what you're suggesting he should clarify; you could ask him to restate his position on this. And regarding reach and link budget, I guess his need for 100G/lambda may be driven by the need to maximize WDM fiber capacity for >2km reaches between data centers, which means he must also be happy that these applications can be served by 100G/lambda regardless of currently published loss budgets. You're correct that Brad's needs are not the same as everybody else's; however, I have seen exactly the same 100G/lambda request/requirement (and enthusiasm for dense host-based solutions) from other very large consumers of optical links, so we're certainly not talking about a minority opinion here.

I have one other comment about the need for "gearboxes" if the client- and line-side rates are different, which is often put forward as a big problem by the "same rate" proponents of 25G/25G or 50G/50G (and a reason against 100G/lambda). All proposals under consideration are retimed and regenerated to separate out the client-side and line-side losses; in other words, there is always a CDR on the RX side and a driver on the TX side, and a chip is needed to do this. Once you have this, the cost (power or dollars) of an integer rate change in either direction is negligible because it's just a register bank inside the chip, and in fact if the data is transferred from RX to TX in parallel internally (which will usually be the case) the cost may literally be zero if the internal interfaces are defined to be the same width.
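As a minimal sketch of what that register-bank repacking amounts to (the lane counts and word widths below are arbitrary illustrations, not values from any proposal):

```python
# Illustrative only: repacking a parallel data word between lane configurations.
# Lane counts and word widths are arbitrary examples, not values from any spec.

def serialize(lanes, bits_per_lane):
    """Interleave per-lane bit lists into one parallel word (round-robin)."""
    word = []
    for i in range(bits_per_lane):
        for lane in lanes:
            word.append(lane[i])
    return word

def deserialize(word, num_lanes):
    """Split a parallel word back out over num_lanes lanes (round-robin)."""
    lanes = [[] for _ in range(num_lanes)]
    for i, bit in enumerate(word):
        lanes[i % num_lanes].append(bit)
    return lanes

# Example: 8 client lanes -> internal parallel word -> 4 line lanes at 2x the rate.
client_lanes = [[(lane + i) % 2 for i in range(16)] for lane in range(8)]
word = serialize(client_lanes, 16)   # the "register bank" inside the chip
line_lanes = deserialize(word, 4)    # same bits, just regrouped onto fewer lanes

assert sum(len(l) for l in line_lanes) == sum(len(l) for l in client_lanes)
```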
Cheers,
Ian

From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxx]

Linear optics also either remove the possibility of having flexible ports for other media (including 10km SMF, MMF, and copper), or would require that the same chips on the host can work with those other media at their performance/cost points.

Mike Dudek
QLogic Corporation
Director, Signal Integrity
26650 Aliso Viejo Parkway
Aliso Viejo, CA 92656
949 389 6269 - office

From: Brian Welch [mailto:bwelch@xxxxxxxxxxx]
When we talk about linear optics and modules, we should keep in mind that linear optics carry cost and power implications with them, and not all advanced modulation solutions require them. You can see in my presented work non-linear PAM4 transmitters (an optical DAC approach) that use a very low-headroom CMOS driver. Forcing a completely linear optical module would prohibit such techniques and restrict implementations to high-headroom, constant-current driver approaches.
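As a toy sketch of the optical DAC idea (a two-segment transmitter with 2:1 weighting and normalized output levels; the specific weighting and levels are illustrative only, not taken from my presented work):

```python
# Illustrative "optical DAC" PAM4: two binary-driven modulator segments whose
# weighted (2:1) optical contributions sum to four output levels, so each CMOS
# driver only has to swing between two states. Weights/levels are illustrative.

def optical_dac_pam4(msb: int, lsb: int) -> float:
    """Map two binary driver inputs to a normalized optical amplitude."""
    assert msb in (0, 1) and lsb in (0, 1)
    return (2 * msb + lsb) / 3.0   # levels 0, 1/3, 2/3, 1

for bits in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(bits, "->", round(optical_dac_pam4(*bits), 3))
```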
And, as I believe Chris was referring to, the SI impact of the additional routing from electronics to optics (i.e., host trace to module connector, module connector, and module trace) would need to be accounted for in link/jitter budget planning; its impact could be large.

Thanks,
Brian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Hi Ian,

You deserve a lot of credit for providing us with credible power estimates for DSP ASICs in several CMOS nodes, even though these contribute to higher total PMD power estimates than one might wish for. It has enabled informed debate.

With respect to the linear module interface, I agree with you that in the long term this is an area worth investigating as an approach to lowering module power. However, it is not clear that the link budget for the proposed 400G DMT PMD can be made to work with a linear interface. Current analysis shows that with a re-timed interface and conventional optics, 400G DMT has difficulty closing the 2km link budget and can't close the 10km link budget, even with the use of strong FEC. The linear interface will add significant impairments. Are you planning to bring in a detailed link budget showing how 400G DMT will work with a linear interface?

Brad and Tom have done a great job representing the interests of their company and driving the standards to provide solutions optimized for their data center needs. However, as we have seen, their needs are not necessarily the same as others'. Brad and Tom have been strong supporters of PSM4, but other data center and central office operators have a preference not to use PSM4. Brad is satisfied with the link budget performance of the 100G/lambda proposals, yet many end users, including mega data center operators, need to support higher loss, either because their reach is >2km, or because their reach is <2km but incurs high loss in other ways.

Chris

From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]
Hi Chris,

I didn't mean that 1W per channel didn't matter in client optics from a power (heat) point of view; I meant that it didn't matter from a cost point of view, because the OPEX adder is negligible compared to the CAPEX. What we have to remember is that we're trying to define a standard for a market in 2017, not today, and cost will be critically important or it will never succeed in the market. Of course power consumption has to be low enough to fit in the required module sizes, but the numbers being put forward show that all the systems proposed (from 16x25G NRZ to 4x100G DMT) will be able to do this in the required timescales.

If your premise is that integrated photonics will allow a large number of optical channels at very low marginal cost (which is a big and unproven assumption!), then we don't need a new standard: just use four 100G 4x25G links with or without WDM (16, 4 or 1 fibers), avoid the need for any "complex DSP" (PAM4 or DMT), and integrate all the 25G NRZ optics. This is "today's solution", but if I understand correctly this is not what the task force is being asked to come up with.

If the incremental cost of optical channels is not so small, then the cheapest solution by 2017 will be 4x100G using DSP, with DMT allowing the use of cheaper optics and less fiber bandwidth for WDM applications than PAM4. Even if integrated photonics delivers on its promises, I believe this will be a cheaper solution in volume than an 8-channel or a 16-channel one, and if it doesn't deliver this will certainly be the case. This is "tomorrow's solution", which is what the task force is being asked to come up with.

For really high density, 4x100G DMT also allows host-based solutions, which would enable 400G in a QSFP28 (4x the bits per inch of front panel!) at much lower cost per bit than 100G (and the same bandwidth for 4x the data), since the 4-channel 25G-class optics are similar, though linear rather than NRZ. But there is resistance to a host-based solution for many reasons, even though moving the DSP out of the optics makes a lot of sense from the thermal point of view as well as for density.

The in-between 50G/lambda solutions are neither fish nor fowl: they'll either be too late to beat 16x25G on price (which will have slid down the cost curve by then, especially when integrated photonics delivers!), or they won't offer the density and cost leap forward that we need for 400G and which a 100G/lambda solution can deliver by then.

Another issue, which may not apply to all reaches (or may apply more strongly to longer-reach WDM), is that some large customers have said either that they strongly prefer a 100G/lambda solution or have even gone as far as saying they will *only* accept 100G/lambda for some applications; for example, I believe this is what Brad Booth (Microsoft) said in Ottawa before the 50G/100G straw poll, and I have heard the same view privately from others.

DSP or not DSP, that is the question ☺

Cheers,
Ian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Ian,

Here are the last two sentences of my email: "So a credible PSM4 proposal has to show why it is significantly lower cost than what will flood the market next year. We already know that with current-generation CMOS it is significantly higher module power. Projected 14nm CMOS ASIC power will, best case, result in comparable module power. That is not a compelling story."

My conclusion is that the story is not compelling because it does not offer lower cost and is higher power. Today, the assumption that optics cost scales linearly with the number of wavelengths (as articulated in your comment that CAPEX is 4x or 2x) is no longer valid. This used to be the case when every wavelength was a separate, discrete OSA, for example in Gen1 100G LR4, which used four discrete EMLs in gold boxes. It has nothing to do with the reality of datacenter WDM optics being shipped in volume today, like 40G LR4 and 100G LR4, and 100G CWDM4 next year. That is because the optics are integrated. And the cost of WDM integration is on a steep decline curve as an inexplicably large number of vendors jump into the fray to offer WDM optics. It makes for a miserable existence for optics suppliers, and life in the easy lane for system OEMs.

Your comment that 1W doesn't matter in client optics is not accurate. A 50G/lambda lane at ~1W is a credible projection, given a similar architectural approach to 25G/lambda but at 2x the speed. (Next year, 4x25G optics will be 3W to 3.5W, which is ~0.8W per 25G lane.) This will eventually enable a 2x50G/lambda solution at ~2W, so we can pack two 100G channels into a 200G QSFP module (with 50G electrical I/O lanes enabling two CAUI-2 interfaces). A 100G/lambda solution is projected at 3.5W with a 14nm CMOS ASIC, which means it will remain one 100G optical interface per QSFP, exactly the same as we get with LR4 or CWDM4. So 50G/lambda will enable doubling 100G port density (and quadrupling 40G port density) over today's 100G port density. After a huge R&D investment, 100G/lambda will, best case, duplicate today's 100G port density.
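As a rough tally of those figures (simple multiplication of the per-lane numbers quoted above; nothing else is assumed):

```python
# Back-of-envelope tally of the per-lane power figures quoted above.
# The per-module totals are just multiplication of those figures.

approaches = {
    # name: (watts per optical lane, lanes per 100G, 100G channels per QSFP)
    "4x25G (LR4/CWDM4, next year)": (0.8, 4, 1),
    "50G/lambda (2 lanes per 100G)": (1.0, 2, 2),
    "100G/lambda (14nm CMOS DSP)":   (3.5, 1, 1),
}

for name, (w_per_lane, lanes_per_100g, channels_per_qsfp) in approaches.items():
    w_per_100g = w_per_lane * lanes_per_100g
    module_w = w_per_100g * channels_per_qsfp
    print(f"{name}: {w_per_100g:.1f} W per 100G, "
          f"{channels_per_qsfp} x 100G per QSFP (~{module_w:.1f} W module)")

# Approximate output:
#   4x25G (LR4/CWDM4, next year): 3.2 W per 100G, 1 x 100G per QSFP (~3.2 W module)
#   50G/lambda (2 lanes per 100G): 2.0 W per 100G, 2 x 100G per QSFP (~4.0 W module)
#   100G/lambda (14nm CMOS DSP):   3.5 W per 100G, 1 x 100G per QSFP (~3.5 W module)
```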
Chris

From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]
Hi Chris,

Saying that comparable module power with a 14nm ASIC is not a compelling story only addresses the power issue, not the cost one.

Regardless of the progress with photonic integration (assuming issues like yield, reliability, and manufacturing in volume can be dealt with), it's difficult to see how the optics for a higher-channel-count solution (16x25G or 8x50G) can end up cheaper than a 4x100G solution, especially if the bandwidth requirements are similar, as they are with DMT. If optical integration brings down the cost of 16x25G or 8x50G, it will bring down the cost of 4x100G even more (and DMT more than PAM4, because of its lower bandwidth). This may be more in the interests of the end customers than the component suppliers, but that's a problem the industry has to face up to.

There is exactly the same supplier/customer conflict regarding the number of wavelengths: several large customers are saying they really want 100G/lambda because they perceive it as not only the lowest-cost solution with the longest lifetime, but also as allowing denser multiplexing for longer reaches; so as long as the power is acceptable they would much prefer this to 8x50G or 16x25G.

In the end, cost matters; a 1W/channel power difference per 100G may sound like a lot, but I calculate this saves about $1/year on OPEX.
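As a rough version of that arithmetic (the electricity price and data-centre overhead factor below are illustrative assumptions, not figures from this thread):

```python
# Rough OPEX estimate for a 1 W per-channel power difference.
# Electricity price and cooling/distribution overhead (PUE) are assumptions.

POWER_DELTA_W = 1.0          # extra watts per 100G channel
HOURS_PER_YEAR = 24 * 365    # 8760 h
PRICE_PER_KWH = 0.10         # assumed $/kWh
PUE = 1.5                    # assumed data-centre overhead factor

energy_kwh = POWER_DELTA_W * HOURS_PER_YEAR / 1000.0   # ~8.8 kWh/year
opex_per_year = energy_kwh * PRICE_PER_KWH * PUE        # ~$1.3/year

print(f"{energy_kwh:.1f} kWh/year -> about ${opex_per_year:.2f}/year per channel")
```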
What do you think the additional CAPEX cost is for 2x or 4x the number of optical channels, even allowing for photonic integration? Many people might think that this *is* a compelling story.

Cheers,
Ian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Hi Brian,

The good news for the PSM4 Economic Feasibility baseline is that we don't need to model or speculate. We have a real baseline: CWDM4. Next year we will see many CWDM4 QSFP28 products, using a variety of technical approaches, most of which will show that the preoccupation in .bs with the number of lanes or lasers inside a module is excessive. The ability to integrate WDM is becoming commonplace, as shown for example by the many presenters at last week's ECOC'14 conference reporting 4x25G WDM (or greater lane count) optical chips, as well as novel packaging approaches. So a credible PSM4 proposal has to show why it is significantly lower cost than what will flood the market next year. We already know that with current-generation CMOS it is significantly higher module power. Projected 14nm CMOS ASIC power will, best case, result in comparable module power. That is not a compelling story.

Chris
From: Brian Welch [mailto:bwelch@xxxxxxxxxxx]
Chris,

On the 500m economic feasibility question: how much cost increase are you modeling when you double the number of lanes, double the number of lasers, double the number of photodiodes, and add in mux and demux filters for the 50G-per-lane proposal?

Brian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Hi Gary,

All good comments, and a great exercise in quantifying the transition to 100G LR4.

Which existing PMD to compare against the proposed PMDs depends on which of the 802.3 Five Criteria we are addressing.

For the 400G duplex SMF objectives (2km and 10km reach) we are determining Technical Feasibility. We have a lot of experience with 100GBASE-LR4, so it's the right technical benchmark. Significantly harder specifications than LR4 raise a Technical Feasibility concern.

For the 400G PSM4 (500m reach) objective, I had done the exercise of comparing against LR4, and all the proposed SMF PMDs are fine within the limitations of simple link budget analysis. (This assumed high-coding-gain FEC, like BCH, for the 100G-per-lambda PMDs.) The next step is to determine Economic Feasibility. CWDM4 uses KR4 FEC to significantly relax the TX and RX optical specifications, thereby reducing cost, and will determine the 100G SMF market dynamics starting next year. This makes it the right economic benchmark. Significantly harder specifications than CWDM4 raise an Economic Feasibility concern.

When comparing Gen1 100GBASE-LR4 against a predecessor, the appropriate comparison is against 40G serial OC768 (which later became 40GBASE-FR), the highest-speed existing optic at the time. Gen1 100GBASE-LR4 EML-TX-based specs used OC768 optics as the starting point:

http://www.ieee802.org/3/bs/public/14_07/cole_3bs_02a_0714.pdf#page=5

Subsequent-generation 100GBASE-LR4 DML-based specs used 10GBASE-LR as the starting point.

For EML-based LR4 (Gen1), the relative receiver sensitivity is much harder than OC768. However, that had improved dramatically in the decade since the OC768 spec was written (as can be seen in the receiver comparison column against LR). For DML-based LR4 (subsequent generations), the TX power is higher. This was not a major issue because some optical power has to be thrown away in 10G LR TOSAs to stay under the max power limit.

The main takeaway from the comparison is that the right use of DSP is to relax optical specifications, as was done in going from LR4 to CWDM4. When DSP significantly increases the difficulty of the optical specifications, as seen for example in going from LR4 or CWDM4 to the 100G/lambda proposals in .bs, that seems like the wrong direction.

Chris

From: Gary Nicholl (gnicholl) [mailto:gnicholl@xxxxxxxxx]
Chris,

First, thanks for pulling this presentation together. I may still be trying to get my head around all of the numbers, but it is the analysis that ultimately we all need (and need to agree upon).

I also like your comparison to existing 100G solutions. I think your intent here is to try and quantify the complexity/difficulty of the different 400GbE PMD proposals in relation to something that we are all familiar with, i.e. developing and delivering the first 100G solutions.

I was about to make a point here at the end of the meeting, but my phone died! The point I was going to make is that I think we should use the same basis for comparison for all of the 400G PMD objectives, i.e. 400G 500m PSM4, 400G 2km duplex and 400G 10km duplex. Using a common basis for comparison makes it possible not only to compare different options within a single PMD objective, but also to compare solutions for different PMD objectives, i.e. how much harder is 2km duplex versus PSM4, or how much harder is 10km duplex versus 2km duplex, etc. With this in mind I would propose using 100G-LR4 as the basis for all comparisons. What we are investigating here are 1st-generation 400GbE PMD solutions, and it makes sense to me to compare against a similar stage in the 100G project, i.e. the first-generation 100GbE SMF PMD. Personally, it doesn't make a lot of sense to me to compare against a 3rd-generation 100G-CWDM4 solution (and especially not for a single PMD objective while using 100G-LR4 for the others).

We next need to agree on how to interpret the comparison data. In your presentation you show 'total delta dB' numbers ranging from 1.5dB to 7.9dB. How do we interpret these numbers? Is there a specific dB number (or range of numbers) we should be targeting for an optimal solution, below which we are not trying hard enough, and beyond which we are pushing the technology too hard?

To provide some insight here, I went back and tried to carry out a similar analysis for the transition from 10G-LR to 100G-LR4, to see "how hard we were pushing" when we chose 100G-LR4 as part of the 802.3ba project. The numbers I came up with are as follows (I encourage others to run their own analysis in case I made an error somewhere!):

- Tx OMA delta (pre mux): 5.9dB
- Rx sensitivity OMA delta (post demux): 3.9dB
- TOTAL delta: 9.8dB
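Restating that comparison as a small script (the only inputs are the deltas above and the 1.5dB-7.9dB range quoted from the presentation):

```python
# Restating the 10G-LR -> 100G-LR4 delta comparison quoted above.
# Inputs are the dB figures from this thread; nothing else is assumed.

tx_oma_delta_db = 5.9    # Tx OMA delta (pre mux), 10G-LR -> 100G-LR4
rx_sens_delta_db = 3.9   # Rx sensitivity OMA delta (post demux)
total_delta_db = tx_oma_delta_db + rx_sens_delta_db   # 9.8 dB

# Range of 'total delta dB' shown for the 400GbE proposals in the presentation.
delta_400g_min_db, delta_400g_max_db = 1.5, 7.9

print(f"100GbE transition total delta: {total_delta_db:.1f} dB")
print(f"400GbE proposals span {delta_400g_min_db}-{delta_400g_max_db} dB, "
      f"i.e. {total_delta_db - delta_400g_max_db:.1f} dB less than the 100GbE "
      f"transition even in the hardest case")
```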
A delta of 9.8dB is significantly larger than for any of the 400GbE options on the table. Does this mean we were pushing the technology much harder for 100GbE than we are now for any of the 400GbE options? I suspect not. Perhaps it means that as we move to higher and higher speeds we are starting to push against some fundamental limits, and extra dBs are harder to come by? The bottom line is likely that while a comparison may provide useful insight, the devil is in the interpretation of the numbers.

Gary

From: Ali Ghiasi <aghiasi@xxxxxxxxx>

Chris,
There were a couple of questions on your presentation today regarding what to use for the optical mux/demux losses. Looking at a couple of suppliers (Cubo, AFOP, Oplink) indicates that what you are using is reasonable:

- 2 dB - 4-channel mux/demux
- 3 dB - 8-channel mux/demux

The AFOP CWDM 4-channel mux has a loss of 1.5 dB, and the 8-channel only 2 dB. Other suppliers' losses are a little higher, more in line with what you have (2 dB for 4 channels and 3 dB for 8 channels).

On slide 5 you referenced http://www.ieee802.org/3/bs/public/14_07/bhatt_3bs_01a_0714.pdf which has an error floor of 4E-4 for 106.25 Gb/s PAM4. Bhatt's results were based on a to-be-published ECOC paper by M. Poulin. However, the published ECOC results are a little worse than what was presented in IEEE. Here are the BER results published at ECOC:

- 53 GBd PAM4: BER = 2.9E-3
- 40 GBd PAM4: BER = 2.4E-4
- 30 GBd PAM4: BER = 1E-6

It looks like if you give PAM4 enough bandwidth, as in the case of 30 GBd, then the BER and the error floor improve.
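As one rough way to read these numbers (the 2E-4 pre-FEC limit below is purely an illustrative placeholder; the actual threshold depends on which FEC is assumed):

```python
# Compare the published PAM4 BER points against an assumed pre-FEC BER limit.
# The 2E-4 limit is an illustrative placeholder, not a value from any spec.

ASSUMED_PRE_FEC_LIMIT = 2e-4

ecoc_results = {
    "53 GBd PAM4": 2.9e-3,
    "40 GBd PAM4": 2.4e-4,
    "30 GBd PAM4": 1.0e-6,
}

for label, ber in ecoc_results.items():
    verdict = "below" if ber < ASSUMED_PRE_FEC_LIMIT else "above"
    print(f"{label}: BER = {ber:.1e} ({verdict} the assumed "
          f"{ASSUMED_PRE_FEC_LIMIT:.0e} limit)")
```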
which has an error floor of 4E-4 for 106.25 Gb/s PAM4. Bhatt result were based on to be published ECOC paper M. Poulin. However the published ECOC results are little worse than what was published in IEEE. Here are the BER results published in ECOC: - 53 GBd PAM4 BER=2.9E-3 - 40 GBd PAM4 BER=2.4E-4 - 30 GBd PAM4 BER=1E-6 It looks like if you give PAM4 enough bandwidth as in the case of 30 GBd then BER improves and error floor improves. |