Vipul,
Besides the 50 Gb/s economy-of-scale and cost advantages you raised, one must also consider the application space.
We have been discussing 50 GbE breakout since the early days of the .bs project, but now that we are defining 50 GbE, breakout becomes an even greater application with QSFP56-to-4xSFP56. Next-generation servers are going to 50 GbE, not 100 GbE, which is why the market needs 200GBASE-DR4 rather than 200GBASE-DR2 as far as breakout is concerned.
We definitely need to define 200G-DR4 to address 4 ports of 50 GbE for breakout applications.
Now the question is how we best address leaf-spine applications requiring a radix of 64x100GbE. Should we address 100 GbE leaf-spine by leveraging 1/2 of 200G-DR4, or do we need to define 100 GbE based on a single lane?
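To make the breakout arithmetic concrete, here is a minimal Python sketch. The 32-port switch is a hypothetical example of mine, not a spec value; the only given is 50 Gb/s per 25 GBaud PAM4 lane:

    # Breakout arithmetic for 200G-DR4 built from 4 x 50 Gb/s
    # (25 GBaud PAM4) lanes. SWITCH_PORTS is a hypothetical value.
    LANE_GBPS = 50        # one 25 GBaud PAM4 lane
    LANES_PER_PORT = 4    # 200G-DR4 = 4 lanes
    SWITCH_PORTS = 32     # hypothetical 200G switch

    port_gbps = LANE_GBPS * LANES_PER_PORT            # 200 Gb/s
    radix_50ge = SWITCH_PORTS * LANES_PER_PORT        # 128 x 50GbE
    radix_100ge = SWITCH_PORTS * LANES_PER_PORT // 2  # 64 x 100GbE
    print(port_gbps, radix_50ge, radix_100ge)         # 200 128 64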
Thanks,
Ali Ghiasi
Ghiasi Quantum LLC
Picking up a sub-thread here, I want to express my support for the views of Chris and Helen. We are better off approaching 100G as 2x50G than as DR4 divided by 4.
In yesterday's ad hoc call, we discussed work partitioning of 50G/100G/200G -- how to combine efforts, avoid schedule delay, simplify editorial challenges, etc. All well and good. However, our bigger obligation is to write specs for a successful standard. The success of a standard is gauged by how widely it is adopted in the market, not by how efficiently we manage the IEEE process. Widespread adoption happens only if the per-port cost of an Ethernet solution is competitively low. In practical terms, this boils down to enabling low-cost optics. Therefore, our partitioning decision should serve the goal of achieving the lowest-cost optics, not hinder it. (If we can align the two, all the better.)
To achieve low-cost optics, the biggest lever we have is extensive re-use of 50G (25 GBaud PAM4) optics -- Nx50G, where N can be 1, 2, or 4. The case of N=1 will provide a huge volume base over which the component ecosystem can amortize cost.
Why not use Mx100G, where M can be 1 or 2? Wouldn't 100G (50 GBaud PAM4) optics be lower in cost because they use only one wavelength? No, not yet, seductive as the idea is.
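As a rough sketch of the coverage of the two lane families (illustrative only; the names are mine):

    # Rates reachable by re-using a single lane type.
    # Nx50G (25 GBaud PAM4) covers the 50GbE server port at N=1,
    # the volume base; Mx100G (50 GBaud PAM4) cannot reach 50GbE.
    nx50g = {n: n * 50 for n in (1, 2, 4)}   # {1: 50, 2: 100, 4: 200}
    mx100g = {m: m * 100 for m in (1, 2)}    # {1: 100, 2: 200}
    print(nx50g, mx100g)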
Since the .bm days, some colleagues (including me) have been drawn to the idea of single-wavelength 100G. Later, as I supported all three .bs baselines (FR8 and LR8 as well as DR4), I had an opportunity to look more closely at both 25 GBaud PAM4 and 50 GBaud PAM4 product development. My view is now more balanced, from a technical as well as an economic perspective.
50 GBaud PAM4 will certainly have its day in the sun. We will see that day arriving in the form of steadily improving real measurements reported at leading-edge conferences like OFC and ECOC.
But in IEEE, we should write specs based on performance we can prove. For broad market success, 50 GBaud PAM4 still has several hurdles to cross -- design, packaging, signal integrity, yields -- with no immediate timeline in sight. Some may claim to cross these hurdles soon, and that's great, but that's not the same thing as a broad supplier base.
It is one thing to show a forward-looking lab result; it is quite another to have manufactured components that give us characterization data we can use for developing PMD specs with correctly allocated margins. From that perspective, we are on more solid ground with 25 GBaud PAM4 than with 50 GBaud PAM4. Let's walk before we run.
The DR4 argument -- "We voted for DR4 in 400G, so let's carry the underlying risks and assumptions forward" -- sounds like unnecessarily doubling down on an avoidable risk.
With best regards,
Vipul