Brad,
Developing cost models for 50Gb/s technology solutions, for example for 1x50G, 2x50G, and 4x50G applications, is a reasonable request. That’s because a lot of
50Gb/s technology is now being productized, including ASIC SerDes, PHY ICs, optical components, and test instruments; in other words, an entire ecosystem.
However, it will be impossible to compare this to the cost of 100Gb/s technology solutions, for example for 4x100G (400G) applications, because those have no real
cost models. Viable components will not exist for many years. Given that there is no real data, everyone is free to pick whatever numbers they like. Unsurprisingly, this has led to cost projections that are seductively low. Comparing such numbers to real
50Gb/s implementation numbers is comparing apples to oranges.
Chris
From: Brad Booth [mailto:bbooth@xxxxxxxx]
Sent: Tuesday, May 12, 2015 2:59 PM
To: STDS-802-3-DIALOG@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_DIALOG] Future 50 Gigabit Ethernet CFI
Depends on what you refer to as a data center and what you're assuming is going to be built for them.
You're correct that CDFP and CFP2 are not interesting inside the data center, but it is incorrect to assume those form factors are the only options available.
To the folks who truly believe we need a 200G solution (not a 4x50G breakout): please bring the data. Prove that 200G is lower cost per bit even if you ignore the form factor. Prove that there are end users asking for this inside
the data center. We'll need the information to answer the economic feasibility and broad market potential in the 5 criteria.
On Tue, May 12, 2015 at 2:28 PM, Ali Ghiasi <aghiasi@xxxxxxxxx> wrote:
Brad
As the industry moves from 28 nm CMOS to 16 nm, doubling the switch capacity comes naturally for a given number of ports; the doubling of capacity will come by moving from 25G NRZ to 50G PAM4.
In effect, 50G I/O will become the lowest-cost server interconnect, as 25 GbE would require a FlexE/MLG gearbox to break out the traffic. The combination of SFP56 (50 GbE) and QSFP56 (200 GbE) will deliver a lower cost per bit than
CFP2 and CDFP (400 GbE) solutions. Routers and OTNs do require 400 GbE, no question about it, but due to cost and availability, 400 GbE will have limited applications in the data centers for the next 5 years. We will see 50/200 GbE volume deployment in
the data centers before 400 GbE!
As one who works or has worked in both of those market spaces, cost is always a factor. As Geoff mentioned, and as I was trying to highlight, if the working group generates speeds too fast, then it is very difficult to show a good ROI.
If the only justification for 200G is that it provides our traditional 4x breakout to 50G, then we're not providing 200G but a 4x50G solution. If native 200G cannot compete against 100G or 400G on time-to-market, time-to-standard, or cost,
then we should seriously question the time and economic investment to pursue it.
On Tue, May 12, 2015 at 8:46 AM, Ali Ghiasi <aghiasi@xxxxxxxxx> wrote:
Paul/Brad/Geoff
The Ethernet network of today is not your network of a decade ago, which was driven by the Enterprise LAN. Two distinct networks have emerged:
Cloud data center - where Ethernet fabric is commonly used to build very large Clos networks; this segment is leading Ethernet speeds and feeds
Traditional Enterprise LAN - still exists with greater volume than cloud data centers, but its speed requirements lag by at least 5 years
Cloud data centers are more aligned with CMOS nodes and CPU cycles; they want to take advantage of Moore's Law efficiency to increase performance and reduce operating expenses on a shorter cycle than traditional LAN networks, where longevity
is desired.
As ASIC/switch I/O migrated from 10 Gb/s to 25 Gb/s per lane, 25 GbE emerged as the natural break-out solution instead of the more complex MLG transport scheme. I expect that as we move from 25G to 50G I/O, 50 GbE will be the natural break-out, and at
minimum cost.
You raise good points. But while a trend of smaller rate increments does raise those questions, it is also true that the market life of each rate is extending.
This is because as the Ethernet market continues to expand, the user needs are becoming more spread out. ROI projections need to also consider that the higher rates composed of multiple lanes will evolve towards fewer lanes over time. All this speaks to
future solution sets that have increasing variety, even if IEEE is able to put standards in place before the market fragments via MSAs. The picture is undoubtedly becoming more complex and more difficult to manage well.
Thanks for throwing some additional real factors into the decision.
There is another one that I would like to throw in.
Developing both the standard and the new hardware for each speed step is not free.
In order for high speed Ethernet to remain a viable business, each speed has to have a long enough market life to recoup the up-front investment and make some profit.
This discussion is interesting in that, on one hand, we're conversing about 50G, which is bleeding edge today, but then we're shooting for aggregated-bandwidth links in 3-5 years that show no sign of being on the edge.
50G serial (or even 100G serial) is interesting as a server link if it provides better economics than the existing solution. The same applies to uplinks. If 200G is economically competitive compared to 100G or 400G or 1.6T, then it will
gain traction in the market. But it's not just the cost of the optical module; it's the cost of the whole ecosystem. The uplink bandwidth has an impact on the FIB, which has an impact on switch memory requirements.
There are a couple of factors that I believe need to be considered: the laws of physics (how much bandwidth can we put down a single lane) and the laws of economics (how do you make sure there's sufficient market to justify the solution).
When Ethernet operated at the 10x speed increments, it was much simpler to ensure the laws of economics were being met. Does 200G satisfy the laws of economics? Does 800G satisfy them?
All of this is directly impacted by the time it takes to create a standard. Is it two years? Three years? Or four years? Or, would it be wiser for the working group to reconsider how it does projects? Should we look at a project that decouples
the speed of the MAC (which only takes mere seconds to change for each new project speed) from the speed of the PHY (which, as we all know, is where the lion's share of the work occurs)? This could permit the speed of the MAC to merely be an aggregate of similar-speed PHYs in a base-2 scale (1, 2, 4, 8, 16, etc.).
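For illustration only, here is a minimal sketch of that idea in Python; the per-PHY rates and the base-2 lane counts are assumptions chosen for the example, not anything defined by a project:

    # Sketch (assumption, not an IEEE-defined mechanism): treat the MAC rate as an
    # aggregate of N identical PHYs, with N restricted to a base-2 scale.
    BASE2_LANE_COUNTS = [1, 2, 4, 8, 16]

    def aggregate_mac_rates(phy_rate_gbps):
        """MAC rates reachable by aggregating base-2 counts of one PHY rate."""
        return [n * phy_rate_gbps for n in BASE2_LANE_COUNTS]

    for phy_rate in (50, 100):  # hypothetical per-PHY rates in Gb/s
        print(f"{phy_rate}G PHY -> MAC rates {aggregate_mac_rates(phy_rate)} Gb/s")
    # 50G PHY  -> MAC rates [50, 100, 200, 400, 800] Gb/s
    # 100G PHY -> MAC rates [100, 200, 400, 800, 1600] Gb/s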
I see a different and much more prolific progression for 1RU switches.
The switch Vineet mentions is based on a 64-port ASIC, while higher density switches are using 128-port ASICs today. This exceeds the port density of SFP (the
first form factor standard I worked on) and pushes us towards my beloved QSFP family.
Here is a progression with a 128-port ASIC in a 1RU switch (the bandwidth arithmetic is sketched after the list):
Today = 32 x QSFP+ with 10G downlinks and 40G uplinks – end users decide the ratio of uplinks to downlinks with breakout cables.
2015/2016 = 32 x QSFP28 with 10/25G downlinks and 40/100G uplinks.
50G era – probably deployed in 2019 = 32 x QSFP56 with 10/25/50G downlinks and 40/100/200G uplinks. Do you want 1, 2 or 4 lanes at 10, 25 or 50G?
Future (dream for mid 2020s) = 32 x QSFP100 with 25/50/100G downlinks and 100/200/400G uplinks. Do you want 1, 2 or 4 lanes at 25, 50 or 100G? Maybe we can still
support 10G on each port as well. This shows the versatility that ASICs will hopefully support and the roadmap that Fibre Channel has supported for years.
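Purely as an illustration of the arithmetic behind that progression (my own back-of-the-envelope, assuming a 128-lane ASIC exposed as 32 four-lane QSFP cages):

    # Back-of-the-envelope only: switch capacity for a 128-lane ASIC wired as
    # 32 x 4-lane QSFP ports, at the per-lane rates discussed in this thread.
    LANES = 128
    LANES_PER_QSFP = 4

    for era, lane_rate_gbps in [("QSFP+ (10G lanes)", 10),
                                ("QSFP28 (25G lanes)", 25),
                                ("QSFP56 (50G lanes)", 50),
                                ("QSFP100 (100G lanes)", 100)]:
        ports = LANES // LANES_PER_QSFP               # 32 ports in every era
        capacity_tbps = LANES * lane_rate_gbps / 1000
        print(f"{era}: {ports} ports, {capacity_tbps:.2f} Tb/s aggregate")
    # 10G lanes -> 1.28 Tb/s, 25G -> 3.20 Tb/s, 50G -> 6.40 Tb/s, 100G -> 12.80 Tb/s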
You can see a vision for the future in the 2015 Ethernet Roadmap in exquisite detail at www.ethernetalliance.org/roadmap/.
The Ethernet Alliance will be giving out free printed copies of the 18” x 24” roadmap in Pittsburgh. There will also be a special gift related to the roadmap at the social on Tuesday night – don’t miss it.
Are we limited to 128-port ASICs? No.
Higher-port-count ASICs and multi-ASIC configurations are driving COBO and other embedded solutions that will surpass the capability of the venerable QSFP.
Maybe the uQSFP will be useful in matching the needs of these higher-port-count ASICs. The future is dense!
These are the port configurations for “1RU fixed switches” (Top of Rack) that will be enabled by 50G / 200G ports.
The downlink-to-uplink bandwidth ratio is 3:1 or 2:1, depending on 4 versus 6 QSFPs (the arithmetic is checked in the short sketch after the list).
Note that this applies to any 1RU box, including aggregation switches and routers (not just server connections).
Today = 48 x SFP 10G downlinks + 6 x QSFP 40G uplinks.
Soon = 48 x SFP 25G downlinks + 6 x QSFP 100G uplinks.
Future = 48 x SFP 50G downlinks + 6 x QSFP 200G uplinks.
Future (dream) = 48 x SFP 100G downlinks + 6 x QSFP 400G uplinks.
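As a sanity check on those ratios, here is a short sketch of the downlink versus uplink arithmetic (my own back-of-the-envelope, not part of the original post):

    # Back-of-the-envelope check of downlink vs. uplink bandwidth for the
    # 48 x SFP + 4 or 6 x QSFP configurations listed above.
    configs = [("Today",           10,  40),
               ("Soon",            25, 100),
               ("Future",          50, 200),
               ("Future (dream)", 100, 400)]

    for name, sfp_gbps, qsfp_gbps in configs:
        down = 48 * sfp_gbps                      # downlink bandwidth, Gb/s
        up6, up4 = 6 * qsfp_gbps, 4 * qsfp_gbps   # uplink bandwidth, Gb/s
        print(f"{name}: {down}G down, {up6}G up ({down // up6}:1) "
              f"or {up4}G up ({down // up4}:1)")
    # Every generation works out to 2:1 with 6 QSFPs and 3:1 with 4 QSFPs.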
I agree there is a lot of merit in standardizing 200G as a partner with 50G serial I/O and continuing the factor-of-4 downlink/uplink ratio – especially given that the SI
and module challenges seem relatively doable.
One additional thought – if we agree that 50/200 makes sense, would it follow that 100/400 would also pair up? That would enable a two-lane twinax DAC server
interconnect paired with a 400G uplink. The 400G would already be covered in .bs, and the 100G may “come for free” with 200G, just with fewer lanes?
So it would seem, in my opinion, that 50, 100 and 200G based on 50G I/O would be relatively mainstream PMDs and would merit discussion for inclusion (at the risk
of project overload!).
From: Vineet Salunke (vineets) [mailto:vineets@xxxxxxxxx]
Sent: Thursday, May 07, 2015 4:03 PM
To: Chris Cole;
Cc: Vineet Salunke (vineets)
Subject: RE: [802.3_DIALOG] Future 50 Gigabit Ethernet CFI
And 50G SFP / 200G QSFP for Ethernet will have nice alignment and reuse with the Fibre Channel roadmap for 64GFC SFP / 256GFC QSFP…
These are great examples.
Standardizing 50G and 200G PMDs will continue the successful progression of single- and quad-channel devices for high-volume data center applications.
Per-lane rate (Gb/s)   Single-lane form factor   Quad-lane form factor   Quad data rate (Gb/s)
10                     SFP+                      QSFP+                    40
25                     SFP28                     QSFP28                  100
50                     SFP56                     QSFP56                  200
Another great example of a multi-lane 50G technology application was cited in your SMF Ad Hoc presentation surveying relevant papers from OFC 2015.
In this post-deadline paper, Cisco authors presented a 2x50G PAM-4 (optical) 100Gb/s QSFP28 transceiver using Cisco 50G PAM-4 optics and a Broadcom 50G PAM-4 (line-side)
PHY. Measurement results were presented for 10km SMF and 100m OM3.
I see opportunity for a full spectrum of PMDs for both 50 GbE and 200 GbE, including the popular break-out option combining QSFP56 and SFP56:
I would like to request clarification of your stated intent below. You state the CFI will focus on single-lane 50Gb/s Ethernet. While I realize you are initiating
this effort, in my opinion the discussion that I am seeing happen is essentially “n” x 50Gb/s per lane, with 50GbE and 200GbE being discussed.
As this is a consensus-building process, will you be allowing interested parties to bring presentations forward to state the justification for why 200GbE should
also be considered? Based on my conversations, I believe there are a number of individuals who would like these topics discussed together.
Could you also provide any more insight into what you are proposing for single-lane 50GbE? Will this be like the .3by project – backplane, Cu twinax, and
MMF? Or is that a TBD in your mind that you hope to address during consensus building?
Thanks in advance for your answers.
I wanted to let everyone know that a number of people have started preliminary discussions that would lead towards having a Call-for-Interest on the topic of single-lane
50 Gigabit/s Ethernet at a future plenary meeting of 802.3. If anyone is interested in helping and contributing, please let me know or talk to me in Pittsburgh. As we get further along, we will be sharing some of the plans and data we are gathering to support
the CFI.