
Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI



Chris,

 

You cannot find that I said “400GbE is a low volume interface with today’s 50G electrical I/O” because I never said so.

 

Jeff

 

 


From: Chris Cole <chris.cole@xxxxxxxxxxx>
Sent: Monday, August 10, 2020 6:00 PM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

 


 

Hi Jeff,

 

In multiple public forums, Cedric and his colleagues have been clear and consistent about what they want: 4-lane electrical and optical interfaces.

 

That’s why they want 100G electrical I/O before deploying 400GbE.

 

Generously, this can be argued to be half your point. I am not able to find the other half, that 400GbE is a low volume interface with today’s 50G electrical I/O.

 

Chris

 

From: Jeffery Maki <jmaki@xxxxxxxxxxx>
Sent: Monday, August 10, 2020 5:01 PM
To: Chris Cole <chris.cole@xxxxxxxxxxx>; STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: RE: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

 

Chris,

 

Cedric just made my essential point. Customers didn’t want it. They were willing to wait for what they wanted. This is that not-so-obvious thing known as “broad market potential.”

 

Jeff

 

From: Chris Cole <chris.cole@xxxxxxxxxxx>
Sent: Thursday, August 6, 2020 10:24 AM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

 

Hi Cedric,

 

You are nicely illustrating the quandary faced by the optics industry (or any horizontally partitioned industry).

 

When defining products, it’s all about performance, and the potential for >50% cost savings can be dismissed as of very little added benefit.

 

Unfortunately, when the hardware is available, it’s only about cost. Telling a buyer that the cost is higher because the design engineers asked for extra design features gets very little sympathy.

 

We have examples showing that when there is a real choice, cost always wins out; 40G LR4 vs. 40G FR is one. When the volume of greenfield links is comparable to brownfield links, two types of optics result in significantly lower overall cost.
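
To make the arithmetic behind that claim concrete, here is a tiny sketch with made-up relative prices (purely hypothetical numbers to illustrate the greenfield/brownfield argument, not real optics costs), in Python:

# Hypothetical relative prices, for illustration only; not real optics costs.
greenfield_links = 1000          # links that only need the shorter-reach optic
brownfield_links = 1000          # links that still need the longer-reach optic
cost_short, cost_long = 1.0, 2.5 # assumed relative module costs

one_optic_type  = (greenfield_links + brownfield_links) * cost_long
two_optic_types = greenfield_links * cost_short + brownfield_links * cost_long

print(f"one optic type : {one_optic_type:.0f}")
print(f"two optic types: {two_optic_types:.0f} "
      f"({100 * (1 - two_optic_types / one_optic_type):.0f}% lower)")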

 

However, I understand why, in your deployment model, a two-lane solution is not of interest. You have no gearboxes, and the ratio of greenfield to brownfield links is favorable.

 

Chris

 

From: Cedric Lam ( ) <clam@xxxxxxxxxx>
Sent: Thursday, August 06, 2020 8:16 AM
To: Chris Cole <chris.cole@xxxxxxxxxxx>
Cc: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

 

All:

 

When a 4x-lane implementation has been deployed in volume in an earlier generation, there is an incentive to keep it in the upcoming generations, especially if we can avoid a gearbox, e.g. 4x25 in both electrical and optical.  At least on the optical side, keeping the 4x lanes during network evolution makes it easier to maintain backward compatibility and enables a smooth transition.  So the 2x-lane approach just complicates operation with very little added benefit.

 

That said, an optical Auto-Negotiation feature will be useful for implementations with the same number of optical lanes across generations, as is the case for electrical Ethernet.


--

Cedric F. Lam

Cell: +1 (949) 351-2766

 

 

On Wed, Aug 5, 2020 at 10:18 PM Chris Cole <chris.cole@xxxxxxxxxxx> wrote:

I am delighted that there is at least one person who pays attention to the numbers in my emails. A corrected gearbox ratio is in green, in the first bullet below.

 

Chris

 

From: Chris Cole <chris.cole@xxxxxxxxxxx>
Sent: Wednesday, August 05, 2020 7:15 PM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Hi Jeff,

400G LR8 is an early low-volume telecom module. It’s been the first to be used in test equipment and high-end transport applications. Exactly the same thing happened at every previous new rate. Further, there is no point in comparing it to technology developed years later.

Your characterization that the industry has been “choosing to wait for lower cost 400G” paints an image of buyers agonizing each day over whether to fill a desperate need with high-cost optics, or hold their breath just a little bit longer for promised massive savings. As satisfying as this picture is for justifying past decisions, the reality is much simpler. There has been, and there is, no significant demand for 400GbE.

This is one of those rare situations where we don’t have to speculate and make abstract arguments about cost that can never be settled. We will have real volume-production cost and power numbers for the following deployed products, enabling us to make an apples-to-apples comparison that will give us black-and-white answers (a rough lane and gearbox tally is sketched after the list):

  • 4x 100G CWDM4 QSFP28 + 1x 8:16 Gearbox
  • 2x 200G FR4 QSFP56
  • 1x 400G DR4 QSFP-DD (w/ internal 1x 8:4 Gearbox)

It won’t be pretty.
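
For concreteness, a rough tally of the three options (my illustrative assumptions: 4 optical lanes per module and 50G host electrical lanes; it counts only lanes and gearboxes, not the measured cost and power that will actually settle the question), in Python:

# Illustrative only: per-option module/lane/gearbox tally for 400G of capacity.
# Assumes 4 optical lanes per module and 50G host electrical lanes.
options = {
    "4x 100G CWDM4 QSFP28 + 1x 8:16 Gearbox":     {"modules": 4, "gearboxes": 1},
    "2x 200G FR4 QSFP56":                         {"modules": 2, "gearboxes": 0},
    "1x 400G DR4 QSFP-DD (internal 8:4 Gearbox)": {"modules": 1, "gearboxes": 1},
}
OPT_LANES_PER_MODULE = 4
HOST_LANES_400G = 400 // 50  # 8 electrical lanes at 50G

for name, o in options.items():
    optical_lanes = o["modules"] * OPT_LANES_PER_MODULE
    print(f"{name}: {o['modules']} module(s), {optical_lanes} optical lanes, "
          f"{HOST_LANES_400G} host electrical lanes, {o['gearboxes']} gearbox(es)")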

Chris

From: Jeffery Maki <jmaki@xxxxxxxxxxx>
Sent: Wednesday, August 05, 2020 12:52 PM
To: Chris Cole <chris.cole@xxxxxxxxxxx>; STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: RE: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Chris,

The point was that it was not a decision based on numerology.

We worked hard on 400GBASE-FR8/LR8, and it was implemented in QSFP-DD, yet the decision by most of the industry has been to wait for 400GBASE-FR4/LR4-6 or 400G-LR4-10. How was the cost attractive for 100G Ethernet using 50G-lambda optics yet unattractive for 400G Ethernet using 50G-lambda optics? This is a serious question. It involves the cost of breaking interop and the cost of two times the number of lasers (or not) for the first use of a given speed of Ethernet (i.e., 400G).

Jeff

From: Chris Cole <chris.cole@xxxxxxxxxxx>
Sent: Monday, August 3, 2020 2:49 PM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Jeff

It may be carefully considered, but it has cost a fortune.

Chris

From: Jeffery Maki <00000d5963b8071f-dmarc-request@xxxxxxxxxxxxxxxxx>
Sent: Monday, August 03, 2020 9:18 AM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

All,

When we moved to 50G lane technology, electrical and optical, we declined to define new 100G Ethernet PMDs other than 100GBASE-SR2. The interest was to preserve interoperation with 4 x 25G-lambda optics and to wait for 100G-lambda optics before breaking interoperation and moving to a new standard for 500-meter support and beyond. This was a carefully made decision about industry interoperability.

We have the same issue now with 400G Ethernet as we move to 200G lane technology. Do we just define 400GBASE-SR2? Indeed, these are decisions for a study group, but any CFI might point out what happened with 100G Ethernet and question whether anything is likely to be different this time with 200G lanes.

Jeff

From: John D'Ambrosia <jdambrosia@xxxxxxxxx>
Sent: Monday, August 3, 2020 8:20 AM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx

All

Remember, we are looking at getting a study group started, not making the baseline decisions of a task force.

Sent from my iPhone

On Aug 3, 2020, at 11:08 AM, Chris Cole <chris.cole@xxxxxxxxxxx> wrote:

Hi Steve,

To help resolve your inner struggle, you may consider rereading Ali’s email, in which he describes how 200 Gb/s electrical lane signaling is not part of the next project. It is part of a separate project. This doesn’t require a great deal of imagination, because we standardized 25 Gb/s electrical lane signaling in 802.3bj and are standardizing 100 Gb/s electrical lane signaling in 802.3ck, both separate electrical lane signaling projects. If we had done 25 Gb/s electrical lane signaling in 802.3ba or 100 Gb/s electrical lane signaling in 802.3bs, it would have been a mess. That’s what we will get if the 800GbE project includes 200 Gb/s electrical lane signaling.

While many attribute “magic” to 4-lane solutions, others attribute “magic” to 1-lane solutions. Whatever the faith, it’s a prescription for bad decisions, as almost happened in 802.3ba and 802.3bm. 4-lane “magic” comes from it being a reasonable yield point for hybrid integration of discrete optical components. There is nothing “magic” about it for monolithic integration.

Unfortunately, because of this reliance on numerology, today the industry doesn’t have the lowest-possible-cost 100GbE optical interconnect solution, based on 2x50G WDM. Since the only choices are 4x or 1x solutions, they require 1:2 reverse or 2:1 forward gearboxes to match the 50 Gb/s electrical lane rate of Switch ASICs.
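
As a minimal sketch of that lane-matching arithmetic (assuming a 100GbE host port of 2x50G electrical lanes; an illustration of the reasoning, not a specification), in Python:

# Illustrative sketch: does a given optical lane configuration need a gearbox
# to attach to a host port with 2x50G electrical lanes? Assumed numbers only.
from math import gcd

def gearbox(electrical_lanes, electrical_rate_g, optical_lanes, optical_rate_g):
    """Describe the gearbox needed to map electrical lanes to optical lanes."""
    assert electrical_lanes * electrical_rate_g == optical_lanes * optical_rate_g
    if electrical_lanes == optical_lanes:
        return "none (electrical and optical lane rates match)"
    g = gcd(electrical_lanes, optical_lanes)
    return f"{electrical_lanes // g}:{optical_lanes // g} gearbox"

print("2x50G host -> 4x25G optics :", gearbox(2, 50, 4, 25))   # 1:2 reverse gearbox
print("2x50G host -> 1x100G optics:", gearbox(2, 50, 1, 100))  # 2:1 forward gearbox
print("2x50G host -> 2x50G optics :", gearbox(2, 50, 2, 50))   # no gearbox needed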

Chris

From: Trowbridge, Steve (Nokia - US) <steve.trowbridge@xxxxxxxxx>
Sent: Monday, August 03, 2020 6:13 AM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Hi John,

I struggle to see how 200 Gb/s electrical lane signaling could not be part of the next project.

As you know, the last two “next rate” projects ended up doing two speeds:

  • They did the headline “next speed” everybody claimed they wanted or needed (P802.3ba->100 Gb/s, P802.3bs->400 Gb/s)
  • They did a lower speed that was practical to do at the time of the standard with 4-lane electrical and optical signaling (P802.3ba->40 Gb/s, P802.3bs->200 Gb/s)

While the higher speed in each of these projects served an early adopter market, the wider market for these speeds didn’t emerge until you could get the higher speed implementation down to a 4-lane electrical and optical implementation. In each case, some follow-on projects were required to do that. Since P802.3ba already had 4-lane optics, the missing piece was the work of P802.3bj and P802.3bm to add the 4-lane electrical behind it.

Some attribute the “magic” leading to the success of 100GbE from 2016 onward to finally having the electrical and optical lane rate being the same. Another possible explanation would be that the market likes 4-lane solutions and true QSFP implementations.

Because when we look at 400GbE in P802.3bs: OK, there we start out with 8 lanes electrical and optical, so the lane rates are the same, but it’s 8 lanes, and it isn’t real QSFP. Market volumes are low, which would seem to indicate we are serving an early adopter market. My guess as to when this moves into the mass market is after the completion of the P802.3ck and P802.3cu projects, when we have 4-lane electrical and optical and true QSFP. So even though 400GBASE-LR8 electrical and optical lane rates are the same, it doesn’t have that magic number of 4 for the lane count, and hence the market opportunity is limited.

So now to the next “next rate”: As we’ve discussed privately, it is hard to look at the BWA report and conclude that 800GbE is anywhere near enough. But we don’t have a lot of ideas for practical implementations of 1.6T today, so there is some reason to think we end up with yet another dual-rate project specifying 800GbE and 1.6TbE. Based on history, the lower of these two rates should be ready for mass-market adoption soon after the standard is released, while the higher rate will serve an early adopter market and need one or more follow-on projects to get the lane count, which partially drives the overall size/cost/power of implementations, down to something reasonable.

Can you really imagine that you come out with a new dual-rate standard like this, and the lower of the two rates needs an 8-lane electrical interface and the higher one needs a 16-lane electrical interface? If that were the case, neither interface would be ready for mass-market adoption, and both would need follow-on projects before they would serve anything other than an early adopter market.
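
The lane-count arithmetic behind that question, as a minimal sketch (the candidate per-lane electrical rates are assumptions about what might be available, not conclusions), in Python:

# Illustrative only: electrical lane counts for the candidate next rates at
# assumed per-lane electrical rates of 100G and 200G.
rates_gbe = [800, 1600]
lane_rates_g = [100, 200]

for rate in rates_gbe:
    for lane_rate in lane_rates_g:
        print(f"{rate} GbE at {lane_rate}G per lane -> {rate // lane_rate} electrical lanes")
# 800 GbE needs 8 lanes at 100G or 4 lanes at 200G;
# 1.6 TbE needs 16 lanes at 100G or 8 lanes at 200G.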

So I would think 200 Gb/s per lane electrical is a gating technology for any 800GbE implementation ready for mass-market deployment. If we can’t achieve that before 2026-2027, then that’s when we should be aiming to complete the project.

Regards,

Steve

From: John D'Ambrosia <jdambrosia@xxxxxxxxx>
Sent: Saturday, August 1, 2020 3:56 PM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Chris,

My 25 Gb/s optical signaling research was focused by your comment – “…, i.e. 11 years after the CFI.” 😊 So you will need to share the blame for directing my response.

It doesn’t look like we are that much in disagreement. The big question is – will 200 Gb/s electrical be part of this effort? Matt’s timeline approximates a 200 Gb/s standard being completed in 2026/27 based on his trend line. If this project takes approximately 5 years – then maybe. If people want to see it included, then we need material to get added into the deck.

Or we look at spinning out the electrical portion of the project at a later date – if the optics begin to accelerate ahead of it.

And as I often say – great discussion

And

Good ? 4 a SG

John  

From: Chris Cole <chris.cole@xxxxxxxxxxx>
Sent: Saturday, August 1, 2020 5:38 PM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Hi John

In the 1st part of your email, your research into 25 Gb/s optical signaling in HSSG is too narrowly focused on the 2006 HSSG CFI deck. This leads to your unfortunate characterization of my 25G I/O timeline as misleading. In 2006, 40 Gb/s per lane optics had been shipping for several years into Telecom applications from multiple suppliers. This enabled optics suppliers to have confidence in a low-risk approach to the first 100G SMF optical interfaces based on 5x20 Gb/s or 4 x 25 Gb/s optical signaling, using de-rated 40 Gb/s optics.

In Nov. 2006, during the 2nd HSSG meeting, we showed real measurements of 20G TX eyes and BER waterfall curves using production optics that were similar (C-band changed to O-band) to what shipped in the first 100GbE-LR4 modules several years later.

http://www.ieee802.org/3/hssg/public/nov06/cole_01_1106.pdf#page=10

I fully agree with you that Cedric, Xiang, and Hong do an excellent job. However, they are unlikely anytime soon to be showing 200 Gb/s TX eyes and BER waterfall curve measurements using production-grade optics.

In the 2nd part of your email, you recover brilliantly by identifying the most important historical driver of high-volume datacenter optics shipments: matching electrical and optical lane rates. 10G, 40G, and 100G hit their 1st million units shortly after the appearance of Switch ASICs with electrical I/O matching the optical lane rates.


The next optics to ship a 1st million units will be 200G with 4x 50G I/O, matching the 50G I/O on Switch ASICs, some in the OSFP form factor and most in the QSFP56 form factor. Hong was one of the earliest to have this insight.

You correctly point out that the 100G 10:4 Gearbox restricted the initial CFP modules to low-volume, high-end applications. Similarly, the 400G 8:4 Gearbox restricts the initial modules to modest volumes. 400G will ship its 1st million units when ASICs with 100G I/O ship in volume, mostly in the QSFP112 form factor.

I fully agree with your conclusion that 200G per lane optics won’t see high volume until we see 200 Gb/s I/O on Switch ASICs, which tells us that any 200G optics we define in this project will have modest volume.

Chris

From: jdambrosia@xxxxxxxxx <jdambrosia@xxxxxxxxx>
Sent: Saturday, August 01, 2020 12:32 PM
To: Chris Cole <chris.cole@xxxxxxxxxxx>; STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Chris,

Thank you for bringing up this topic – it raises a lot of really good questions, which require us to have a frank discussion.

First – one of the things many of us have been saying is that the sweet spots for solutions are 1x and 4x lanes. This is a very important point. In the consensus deck I make use of this slide – which I know others have also used some variant of.

Is there general agreement that 200 Gb/s is the next data rate? (Mind you, I don’t say baud rate, as the modulation discussion is clearly already starting to happen.) I believe there is general industry agreement and focus on this.

Now, the next question in my mind is that while 1x and 4x are the sweet spots, we are seeing 8x packages emerge. So this is important to consider when we look at how we will address the next rate or rates. Maybe this project will build off the developing 100 Gb/s electrical interface, and that is how 800 will be achieved. Or maybe the group will decide to do 200 Gb/s per lane, recognize what I said above, and decide on 800 and 1.6?

[Slide: image002.jpg]

You made this comment

We know that 25G I/O based optics shipped the 1st million units in 2017, i.e. 11 years after the CFI.

I assume you are referring to the 2006 HSSG CFI - http://www.ieee802.org/3/cfi/0706_1/CFI_01_0706.pdf

If you go and look at this presentation and do a search on “25”, you will see that there are 8 findings, and none of them are about 25 Gb/s optical signaling. The two optics examples provided were based on 10 Gb/s signaling. The only reference to 25 Gb/s signaling is on page 32 of the file, and it is about electrical signaling. So the statement you made is a bit misleading, but it is also informative in that it suggests it took us 11 years to get to the sweet spot referenced above, i.e. optics matched to electrical at 4x25 Gb/s.

I also take this as meaning that if we want to minimize the time to reach the sweet spot, we need a 4x electrical/optical solution as soon as possible for networking applications. At this point, Cedric, Xiang, and Hong have done an excellent job exploring 200 Gb/s lane optics (http://www.ieee802.org/3/ad_hoc/ngrates/public/calls/20_0727/lam_nea_01_200727.pdf). However, we have very little info on 200 Gb/s SerDes. Matt has started to look at this (http://www.ieee802.org/3/ad_hoc/ngrates/public/calls/20_0604/brown_nea_01a_200604.pdf) from a historical and high level, but I think we need to get some info on the electrical signaling and the channel. Given the challenges that .3ck is facing, this is not an issue that I think we should take lightly.

Also, we need to look at the path from 200G electrical to 200G optical to make sure that the complexity of the total solution is reasonable. I remember, as do many, that the 10:4 mux that .3ba finalized on turned out to be harder than originally thought.

So if we are going to have a discussion about timings, it really needs to reference the right efforts. While 4x25G optics were finalized in .3ba, the 4x25G electrical interface wasn’t solved until .3bm, which published in 2015. Looking at our LightCounting numbers, I see that things really took off in 2016, as the numbers indicated a huge jump driven by very large volumes in QSFP28. So the electrical interface and the form factor were pivotal in the quick ramp-up, and I don’t think tying it back to the 2006 CFI is completely fair.

But it raises the question – how do we get to the sweet-spot solution as quickly as possible to minimize churn of solutions that don’t necessarily meet the customers’ needs, and what does that mean for the questions that we really need to answer as we look at starting this effort?

I hope this is coming off as me trying to frame the problem appropriately.  As one person put it – we are interested in the next economical speed of Ethernet, not just the next speed.  [Those are not necessarily the same thing]

With that said – yes, we have a lot of technical work in front of us. But I am sure you, like me and others, remember .3ba. There are a lot of debates that we are going to need to have to frame this project properly and to do it effectively. And I do remember .3ba and can understand the need to start having these discussions sooner rather than later.

Thanks for using the reflector to start this discussion.  In today’s COVID world, unfortunately, the ability for us to discuss this informally over refreshments or one of your famous “low-cost” dinners is extremely limited – THINK GENEVA May 2007 😊

John

PS to all – please feel free to jump in. There is some meaty discussion here, and we will have limited opportunities for teleconferences if we wish to do a CFI in November. It is important to identify the key concerns to move this effort along.

Anyone with 200 Gb/s SerDes info – please feel free to contact me and propose a presentation slot.

From: Chris Cole <chris.cole@xxxxxxxxxxx>
Sent: Saturday, August 1, 2020 2:11 PM
To: jdambrosia@xxxxxxxxx; STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: RE: [EXTERNAL]: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Hi John,

You are exactly right: the question of when 100G I/O based optics will ship the 1st million units is also important, as is the related question of when 50G I/O based optics will ship the 1st million units. We know that 25G I/O based optics shipped the 1st million units in 2017, i.e. 11 years after the CFI.

These milestones will tell us whether the objective is initial low-volume transport and inter-datacenter links, or high-volume intra-datacenter links. This doesn’t make a difference to the logic layer specification, but it makes a huge difference to the physical layer specification and to the understanding of the associated objectives.

Chris

From: John D'Ambrosia <jdambrosia@xxxxxxxxx>
Sent: Saturday, August 01, 2020 3:49 AM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: [EXTERNAL]: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Chris,

In the past, the question you asked below has been used to justify the next speed, not as justification for the speed in question itself. So I am trying to understand your question. It would seem the question you want to ask would be related to 100G, not 200G.

Just trying to understand what you are getting at to see if additional data is needed.

Thanks

John

From: Chris Cole <chris.cole@xxxxxxxxxxx>
Sent: Saturday, August 1, 2020 1:20 AM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

Hi Cedric

When do you think the 1st million optical transceivers with 200G I/O will ship? It can be any configuration: Nx200G, Nx400G, 800G, etc.

Chris

From: Cedric Lam ( ) <000011675c2a7243-dmarc-request@xxxxxxxxxxxxxxxxx>
Sent: Friday, July 31, 2020 9:35 AM
To: STDS-802-3-NGECDC@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_NGECDC] Input Requested for Beyond 400 GbE CFI

I can see 1x200G as something useful for server-to-TOR connections in the future, and it might be easy to add to the Ethernet family.  I agree with you on the 2x200G.  Also, bear in mind the limited distances that a 200G lane can cover, and the use cases.  We see it mostly in intra-DC applications.

--

Cedric F. Lam

On Fri, Jul 31, 2020 at 8:05 AM John D'Ambrosia <jdambrosia@xxxxxxxxx> wrote:

All,

I received a question after this week’s NEA meeting that I would like to get some feedback on from others.

The question was –

If 200 Gb/s per lane signaling were developed, could efforts to define 200 GbE based on 1x200 Gb/s and 400 GbE based on 2x200 Gb/s be addressed?

I think it is actually a good question and important for me in developing the CFI Consensus deck and defining the SG chartering motion.  As shown by the slide below, 200 Gb/s signaling is applicable to 200 and 400 GbE. 400 Gb/s serial signaling might also be applicable to 400 GbE.
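
To make the 1x/2x question concrete, here is a minimal sketch enumerating the lane counts each candidate per-lane signaling rate would imply for each MAC rate (the candidate rate set is an assumption for illustration, not a proposal), in Python:

# Illustrative enumeration only: lane counts implied by candidate per-lane
# signaling rates for each MAC rate; candidate rates are assumptions.
mac_rates_g = [200, 400, 800, 1600]
lane_signaling_g = [100, 200, 400]

for mac in mac_rates_g:
    configs = [f"{mac // s}x{s}G" for s in lane_signaling_g if mac % s == 0]
    print(f"{mac} GbE: {', '.join(configs)}")
# e.g. 200 GbE: 2x100G, 1x200G; 400 GbE: 4x100G, 2x200G, 1x400G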

My own personal opinion is that the whole 1x/2x lane question would then need to be examined on a per-PHY basis – as we have seen some instances where 2x-lane solutions don’t see market adoption.

This also raises the question as to whether the study group would define more than one PAR.  Based on the above text, I think there is an opportunity for that, or for another project that spins out efforts based on consideration of schedule.

So I would like some feedback from individuals, as it impacts the consensus deck.

Thanks in advance

John

 


To unsubscribe from the STDS-802-3-NGECDC list, click the following link: https://listserv.ieee.org/cgi-bin/wa?SUBED1=STDS-802-3-NGECDC&A=1

