Ken,
Was that “4x3.125Gb/s XAUI=>XFI IC” free? If not, what percentage of the cost did it contribute?
Jeff
From: Kenneth Jackson [mailto:kjackson@xxxxxxxxxxxxxx]
Sent: Friday, July 11, 2014 1:16 PM
To: Chris Cole
Cc: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Success is relative, but there was a 10GBASE-CX4 XFP module. Essentially it used the 4x3.125Gb/s XAUI=>XFI ICs and flipped them around.
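[Editor's note: for context on the lane arithmetic behind that XAUI=>XFI conversion, here is a minimal back-of-the-envelope check. The line rates and coding overheads are standard 802.3 figures; the sketch itself is illustrative, not from this thread.]

```python
# XAUI vs. XFI payload check: both carry the same 10 Gb/s of data,
# just packaged differently on the wire.

# XAUI: 4 lanes at 3.125 Gb/s, 8b/10b coded (8/10 payload efficiency)
xaui_payload = 4 * 3.125 * (8 / 10)     # Gb/s of usable data

# XFI: 1 lane at 10.3125 Gb/s, 64b/66b coded (64/66 payload efficiency)
xfi_payload = 1 * 10.3125 * (64 / 66)   # Gb/s of usable data

assert abs(xaui_payload - xfi_payload) < 1e-9
print(round(xaui_payload, 6), round(xfi_payload, 6))  # 10.0 10.0
```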
From: Chris Cole <chris.cole@xxxxxxxxxxx>
To: <STDS-802-3-400G@xxxxxxxxxxxxxxxxx>
Date: 07/11/2014 02:51 PM
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Hi Mike,
For 10G, the reverse gearbox is specified in the OIF MLG standard. An application would be CFP4, or hopefully QSFP in the future, which takes 4x25G I/O (CAUI-4) and expands it to
10x10G.
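[Editor's note: to make the rate bookkeeping of that reverse gearbox explicit, a gearbox can only repack bits across lanes, so the aggregate bandwidth on both sides must balance. A minimal sketch follows; the actual OIF MLG lane mapping is considerably more involved.]

```python
def gearbox_ok(n_in: int, rate_in: float, n_out: int, rate_out: float) -> bool:
    """A (reverse) gearbox only repacks lanes: total bandwidth must match."""
    return n_in * rate_in == n_out * rate_out

# CAUI-4 electrical expanded to 10x10G, as in the OIF MLG case above
print(gearbox_ok(4, 25.0, 10, 10.0))  # True: 100G on both sides
print(gearbox_ok(4, 25.0, 4, 10.0))   # False: 100G vs 40G cannot balance
```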
I qualified my comment with “or will require”, so it is not yet possible to characterize this as a successful implementation example; it is only an implementation example. Time will tell about the success part.
Chris
From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxx]
Sent: Friday, July 11, 2014 8:52 AM
To: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Chris,
I am interested in your cut on the following statements from your e-mail.
If we look at 10G, 40G, and 100G, they all followed the same pattern
and follow on generations require (or will require) reverse mux (or gearbox) to enable legacy optical interfaces to plug into new electrical I/O ports.
While this is technically correct, I can’t think of any examples of successful implementations of the reverse mux (or were there 10GBASE-LX4 SFP or XFP modules that I don’t know about?).
Mike Dudek
QLogic Corporation
Director Signal Integrity
26650 Aliso Viejo Parkway
Aliso Viejo CA 92656
949 389 6269 - office.
Mike.Dudek@xxxxxxxxxx
From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]
Sent: Friday, July 11, 2014 8:38 AM
To: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Chris
That’s fine if you can cascade links without retiming/data recovery (SERDES) functions. But I don’t see how that’s possible if you have a CEI-xx (any type) link at one end, a x00GbE optical link (any type) in the middle, and a CEI-xx (any type, maybe different) link at the other end, since you have cascaded margins from (possibly) three different providers, each with no control over the performance of the other two.
Once you need SERDES at the two ends of the optical link, it makes no real difference if the rates are the same or different between electrical and optical; the “cost” of the gearbox function itself is negligible.
Ian
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Sent: 11 July 2014 16:29
To: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Jeff
Brad’s insights are exactly on the mark. Mark Nowell pointed out the same thing on the 25G CFI conference call on 6/27/14.
They also in no way conflict with the needs you identify. If we look at 10G, 40G, and 100G, they all followed the same pattern.
The first generation required a mux (or gearbox) to match the higher optical rate to the lower existing electrical I/O (ASIC SerDes) rate. The second generation was optimized for cost and volume and matched the optical and electrical rates, and follow-on generations require (or will require) a reverse mux (or gearbox) to enable legacy optical interfaces to plug into new electrical I/O ports.
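[Editor's note: a hedged sketch of that three-generation pattern, using 100G lane counts. Generations 1 and 2 match what this thread describes; the 2x50G electrical width shown for generation 3 is a hypothetical future case, not a fact from the thread.]

```python
# Three generations of 100G electrical vs. optical lane configurations.
generations_100g = [
    # (gen, electrical (lanes, Gb/s), optical (lanes, Gb/s), adaptation)
    (1, (10, 10.0), (4, 25.0), "mux/gearbox to reach the faster optics"),
    (2, (4, 25.0),  (4, 25.0), "none: rates matched, cost/volume optimized"),
    (3, (2, 50.0),  (4, 25.0), "reverse mux: legacy optics on new I/O"),  # hypothetical
]
for gen, (n, e_rate), (m, o_rate), need in generations_100g:
    print(f"gen {gen}: {n}x{e_rate}G electrical vs {m}x{o_rate}G optical -> {need}")
```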
Chris
From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Sent: Friday, July 11, 2014 8:08 AM
To: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Brad,
I was thinking that you could at least react to the portion of my presentation that you do not think provides the critical perspective to meet the needs of data centers. Indeed, let’s refine things into a consensus presentation for September.
Below in yellow, I mark your slide text that implies (says to me) that “cloud needs a new optical PMD for each SERDES rate.”
For electrical interfaces that are N wide and optical interfaces that are M wide, there is only one case for which N=M. When N does not equal M, a gearbox/mux is needed. Avoiding them then requires new optical interfaces to be defined so that each new value of N can be matched by a new value of M. SERDES and optical lane speeds have to go in lock step: a new optical PMD standard for each case.
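[Editor's note: restated as a toy predicate, for illustration only and not from any standard, the condition is simply N != M.]

```python
def needs_gearbox(n_electrical: int, m_optical: int) -> bool:
    """A gearbox/mux is needed whenever electrical and optical widths differ."""
    return n_electrical != m_optical

# First-generation 100G: 10x10G electrical lanes into 4x25G optics
print(needs_gearbox(10, 4))  # True  -> gearbox required
print(needs_gearbox(4, 4))   # False -> the one matched case (N = M)
```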
As you point out (and as I did in my presentation in May), the modulation should also likely match. I would update my presentation to say that electrical and optical lanes of equal speed do not have to be defined at the same time.
Considering a gearbox as an implementation fix implies that only the one generation where the gearbox/mux is not needed is a proper implementation. Here I disagree. Some applications need interoperation across system generations (i.e., form-factor generations), especially among systems from competing system companies. Use of gearboxes preserves interoperation among systems of different SERDES generations, and especially among systems from competing system companies with different SERDES rates. Here is where you should separate whether you are talking about interconnects that are primarily for the data center versus for router-to-router or router-to-transport. For the data center, you could assert that the system generation (SERDES) rate is well managed for uniformity among the interconnected systems.
Jeff
From: Brad Booth [mailto:bbooth@xxxxxxxx]
Sent: Thursday, July 10, 2014 10:41 AM
To: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Jeff,
Your suggestion for going through your presentation and selecting what you got right is not viable for next week's meeting. If you'd like to get a group together to do that filtering, then we could consider that for the September task force meeting.
You'll have to help me understand where I state that cloud needs a new optical PMD for each SERDES rate. It's not like 802.3 hasn't done that in the past, but I don't see where I state that in my presentation.
Thanks,
Brad
On Wed, Jul 9, 2014 at 6:39 PM, Jeffery Maki <jmaki@xxxxxxxxxxx> wrote:
Brad,
In Norfolk, I made such a presentation of the “things the task force might wish to consider when selecting interfaces.” Thus, I think it would help if you highlighted what you think I got correct in my presentation and what you would change or add.
What I see in your presentation is that cloud needs a new optical PMD standard for every new SERDES rate. Perhaps you should be discussing how distinct identity is driven by the SERDES rate. Therefore, perhaps we should have multiple optical PMDs defined in the same project, aligned to the different SERDES rates.
Jeff
From: Brad Booth [mailto:bbooth@xxxxxxxx]
Sent: Wednesday, July 09, 2014 12:52 PM
To: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: Re: [STDS-802-3-400G] [802.3_400G] Presentation for next week
Jeff,
You're correct that answering that question would be great. It's not a simple answer though, and everyone is likely to have varying views. What I was hoping to capture in the presentation are the things the task force might wish to consider when selecting interfaces.
I'm hoping that the presentation will permit the task force to discuss some of these issues in an open forum to gain broad consensus on a path forward.
Thanks,
Brad
On Tue, Jul 8, 2014 at 8:01 PM, Jeffery Maki <jmaki@xxxxxxxxxxx> wrote:
Brad,
We have multiple optical reach objectives in 802.3bs. I do not see that the longevity (or obsolescence) would be the same for them all, nor that the high-volume market adoption time frame would be the same for them all.
Could you make your presentation more granular, with statements for each reach objective?
I don’t like gearboxes (or, I think you mean to say, muxes) either, but if they do not appear in initial implementations, then they will show up sooner or later as reverse muxes as electrical interfaces progress in lane count reduction. I would presume you would want there to be no mux needed when 400G Ethernet is adopted in large volume in the mega datacenter. The question then is: at what SERDES rate on switch ASICs do you see 400G Ethernet being adopted in large volume in the mega datacenter?
Jeff
From: Brad Booth [mailto:bbooth@xxxxxxxx]
Sent: Monday, July 07, 2014 1:13 PM
To: STDS-802-3-400G@xxxxxxxxxxxxxxxxx
Subject: [802.3_400G] Presentation for next week
All,
I'm attaching a first draft of a presentation I plan to make next week at the 802.3bs meeting. If you see any areas where I can provide greater clarification, please feel free to let me know.
If you'd like to be listed as a supporter of this material, I'd be honored to add your name.
Thanks,
Brad