Hi Mike,

For 10G, the reverse gearbox is specified in the OIF MLG standard. An application would be CFP4, or hopefully QSFP in the future, which takes 4x25G I/O (CAUI-4) and expands it to 10x10G.

I qualified my comment with "or will require", so it is not possible yet to characterize this as a successful implementation example; it is only an implementation example. Time will tell about the success part.
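[Editor's illustration: the following Python sketch shows only the lane-rate bookkeeping behind the reverse gearbox described above, i.e., carrying 10 x 10G client lanes over a 4 x 25G CAUI-4 host interface. The round-robin assignment is an assumption for illustration and is not the actual OIF MLG block-distribution scheme.]

    # Toy sketch of reverse-gearbox lane bookkeeping: carry 10 x 10G client
    # lanes over 4 x 25G host lanes (CAUI-4). This is NOT the OIF MLG block
    # distribution; it only checks that aggregate rates line up and shows one
    # possible round-robin assignment of client data blocks to host lanes.

    CLIENT_LANES, CLIENT_RATE_G = 10, 10   # 10 x 10G (e.g., legacy 10G optics)
    HOST_LANES, HOST_RATE_G = 4, 25        # 4 x 25G (CAUI-4 electrical I/O)

    assert CLIENT_LANES * CLIENT_RATE_G == HOST_LANES * HOST_RATE_G  # 100G total

    def distribute(blocks_per_client=5):
        """Round-robin client blocks onto host lanes (illustrative only)."""
        host = {lane: [] for lane in range(HOST_LANES)}
        stream = [(c, b) for b in range(blocks_per_client) for c in range(CLIENT_LANES)]
        for i, (client, block) in enumerate(stream):
            host[i % HOST_LANES].append((client, block))
        return host

    if __name__ == "__main__":
        for lane, blocks in distribute().items():
            print(f"host lane {lane}: carries blocks from clients "
                  f"{sorted({c for c, _ in blocks})}")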
From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxx]

Chris, I am interested in your cut on the following statements from your e-mail: "If we look at 10G, 40G, and 100G, they all followed the same pattern ... and follow-on generations require (or will require) reverse mux (or gearbox) to enable legacy optical interfaces to plug into new electrical I/O ports." While this is technically correct, I can't think of any examples of successful implementations of the reverse mux (or were there 10GBASE-LX4 SFP or XFP modules that I don't know about?).

Mike Dudek
QLogic Corporation
Director Signal Integrity
26650 Aliso Viejo Parkway
Aliso Viejo CA 92656
949 389 6269 - office.

From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]

Chris,

That's fine if you can cascade links without retiming/data recovery (SERDES) functions. But I don't see how that's possible if you have a CEI-xx (any type) link at one end, a x00GbE optical link (any type) in the middle, and a CEI-xx (any type, maybe different) link at the other end, since you have cascaded margins from (possibly) three different providers, each with no control over the performance of the other two. Once you need SERDES at the two ends of the optical link, it makes no real difference if the rates are the same or different between electrical and optical; the "cost" of the gearbox function itself is negligible.

Ian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]

Jeff,

Brad's insights are exactly on the mark. Mark Nowell pointed out the same thing on the 25G CFI conference call on 6/27/14. They also in no way conflict with the needs you identify.

If we look at 10G, 40G, and 100G, they all followed the same pattern. The first generation required a mux (or gearbox) to match the higher optical rate to the lower existing electrical I/O (ASIC SerDes) rate. The second generation was optimized for cost and volume and matched the optical and electrical rates, and follow-on generations require (or will require) a reverse mux (or gearbox) to enable legacy optical interfaces to plug into new electrical I/O ports.
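[Editor's illustration: the three-generation pattern Chris describes can be stated mechanically. The Python sketch below is an assumption-laden illustration, not anything drawn from the standards text; the 100G example rows (CAUI-10 first generation, CAUI-4 second generation, CAUI-4 with legacy 10x10G optics as the follow-on case) are used only to exercise the classification.]

    # Sketch of the generational pattern described above: whether a given
    # electrical I/O (host SerDes) and optical PMD lane configuration needs a
    # gearbox, nothing, or a reverse gearbox. Example rows are illustrative.

    def lane_adaptation(elec_lanes, elec_rate_g, opt_lanes, opt_rate_g):
        if elec_lanes * elec_rate_g != opt_lanes * opt_rate_g:
            return "rates do not add up -- not a valid pairing"
        if elec_rate_g < opt_rate_g:
            return "mux / gearbox (slower, wider electrical into faster optical)"
        if elec_rate_g > opt_rate_g:
            return "reverse mux / gearbox (faster electrical into legacy slower optical)"
        return "direct match (no gearbox)"

    examples = [
        # (electrical lanes, Gb/s), (optical lanes, Gb/s) -- illustrative 100G cases
        ((10, 10), (4, 25)),   # first generation: 10-lane host I/O, 4x25G optics
        ((4, 25), (4, 25)),    # second generation: CAUI-4 host, 4x25G optics
        ((4, 25), (10, 10)),   # follow-on: CAUI-4 host, legacy 10x10G optics
    ]
    for elec, opt in examples:
        print(elec, opt, "->", lane_adaptation(*elec, *opt))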
From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]

Brad,

I was thinking that you could at least react to the portion of my presentation that you do not think provides the critical perspective to meet the needs of data centers. Indeed, let's refine things into a consensus presentation for September.

Below in yellow, I mark your slide text that implies (says to me) that "cloud needs a new optical PMD for each SERDES rate." For electrical interfaces that are N wide and optical interfaces that are M wide, there is only one case for which N=M. When N does not equal M, a gearbox/mux is needed. Avoiding them then requires new optical interfaces to be defined so that each new value of N can be matched by a new value of M. SERDES and optical lane speeds have to go in lock step; a new optical PMD standard is needed for each case. As you point out (and I did in my presentation in May), the modulation should also likely match. I would update my presentation to say that they do not have to be defined at the same time, that is, the electrical and optical lanes of equal speed.

Considering a gearbox as an implementation fix implies that only the one generation where the gearbox/mux is not needed is a proper implementation. Here I disagree. Some applications need interoperation across system generations (i.e., form-factor generations), especially among systems from competing system companies. Use of gearboxes preserves interoperation among systems of different SERDES generations, and especially among systems from competing system companies with different SERDES rates.

Here is where you should separate whether you are talking about interconnects that are primarily for the data center versus for router-to-router or router-to-transport. For the data center, you could assert that the system generation (SERDES) rate is well managed for uniformity among the interconnected systems.

Jeff

From: Brad Booth [mailto:bbooth@xxxxxxxx]

Jeff,

Your suggestion of going through your presentation and selecting what you got right is not viable for next week's meeting. If you'd like to get a group together to do that filtering, then we could consider that for the September task force meeting.

You'll have to help me understand where I state that cloud needs a new optical PMD for each SERDES rate. It's not like 802.3 hasn't done that in the past, but I don't see where I state that in my presentation.

Thanks,

On Wed, Jul 9, 2014 at 6:39 PM, Jeffery Maki <jmaki@xxxxxxxxxxx> wrote:
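[Editor's illustration: the N-versus-M point in Jeff's message above can be shown by enumeration. The Python sketch below is illustrative only; the candidate lane rates are assumptions, not a survey of defined PMDs. It shows that for a fixed aggregate rate, the only pairings that avoid a gearbox are those where the optical lane count and rate match the electrical lane count and rate, which is why avoiding gearboxes forces a new optical PMD for each SERDES rate.]

    # Sketch of the N-vs-M argument: for a fixed aggregate rate, only pairings
    # with matching electrical and optical lane counts/rates avoid a gearbox.
    # Rate lists are illustrative assumptions.

    AGGREGATE_G = 100
    serdes_rates_g = [10, 25, 50]          # candidate electrical lane rates
    optical_rates_g = [10, 25, 50, 100]    # candidate optical lane rates

    for er in serdes_rates_g:
        n = AGGREGATE_G // er              # electrical width N
        for orate in optical_rates_g:
            if AGGREGATE_G % orate:
                continue
            m = AGGREGATE_G // orate       # optical width M
            tag = "no gearbox" if (n, er) == (m, orate) else "gearbox/mux needed"
            print(f"N={n}x{er}G electrical, M={m}x{orate}G optical -> {tag}")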