Hi Ian,

CAUI-4 is a retimed interface, and will lead to low-cost interfaces because of the matching of 25G electrical and optical lane rates. This leverages the huge industry investment in 25G technology for optical PMDs.

The argument that CMOS (specifically the gearbox in your comments) is free (i.e., its cost is negligible) in optical transceivers has been made before and will continue to be an area of disagreement. CMOS is free when it is ubiquitous and in high volume; it is not free when it is used for modest-volume niche applications. Your assertion has never yet held true in datacom transceivers, including XAUI at 10G, the 16:1 and 4:1 mux/demux for 40G, and the gearbox for 100G. The rate-conversion function has always been a major cost component. This doesn't mean that this won't change in the future, but it suggests that the claim is not self-evident.

Chris

From: Dedic, Ian [mailto:Ian.Dedic@xxxxxxxxxxxxxx]

Chris,

That's fine if you can cascade links without retiming/data-recovery (SERDES) functions. But I don't see how that's possible if you have a CEI-xx (any type) link at one end, a x00GbE optical link (any type) in the middle, and a CEI-xx (any type, maybe different) link at the other end, since you have cascaded margins from (possibly) three different providers, each with no control over the performance of the other two. Once you need SERDES at the two ends of the optical link, it makes no real difference whether the rates are the same or different between electrical and optical; the "cost" of the gearbox function itself is negligible.

Ian

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]

Jeff,

Brad's insights are exactly on the mark. Mark Nowell pointed out the same thing on the 25G CFI conference call on 6/27/14. They also in no way conflict with the needs you identify. If we look at 10G, 40G, and 100G, they all followed the same pattern. The first generation required a mux (or gearbox) to match the higher optical rate to the lower existing electrical I/O (ASIC SerDes) rate.
The second generation was optimized for cost and volume and matched the optical and electrical rates, and follow-on generations require (or will require) a reverse mux (or gearbox) to enable legacy optical interfaces to plug into new electrical I/O ports.
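The generational pattern above reduces to simple lane-rate arithmetic. A minimal sketch (the function name and 2x50G follow-on example are illustrative assumptions, not from any standard):

```python
# Decide what lane-rate conversion (gearbox/mux), if any, sits between a
# host's electrical lanes and a module's optical lanes of equal total rate.

def gearbox_needed(elec_lanes, elec_rate_g, opt_lanes, opt_rate_g):
    """Return a description of the required rate-conversion function, or None."""
    assert elec_lanes * elec_rate_g == opt_lanes * opt_rate_g, \
        "total electrical and optical bandwidth must match"
    if elec_lanes == opt_lanes:
        return None  # matched rates: retimed interface, no gearbox (e.g. CAUI-4)
    if elec_lanes > opt_lanes:
        # first generation: mux up to the faster, narrower optical lanes
        return f"{elec_lanes}:{opt_lanes} gearbox"
    # follow-on generation: legacy (slower, wider) optics on a faster host
    return f"{elec_lanes}:{opt_lanes} reverse gearbox"

# The 100G pattern described above:
print(gearbox_needed(10, 10, 4, 25))  # 1st gen: 10x10G host, 4x25G optics
print(gearbox_needed(4, 25, 4, 25))   # 2nd gen: CAUI-4, matched rates
print(gearbox_needed(2, 50, 4, 25))   # hypothetical follow-on host, legacy optics
```

The matched-rate case is the only one where the conversion function disappears, which is the point both Chris and Brad are making about cost-optimized second generations.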
From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]

Brad,

I was thinking that you could at least react to the portion of my presentation that you do not think provides the critical perspective to meet the needs of data centers. Indeed, let's refine things into a consensus presentation for September. Below in yellow, I mark your slide text that implies (says to me) that "cloud needs a new optical PMD for each SERDES rate."

For electrical interfaces that are N wide and optical interfaces that are M wide, there is only one case for which N = M. When N does not equal M, a gearbox/mux is needed. Avoiding them then requires new optical interfaces to be defined so that each new value of N can be matched by a new value of M. SERDES and optical lane speeds have to go in lock step: a new optical PMD standard for each case. As you point out (and as I did in my presentation in May), the modulation should also likely match. I would update my presentation to say that they do not have to be defined at the same time, that is, electrical and optical lanes of equal speed.

Considering a gearbox an implementation fix implies that only the one generation where the gearbox/mux is not needed is a proper implementation. Here I disagree. Some applications need interoperation across system generations (i.e., form-factor generations), especially among systems from competing system companies. Use of gearboxes preserves interoperation among systems of different SERDES generations, and especially among systems from competing system companies with different SERDES rates.

Here is where you should separate whether you are talking about interconnects that are primarily for the data center versus for router-to-router or router-to-transport. For the data center, you could assert that the system-generation (SERDES) rate is well managed for uniformity among the interconnected systems.
Jeff

From: Brad Booth [mailto:bbooth@xxxxxxxx]

Jeff,

Your suggestion of going through your presentation and selecting what you got right is not viable for next week's meeting. If you'd like to get a group together to do that filtering, then we could consider it for the September task force meeting.

You'll have to help me understand where I state that cloud needs a new optical PMD for each SERDES rate. It's not as though 802.3 hasn't done that in the past, but I don't see where I state that in my presentation.

Thanks,

On Wed, Jul 9, 2014 at 6:39 PM, Jeffery Maki <jmaki@xxxxxxxxxxx> wrote: