Hi Joel,

I had to fish this email out of my junk folder, where I was prompted to answer whether I ALWAYS wanted to trust email from jgoergen@xxxxxxxxx :-)

I think there is a simple answer and a complicated answer. The simple answer, which Chris has advocated for some time, is that the industry investment to introduce a new (higher) rate is so massive that you wouldn't make it just to get a factor of 2.5, so surely the next rate beyond 400G would be at least 1.6T.

The more complicated answer is that I do believe at some point we are going to have to "build a better LAG". We hear about the shortcomings of LAG at every speed step, and sooner or later we are going to have to bite the bullet and define a physical-layer aggregation mechanism that allows individual flows or "conversations" to exceed the size of a single group member and not be at the mercy of a hashing function for how efficiently the traffic is distributed.

Since we are on the 400G reflector, I won't comment in this email on the wisdom of introducing a new rate in the middle which is a factor of 2.5 above the next lower rate and a factor of 1.6 below the next higher rate.

Regards,
Steve

From: Joel Goergen (jgoergen) [mailto:jgoergen@xxxxxxxxx]
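[An editorial aside on the LAG shortcoming Steve describes above. The sketch below is illustrative only (the hash function, lane counts, and flow tuple are hypothetical, not from the thread): because a hash pins each flow to one member link, no single flow can exceed one member's rate, regardless of the aggregate capacity.]

```python
# Illustrative sketch (hypothetical, not from the thread): hash-based LAG
# distribution. Each flow is pinned to one member link, so a single flow
# can never use more than one member's capacity, however large the aggregate.
import hashlib

MEMBERS = 4        # e.g. a 4 x 100G LAG -> 400G aggregate
MEMBER_GBPS = 100

def member_for_flow(five_tuple):
    """Pick a LAG member by hashing the flow's 5-tuple (stable per flow)."""
    digest = hashlib.sha256(repr(five_tuple).encode()).digest()
    return digest[0] % MEMBERS

flow = ("10.0.0.1", "10.0.0.2", 6, 49152, 443)  # src, dst, proto, sport, dport
link = member_for_flow(flow)

# Every packet of this flow hashes to the same member, so its throughput
# ceiling is MEMBER_GBPS (100G), not the 400G aggregate.
assert all(member_for_flow(flow) == link for _ in range(1000))
print(f"flow pinned to member {link}, capped at {MEMBER_GBPS}G")
```

This is exactly the limitation a physical-layer aggregation mechanism would have to remove: distribution below the flow granularity, without reordering.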
Chris,

I think I am missing what you are getting at. If you can, please clarify.

1) I think I heard you say we should only be standardizing technology within the short term, 2 to 4 years, and that we should not be doing long-range planning or discussing long-range directions. If this is true, I seriously disagree. It implies that we will be short-sighted in innovation and ultimately give customers limited technology choices for their applications. Like I said, I don't think this is what you mean.

2) I keep hearing that you do not want 1TbE. What is it that you think we need going into 2018/2020? What is the technology plan and solution set you are offering? I don't believe 400GbE LAG is going to work well, and I believe that if that is the direction you want to go in, then we had better start a CFI to address LAG and the efficiency issues related to higher-speed ports and associated traffic patterns.

Take care,
Joel

From:
Chris Cole <chris.cole@xxxxxxxxxxx>

Ciao Marco,

When presenting this diagram before, and again later this week, the accompanying remark is that the objective is not to claim with any certainty that 100G linear I/O will be the ultimate high-volume 100G interface. Just the opposite: the objective is to caution, through example, against claims that we have sufficient insight to define an optimum architecture more than 8 years out. A compelling aspect of this example is that it shows an approach to a ~1W/100G transceiver architecture, in contrast to a fully re-timed architecture with an internal DAC/ADC/DSP/FEC PHY. Another example of a candidate ultimate architecture is DP-QPSK; a lot of telecom R&D is ongoing in this area which could lead to breakthrough advances in the cost of coherent components. There are other examples of possible technology-breakthrough areas.

We should discount arguments in support of standardization proposals which claim to be solving post-2020 problems. Our evaluation of PMD alternatives should be based on solving current and next-generation problems, using technology that we understand and have experience with. This enables objective decision making, in contrast to debating which technology will and will not arise in the far future.

Separately, I hope we never have to discuss 1Tb Ethernet. We should learn a lesson from the contortions we are going through now because of non-binary rate increases like 2.5x.

Chris

From: Marco Mazzini (mmazzini) [mailto:mmazzini@xxxxxxxxx]
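[An editorial aside on the step-ratio arithmetic running through this thread. The snippet below is illustrative only; the rate ladders are examples, not proposals from the thread. It shows why inserting 1T between 400G and 1.6T produces the non-binary 2.5x and 1.6x steps Steve and Chris refer to.]

```python
# Illustrative arithmetic (not from the thread): step ratios between
# successive Ethernet rates under two candidate ladders beyond 400G.
def step_ratios(rates_gbps):
    """Return the multiplier from each rate in the ladder to the next."""
    return [round(b / a, 2) for a, b in zip(rates_gbps, rates_gbps[1:])]

binary_ladder = [100, 400, 1600]       # uniform 4x steps
with_1t = [100, 400, 1000, 1600]       # inserting 1T "in the middle"

print(step_ratios(binary_ladder))  # [4.0, 4.0]
print(step_ratios(with_1t))        # [4.0, 2.5, 1.6]
```

The 2.5x step is the same ratio as the 40G-to-100G transition whose contortions Chris mentions (100/40 = 2.5).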
Hi Chris,

If I look back at history, your flow chart applies to the 10G case: XAUI (X2, XENPAK) [Today] -> serial re-timed 10G (XFP) [Next] -> linear (SFP+) [Ultimate]. But if we try to apply it to the 40G case, then we should end with a linear 40G interface as Ultimate. This may be possible, yet a re-timed 40G serial interface seems more feasible today than a linear one (at this stage I'm not sure that a 25G linear interface would be a simpler/cheaper solution than a re-timed one). If we follow the same concept and apply it to the current 25G path forward to 100G, then the next step, a 1x100G re-timed module, can become the Ultimate, at the same cost as (yet with less complexity than) a linear 100G one (while we discuss 1Tb modules for 2020 ...).

Marco

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Jeff,

We have at least two contributions for next week making proposals as to which SerDes rate should be matched.

In his presentation, Vipul proposes that we match the 100Gb/s SerDes rate: http://www.ieee802.org/3/bs/public/14_07/bhatt_3bs_01_0714.pdf#page=6

In my presentation, I propose that we match the 50Gb/s SerDes rate: http://www.ieee802.org/3/bs/public/14_07/cole_3bs_02_0714.pdf#page=8

The important point made by Brad in his presentation is that this is an important consideration in selecting a mainstream PMD architecture: http://www.ieee802.org/3/bs/public/14_07/booth_3bs_01_0714.pdf

Chris

From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Chris,

For which SERDES rate should no gearbox/mux be needed? That is the question, and we need contributions answering it. I'm sensing the answer depends upon the reach.

Jeff

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Jeff,

Brad's insights are exactly on the mark. Mark Nowell pointed out the same thing on the 25G CFI conference call on 6/27/14. They also in no way conflict with the needs you identify.

If we look at 10G, 40G, and 100G, they all followed the same pattern. The first generation required a mux (or gearbox) to match the higher optical rate to the lower existing electrical I/O (ASIC SerDes) rate. The second generation was optimized for cost and volume and matched the optical and electrical rates. Follow-on generations require (or will require) a reverse mux (or gearbox) to enable legacy optical interfaces to plug into new electrical I/O ports.
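[An editorial aside: the three-generation pattern Chris describes can be sketched as a simple classification. The function name and lane rates below are illustrative, not from the thread; the 100G example rates (10G, 25G, 50G lanes) follow the widely known CFP -> QSFP28 progression.]

```python
# Hypothetical sketch (names are mine, not from the thread): classify the
# gearbox requirement of a module generation from its per-lane rates.
def gearbox_kind(electrical_gbps, optical_gbps):
    """Which gearbox, if any, bridges the electrical I/O lane rate
    to the optical lane rate?"""
    if electrical_gbps == optical_gbps:
        return "none (rates matched)"
    if electrical_gbps < optical_gbps:
        return "mux/gearbox (slow electrical -> fast optical)"
    return "reverse mux/gearbox (fast electrical -> legacy optical)"

# The 100G pattern, by generation:
print(gearbox_kind(10, 25))  # 1st gen: 10x10G electrical into 4x25G optical
print(gearbox_kind(25, 25))  # 2nd gen: rates matched, cost/volume optimized
print(gearbox_kind(50, 25))  # follow-on: 50G SerDes into legacy 4x25G optics
```

The second case is the cost/volume sweet spot both contributions for next week are arguing over: which SerDes rate gets to be the matched one.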
From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Brad,

I was thinking that you could at least react to the portion of my presentation that you do not think provides the critical perspective to meet the needs of data centers. Indeed, let's refine things into a consensus presentation for September.

Below in yellow, I mark your slide text that implies (says to me) that "cloud needs a new optical PMD for each SERDES rate." For electrical interfaces that are N lanes wide and optical interfaces that are M lanes wide, there is only one case for which N=M. When N does not equal M, a gearbox/mux is needed. Avoiding gearboxes then requires new optical interfaces to be defined so that each new value of N can be matched by a new value of M: SERDES and optical lane speeds have to go in lock step, with a new optical PMD standard for each case. As you point out (and as I did in my presentation in May), the modulation should also likely match. I would update my presentation to say that the electrical and optical lanes of equal speed do not have to be defined at the same time.

Considering a gearbox an implementation fix implies that only the one generation where the gearbox/mux is not needed is a proper implementation. Here I disagree. Some applications need interoperation across system generations (i.e., form-factor generations), especially among systems from competing system companies. Use of gearboxes preserves interoperation among systems of different SERDES generations, and especially among systems from competing system companies with different SERDES rates.

Here is where you should separate whether you are talking about interconnects that are primarily for the data center versus router-to-router or router-to-transport. For the data center, you could assert that the system-generation (SERDES) rate is well managed for uniformity among the interconnected systems.

Jeff

From: Brad Booth [mailto:bbooth@xxxxxxxx]
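[An editorial aside on Jeff's N-versus-M condition above. The sketch below is illustrative only; the 400G lane-rate combinations are examples, not proposals from the thread. For a fixed aggregate rate, a gearbox/mux is avoided only when the electrical lane count N equals the optical lane count M.]

```python
# Illustrative sketch (hypothetical numbers): for a fixed aggregate rate,
# compare the electrical lane count N against the optical lane count M.
def needs_gearbox(total_gbps, serdes_gbps, optical_lane_gbps):
    n = total_gbps // serdes_gbps        # electrical lanes, N
    m = total_gbps // optical_lane_gbps  # optical lanes, M
    return n, m, n != m                  # gearbox needed when N != M

print(needs_gearbox(400, 50, 50))   # (8, 8, False): N=M, no gearbox
print(needs_gearbox(400, 25, 50))   # (16, 8, True): 16:8 gearbox/mux needed
print(needs_gearbox(400, 100, 50))  # (4, 8, True): reverse mux needed
```

This is the lock-step constraint Jeff highlights: keeping N=M across generations forces a new optical PMD each time the SERDES rate changes, while a gearbox lets the two evolve independently.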
Jeff,

Your suggestion of going through your presentation and selecting what you got right is not viable for next week's meeting. If you'd like to get a group together to do that filtering, then we could consider that for the September task force meeting.

You'll have to help me understand where I state that cloud needs a new optical PMD for each SERDES rate. It's not like 802.3 hasn't done that in the past, but I don't see where I state that in my presentation.

Thanks,

On Wed, Jul 9, 2014 at 6:39 PM, Jeffery Maki <jmaki@xxxxxxxxxxx> wrote: