Chris and all:

I find that we are hesitant to embrace generation-by-generation Ethernet rates and different PHY types based on evolving technologies. It is wiser to decouple rate evolution from evolving PHY technologies. Consider a general, MAC-rate-agnostic common PHY architecture suitable for 400G and a future 1T or 1.6T, if you don't like generation-by-generation standardization for 400GE, 800GE, 1TE, 1.6TE, and so on. Take OIF MLG, based on the 802.3ba MLD architecture: today 10GE, 40GE, and 100GE are all able to ride on 100G PHYs.

100GE/40GE, as a typical evolving generation, took a long time and heavy development budgets for the industry to get to where we stand today. The related 100GE/40GE work started in early 2006 with the 802.3ba project; later 40GBASE-FR, 802.3bj, and 802.3bm are all supplementary PHY types or enhancements.
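A rough sketch of the lane arithmetic behind this decoupling idea (the 25G lane rate and the list of MAC rates are my illustrative assumptions, not values from OIF MLG or any standard):

```python
# Illustrative sketch of a MAC-rate-agnostic PHY: a pool of identical lanes
# that different MAC rates are striped across (in the spirit of OIF MLG /
# 802.3ba MLD). The 25G lane rate is an assumption for illustration only.

PHY_LANE_GBPS = 25  # assumed common PHY lane rate

def lanes_needed(mac_rate_gbps: int) -> int:
    """Number of common PHY lanes a given MAC rate is striped across."""
    if mac_rate_gbps % PHY_LANE_GBPS != 0:
        raise ValueError("MAC rate must be a multiple of the PHY lane rate")
    return mac_rate_gbps // PHY_LANE_GBPS

# One common PHY architecture carries several Ethernet generations unchanged:
for mac in (100, 400, 800, 1600):
    print(f"{mac}G MAC -> {lanes_needed(mac)} x {PHY_LANE_GBPS}G PHY lanes")
```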
While we consider the next rate of 400G or beyond, remember the arguments over the rates of 100GE and 40GE during 802.3ba.

Qiwen

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Joel,

1) Thank you for giving me the benefit of the doubt that I wasn't advocating something silly. It is critical to have roadmaps, project future technology, and put up straw men. This makes discussion specific and substantive rather than generic. While doing that, it is important to keep in mind that we rarely get it right when making projections, so we should maintain a healthy dose of skepticism about how we use them. To make our exchange specific: my concern about adopting a 100G Serial architecture now is exactly the same as my concern about adopting 40G Serial in .ba for the 40GBASE duplex SMF PMD. It's too soon.

http://www.ieee802.org/3/ba/public/sep08/cole_02_0908.pdf#page=5

Agreeing that 40G Serial is the right long-term high-volume solution doesn't mean that it is right when adopted too early. 40G Serial was 15 years too early as the 40G OC768 Telecom client, and it would have been 8 years too early had we adopted it as 40Gb/s Ethernet in .ba. 100G Serial will be the right answer >2020. It's too soon to standardize it now and lock in today's understanding and technology as the right long-term solution.

40G Serial offers another example of why we should be cautious. In .ba we thought the ultimate 40G Serial architecture was NRZ. However, we are now considering PAM-4 as a possible alternative, which at the time wasn't even on anyone's radar screen as a PMD alternative.

Dave Chalupsky wonderfully qualifies his projections: "Current Forecast … surely accurate ☺". Great attitude to have.

2) My view of the right next rate steps after 400Gb/s Ethernet is 1.6T for switches and 800G for servers, but not 1T.

Chris

From: Joel Goergen (jgoergen) [mailto:jgoergen@xxxxxxxxx]
Chris,

I think I am missing what you are getting at. If you can, please clarify.

1) I think I heard you say we should only be standardizing technology within the short term of 2 to 4 years, and that we should not be doing long-range planning or discussing long-range directions. If this is true, I seriously disagree. It implies that we will be short-sighted in innovation and will ultimately give customers limited technology choices for their applications. Like I said, I don't think this is what you mean.

2) I keep hearing that you do not want 1TbE. What is it that you think we need going into 2018/2020? What is the technology plan and solution set you are offering? I don't believe 400GbE LAG is going to work well. And I believe that if that is the direction you want to go in, we had better start a CFI to address LAG and the efficiency issues related to higher-speed ports and associated traffic patterns.

Take care,
Joel

From: Chris Cole <chris.cole@xxxxxxxxxxx>

Ciao Marco,

When presenting this diagram before, and again later this week, the accompanying remarks are that it is not the objective to claim with any certainty
that 100G linear I/O will be the ultimate 100G high-volume interface. Just the opposite: the objective is to caution, through example, about claims that we have sufficient insight to define the optimum architecture for >8 years out. A compelling aspect of this example is that it shows an approach to get to a ~1W/100G transceiver architecture. This is in contrast to a fully re-timed architecture with an internal DAC/ADC/DSP/FEC PHY.

Another example of a candidate ultimate architecture is DP-QPSK. A lot of telecom R&D is ongoing in this area, which could lead to breakthrough advances in the cost of coherent components. There are other examples of possible technology breakthrough areas.

We should discount arguments in support of standardization proposals which claim to be solving >2020 problems. Our evaluation of PMD alternatives should be based on solving current and next-generation problems, using technology that we understand and have experience with. This enables objective decision making, in contrast to debating which technology will and will not arise in the far future.

Separately, I hope we never have to discuss 1Tb Ethernet. We should learn a lesson from the contortions we are going through now because of non-binary rate increases like 2.5x.

Chris

From: Marco Mazzini (mmazzini) [mailto:mmazzini@xxxxxxxxx]
Hi Chris,

If I look back at history, your flow chart applies to the 10G case: XAUI (X2, XENPAK) [Today] -> Serial retimed 10G (XFP) [Next] -> Linear (SFP+) [Ultimate]. But if we try to apply it to the 40G case, then we should end with a linear 40G interface as Ultimate. This is possible, yet a re-timed 40G serial interface seems more feasible today than a linear one (at this stage I'm not sure whether a 25G linear interface would be a simpler/cheaper solution with respect to a re-timed one). If we follow the same concept and apply it to the current 25G path forward to 100G, then the next step of a 1x100G re-timed module can become the Ultimate, at the same cost (yet less complexity) as a linear 100G one (while we discuss 1Tb modules in 2020 …).

Marco

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Jeff,

We have at least two contributions for next week making proposals as to which SerDes rate should be matched.

In his presentation, Vipul proposes that we match the 100Gb/s SerDes rate:
http://www.ieee802.org/3/bs/public/14_07/bhatt_3bs_01_0714.pdf#page=6

In my presentation, I propose that we match the 50Gb/s SerDes rate:
http://www.ieee802.org/3/bs/public/14_07/cole_3bs_02_0714.pdf#page=8

The important point made by Brad in his presentation is that this is an important consideration in selecting a mainstream PMD architecture:
http://www.ieee802.org/3/bs/public/14_07/booth_3bs_01_0714.pdf

Chris
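As a back-of-the-envelope comparison of the two proposals (the arithmetic below is my illustration, not taken from either presentation):

```python
# Illustrative lane-count arithmetic for a 400 Gb/s port under the two
# SerDes-matching proposals referenced above (my sketch, not from the decks).

PORT_GBPS = 400

for serdes_gbps in (50, 100):
    lanes = PORT_GBPS // serdes_gbps
    print(f"Match {serdes_gbps}G SerDes: {lanes} electrical lanes; "
          f"a matched PMD uses {lanes} optical lanes, so no gearbox/mux")
```

From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]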
Chris,

Which SERDES rate should be the case where no gearbox/mux is needed? That is the question, and we need contributions answering it. I'm sensing the answer depends upon the reach.

Jeff

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Jeff,

Brad's insights are exactly on the mark. Mark Nowell pointed out the same thing on the 25G CFI conference call on 6/27/14. They also in no way conflict with the needs you identify.

If we look at 10G, 40G, and 100G, they all followed the same pattern. The first generation required a mux (or gearbox) to match the higher optical rate to the lower existing electrical I/O (ASIC SerDes) rate. The second generation was optimized for cost and volume and matched the optical and electrical rates. Follow-on generations require (or will require) a reverse mux (or gearbox) to enable legacy optical interfaces to plug into new electrical I/O ports.
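A minimal sketch of that three-generation pattern (the classification logic and the example rates are mine, illustrating the paragraph above rather than any spec):

```python
# Sketch of the generation pattern described above: compare the optical lane
# rate to the host's electrical SerDes rate (illustrative logic, not a spec).

def interface_style(optical_lane_gbps: int, serdes_gbps: int) -> str:
    if optical_lane_gbps > serdes_gbps:
        return "mux/gearbox (first generation: optics faster than host I/O)"
    if optical_lane_gbps == serdes_gbps:
        return "matched (second generation: optimized for cost and volume)"
    return "reverse mux/gearbox (legacy optics on newer, faster host I/O)"

# e.g. 25G optical lanes (as in 100GE) across successive host SerDes rates:
for serdes in (10, 25, 50):
    print(f"25G optics on {serdes}G SerDes -> {interface_style(25, serdes)}")
```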
From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Brad,

I was thinking that you could at least react to the portion of my presentation that you do not think provides the critical perspective to meet the needs of data centers. Indeed, let's refine things into a consensus presentation for September.

Below in yellow, I mark your slide text that implies (says to me) that "cloud needs a new optical PMD for each SERDES rate":

For electrical interfaces that are N wide and optical interfaces that are M wide, there is only one case for which N = M. When N does not equal M, a gearbox/mux is needed. To avoid them then requires new optical interfaces to be defined so that each new value of N can be matched by a new value of M. SERDES and optical lane speeds have to go in lock step; a new optical PMD standard for each case.

As you point out (and I did in my presentation in May), the modulation should also likely match.
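To make the N/M condition concrete, a minimal sketch (the helper name and example rates are mine, for illustration only):

```python
# Illustrative check of the N (electrical lanes) vs. M (optical lanes)
# condition quoted above; the function and example rates are assumptions.

def needs_gearbox(port_gbps: int, serdes_gbps: int, optical_lane_gbps: int) -> bool:
    n = port_gbps // serdes_gbps         # electrical interface width, N
    m = port_gbps // optical_lane_gbps   # optical interface width, M
    return n != m                        # gearbox/mux needed unless N == M

# A 400G port with 50G SerDes and 100G optical lanes needs a gearbox (8 != 4);
# with 100G SerDes and 100G optical lanes it does not (4 == 4).
print(needs_gearbox(400, 50, 100))   # True
print(needs_gearbox(400, 100, 100))  # False
```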
I would update my presentation to say that they (that is, electrical and optical lanes of equal speed) do not have to be defined at the same time. Considering a gearbox as an implementation fix implies that only the one generation where the gearbox/mux is not needed is a proper implementation.
Here I disagree. Some applications need interoperation across system generations (i.e., form-factor generations), especially among systems from competing system companies. Use of gearboxes preserves interoperation among systems of different SERDES generations, and especially among systems from competing system companies with different SERDES rates.

Here is where you should separate whether you are talking about interconnects that are primarily for the data center versus for router-to-router or router-to-transport. For the data center, you could assert that the system generation (SERDES) rate is well managed for uniformity among the interconnected systems.

Jeff

From: Brad Booth [mailto:bbooth@xxxxxxxx]
Jeff,

Your suggestion of going through your presentation and selecting what you got right is not viable for next week's meeting. If you'd like to get a group together to do that filtering, then we could consider it for the September task force meeting.

You'll have to help me understand where I state that cloud needs a new optical PMD for each SERDES rate. It's not like 802.3 hasn't done that in the past, but I don't see where I state that in my presentation.

Thanks,

On Wed, Jul 9, 2014 at 6:39 PM, Jeffery Maki <jmaki@xxxxxxxxxxx> wrote: