Resending to 50G reflector.
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Mike, FEC is an important discussion area for 50G and 2x50G PMDs. I understand the arguments in favor of KP4, and am sympathetic because KP4 makes life easier for optics. You have raised important concerns about the error rate on the copper interface with PAM4. On the other hand, there are advantages to seamless interoperability with 2x25G and 4x25G C-t-M I/O, which deserve fair consideration in any technical discussion.
Also, KR4 has lower latency than KP4. So a more accurate description of the starting point is that both KR4 and KP4 are candidates and a detailed investigation is required.
From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxx]
Do you mean that we need to discuss what the required performance and FEC should be for
50GBASE-R and 200GBASE-R? The distribution of FEC symbols will have to be different, with one on four lanes and the other on only one lane, and interleaving will be less attractive for 50G because of the longer latency.
I think our starting point would be that we use the KP4 FEC for both unless there are good reasons not to.
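To put rough numbers on the KR4 versus KP4 trade-off discussed above, the sketch below compares the two Reed-Solomon codes (RS(528,514) for KR4, RS(544,514) for KP4, both over 10-bit symbols). The serialization time of one codeword on a 50 Gb/s lane is only a crude proxy for latency; real FEC latency also depends on decoder architecture and any interleaving.

```python
# Back-of-the-envelope comparison of the two candidate FECs.
# RS(n, k) over 10-bit symbols; t = (n - k) // 2 correctable symbol errors.

def rs_stats(name, n, k, sym_bits=10, line_rate_gbps=50.0):
    t = (n - k) // 2                    # correctable symbol errors per codeword
    overhead = (n - k) / k              # parity overhead relative to payload
    block_bits = n * sym_bits
    # Time to serialize one codeword on a single 50 Gb/s lane -- a crude
    # proxy for the FEC's contribution to store-and-forward latency.
    block_ns = block_bits / line_rate_gbps
    return dict(name=name, t=t, overhead_pct=100 * overhead,
                block_bits=block_bits, block_ns=block_ns)

kr4 = rs_stats("KR4 RS(528,514)", 528, 514)
kp4 = rs_stats("KP4 RS(544,514)", 544, 514)

for f in (kr4, kp4):
    print(f"{f['name']}: t={f['t']}, overhead={f['overhead_pct']:.1f}%, "
          f"codeword={f['block_bits']} bits (~{f['block_ns']:.0f} ns at 50 Gb/s)")
```

The numbers illustrate the trade-off being debated: KP4 roughly doubles the correction power (t=15 vs t=7) for about twice the parity overhead, while KR4's shorter, weaker codeword is one reason it is cited here as the lower-latency, 25G-compatible option.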
Mike Dudek
QLogic Corporation, Director Signal Integrity
26650 Aliso Viejo Parkway, Aliso Viejo CA 92656
949 389 6269 (office)
From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Steve, Different FEC between 50GBASE-R and 200GBASE-R? Performance has to be rationalized.
_____________________________
Hi all, Indeed, the only thing “magic” about the DR reach was that it was felt that 500m was the greatest reach at which parallel fiber would be acceptable to the market,
and it provided the opportunity to do something different with technology by eliminating the mux/demux loss in a multi-lane interface. 50G breakout from a 200G module is neither a multi-lane interface (it is four single-lane interfaces) nor parallel fiber (you have four duplex fiber pairs that
connect to different places). For a single lane interface, there is no “cliff” that you drive over between 500m and 2km, and not even that much more needed as a budget for 10km. So you build a quad 50GBASE-FR or 50GBASE-LR module that plugs into the same socket as your 200GBASE-??? module. The only reason you actually would build a
200GBASE-DR4 is if you decide that is the right technical and economic tradeoff for 200G. Regards, Steve From: Peter Stassar [mailto:Peter.Stassar@xxxxxxxxxx]
Ali, I fully agree with you regarding the “50 Gb/s economy of scale …..” I however disagree that we therefore need a 200GBASE-DR4 specification. What you are really asking for is a quad 50GBASE-xR module, for which we don’t need
a specification. And then we also don’t need to worry about potential interworking issues. Kind regards, Peter
Peter Stassar (施笪安)
Technical Director
Huawei Technologies Ltd, European Research Center
Karspeldreef 4, 1101CJ Amsterdam, The Netherlands
Tel: +31 20 4300 832, Mob: +31 6 21146286
From: Ali Ghiasi [mailto:aghiasi@xxxxxxxxx]
Vipul, Besides the 50 Gb/s economy of scale and cost advantages you raised, one must also consider the application space. We have been discussing 50 GbE breakout since the early days of the .bs project, but now that we are defining 50 GbE, breakout becomes an even greater application with QSFP56-to-4xSFP56. Next generation servers are going to 50 GbE and not 100 GbE; that is why the market needs 200GBASE-DR4 and not 200GBASE-DR2 as far as breakout is concerned. We definitely need to define 200G-DR4 to address 4 ports of 50 GbE for breakout applications. Now the question is how we best address leaf-spine applications requiring a radix of 64x100GbE. Should we address the 100 GbE leaf-spine by leveraging 1/2 of 200G-DR4, or do we need to define 100 GbE based on a single lane? Thanks, Ali Ghiasi Ghiasi Quantum LLC
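As a sanity check on the breakout arithmetic in this thread (200G modules built from 50 Gb/s lanes, broken out as 4x50GbE or 2x100GbE), here is a minimal sketch; the helper function is ours, only the lane and port rates come from the discussion.

```python
# Sanity check of the breakout arithmetic in the thread: a 200G module built
# from four 50 Gb/s lanes, broken out either as 4 x 50GbE or as 2 x 100GbE.

LANE_GBPS = 50  # one PAM4 lane at 25 GBaud

def breakout_ports(module_gbps, port_gbps, lane_gbps=LANE_GBPS):
    """How many breakout ports of port_gbps one module supports."""
    lanes = module_gbps // lane_gbps            # lanes in the module
    lanes_per_port = port_gbps // lane_gbps     # lanes consumed per port
    return lanes // lanes_per_port

print(breakout_ports(200, 50))   # 200GBASE-DR4 -> 4 x 50GbE (QSFP56 to 4 x SFP56)
print(breakout_ports(200, 100))  # half of a DR4 -> 100GbE; a 64 x 100GbE
                                 # leaf-spine radix needs 128 lanes = 32 modules
```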
On Dec 10, 2015, at 11:03 AM, Vipul Bhatt <vbhatt@xxxxxxxxx> wrote: Picking up a sub-thread here, I want to express my support for the views of Chris and Helen. We are better off approaching 100G as 2x50G than as DR4 divided by 4. In yesterday's ad hoc call, we discussed work partitioning of 50G/100G/200G -- how to combine efforts, avoid schedule delay, simplify editorial challenges, etc. All well and good. However, our bigger obligation is to write specs for a successful
standard. The success of a standard is gauged by how widely it is adopted in the market, not by how efficiently we manage the IEEE process. Widespread adoption happens only if per-port cost of an Ethernet solution is competitively low. In practical terms,
this boils down to enabling low-cost optics. Therefore, our partitioning decision should be subservient to the goal of achieving the lowest cost of optics, not hinder it. (If we can align the two, all the better.) To achieve low-cost optics, the biggest lever we have is extensive re-use of 50G (25 GBaud PAM4) optics -- Nx50G, where N can be 1, 2, and 4. The case of N=1 will provide a huge volume base that the component ecosystem can amortize cost
over. Why not use Mx100G where M can be 1 and 2? Wouldn't 100G (50 GBaud PAM4) optics be lower in cost because it uses only one wavelength? No, not yet, despite it being a seductive idea. Since the .bm days, some colleagues (including me) have been drawn to the idea of single-wavelength 100G. Later, as I supported all three .bs baselines (FR8, LR8 as well as DR4), I had an opportunity to look more closely at both 25 GBaud
PAM and 50 GBaud PAM4 product development. My view is now more balanced, from a technical as well as economic perspective. 50 GBaud PAM will certainly have its day in the sun. We will see that day arriving in the form of reports of improving, real measurements at leading-edge conferences like OFC and ECOC. But in IEEE, we should write specs based on performance we can prove. For broad market success, 50 GBaud PAM still has several hurdles to cross -- design, packaging, signal integrity, yields -- with no immediate timeline in sight. Some
may claim to cross these hurdles soon, and that's great, but that's not the same thing as broad supplier base. It is one thing to show a forward-looking lab result; it is quite another to have manufactured components that give us characterization data we can use for developing PMD specs with correctly allocated margins. From that perspective, we
are on more solid ground with 25 GBaud PAM than with 50 GBaud PAM. Let's walk before we run. The DR4 argument -- "We voted for DR4 in 400G, so let's carry the underlying risks and assumptions forward" -- sounds like unnecessarily doubling down on an avoidable risk. With best regards, Vipul Vipul Bhatt On Thu, Dec 3, 2015 at 7:25 PM, Xuyu (Helen) <helen.xuyu@xxxxxxxxxx> wrote: I agree with Chris’s point.
100G single lane is one of the important solutions for future research and needs solid work. Including it in .bs would bring expected debate and delay.
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
John
Further, given the broad and substantial research effort into 100G single wavelength by the optics industry, it is best for 802.3 to let those efforts
play out, and not engage in more crystal balling. 802.3 is not the best place to first report and debate fundamental research results. The right place for this is refereed journals and technical conferences.
Chris From: John D'Ambrosia [mailto:jdambrosia@xxxxxxxxx]
So you are saying to do something different than what was agreed upon for DR4?
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
John, 100GBASE-DR implies nothing about modulation format. It simply designates a single lane 500m interface. Modulation would be selected based on technical
and other merit. Compatibility with a 4x standard would be a consideration but hardly an overriding one. Chris From: John D'Ambrosia [mailto:jdambrosia@xxxxxxxxx]
Chris, Yes I am – anything differing from what is being done in .3bs will add delay – but your assumption was not clear from your use of the 100G-DR nomenclature.
However, it seems to me that it is hard to argue for using something different for a single-lane approach than for a x4-lane approach.
John From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
John,
We are once again reminded that trying to predict the future more than one technology generation ahead is an activity with a low probability of success, and
therefore should not be done in standards bodies.
From: John D'Ambrosia [mailto:jdambrosia@xxxxxxxxx]
Chris, I am not getting your point here – how are we introducing further delay? We already have DR4 in the 400G standard. What additional delay will there
be to just have a single lane implementation? John From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Hi Steve, One reason we may not want to do 100GBASE-DR in 802.3bs is to not add considerable delay to the 802.3bs schedule while we debate the merit of supporting
measurements. And given the research results we are seeing presented in recent technical publications and conferences, we are sure to see the modulation debate re-opened, which is an even surer prescription for delaying the 802.3bs schedule. Chris From: Trowbridge, Stephen J (Steve)
[mailto:steve.trowbridge@xxxxxxxxxxxxxxxxxx]
Hi Chris and Rob, Just to play devil’s advocate here, I think it depends on the objectives agreed in the study group. If the only stuff we have objectives to do for 200G are the same things we are doing at 400G with fewer lanes, sure, it all folds right in to P802.3bs.
Specifically: We would specify 8-lane CCAUI-8 and 4-lane CCAUI-4 C2C and C2M interfaces. We would specify 200GBASE-SR8, 200GBASE-DR2, 200GBASE-FR4, and 200GBASE-LR4 PMDs. But the flies in the ointment would be if we have objectives to build a 200GBASE-SR4, 200GBASE-CR4, or 200GBASE-KR4 PMD, which I think would be quite
challenging on the current P802.3bs schedule. Presumably the reason you think 100G belongs with 50G is that you assume this project needs to do interfaces like 100GBASE-SR2, 100GBASE-CR2, 100GBASE-KR2.
But why wouldn’t you do, for example, a 100GBASE-DR interface in the P802.3bs project? Regards, Steve From: Rob (Robert) Stone [mailto:rob.stone@xxxxxxxxxxxx]
Chris, I agree with your observation, and I have been thinking the same thing with respect to the partitioning of the work. It would seem that, if possible, using the KR4 FEC for 50 and 100G would have a lot of benefits with respect to compatibility with existing MAC rates
and 25G based technologies. I would expect that it is likely we will see co-existence of 25 and 50G/lane technologies within the same environment, and if so we should make an effort when defining the logic to enable straightforward low-power connections
between the different generations. Using end-to-end KR4 FEC would help facilitate that. Thanks Rob
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
The idea of rolling 200G into the 400G project is compelling. In prior discussions, we had rejected this as too late for 802.3bs TF, so it’s encouraging
to see we are willing to revisit. One mental test of why this makes sense is to consider what we would have done in 400G Study Group if we knew what we know now. Given the CFI support, it could be argued that most people would have supported both 200G and
400G. If anything, 200G is more compelling.
Chris From: John D'Ambrosia [mailto:jdambrosia@xxxxxxxxx]
Dear Task Force Participants, This email is to make sure that everyone is aware of conversations from the 50/100/200G Study Group phone conference that happened yesterday, Dec 2. There has been discussion
about how the multi-lane 100G/200G solutions might be rolled into the 802.3bs project. To that end, I gave a presentation on the conference call that looked at potential modifications / additions to our PAR / CSD. See
http://www.ieee802.org/3/50G/public/adhoc/archive/dambrosia_120215_50GE_NGOATH_adhoc_v2.pdf I encourage everyone to review this presentation and consider the findings on the last few pages. Individuals may wish to participate in the upcoming 50/100/200G ad hoc calls that
Mr. Nowell has planned. For more information see
http://www.ieee802.org/3/50G/public/adhoc/index.html. I will be working on the meeting announcement for the January interim, and anticipate that there will be a joint session of our Task Force with the Study Groups to further consider
these implications. Regards, John D’Ambrosia Chair, IEEE P802.3bs 400GbE Task Force |