Brad, I find this discussion very helpful and not strange. The IEEE rules seem pretty flexible if we can get the support of 75% of the voters. The supermajority of 75% is the hard part, so it helps to build consensus when everyone knows what is going on.
I like how Mark is explaining what is going on in the study groups and how it could affect the task force. I’ve missed some meetings (because of other meetings, as you have) and I appreciate his recaps about how we might slice and dice the projects.
I don’t think inclusion of 200G will be the long pole in the 400G tent. We should be able to do derivatives of the 400G work pretty easily if we can agree
on the signaling. I predict the long pole will be standardizing the 50G and 100G lane signaling. Kind regards, Scott
From: Brad Booth [mailto:bbooth@xxxxxxxx]
Mark, Thank you for the clarification. I'm not saying that short-cutting has been done or that rules have been violated, but there are a number of blurred lines and a few buzzers going off, which makes this seem very strange. For example, your statement, "Some clear consensus that
some aspects of 200G (logic primarily, maybe AUI, maybe SMF) are all potentially incremental work items that .3bs could take on."
While many of the players involved are the same, this clear consensus is from an SG ad hoc on influencing an existing task force. That does seem to be beyond the function of a study group as defined by our rules. The objectives for that SG haven't even been voted on, and the SG hasn't even had its first meeting. And yet there's "consensus" for modification of the .3bs task force?
From the 802.3 rules: The main responsibility of the TF Chair is to ensure the production, and to guide through the approval and publication process, a draft standard, recommended practice or guideline, or revision to an existing
document as defined by the relevant PAR. Should the ad hoc's consensus be a driving factor in modifying the .3bs PAR? Maybe it's just me, but even if the majority of the people involved in these efforts are the same, shouldn't we be using caution? Doesn't anyone else find that a bit strange? Thanks,

On Sun, Dec 13, 2015 at 2:31 PM, Mark Nowell (mnowell) <mnowell@xxxxxxxxx> wrote:

Copying the 50 Gb/s Ethernet and NGOATH Ethernet Study Groups reflector, as this discussion originated there but jumped over to the 400Gb/s reflector and has been on-going there. Can we please copy the SG reflector on these discussions…

Brad, I don’t think we’ve yet done anything that goes against our usual practices. As we’ve discussed many times, we initiated 2 study groups in order to study two related topics: one study group on an optimized single-lane project for 50Gb/s Ethernet
and the other study group to study the multiple lane variants of that. As was discussed and presented in the first ad hoc, http://www.ieee802.org/3/50G/public/adhoc/archive/nowell_120215_50GE_NGOATH_adhoc.pdf regardless
of how this work gets “handled” in Task Forces, it does not mean that we miss or avoid any of the requirements to justify the work (CSD, objectives, PARs). The starting assumption has always been that these 2 SGs form 2 Task Forces. What has emerged from the discussions is that many are realizing that some
of the potential objectives for the NGOATH are essentially incremental work for the 802.3bs TF and so some work has been done to start exploring what could or should be added to the .3bs project in a way that is beneficial (accelerates market availability,
commonality of expertise, leverage of specs) without it being detrimental (schedule impact). I don’t see this as short-circuiting the process. We primarily have to pick objectives first. This is independent, at some level, of where the work will be
done. Once we’ve done that, we then work the project documentation to justify where the work should be done. Pete Anslow and Matt Brown did a further analysis looking at some of the possibilities of how the potential objectives could be partitioned into Task Forces: http://www.ieee802.org/3/50G/public/adhoc/archive/anslow_120915_50GE_NGOATH_adhoc.pdf.
Again it showed ways we could consider moving forward. But as was discussed during the presentation, this did not advocate what objectives should be adopted. Right now I see:
There is nothing that says that if the SG doesn’t reach 75% on an objective, it needs to keep circling that topic, even if it is within scope of the SG charter. For example, if we don’t adopt some of the potential 100G objectives, that doesn’t prevent us from moving forward if there is agreement to do so.

As I said, I do not believe we are in any way trying to go outside of operating rules. I’ve no interest in doing so, and I doubt John has either – and I know
that David Law will step in at any time he sees something heading in the wrong direction. If there is some consensus forming within the .3bs Task Force that they want to add some objectives, which would require a change to their scope and CSD, then that would have to follow the usual operating procedures, in line with what has been done before. Until that is seen to be happening, my assumption is that we’re following usual Study Group procedures to decide what to do and document it appropriately. Obviously this is all happening in parallel, and there is some urgency in mind since, if the .3bs TF do want to do this, the sooner it happens, the less the schedule impact. Regards…Mark

From: Brad Booth <bbooth@xxxxxxxx>

Jonathan, The 25GBASE-T merging into the 40GBASE-T project is an understandable analogy. Both 25GBASE-T and 40GBASE-T had unique CSDs and PARs; therefore, merging them
in relation to the resources made sense. My understanding is that the NG 100G and 200G will have its own CSD and PAR. While I can understand the need to address resource limitations (as a matter of fact,
I highlighted this concern back in June/July with the 802.3 officers), what I was trying to understand is why .3bs would need to modify its PAR or CSD to accommodate 200G. To me, that sounded like we were advocating pulling something out of a study group to
modify an existing project. Just want to understand if we're operating within our working group and sponsor rules. Thanks,

On Thu, Dec 10, 2015 at 6:03 PM, Jonathan King <jonathan.king@xxxxxxxxxxx> wrote:

Hi Brad, I think the general idea is that the project structuring should be in service to the needs of the
industry, rather than the other way round. There’s no intent to force anything onto any other project, but to make the best use of limited resources by putting synergistic projects together. I think there was a similar recent example when 25GBASE-T was amalgamated into the 40GBASE-T project. Hope this helps, jonathan

From: Brad Booth [mailto:bbooth@xxxxxxxx]
I have a procedural question: The 802.3 working group approved the formation of one study group for Next Generation 100G and 200G Ethernet. How are we supposed to handle the CSD and
PAR documentation and approvals? My apologies if this seems obvious to others, but in the 18 years I've been doing this, I don't ever recall forming a study group to develop one CSD and
PAR and then discussing how that project gets partitioned to other projects. We've definitely taken PARs and merged them, but this sounds like creating a PAR and then splitting it to merge only specific portions with another PAR. I'm sure I must be missing
something here, and would be extremely grateful if someone could explain. Thanks, Brad

On Thu, Dec 10, 2015 at 12:37 PM, John D'Ambrosia <jdambrosia@xxxxxxxxx> wrote:

All, Please see the email I just sent. Discussions are moving away from inclusion of 100GbE in 802.3bs.
There is still a lot of discussion needed to address how 100GbE will be handled; thus, the conversation is moving away from including it in 802.3bs because of the anticipated schedule hit. Regards, John D’Ambrosia Chair, IEEE P802.3bs 400GbE Task Force

From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Vipul, Another important consideration is dissipated power and size. With markets waiting for technology
to be feasible in their desired form factor, the time horizon for adoption is gated by dissipated power and optics technology sizing. This needs to be captured in proving the market potential in the CSD.
Host slots have the opportunity to live far longer than modules. What we standardize for host slots (i.e., electrical interface and FEC) should be thought through carefully to limit churn and the resulting market confusion. Host slots should be as enabling as possible without driving up cost for any one solution. *** *** *** The per-lane specification for 400GBASE-DR4 will be written in P802.3bs. There will be no way to stop anyone from using a single lane for 100G applications, regardless of whether there is explicit 802.3 standardization for 100G. What concerns me more is that these implementers will have to make their own choice of FEC and may drive de facto standards for the host (i.e., ASIC). Thus, I think it is important to set industry direction on FEC more than anything else at this stage for 100G applications. Jeff

From: Vipul Bhatt [mailto:vbhatt@xxxxxxxxx]
Picking up a sub-thread here, I want to express my support for the views of Chris and Helen. We are better off approaching 100G as 2x50G than as DR4 divided
by 4. In yesterday's ad hoc call, we discussed work partitioning of 50G/100G/200G -- how to combine efforts, avoid schedule delay, simplify editorial challenges,
etc. All well and good. However, our bigger obligation is to write specs for a successful standard. The success of a standard is gauged by how widely it is adopted in the market, not by how efficiently we manage the IEEE process. Widespread adoption happens
only if the per-port cost of an Ethernet solution is competitively low. In practical terms, this boils down to enabling low-cost optics. Therefore, our partitioning decision should serve the goal of achieving the lowest cost of optics, not hinder it.
(If we can align the two, all the better.) To achieve low-cost optics, the biggest lever we have is extensive re-use of 50G (25 GBaud PAM4) optics -- Nx50G, where N can be 1, 2, and 4. The case
of N=1 will provide a huge volume base that the component ecosystem can amortize cost over. Why not use Mx100G where M can be 1 and 2? Wouldn't 100G (50 GBaud PAM4) optics be lower in cost because it uses only one wavelength? No, not yet, despite
it being a seductive idea. Since the .bm days, some colleagues (including me) have been drawn to the idea of single-wavelength 100G. Later, as I supported all three .bs baselines
(FR8, LR8 as well as DR4), I had an opportunity to look more closely at both 25 GBaud PAM and 50 GBaud PAM4 product development. My view is now more balanced, from a technical as well as an economic perspective. 50 GBaud PAM will certainly have its day in the sun. We will see that day arriving in the form of improving, real measurements reported at leading-edge conferences like OFC and ECOC. But in IEEE, we should write specs based on performance we can prove. For broad market success, 50 GBaud PAM still has several hurdles to cross -- design,
packaging, signal integrity, yields -- with no immediate timeline in sight. Some may claim to cross these hurdles soon, and that's great, but that's not the same thing as a broad supplier base. It is one thing to show a forward-looking lab result; it is quite another to have manufactured components that give us characterization data we can use for developing PMD specs with correctly allocated margins. From that perspective, we are on more solid ground with 25 GBaud PAM than with 50 GBaud PAM. Let's walk before we run. The DR4 argument -- "We voted for DR4 in 400G, so let's carry the underlying risks and assumptions forward" -- sounds like unnecessarily doubling down on an avoidable risk. With best regards, Vipul Bhatt

On Thu, Dec 3, 2015 at 7:25 PM, Xuyu (Helen) <helen.xuyu@xxxxxxxxxx> wrote:
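The Nx50G re-use argument above rests on simple lane-rate arithmetic, which can be sketched in a few lines. This is a minimal illustration using the thread's nominal 25/50 GBaud figures; the function names are ours, and the real 802.3 signaling rates include FEC and encoding overhead, so the actual baud rates differ slightly.

```python
# Illustrative lane-rate arithmetic for the Nx50G vs. Mx100G discussion.
# NOTE: 25 and 50 GBaud are the round numbers used in this thread, not
# the exact 802.3 signaling rates (which carry FEC/encoding overhead).

PAM4_BITS_PER_SYMBOL = 2  # PAM4 encodes 2 bits per symbol

def lane_rate_gbps(baud_gbd, bits_per_symbol=PAM4_BITS_PER_SYMBOL):
    """Raw line rate of a single lane, in Gb/s."""
    return baud_gbd * bits_per_symbol

def aggregate_gbps(n_lanes, baud_gbd):
    """Aggregate rate of an N-lane interface, in Gb/s."""
    return n_lanes * lane_rate_gbps(baud_gbd)

# Nx50G family built from 25 GBaud PAM4 lanes (N = 1, 2, 4):
for n in (1, 2, 4):
    print(f"{n} x 50G lanes -> {aggregate_gbps(n, 25):.0f} Gb/s")

# Mx100G alternative built from 50 GBaud PAM4 lanes (M = 1, 2):
for m in (1, 2):
    print(f"{m} x 100G lanes -> {aggregate_gbps(m, 50):.0f} Gb/s")
```

Both families reach the same 100G and 200G aggregate rates; the point of contention in the thread is which per-lane building block the component ecosystem can amortize cost over sooner.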