Hi Brad,

I think the general idea is that the project structuring should be in service to the needs of the industry, rather than the other way round. There is no intent to force anything onto any other project, but rather to make the best use of limited resources by putting synergistic projects together. I think there was a similar recent example when 25GBASE-T was amalgamated into the 40GBASE-T project.

Hope this helps,
Jonathan

From: Brad Booth [mailto:bbooth@xxxxxxxx]
I have a procedural question: the 802.3 Working Group approved the formation of one study group for Next Generation 100G and 200G Ethernet. How are we supposed to handle the CSD and PAR documentation and approvals? My apologies if this seems obvious to others, but in the 18 years I've been doing this, I don't recall ever forming a study group to develop one CSD and PAR and then discussing how that project gets partitioned among other projects. We've definitely taken PARs and merged them, but this sounds like creating a PAR and then splitting it in order to merge only specific portions with another PAR. I'm sure I must be missing something here, and I would be extremely grateful if someone could explain.

Thanks,
Brad

On Thu, Dec 10, 2015 at 12:37 PM, John D'Ambrosia <jdambrosia@xxxxxxxxx> wrote:

All,

Please see the email I just sent. There is still a lot of discussion needed on how 100GbE will be handled, so the conversation is moving away from including it in 802.3bs because of the anticipated schedule hit.

Regards,

John D'Ambrosia
Chair, IEEE P802.3bs 400GbE Task Force

From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Vipul,

Another important consideration is dissipated power and size. With markets waiting for the technology to become feasible in their desired form factors, the time horizon for adoption is gated by dissipated power and the size of the optics technology. This needs to be captured when proving the market potential in the CSD.

Host slots can live far longer than modules. What we standardize for host slots (i.e., the electrical interface and FEC) should be thought through carefully to limit churn and the resulting market confusion. Host slots should be as enabling as possible without driving up the cost of any one solution.

*** *** ***

The per-lane specification for 400GBASE-DR4 will be written in P802.3bs. There will be no way to stop anyone from using a single lane for 100G applications, whether or not 802.3 explicitly standardizes a 100G application. What concerns me more is that these implementers will have to make their own choice of FEC, and may drive de facto standards for the host (i.e., the ASIC). Thus, I think it is important at this stage to set industry direction on FEC, more than anything else, for 100G applications.

Jeff
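[Editor's note: a back-of-the-envelope sketch of the arithmetic behind Jeff's FEC point. The RS(544,514) "KP4" code and 256b/257b transcoding are assumptions used purely for illustration (borrowed from 802.3bj practice, not anything decided in this thread); the point is that whatever FEC the host adopts fixes the line rate, and hence the symbol rate that both the ASIC and the optics must support.]

    from fractions import Fraction

    # Illustrative only: overhead if a 100G-per-lane host interface reused
    # the 802.3bj RS(544,514) "KP4" FEC with 256b/257b transcoding.
    TRANSCODE = Fraction(257, 256)  # 64b/66b blocks re-packed as 256b/257b
    KP4_FEC = Fraction(544, 514)    # RS(544,514): 514 data symbols of 544 total

    overhead = TRANSCODE * KP4_FEC  # exactly 17/16 = 1.0625

    mac_rate = 100                   # Gb/s of MAC data on one lane
    line_rate = mac_rate * overhead  # 106.25 Gb/s on the wire
    pam4_baud = line_rate / 2        # PAM4 carries 2 bits per symbol

    print(float(overhead))   # 1.0625
    print(float(line_rate))  # 106.25 Gb/s
    print(float(pam4_baud))  # 53.125 GBaud

A different FEC choice changes that 1.0625 factor, and with it the lane rate; divergent per-vendor choices here are exactly the host-side fragmentation Jeff is warning about.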
From: Vipul Bhatt [mailto:vbhatt@xxxxxxxxx]

Picking up a sub-thread here, I want to express my support for the views of Chris and Helen. We are better off approaching 100G as 2x50G than as DR4 divided by 4.

In yesterday's ad hoc call, we discussed the work partitioning of 50G/100G/200G -- how to combine efforts, avoid schedule delay, simplify editorial challenges, etc. All well and good. However, our bigger obligation is to write specs for a successful standard. The success of a standard is gauged by how widely it is adopted in the market, not by how efficiently we manage the IEEE process. Widespread adoption happens only if the per-port cost of an Ethernet solution is competitively low. In practical terms, this boils down to enabling low-cost optics. Therefore, our partitioning decision should serve the goal of achieving the lowest cost of optics, not hinder it. (If we can align the two, all the better.)

To achieve low-cost optics, the biggest lever we have is extensive re-use of 50G (25 GBaud PAM4) optics -- Nx50G, where N can be 1, 2, or 4. The case of N=1 will provide a huge volume base over which the component ecosystem can amortize cost.

Why not use Mx100G, where M can be 1 or 2? Wouldn't 100G (50 GBaud PAM4) optics be lower in cost because it uses only one wavelength? No, not yet, despite it being a seductive idea.
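[Editor's note: to make the Nx50G versus Mx100G comparison concrete, a minimal sketch, assuming PAM4 signaling throughout and, purely for illustration, the same 1.0625x FEC-plus-transcoding overhead as in the sketch above.]

    OVERHEAD = 1.0625  # illustrative (257/256) * (544/514) factor, as above

    def pam4_baud(lane_rate_gbps: float) -> float:
        """PAM4 symbol rate for one lane carrying lane_rate_gbps of MAC data."""
        return lane_rate_gbps * OVERHEAD / 2  # 2 bits per PAM4 symbol

    # Nx50G: 50G, 100G, and 200G all reuse the same ~26.6 GBaud lane.
    for n in (1, 2, 4):
        print(f"{n * 50}G = {n} x 50G lane(s) at {pam4_baud(50):.4f} GBaud")

    # Mx100G: these ports instead need ~53.1 GBaud (50 GBaud-class) lanes.
    for m in (1, 2):
        print(f"{m * 100}G = {m} x 100G lane(s) at {pam4_baud(100):.4f} GBaud")

Under these assumptions, every Nx50G port speed rides on the same 26.5625 GBaud lane, so one component volume base serves 50G, 100G, and 200G, whereas Mx100G requires the 50 GBaud-class optics whose maturity Vipul questions next.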
Since the .bm days, some colleagues (including me) have been drawn to the idea of single-wavelength 100G. Later, as I supported all three .bs baselines (FR8 and LR8 as well as DR4), I had an opportunity to look more closely at both 25 GBaud PAM4 and 50 GBaud PAM4 product development. My view is now more balanced, from a technical as well as an economic perspective.

50 GBaud PAM4 will certainly have its day in the sun. We will see that day arriving in the form of steadily improving, real measurements reported at leading-edge conferences like OFC and ECOC. But in IEEE, we should write specs based on performance we can prove. For broad market success, 50 GBaud PAM4 still has several hurdles to cross -- design, packaging, signal integrity, yields -- with no immediate timeline in sight. Some may claim to cross these hurdles soon, and that's great, but that is not the same thing as a broad supplier base. It is one thing to show a forward-looking lab result; it is quite another to have manufactured components that give us characterization data we can use for developing PMD specs with correctly allocated margins. From that perspective, we are on more solid ground with 25 GBaud PAM4 than with 50 GBaud PAM4. Let's walk before we run.

The DR4 argument -- "We voted for DR4 in 400G, so let's carry the underlying risks and assumptions forward" -- sounds like unnecessarily doubling down on an avoidable risk.

With best regards,
Vipul

Vipul Bhatt

On Thu, Dec 3, 2015 at 7:25 PM, Xuyu (Helen) <helen.xuyu@xxxxxxxxxx> wrote: