Hi Chris, For backward compatibility from the switch/host, it makes sense for KR4 FEC only right now.
For LAUI-2, we should consider the concern raised by many about how to get 50G per lane from a FEC capability perspective. I assume there is no such
implementation in the industry right now; if I am wrong, please correct me. I am not proposing any of S1, S2, or S3 in this email. I raise this option so we keep in mind the impact of KR4 FEC with LAUI at
1E-5 BER, referring to the current CDAUI-8 specification from the 802.3bs Atlanta meeting. This question was also asked by Mike Dudek after Helen Xu presented. I think it is necessary to support interoperability between LAUI and LAUI-2 in IEEE 50GbE. Primarily, I think it is much better that both the 25.78125G and 26.5625G electrical interfaces operate at less than 1E-15 BER, as in Gary's first
email. Then we can neglect interference from the electrical interface on the pluggable module PMD links. Thanks! Xinyuan
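The concern about running KR4 FEC over a link at 1E-5 BER can be bounded with a quick calculation. The sketch below is a minimal illustration, not material from the thread: it uses the standard RS(528,514) parameters (10-bit symbols, up to 7 correctable symbol errors per codeword) and assumes random, uncorrelated bit errors, which real links violate through error bursts.

```python
from math import comb

def kr4_codeword_error_rate(ber, n=528, t=7, bits_per_symbol=10):
    """Estimate the probability that an RS(528,514) 'KR4' codeword is
    uncorrectable, given a raw line BER, assuming independent bit errors."""
    # A 10-bit symbol is bad if any of its bits flips
    p_sym = 1 - (1 - ber) ** bits_per_symbol
    # Decoding fails when more than t of the n symbols are bad
    return sum(comb(n, i) * p_sym ** i * (1 - p_sym) ** (n - i)
               for i in range(t + 1, n + 1))

for ber in (1e-5, 1e-6):
    print(f"pre-FEC BER {ber:.0e} -> codeword error rate ~{kr4_codeword_error_rate(ber):.1e}")
```

Under these assumptions, a 1e-5 pre-FEC BER leaves an uncorrectable-codeword rate only around the 1e-15 mark, i.e. little margin against the 1E-15 electrical target discussed above, which is why the 1E-5 figure draws scrutiny.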
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Hello Xinyuan, Are you aware of any switch silicon that supports KP4 FEC on a CAUI-4 or proprietary LAUI-2 interface? I am not aware of any, so in which situation would we use CAUI-4 or LAUI-2 with KP4? Thank you, Chris From: Wangxinyuan (Xinyuan) [mailto:wangxinyuan@xxxxxxxxxx]
Another option?
Just borrowing from Chris's table, with “S3” added.
From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
The 50/100/200G SG will be making a choice in Macau between two objective setting scenarios for 50, 100, 200Gb/s xAUI chip-to-module interfaces.
If the SG elects S1, there is an auxiliary question whether we will need two sets of 50G & 100G PMD objectives, for example SR and SR2 each with
both KR4 and KP4 FEC. If the SG elects S2, then the backwards-compatibility interfaces will be defined outside of the IEEE, in an MSA or Consortium. My view is that
complex Ethernet logic is best defined in the IEEE, as it has the most rigorous process and leads to the broadest industry input and review. However, it is also understandable that if there is a desire to move quickly, S1 requires more work and puts more pressure
than S2 on an aggressive schedule. Chris
From: Mark Nowell (mnowell) [mailto:mnowell@xxxxxxxxx]
Jeff, I think those are all market questions, not necessarily IEEE issues. But I agree option 2 seems to me what the market would drive to… Mark On 2/25/16, 3:00 PM, "Jeffery Maki" <jmaki@xxxxxxxxxxx> wrote: Mark, My point was that once one considers the needs for interoperation over three major system generations, then one will probably need options (2)
when one wishes to maintain density on the latest system generation and therefore skip option (1). Or said differently, no matter how hard we work to make (3) true, we’ll still find cases where a given system may need to adopt option (2). Option (2) seems
inevitable, but will module integrators make such modules? Today, we find use of 100GBASE-SR10 in CFP and CFP2. Nobody wishes to make 100GBASE-SR10 in QSFP28. It would be nice to migrate users to 100GBASE-SR4
that is found in QSFP28, but nobody wishes to make 100GBASE-SR4 in CFP or CFP2 with the FEC included. There is no interoperation over MMF between CFP/CFP2 and QSFP28 system generations. Jeff From: Mark Nowell (mnowell) [mailto:mnowell@xxxxxxxxx]
Jeff, You are absolutely correct that interoperability is the prime purpose of standards. When the technology allows us to proceed through multiple technology generations without breaking interoperability, that works well for the industry
(see my recent email to Ali using 10GE as an example). When the technology changes so that we wish to move to a new generation in such a way that it is no longer interoperable with the old generation
but we see some market need (density, cost, etc.), we still do it and figure out the optimal way. If the PCS and FEC architecture doesn’t change, but perhaps the number of lanes does, then it is a fairly self-contained change limited to implementation only,
for example the industry move from CAUI-10 based 100GBASE-LR4 (using CFP modules) to CAUI-4 based 100GBASE-LR4 (using CFP2/4 or CPAK modules). Both of these still interoperate with each other. If the PCS/FEC architecture changes, then we again do it if it addresses a market need and we deal with that by having multiple modes in our silicon.
For example, when 100GBASE-SR10 migrated to 100GBASE-SR4 we not only changed the AUI & PMD we supported but also needed to add the RS(528,514) FEC into our silicon. This is a different PCS/FEC architecture than the original 100GE PCS/FEC architecture and
they would not interoperate if you plugged a common PMD in between them. The system therefore needed to have management capabilities to know which PCS/FEC architecture mode to put the silicon in, in order to have interoperability. So the reality is that we have really non-interoperable PCS/FEC modes for 100GE in our systems to support the various 100GE flavors out there and
we live with that since it was the right thing to do for the market and it works. The debate, as I’m interpreting it here, is that we as an industry are very interested in moving to PMDs based on 50Gb/s signaling because we see
cost advantages in lowering the number of lanes. We’re therefore looking at the PCS/FEC architectures to see what is the best solution to run over those PMDs and it is looking like it might, again, be a new PCS/FEC architecture in order to maximize the performance
or minimize the cost of the PMDs. I believe there is a lot of consensus on this being the right long-term thing to standardize. And it will be implemented as yet another mode in the silicon, and the system management will figure out how to switch between
the various modes. Now we’re also seeing people saying that it would be great to be able to use these “better” PMDs with the old PCS/FEC architectures so we are able
to interconnect these “new” products with these new AUIs to the "current" products with the current AUIs. We have 3 choices to do that:
My confusion, from a standards-process perspective, with option 3 is that I don’t know how to implement a new AUI that alters the interoperability without defining
at least one PHY to do that. Hence my original question, which was: what other objectives would we need to add to support this? Mark On 2/24/16, 11:22 PM, "Jeffery Maki" <jmaki@xxxxxxxxxxx> wrote: Mark, Great summary. This problem of interoperation is, I believe, the number one reason we have standards. Not all system vendors will offer product
on the same time horizon, so to keep things working we need interoperation over disparate generations. This problem appears to persist when we move one day to define 100G electrical lanes. We will need a scheme for interoperating systems still using
50G electrical lanes with those using the new 100G electrical lanes. This interoperation problem then will not just be for 100GbE, but also for 200GbE and 400GbE. We should figure out the best scalable approach. The system with 50G electrical lanes needs to interoperate with old systems using 25G electrical lanes and new systems with 100G electrical
lanes. It seems at this point the new system with 100G electrical lanes may only be able to interoperate with the old system using 25G electrical lanes if an extender sublayer is put in a module on the legacy 25G electrical lane system that converts to whatever
FEC is used for the 100G electrical lane system. Here we can recover interoperation over more than two system generations with the use of the extender sublayer with FEC accommodation. This would mean lots of different modules with the correct FEC and correct
optics. How many generations of interoperation do we need? If only two, then the use of an optional end-to-end FEC (second PHY) would appear to be sufficient
for FEC accommodation, but we still need new optics (new modules) on the legacy system. Is anyone able to argue that only two generations of interoperation are required? I’m not, but two is certainly the minimum that I believe we have to achieve. Jeff From: Mark Nowell (mnowell) [mailto:mnowell@xxxxxxxxx]
All, Since I started this burst of activity with my questions on the ad hoc call today, let me reiterate the point I was making. This is purely coming
from my chair’s perspective and looking at what the SG needs to close out in terms of objectives, and making sure we all understand the implications and consequences of what we adopt so we don’t get wrapped into knots in the Task Force. The proposal from Ali today was to support an objective for an optional 50GAUI-2 and an optional 100GAUI-4. My question was whether that was sufficient to achieve what is intended. I think the 50GE and 100GE cases are slightly different, so I’ll tackle them
separately. A general comment first: to try and clarify the confusion that is happening around CAUI-4 modes, let me try another way. We only have one mode of CAUI-4 defined (by 802.3bm),
and we have two FECs defined, RS(528) and RS(544) (by 802.3bj). Because the RS(528) FEC runs at the same bit rate as CAUI-4, and because CAUI-4 was defined to run at a BER that doesn’t require FEC, we can run the RS(528) FEC over CAUI-4 without consequence and
have the advantage of the FEC gain being used completely for the optical PMD link. The key point here is that we’re not running the CAUI-4 at different bit rates. 50GE As Ali says, we do not want to sacrifice performance on the single-lane specifications, which I’m guessing will be based on an end-to-end RS(544) FEC
that covers both the AUI and the PMD, and this family of PHYs will be defined by the TF in line with the objectives set (which for the PHYs with AUIs are 100m MMF, 2km SMF and 10km SMF). If an optional 50GAUI-2 is defined, I’m assuming that the interest is to use an RS(528) FEC, and therefore this is a new family of PHYs since they
won’t interoperate with the above family of PHYs from a bits-on-the-wire perspective. Further assumptions as to different PCSes reinforce this non-interoperable conclusion. Since, I believe, the assumption is that the PMD is still a single-lane PMD, its
tx/rx specs will either be different from the single-lane PHY to achieve the same reaches as above, or the reaches will be different to use the same tx/rx as above. The “simple” addition of an optional 50GAUI-2 to the 50GAUI-1 is more complex, as they will be running at different bit rates, different modulation
formats and different BERs. All of this CAN be considered by the SG/TF BUT just adopting only an objective to support an optional 50GAUI-2 doesn’t really seem to provide any
insight into what the TF needs to do. It also doesn’t enable the TF to develop more than one solution for an objective (e.g. 100m MMF). Unless there are PHYs that this proposed 50GAUI-2 is associated with, it is not clear to me that we have a way of including
this 50GAUI-2 in the specification alone; it needs more consideration on how to do it. 100GE I originally thought 100GE was different, but the discussion above actually carries across almost the same. The difference we have is that with 100GE
we only have one objective adopted that needs an AUI right now: 2-fiber 100m MMF. My assumption again is that there is interest in this objective being met with a baseline based on end-to-end RS(544) FEC. As I understand the optional AUI proposal, the goal would be to have the 100GAUI-2 end of the link run the existing PCS/RS(528) FEC (defined in
802.3ba and 802.3bj) in order to interoperate with a host at the other end that is using the CAUI-4 (and supporting RS(528)). Again the consequence of this is that this is a different PHY as it is running at a different bit rate. There are potentially two
different 100GAUI-2 interfaces here running at different bit rates with different FEC gain coverage. This will obviously impact the PMD specification too, so either reach or PMD specs will need to change. Again, anything CAN be defined as long as we know what we are defining. I believe that it is insufficient to suggest that an objective to define
an optional AUI is enough. It is good in providing clarity on the intention of what people want to specify, though. In summary, if these proposals are to be brought into the SG for adoption, I would hope we have some better clarity on how it would fit into the
specification we would write (as that is our only goal within IEEE). I’d suggest looking at Table 80-2, as Gary pointed out, and figuring out how this table would be updated with these proposals. I do recognize that it is hard to separate the implementation issues in the products we are all looking to build from the IEEE specifications that
we are trying to write, but as chair, I need to remind the group of the IEEE specification aspects. For what it is worth, I think we can achieve all of the intended goals that Ali and Rob Stone are trying to achieve without causing any of these
specification challenges just by selecting the other options in their slides. The bottom option on Ali’s slide 7 and 8 http://www.ieee802.org/3/50G/public/adhoc/archive/ghiasi_022416_50GE_NGOATH_adhoc.pdf and
Rob’s “Brown Field Option B” on Slide 5 of http://www.ieee802.org/3/50G/public/adhoc/archive/stone_021716_50GE_NGOATH_adhoc-v2.pdf. These all support the
legacy hosts, do not require the creation of a new family of PHYs and PMDs in the industry (or the IEEE specification), and are essentially already architecturally supported. Mark On 2/24/16, 6:04 PM, "Jeffery Maki" <jmaki@xxxxxxxxxxx> wrote: Rob, My “strictly speaking” was meant as a head nod to what you say. I was trying to narrow the subject when trying to understand Chris. Confusion is occurring
from the use of the terms KR4 and KP4, and what is meant in the context of 50G connections. Below, I have a typo. “…LAUI-2 could be devised to need to coding gain…” should be “…LAUI-2 could be devised to need
no coding gain…”. Jeff From: Rob Stone [mailto:rob.stone@xxxxxxxxxxxx]
Hi Jeff You are correct that there is no IEEE 50G Ethernet, but there is a 50G Ethernet standard out there based on 2 x 25G lanes (25G Consortium), and
it has been put into hosts supplied by several companies. This data was shared in the Atlanta meeting; it can be seen in the Dell Oro forecast on slide 3 (http://www.ieee802.org/3/50G/public/Jan16/stone_50GE_NGOATH_02a_0116.pdf). Thanks Rob From: Jeffery Maki [mailto:jmaki@xxxxxxxxxxx]
Chris and others, I am a bit confused. Strictly speaking, no host has 50G Ethernet today, so when one is built to have 50G Ethernet it can also be built to have any
required FEC. Are you mentioning KR4 and KP4 just to give a flavor of the difference in these two potential codes to be adopted? In this way, when mentioning
KR4, you mean LAUI-2 could be devised to need to coding gain itself just as CAUI-4 does not need any coding gain to operate. Jeff From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]
Mike, The optics we would use with LAUI-2 with KR4 RS-528 FEC would be the same optics as those we would use with LAUI-2 with KP4 RS-544 FEC, except
running at 3% lower rate. The SG will have to decide which we define in the project, and which outside of the project, if any. Chris From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxx]
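The "3% lower rate" above can be cross-checked from the published coding ratios (64b/66b PCS, 4:1 256b/257b transcoding, then the RS parity expansion). The arithmetic below is a minimal sketch using 100GbE as the example (the 50GbE KR4/KP4 ratio is the same 544/528), with variable names of my own choosing:

```python
# Sanity check of 100GbE line rates with KR4 vs KP4 FEC.
MAC_RATE = 100.0          # Gb/s MAC rate
PCS = 66 / 64             # 64b/66b encoding overhead
XCODE = 257 / 264         # four 66b blocks transcoded to one 257b block
KR4 = 528 / 514           # RS(528,514) parity expansion
KP4 = 544 / 514           # RS(544,514) parity expansion

kr4_line = MAC_RATE * PCS * XCODE * KR4   # ~103.125 Gb/s (same as 64b/66b alone)
kp4_line = MAC_RATE * PCS * XCODE * KP4   # ~106.25 Gb/s

print(kr4_line / 4, kp4_line / 4)         # per lane: ~25.78125 vs ~26.5625 Gb/s
print(f"KP4 vs KR4 rate: +{(kp4_line / kr4_line - 1) * 100:.2f}%")  # ~3.03%
```

The transcode gain (257/264) exactly cancels the KR4 parity expansion (257 x 528 = 264 x 514 = 135696), which is why RS(528) runs at the CAUI-4 rate while RS(544) lands about 3% higher, matching the figure above.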
But what PMD is LAUI-2 going to support? If we don’t have an objective for a PMD that requires it, then in my opinion it would be out of scope
to develop it without an explicit objective. Mike Dudek
QLogic Corporation Director Signal Integrity 26650 Aliso Viejo Parkway Aliso Viejo CA 92656 949 389 6269 - office. From: Kapil Shrikhande [mailto:kapils@xxxxxxxx]
To match the capabilities of CAUI-4 (4x25G), the LAUI-2 (2x25G) C2M interface should operate without FEC at a BER of 1e-15 or better (Gary also points to the BER requirement for CAUI-4), so that a
no-FEC PHY using LAUI-2 could operate at 1e-12. And as stated by Chris, LAUI-2 will also support an RS-FEC-encoded signal (KR4 and KP4 FEC) for those PMDs that require FEC. Kapil. On Wed, Feb 24, 2016 at 10:53 AM, Brad Booth <bbooth@xxxxxxxx> wrote: