Re: [802.3_100GNGOPTX] Reach on MMF
I agree with what you are saying. There is another noise source you didn't mention: receiver front-end noise, which is often the largest noise contributor and gets worse at higher data rates due to the wider noise bandwidth.
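To put a rough number on the bandwidth scaling (a back-of-the-envelope Python sketch, assuming a flat, white input-referred noise current density for the front end; the density and bandwidth values are illustrative placeholders, not measured TIA data):

import math

def rms_noise_nA(density_pA_per_rtHz, noise_bw_GHz):
    # i_rms = i_density * sqrt(Bn): white noise integrates as the square
    # root of the noise bandwidth. Converts pA/sqrt(Hz) and GHz to nA.
    return density_pA_per_rtHz * math.sqrt(noise_bw_GHz * 1e9) * 1e-3

density = 20.0  # pA/sqrt(Hz), placeholder input-referred density
for bw in (7.5, 15.0):  # illustrative noise bandwidths, ~10G vs ~25G lanes
    print(f"Bn = {bw:4.1f} GHz -> i_rms = {rms_noise_nA(density, bw):6.0f} nA")

# Doubling the noise bandwidth raises the RMS noise by sqrt(2), i.e. ~1.5 dB
# electrical (~0.75 dB optical), before any high-frequency coloring of the
# noise density, which typically makes the real penalty worse.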
Mike Dudek
QLogic Corporation
Senior Manager Signal Integrity
26650 Aliso Viejo Parkway
Aliso Viejo CA 92656
949 389 6269 - office.
Mike.Dudek@xxxxxxxxxx
-----Original Message-----
From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
Sent: Friday, October 14, 2011 8:58 AM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
Mike,
As usual, your response is helpful.
It is not clear to me what level of noise impairment to expect in this system, as that will depend on parameters like RIN and MPN (spectral width). But it does seem clear that ISI from the source will be much higher than in previous systems. Outside of speeding up the VCSELs, I see no means of improving this situation without EQ. That is why, after CDR, I rank EQ as the highest need.
I would agree that if noise sources like RIN and MPN are elevated impairments, then FEC could provide relief. But some types of EQ may have a sufficient positive effect here too, via an effective increase in S/N ratio obtained by optimizing the decision threshold for each received symbol. I think this is true of DFE, but not of spectral-shaping EQ, which can enhance noise as you stated. While the attraction of using the simplest EQ (i.e. spectral shaping) is undeniable, it may be that decision-optimizing EQ is superior overall to a combination of spectral shaping plus FEC. I'd have to agree that the optimal choice will depend on the details of the total channel, which gets back to the questions and requests that Dan was raising.
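To make the distinction concrete, here is a toy Python sketch comparing the two classes of EQ on an artificial channel (assumptions, all illustrative: a one-postcursor 1 + 0.5 z^-1 channel, binary symbols, additive white Gaussian noise, ideal timing; this is not a model of the actual VCSEL link):

import random

random.seed(1)
N = 200_000
post = 0.5    # postcursor ISI tap (assumed channel: 1 + 0.5 z^-1)
sigma = 0.45  # RMS noise at the receiver input (illustrative)

tx = [random.choice((-1.0, 1.0)) for _ in range(N)]
rx = [tx[n] + post * tx[n - 1] + random.gauss(0.0, sigma)
      for n in range(1, N)]

# Spectral-shaping (zero-forcing linear) EQ: invert the channel with
# 1 / (1 + post z^-1). ISI is removed, but the recursion recirculates
# noise, enhancing its power by 1/(1 - post^2) = 1.33 (about 1.25 dB).
errs_lin = 0
y_prev = 0.0
for m, r in enumerate(rx):
    y = r - post * y_prev
    y_prev = y
    errs_lin += (y > 0) != (tx[m + 1] > 0)

# One-tap DFE: subtract the postcursor using the previous *decision*,
# so the noise passes through unamplified (ignoring error propagation).
errs_dfe = 0
d_prev = tx[0]
for m, r in enumerate(rx):
    d = 1.0 if (r - post * d_prev) > 0 else -1.0
    errs_dfe += (d > 0) != (tx[m + 1] > 0)
    d_prev = d

print(f"linear ZF EQ BER ~ {errs_lin / len(rx):.1e}")  # ~2-3e-2 here
print(f"one-tap DFE  BER ~ {errs_dfe / len(rx):.1e}")  # roughly half that

Even in this crude setup, the DFE's advantage comes entirely from not enhancing the noise, which is the effective S/N gain referred to above.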
Paul
-----Original Message-----
From: Mike Dudek [mailto:mike.dudek@xxxxxxxxxx]
Sent: Friday, October 14, 2011 10:04 AM
To: Kolesar, Paul; STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: RE: [802.3_100GNGOPTX] Reach on MMF
I agree that FEC is good for combating noise-like effects and EQ is good for ISI. The question is what the balance between the two is at the decision point for the system under consideration. Typically in previous systems they have been very approximately equal (i.e. an ISI penalty of around 3 dB optical). Unfortunately, the simplest EQ also tends to increase the noise. The result is that either EQ or FEC gives a good improvement. I think we need to keep both of these on the table at this point and compare their merits for this particular system.
Mike Dudek
QLogic Corporation
Senior Manager Signal Integrity
26650 Aliso Viejo Parkway
Aliso Viejo CA 92656
949 389 6269 - office.
Mike.Dudek@xxxxxxxxxx
-----Original Message-----
From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
Sent: Friday, October 14, 2011 6:32 AM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
Ali,
You suggest the use of either EQ or FEC, but as you know they do not offer the same impairment compensation, so they are not interchangeable. EQ is optimal for fixing structural problems with pulse shapes, such as those brought about by insufficient bandwidth, dispersion, and distortion, which are present all the time. FEC is best at compensating for noise, but it can get swamped by distortions that would cause a continuous need for correction.
In my view, if the major source of impairment is VCSEL speed, followed by detector bandwidth, those are both problems aligned with the capabilities of EQ but not with those of FEC. This suggests that EQ should be our first line of improvement, and that FEC should be reserved as a second layer to be applied only if needed.
Do you see it differently?
Paul
-----Original Message-----
From: Ali Ghiasi [mailto:aghiasi@xxxxxxxxxxxx]
Sent: Tuesday, October 11, 2011 1:50 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
Dan
A few of us have been looking at this problem. Based on published results as well as our own assessment, a 100G-SR4 link would be limited first by the speed of the VCSEL (18-20 ps rise time), then by the PIN/TIA (~15 GHz), and last by the fiber (16-17 GHz, assuming 100 m of OM3 or 150 m of OM4).
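As a back-of-the-envelope illustration of those numbers (a sketch only: it assumes each element is roughly Gaussian so that bandwidths combine root-sum-square, and it uses BW ~ 0.35/t_r to convert a 10-90% rise time to a bandwidth):

import math

t_r = 19e-12                 # VCSEL rise time, middle of 18-20 ps
bw_vcsel = 0.35 / t_r / 1e9  # ~18.4 GHz under the 0.35/t_r approximation
bw_pin_tia = 15.0            # GHz, PIN/TIA
bw_fiber = 16.5              # GHz, middle of 16-17 GHz (100 m OM3/150 m OM4)

bw_total = 1.0 / math.sqrt(sum(1.0 / b ** 2
                               for b in (bw_vcsel, bw_pin_tia, bw_fiber)))
print(f"composite bandwidth ~ {bw_total:.1f} GHz")  # ~9.5 GHz

A composite of roughly 9.5 GHz against a ~25.78 GBd lane rate leaves the channel well short of the usual ~0.7 x baud rate rule of thumb, which is why significant ISI has to be expected.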
100G-SR4 can take two paths:
- Sacrifice fiber reach and define a 50 m reach on OM3 with a simple retimed interface, without FEC or EQ
- Support the 40GBASE-SR4 fiber reach (100 m OM3/150 m OM4) with the use of EQ and/or FEC
Obviously this is something that requires more in-depth study, but on the surface it seems that solving the 100G-SR4 problem, assuming a 100 m reach on OM3, will be much simpler than LRM was. As we work through technical feasibility it is important to understand the market need, and I strongly encourage end users to come forward with their cable-reach requirements.
Thanks,
Ali
On Oct 10, 2011, at 3:08 PM, Dan Dove wrote:
> All,
>
> I am thrilled to see the discussion, and hope it stimulates some
> detailed proposals for objectives that include justifications and data.
>
> For MMF, I would really like to see some data that shows how far you
> can run on OM3/OM4 using feasible VCSELs, a reasonably practical
> receiver, perhaps some channel compensation and/or FEC, and maybe even
> broken down into a couple of different reach values based on
> with/without EQZ/FEC.
>
> Then, lay that against a histogram of reach requirements in the data
> center, and a proposal to set the reach based on the balance point
> between cost/feasibility and market potential.
>
> Anyone working on this?
>
> Dan
>
> On Mon, 10 Oct 2011 16:56:58 -0500, Brad Booth <Brad_Booth@xxxxxxxx>
> wrote:
>> Matt,
>>
>> Clarification:
>> Paul's data didn't include SFP+ DAC. My point was only that I've seen
>> data showing a strong trend for DAC for the access channel.
>> Personally, I assumed that some of the short links in Paul's data
>> would also be used for the access channel; however, Paul believes that
>> assumption should not be made.
>>
>> Maybe this would be a good area to provide some clarification at the
>> next meeting on architecture and reaches, so that everyone is using
>> the same terms and assigning the data to the correct buckets.
>>
>> Thanks,
>> Brad
>>
>>
>> -----Original Message-----
>> From: Matt Traverso (mattrave) [mailto:mattrave@xxxxxxxxx]
>> Sent: Monday, October 10, 2011 4:48 PM
>> To: Booth, Brad; STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: RE: [802.3_100GNGOPTX] Reach on MMF
>>
>> Paul,
>>
>> I thought I understood the thread until Brad stated, "That would
>> track accordingly with the numbers I've been seeing for SFP+ DAC in
>> ToR and EoR configurations." This seems to imply that SFP+ direct
>> attach cables are included in your data - is that right?
>>
>> Also, I would like to understand whether you have looked at the ratio
>> of equipment cords to link cabling. Going only by your explanation, I
>> would assume that some of the excess could be attributed to
>> server-to-ToR applications. Is that a fair assumption?
>>
>> thanks
>> --matt
>>
>> -----Original Message-----
>> From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
>> Sent: Monday, October 10, 2011 2:29 PM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Reach on MMF
>>
>> Paul,
>>
>> I understand. It is a different point of view on how the data may be
>> interpreted. For example, how many of the <10 m links formed complete
>> channels on their own vs. being part of a longer connection? If I
>> remember correctly, you presented the data based upon a multiple-link
>> topology.
>>
>> I interpreted the data as indicating both multiple-link and
>> single-link (for ToR and EoR). Given the trends in ToR and EoR, my
>> assumption was that a growing percentage of those short links are used in
>> those topologies. That would track accordingly with the numbers I've
>> been seeing for SFP+ DAC in ToR and EoR configurations.
>>
>> Cheers,
>> Brad
>>
>>
>>
>> -----Original Message-----
>> From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
>> Sent: Monday, October 10, 2011 04:13 PM Central Standard Time
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Reach on MMF
>>
>>
>> Brad,
>> While I'm sure it is no surprise to you that I also strongly advocate
>> MMF reach objectives that allow seamless upgrade from 40G to 100G, I
>> would like to understand your assertion that my data "indicated that
>> there was a large number of single reach hops that were under 10 m."
>> While my cord data shows about 85% under 10 m, my single-channel
>> topology (equip. cord + link + equip. cord) data showed 0% coverage at
>> 10 m (~30 ft). Perhaps this is a terminology issue, with single cords
>> being treated as single reach hops. While I do not deny that cords
>> are sometimes used that way, I would be very hesitant to try to infer
>> single-cord channels from the general population of cords.
>> Unfortunately I know of no means to isolate the two populations within
>> the data.
>>
>> Paul
>>
>> -----Original Message-----
>> From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
>> Sent: Monday, October 10, 2011 3:48 PM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Reach on MMF
>>
>> Paul,
>>
>> Thanks for the clarification. My intention was not to imply that you
>> were espousing the use of copper cabling for all access switch to
>> server connections, only to highlight that trends are changing to what
>> was considered a "typical" topology.
>>
>> I do agree with you on understanding the different mixtures of
>> infrastructure, with the caution to be ever diligent with respect to
>> power, cost and market size. It's that infamous 80-20 rule: there is
>> no point in making 80% of the market absorb a disproportionate
>> burden to satisfy the other 20% of the market. Hopefully with some
>> hindsight on previous decisions plus some general understanding of the
>> market trends, the study group can make some better predictions of
>> future requirements. Then again, crystal balls have never been known to be
>> reliable. ;-)
>>
>> Cheers,
>> Brad
>>
>>
>> -----Original Message-----
>> From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
>> Sent: Monday, October 10, 2011 3:19 PM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Reach on MMF
>>
>> Brad,
>> In my September contribution that you referenced below, my diagram
>> labeling was indicative of present practice. I was not trying to
>> imply that going forward the 100G access channels should remain
>> copper, so thank you for allowing me the opportunity to clarify.
>>
>> On the 802.3 subcommittee's present course, P802.3bj will define a
>> copper solution that reaches at least 5 m. While 5 m is sufficient for
>> Top-of-Rack and adjacent-rack connections, it cannot adequately address
>> Centralized, End-of-Row or most Middle-of-Row switch placements.
>>
>> Given that access channels greatly outnumber aggregation channels, I
>> would have to agree that the access part of the data center network
>> deserves due consideration. To that end, the migration of access
>> channels to different mixtures of Centralized, ToR, EoR and MoR should
>> be a key focus of our studies. Here we should attempt to predict the
>> mixture that will be deployed in the coming years as 100G becomes the
>> norm. The contribution flatman_01_0311.pdf (presented to the study
>> group that became the P802.3bj task force) has some material on this
>> topic. While the 10G trend projections extracted from Dell'Oro data
>> run beyond their headlights after 2012, they show a strong tendency
>> towards ToR, which likely applies to 40G and 100G as well.
>> I'm hoping for more clarity for the years after that, because as shown
>> later in Alan Flatman's contribution, 100G server volumes don't pick
>> up until 2018. For comparison, 100G aggregation channels start at
>> least three years earlier.
>>
>> Some may lament that our ability to predict the needs of the market
>> seven years out is spotty. It remains to be seen if the group has the
>> appetite to repeat such endeavors, or will instead choose to focus on
>> the best solutions for the nearer-term aggregation channels...
>>
>> Regards,
>> Paul Kolesar
>>
>> -----Original Message-----
>> From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
>> Sent: Monday, October 10, 2011 1:42 PM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: Re: [802.3_100GNGOPTX] Reach on MMF
>>
>> I believe that if the study group is going to set a reach objective
>> for a 100GBASE-SR4 port type, we should support the same reach as
>> 40GBASE-SR4. End users that have gone through the effort of installing
>> OM3 or OM4 ribbon fiber to support 40G will be much happier with us if
>> we permit them to re-use the same fiber for 100G. Forklift upgrades of
>> equipment and cabling also slow deployment of the technology, so from a
>> broad market potential and economic feasibility standpoint, supporting
>> the same reach as 40G just makes good sense.
>>
>> As for SMF solutions, it would be good to understand the relative
>> cost difference between a 100G LR4 module that is required to meet the
>> 10 km reach vs. one that is shorter. For those that may remember
>> 802.3ae, there was a 2 km SMF reach objective. That was the target
>> reach for campus networks, with 10 km and 40 km targeting the MAN. It
>> was discovered that the relative cost difference between 2 km and 10 km
>> was insignificant, and that by bundling them into one port type,
>> 10GBASE-LR, the task force could increase the market potential for
>> that device.
>>
>> As we stand today, 802.3ba has a huge reach and cost discrepancy
>> between the 100/150 m MMF solution and the 10 km SMF solution. In my
>> humble opinion, we need to understand the relative cost difference
>> between a solution that can satisfy the campus market and the one
>> specified for the MAN. There are some that would
>> also like to use SMF within the data center without the cost burden
>> associated with 100GBASE-LR4. The key will be to understand if there
>> is a breakpoint of reach vs. cost that makes the solution economically
>> viable and has good market potential.
>>
>> The one other aspect that concerned me during the study group meeting
>> was that in Paul Kolesar's slides there was a diagram of the network
>> architecture. I think that diagram is a bit outdated, but that wasn't
>> the main reason I was concerned. What I noticed was that the
>> connection from the access switches to the servers was labeled "Copper
>> horizontal cabling". Why was that connection being assumed to be
>> copper? The data Paul showed indicated that there was a large number
>> of single reach hops that were under 10 m. Paul also highlighted that
>> there was a trend to longer MMF reach. This makes complete sense if
>> one assumes that data centers are transitioning from centralized
>> switches to end-of-row or top-of-rack switches. Paul's data correlates
>> very well with information that I've received about the movement from
>> centralized to end-of-row (or center-of-row). That being said, maybe
>> there is another cost breakpoint for shorter MMF links, say 15-20 m.
>> That to me would be interesting data to have for comparison and
>> objective setting.
>>
>> Thanks,
>> Brad
>>
>>
>> -----Original Message-----
>> From: Ali Ghiasi [mailto:aghiasi@xxxxxxxxxxxx]
>> Sent: Monday, October 10, 2011 12:32 PM
>> To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
>> Subject: [802.3_100GNGOPTX] Reach on MMF
>>
>> Hi
>>
>> It would be great to get some end-user feedback on MMF reach. As you
>> know, 40GBASE-SR4 has a reach of:
>> - 100 m on OM3
>> - 150 m on OM4
>>
>> Single-mode PMDs are also within the scope of the 100GNGOPTX study
>> group. When the day comes that single-mode PMDs are low in cost, power,
>> and size, then I expect MMF ribbon reach could be limited to tens of
>> meters. I have my fingers crossed for that day!
>>
>> Now, with the realities and unknowns in front of us as to whether that
>> ultimate PMD can be developed, what should the reach of
>> 100GBASE-SR4 be?
>>
>> Historically, single-mode PMDs have been larger, higher power, and
>> higher cost. For example, 100GBASE-SR10
>> is supported in the CXP form factor while 100GBASE-LR4 is supported in
>> the CFP form factor; slide 11 of the CFI presentation
>> http://www.ieee802.org/3/100GNGOPTX/public/jul11/CFI_01_0711.pdf
>> shows 32 ports of CXP but only 4 ports of CFP! So for some period of
>> time we could find ourselves with the MMF PMD as the only option on the
>> highest-density platforms. This is why we need to study this subject
>> carefully, especially within the context of larger data centers; see
>> slide 7:
>> http://www.ieee802.org/3/100GNGOPTX/public/sept11/ghiasi_01_a_0911_NG100GOPTX.pdf
>>
>> Thanks,
>> Ali
>