Re: [802.3_100GNGOPTX] Reach on MMF
Paul,
I thought I understood the thread until Brad stated, "That would track accordingly with the numbers I've been seeing for SFP+ DAC in ToR and EoR configurations." This seems to imply that SFP+ direct attach cables are included in your data - is that right?
Also, I would like to understand whether you have looked at the ratio of equipment cords to link cabling. Based on your explanation alone, I would assume that some of the excess could be attributed to server-to-ToR applications. Is that a fair assumption?
thanks
--matt
-----Original Message-----
From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
Sent: Monday, October 10, 2011 2:29 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
Paul,
I understand. It is a different point of view on how the data may be interpreted. For example, how many of the < 10 m links were complete channels on their own versus part of a longer connection? If I remember correctly, you presented the data based upon a multiple-link topology.
I interpreted the data as indicating both multiple-link and single-link (for ToR and EoR). Given the trends in ToR and EoR, my assumption was a growing percentage of those short links are used in those topologies. That would track accordingly with the numbers I've been seeing for SFP+ DAC in ToR and EoR configurations.
Cheers,
Brad
-----Original Message-----
From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
Sent: Monday, October 10, 2011 04:13 PM Central Standard Time
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
Brad,
While I'm sure it is no surprise to you that I also strongly advocate MMF reach objectives that allow seamless upgrade from 40G to 100G, I would like to understand your assertion that my data "indicated that there was a large number of single reach hops that were under 10 m." While my cord data shows about 85% under 10m, my single channel topology (equip. cord + link + equip. cord) data showed 0% coverage at 10m (~30ft). Perhaps this is a terminology issue with single cords being treated as single reach hops. While I do not deny that cords are sometimes used that way, I would be very hesitant to try to infer single-cord channels from the general population of cords. Unfortunately I know of no means to isolate the two populations within the data.
Paul
-----Original Message-----
From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
Sent: Monday, October 10, 2011 3:48 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
Paul,
Thanks for the clarification. My intention was not to imply that you were espousing the use of copper cabling for all access switch to server connections, only to highlight that trends are changing what was considered a "typical" topology.
I do agree with you on understanding the different mixtures of infrastructure, with the caution to be ever diligent with respect to power, cost and market size. It's that infamous 80-20 rule: there is no point in having 80% of the market absorb a disproportionate burden to satisfy the other 20% of the market. Hopefully with some hindsight on previous decisions plus some general understanding of the market trends, the study group can make some better predictions of future requirements. Although, crystal balls have not been known to be reliable. ;-)
Cheers,
Brad
-----Original Message-----
From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
Sent: Monday, October 10, 2011 3:19 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
Brad,
In my September contribution that you referenced below, my diagram labeling was indicative of present practice. I was not trying to imply that going forward the 100G access channels should remain copper, so thank you for allowing me the opportunity to clarify.
On the 802.3 subcommittee's present course, P802.3bj will define a copper solution that reaches at least 5 m. While 5 m is sufficient for Top-of-Rack and adjacent-rack connections, it cannot adequately address Centralized, End-of-Row or most Middle-of-Row switch placements.
Given that access channels greatly outnumber aggregation channels, I have to agree that the access part of the data center network deserves due consideration. To that end, the migration of access channels to different mixtures of Centralized, ToR, EoR and MoR should be a key focus of our studies. Here we should attempt to predict the mixture that will be deployed in the coming years as 100G becomes the norm. The contribution flatman_01_0311.pdf (presented to the study group that became the P802.3bj task force) has some material on this topic. While its 10G trend predictions, extracted from Dell'Oro data, run beyond their headlights after 2012, they show a strong tendency toward ToR that likely applies to 40G and 100G as well. I'm hoping for more clarity for the years after that, because as shown later in Alan Flatman's contribution, 100G server volumes don't pick up until 2018. For comparison, 100G aggregation channels start at least three years earlier.
Some may lament that our ability to predict the needs of the market seven years out is spotty. It remains to be seen if the group has the appetite to repeat such endeavors, or will instead choose to focus on the best solutions for the nearer-term aggregation channels...
Regards,
Paul Kolesar
-----Original Message-----
From: Brad Booth [mailto:Brad_Booth@xxxxxxxx]
Sent: Monday, October 10, 2011 1:42 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3_100GNGOPTX] Reach on MMF
I believe that if the study group is going to set a reach objective for a 100GBASE-SR4 port type, it should support the same reach as 40GBASE-SR4. End users that have gone through the effort of installing ribbon OM3 or OM4 fiber to support 40G will be much happier with us if we permit them to re-use the same fiber for 100G. Forklift upgrades of equipment and cabling also slow deployment of the technology, so from a broad market potential and economic feasibility standpoint, supporting the same reach as 40G just makes good sense.
As for SMF solutions, it would be good to understand the relative cost difference between a 100G LR4 module that is required to meet the 10 km reach and one that is shorter. For those that may remember 802.3ae, there was a 2 km SMF reach objective. That was the target reach for campus networks, with 10 km and 40 km targeting the MAN. It was discovered that the relative cost difference between 2 km and 10 km was insignificant, and that by bundling them into one port type, 10GBASE-LR, the task force could increase the market potential for that device.
As we stand today, 802.3ba has a huge reach and cost discrepancy between the 100/150 m MMF solution and the 10 km SMF solution. In my humble opinion, we need to understand the potential impact to the cost (in relative terms) between a solution that can satisfy the campus market vs. the one specified for the MAN. There are some that would also like to use SMF within the data center without the cost burden associated with 100GBASE-LR4. The key will be to understand if there is a breakpoint of reach vs. cost that makes the solution economically viable and has good market potential.
The one other aspect that concerned me during the study group meeting was the diagram of the network architecture in Paul Kolesar's slides. I think that diagram is a bit outdated, but that wasn't the main reason I was concerned. What I noticed was that the connection from the access switches to the servers was labeled "Copper horizontal cabling". Why was that connection assumed to be copper? The data Paul showed indicated that there was a large number of single reach hops that were under 10 m. Paul also highlighted that there was a trend to longer MMF reach. This makes complete sense if one assumes that data centers are transitioning from centralized switches to end-of-row or top-of-rack switches. Paul's data correlates very well with information I've received about the movement from centralized to end-of-row (or center-of-row). That being said, maybe there is another cost breakpoint for shorter MMF links, say 15-20 m. That to me would be interesting data to have for comparison and objective setting.
Thanks,
Brad
-----Original Message-----
From: Ali Ghiasi [mailto:aghiasi@xxxxxxxxxxxx]
Sent: Monday, October 10, 2011 12:32 PM
To: STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx
Subject: [802.3_100GNGOPTX] Reach on MMF
Hi
It's great to get some end-user feedback on MMF reach. As you know, 40GBASE-SR4 has a reach of
- 100 m on OM3
- 150 m on OM4
A single-mode PMD is also within the scope of the 100GNGOPTX study group. When the day comes that single-mode PMDs are low in cost, power, and size, then I expect MMF ribbon reach could become limited to tens of meters. I have my fingers crossed for that day!
Now, with the realities and unknowns in front of us as to whether that ultimate PMD can be developed, what should the reach of 100GBASE-SR4 be?
Historically, single-mode PMDs have been larger, higher power, and higher cost. For example, 100GBASE-SR10 is supported in the CXP form factor and 100GBASE-LR4 in the CFP form factor. Slide 11 of the CFI presentation http://www.ieee802.org/3/100GNGOPTX/public/jul11/CFI_01_0711.pdf
shows 32 ports of CXP and only 4 ports of CFP! So for some period of time we could find ourselves in a situation where the MMF PMD is the only option on the highest-density platforms. This is why we need to carefully study this subject, especially within the context of larger data centers; see slide 7 of http://www.ieee802.org/3/100GNGOPTX/public/sept11/ghiasi_01_a_0911_NG100GOPTX.pdf
Thanks,
Ali