From: Paul Kolesar <PKOLESAR@xxxxxxxxxxxx>
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Date: 07/11/2008 01:53 PM
Subject: Re: [802.3BA] XR ad hoc Phone Conference Notice
Joel Goergen <joel@xxxxxxxxxxxxxxx>
07/11/2008 11:32 AM
I have an issue with the following statement:
“ Indeed, given that latency is a major performance concern for HPC, the vendors of such machines may prefer to use InfiniBand. This could mean that one of the primary customers to which we have tuned our present objective will actually not use Ethernet, but will benefit anyway by driving InfiniBand to adopt the same 100m PMD specs that 802.3ba defines.”
I reviewed the Top500.org website for its June 2008 report on the top 500 supercomputers. Ethernet at this point has a 56.8% share of the interconnects (see http://www.top500.org/stats/list/31/connfam); InfiniBand has 24.20%. So I believe this demonstrates that this area will use Ethernet. Next, regarding latency, the Data Center Bridging Task Group in 802.1 is working on this.
Therefore, I do not agree with the statement that “one of the primary customers to which we have tuned our present objective will actually not use Ethernet.”
thanks
-joel
Paul Kolesar wrote:
Steve,
thanks for furthering the discussion. Your views make sense to me.
I'd like to examine the supercomputer cabling distance distribution that Petar shared with us yesterday in a bit more detail. I've plotted it to allow folks to see it in graphical form.
This data has several features that are remarkably similar to those of general data center cabling:
1) The distribution is highly skewed towards the shorter lengths.
2) The distribution has a very long tail relative to the position of the mode (the most frequent length), which is at 20m.
3) The mode is at a distance that is one fifth of the maximum length.
The white dot on the graph marks the point on this distribution whose coverage is equivalent to that of the 100m objective applied to the data center cabling distribution. Speaking to Steve's point questioning the correctness of the 100m objective for HPC environments, I would venture to say that a 25m objective, which is roughly equivalent in coverage to the 100m objective we are attempting to apply to data centers, would not be satisfactory for the HPC environment, as it would leave a significant portion of the channels without a low-cost solution.
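As a rough way to see what "equivalent coverage" means here, the short Python sketch below compares a reach objective against two skewed (lognormal) channel-length models. The modes and shape parameters are hypothetical placeholders chosen only to reproduce the qualitative shape described above (short mode, long tail); they are not Petar's survey data or the data center survey results.

```python
# Sketch of the "equivalent coverage" comparison.
# All distribution parameters are HYPOTHETICAL placeholders, not survey data.
import numpy as np

rng = np.random.default_rng(0)

def lognormal_from_mode(mode, sigma, n=200_000):
    """Sample a lognormal whose mode equals `mode` for shape parameter sigma."""
    mu = np.log(mode) + sigma**2            # mode of a lognormal = exp(mu - sigma^2)
    return rng.lognormal(mean=mu, sigma=sigma, size=n)

hpc_lengths = lognormal_from_mode(mode=20.0, sigma=0.55)   # placeholder HPC channels
dc_lengths  = lognormal_from_mode(mode=30.0, sigma=0.80)   # placeholder data-center channels

def coverage(lengths, objective_m):
    """Fraction of channels no longer than the reach objective."""
    return float(np.mean(lengths <= objective_m))

dc_cov_100 = coverage(dc_lengths, 100.0)            # coverage of a 100m objective in the data center
equiv_hpc  = np.quantile(hpc_lengths, dc_cov_100)   # HPC length giving the same coverage
print(f"100m covers {dc_cov_100:.1%} of data-center channels")
print(f"equivalent-coverage HPC objective ~ {equiv_hpc:.0f} m")
print(f"100m covers {coverage(hpc_lengths, 100.0):.1%} of HPC channels")
```

The printed numbers depend entirely on the assumed distributions; the sketch only shows how the coverage of one objective maps onto an equivalent length on another distribution.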
It is clear that the 100m objective is a near-perfect match to the needs of HPC. Yet I do not believe that HPC should be the primary focus of our development. We must be developing a solution that properly satisfies a much larger market than this or we are wasting our time. Indeed, given that latency is a major performance concern for HPC, the vendors of such machines may prefer to use InfiniBand. This could mean that one of the primary customers to which we have tuned our present objective will actually not use Ethernet, but will benefit anyway by driving InfiniBand to adopt the same 100m PMD specs that 802.3ba defines.
This possibility reinforces my perspective that we need to properly address a broader set of customers - those that operate in the general data center environment. It is clear from all of the data and surveys that remaining only with a 100m solution misses the mark for this broader market. Continuing under this condition will mean that the more attractive solution for links longer than 100m in the general data center will be to deploy link aggregated 10GBASE-SR. Its cost will be on par and it will reach the distances the customers need in their data centers.
Is this the future you want for all our efforts, or do you want to face the facts and address the issue head on with a solution that gives data center customers what they need?
Next week these decisions will be placed before the Task Force. I hope we choose wisely.
Regards,
Paul Kolesar
CommScope Inc.
Enterprise Solutions
1300 East Lookout Drive
Richardson, TX 75082
Phone: 972.792.3155
Fax: 972.792.3111
eMail: pkolesar@xxxxxxxxxxxxx
"Swanson,
Steven E" <SwansonSE@xxxxxxxxxxx>
07/11/2008 07:32 AM |
|
Yes: 55
No: 3
3. Could we achieve 75% support for adding a new MMF objective?
I don't know, but if we could not, I would be forced to vote against adopting the current MMF baseline proposal (which I don't want to do), and I think others may also. This may or may not lead to an impasse similar to what we experienced in 802.3ae.
I understand the concern that adding the objective without a clear consensus on how to support the new objective could lead to delay, but I have found this committee to be very resourceful in driving to a solution after we have made a decision to go forward. 40G is one recent example of a situation where no consensus turned very quickly into consensus.
I think adding a new objective is the right approach and in the long run will save the task force valuable development time.
4. Can we agree on the right assumptions for the 10G model used to evaluate the various proposals?
Everyone seems to be using slightly different variations of the model to evaluate the capability of the proposals; we need to agree on a common approach to the analysis.
5. Can we not let the discussion on OM4 cloud the decision?
We can get extended link lengths on OM3. By achieving longer lengths on OM3, even longer lengths will be possible on OM4 with the same specification. What I don't want people to think is that OM4 is required to get longer lengths.
6. Summary
John D'Ambrosia has provided advice that if we want to move forward with a new MMF objective, July is the time to do it - if we delay the decision, it is guaranteed to delay the overall process. Some might think that if we make the decision, it will delay the overall process, but we don't know that yet. I don't think adding an informative specification on a PMD is the right way to go - let's get the MMF objective(s) right - we owe it to ourselves and to our customers. To do anything less is just avoiding the issue. Let's get the objectives set, get the assumptions correct, and utilize the process set up by Petrilla and Barbieri to drive toward the hard decisions that we are all very capable of making.
Sincerely,
Steve Swanson
"Alessandro
Barbieri (abarbier)" <abarbier@xxxxxxxxx>
07/10/2008 04:43
PM
|
|
Some numbers might help clarify what close to 0 means.
For 2008, Lightcounting gives a shipment number of approximately 30,000 for 10GE-LRM (and for 10GE-LX4 it's about 60,000). So close to 0 would apply if we were rounding to the nearest 100K. As an aside, 10GE-LRM supports 220m of MMF, not 300m.
300m of OM3 is supported by 10GE-SR, which Lightcounting gives as approximately 400,000 in 2008, so that would be close to 0 if we were rounding to the nearest 1M.
Another interesting sideline in looking at these numbers is that 2 years after the 10GE-LRM standard was adopted in 2006, despite the huge investment being made in 10GE-LRM development, and despite very little new investment being made in 10GE-LX4, the 10GE CWDM equivalent (i.e. 10GE-LX4, 4x3G) is chugging along at 2x the volume of the 10GE Serial solution that was adopted to replace it.
This should dim hopes that very low-cost 40GE Serial technology can be developed from scratch in two years and ship in volume when the 40GE standard is adopted in 2010.
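A few lines of Python, using only the approximate shipment figures quoted above (Lightcounting's estimates as cited here, not independent data), make the rounding argument and the LX4-to-serial ratio explicit:

```python
# Approximate 2008 shipment figures quoted above (Lightcounting estimates).
shipments = {"10GE-LRM": 30_000, "10GE-LX4": 60_000, "10GE-SR": 400_000}

def rounds_to_zero(units, nearest):
    """True if the volume rounds to 0 at the given granularity."""
    return round(units / nearest) == 0

print(rounds_to_zero(shipments["10GE-LRM"], 100_000))    # True: LRM is "close to 0" at 100K granularity
print(rounds_to_zero(shipments["10GE-SR"], 1_000_000))   # True: SR is "close to 0" at 1M granularity
print(shipments["10GE-LX4"] / shipments["10GE-LRM"])     # 2.0: LX4 still ships at 2x the LRM volume
```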
Chris
From: Gourgen Oganessyan [mailto:gourgen@xxxxxxxxxxx]
Sent: Wednesday, July 09, 2008 8:02 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3BA] XR ad hoc Phone Conference Notice
Petar,
Well, sadly that's what has been happening in the 10G world: people are forced to amortize the cost of 300m reach (LRM), while in reality the number of people who need 300m is close to 0.
That's why I am strongly in support of your approach of keeping the 100m objective as the primary goal.
Frank, OM4 can add as much cost as it wants to; the beauty is that the added cost goes directly where it's needed, which is the longer links. Alternatives force higher cost and higher power consumption on all ports regardless of whether it's needed there or not.
Gourgen Oganessyan
Quellan Inc.
Phone: (630)-802-0574 (cell)
Fax: (630)-364-5724
e-mail: gourgen@xxxxxxxxxxx
From: Petar Pepeljugoski [mailto:petarp@xxxxxxxxxx]
Sent: Wednesday, July 09, 2008 7:51 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3BA] XR ad hoc Phone Conference Notice
Frank,
If I interpret correctly, you are saying that all users should amortize the cost of the very few who need extended reach.
We need to be careful how we proceed here - we should not repeat the mistakes of the past if we want a successful standard.
Regards,
Peter
Petar Pepeljugoski
IBM Research
P.O. Box 218 (mail)
1101 Kitchawan Road, Rte. 134 (shipping)
Yorktown Heights, NY 10598
e-mail: petarp@xxxxxxxxxx
phone: (914)-945-3761
fax: (914)-945-4134
From: Frank Chang <ychang@xxxxxxxxxxx>
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Date: 07/09/2008 10:29 PM
Subject: Re: [802.3BA] XR ad hoc Phone Conference Notice
Hi Jeff,
Thanks for your comment. You missed one critical point: there is a cost increase from OM3 to OM4. If you take ribbon cable cost into perspective, the OM4 option is possibly the most expensive of the 4 options.
Besides, the use of OM4 requires tightening the TX specs, which impacts TX yield, so you are actually compromising the primary goal.
Frank
From: Jeff Maki [mailto:jmaki@xxxxxxxxxxx]
Sent: Wednesday, July 09, 2008 7:02 PM
To: STDS-802-3-HSSG@xxxxxxxxxxxxxxxxx
Subject: Re: [802.3BA] XR ad hoc Phone Conference Notice
Dear MMF XR Ad Hoc Committee Members,
I believe our current objective of "at least 100 meters on OM3 MMF" should remain as a primary goal, the baseline. Support for any form of extended reach should be considered only if it does not compromise this primary goal. A single PMD for all reach objectives is indeed a good starting premise; however, it should not be paramount. In the following lists are factors, enhancements, or approaches I would like to put forward as acceptable and not acceptable for obtaining extended reach.
Not Acceptable:
1. Cost increase for the baseline PMD (optic) in order to obtain greater than 100-meter reach
2. EDC on the system/host board in any case
3. CDR on the system/host board as part of the baseline solution
4. EDC in the baseline PMD (optic)
5. CDR in the baseline PMD (optic)
Acceptable:
1. Use of OM4 fiber
2. Process maturity that yields longer reach with no cost increase
In summary, we should not burden the baseline solution with cost increases to meet the needs of an extended-reach solution.
Sincerely,
Jeffery Maki
————————————————
Jeffery J. Maki, Ph.D.
Principal Optical Engineer
Juniper Networks, Inc.
1194 North Mathilda Avenue
Sunnyvale, CA 94089-1206
Voice +1-408-936-8575
FAX +1-408-936-3025
www.juniper.net
jmaki@xxxxxxxxxxx
————————————————