Scott, Jack, and Jim seem to be stating
different shades of the same issues. I will go out on a limb and
attempt to distill a set of commonalities that are at the core in an effort
to drive some consensus, albeit at the risk of poking the hornets' nest
yet again.
1. Jitter. The CDR in XFP modules
assists in meeting jitter budgets. SFP+, while more cost effective
as an O/E converter, will pass the jitter from its electrical inputs to
the optical output. Whether this jitter is within the budget for
10GBASE-S is a matter of transmit signal and power supply quality at the
electrical input and is implementation dependent. For example, one
could choose to place a CDR next to the SFP+. This should still result
in a lower cost implementation, as the CDR passes through fewer hands in
the supply chain and also because it may allow sharing of more highly integrated
multi-CDR ICs. Another jitter reduction technique is simply to reduce
the fiber distance, but this could mean missing important parts of the
market and must be done with that awareness in the forefront of the trade-off
debates. For example, FC applications are most often serving clustered
SANs, but Ethernet serves a more diverse set of applications that are not
necessarily clustered, even in data centers. Also, the jitter improvement
per unit of length reduction diminishes to near zero beyond some point.
So this can be a costly trade-off if not done sensibly.
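To make the budgeting arithmetic concrete, below is a minimal sketch of a
first-order total-jitter tally. Every number in it is a hypothetical
placeholder (the actual 10GBASE-S and SFP+ budgets allocate jitter
differently and at defined test points); it only illustrates why the
quality of the electrical input, not just the O/E converter, decides
whether a CDR-less link closes, and why a retiming CDR helps by resetting
the accumulated input terms.

    # Rough jitter-budget sketch (all numbers are hypothetical placeholders,
    # not values from 802.3ae or SFF-8431).  Deterministic jitter (DJ) terms
    # add linearly; random jitter (RJ) terms combine as a root-sum-of-squares,
    # scaled by ~14 sigma for a 1e-12 BER peak-to-peak estimate.
    from math import sqrt

    UI_PS = 1e6 / 10312.5  # one unit interval at 10.3125 GBd, ~96.97 ps

    dj_ui = {"host ASIC": 0.10, "board trace": 0.05, "connector": 0.03}
    rj_ui_rms = {"host ASIC": 0.012, "power supply noise": 0.008}

    dj_total = sum(dj_ui.values())
    rj_pkpk = 14 * sqrt(sum(r * r for r in rj_ui_rms.values()))
    tj_total = dj_total + rj_pkpk

    budget_ui = 0.45  # hypothetical total-jitter allowance at the optical output
    print(f"DJ={dj_total:.3f} UI  RJ(pk-pk)={rj_pkpk:.3f} UI  TJ={tj_total:.3f} UI")
    print(f"Budget of {budget_ui} UI ({budget_ui * UI_PS:.0f} ps):",
          "closes" if tj_total <= budget_ui else "does NOT close")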
2. Other parameters. The
specs on various applications affect different suppliers differently. A
distance reduction below the 300 m capability of 10GBASE-S will provide
an opportunity for relaxation. Which parameters are relaxed and by how much
will be determined later.
3. 40G Cost. Cost comparisons
are complicated by the implementation variables. There is a cost
differential between 4xSFP+ and QSFP that goes beyond the transceiver
cost and includes issues of board real estate, transceiver availability,
crosstalk, etc. Here we have a choice to either use a CDR or not,
depending on jitter budget requirements and board layout considerations.
In comparison, 4xXFP has an inherent CDR, so it is guaranteed to
bear that cost. In comparing these three approaches, there is not
a single universal answer. But it is a certainty that a 4xXFP approach
is more costly than CDR-less 4xSFP+ or QSFP approaches, provided the link
and jitter budgets can close in the latter case. So in cost comparisons,
current XFP implementations of 4xLAG 10GBASE-S will be disadvantaged when
compared to future 40G layouts based on either 4xSFP+ or QSFP. Cost
comparisons between future SFP+ implementations of 4xLAG 10GBASE-S and
40G layouts based on 4xSFP+ or QSFP may be a wash, but it is almost a certainty
that the 4xLAG will be required to meet the 10GBASE-S specs that support
300 m, while the 40G will have a more relaxed set of requirements that
should ease implementation and lower costs.
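As a back-of-the-envelope illustration of the comparison in item 3, the
sketch below tallies the three (or four) approaches. All cost figures are
invented relative units, not quotes; only the structure of the comparison
(module count, built-in versus external CDRs, board overhead) is meant to
be meaningful.

    # Back-of-the-envelope relative-cost tally for one 40G port built
    # several ways.  All numbers are invented, unitless placeholders.

    def port_cost(optics, external_cdrs=0, cdr_cost=0.3, board_overhead=0.0):
        """Optics + any external CDRs + board-level overhead
        (real estate, crosstalk mitigation, routing)."""
        return optics + external_cdrs * cdr_cost + board_overhead

    approaches = {
        "4 x XFP  (CDR built in)":    port_cost(optics=4 * 1.6, board_overhead=0.6),
        "4 x SFP+ (no CDR)":          port_cost(optics=4 * 1.0, board_overhead=0.4),
        "4 x SFP+ (4 external CDRs)": port_cost(optics=4 * 1.0, external_cdrs=4,
                                                board_overhead=0.4),
        "1 x QSFP (no CDR)":          port_cost(optics=3.2, board_overhead=0.2),
    }

    for name, cost in sorted(approaches.items(), key=lambda kv: kv[1]):
        print(f"{name:28s} {cost:4.1f} units")

Under these placeholder assumptions the 4xXFP port comes out most costly
and the CDR-less QSFP and 4xSFP+ ports least costly, consistent with the
ranking argued above, provided the budgets close without the CDRs.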
Regards,
Paul Kolesar
CommScope Inc.
Enterprise® Solutions
1300 East Lookout Drive
Richardson, TX 75082
Phone: 972.792.3155
Fax: 972.792.3111
eMail: pkolesar@commscope.com
-----Original Message-----
From: Jim Tatum <Jim.Tatum@FINISAR.COM>
Sent: Tuesday, May 15, 2007 10:22 AM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: Re: [HSSG] 10 x 4 = 40
As the presenter of some of the information cited below, I feel the need
to comment on the statements. First, the presentation made to the Fibre
Channel committee about the difficulties of making 10G lasers is right
on, and it will continue to affect the relative cost of the components
when compared to lower speed devices. The laser requirements have driven
more testing, and the launch power windows are so tight that they result
in lower yields and longer transceiver setup times than are achievable
at lower speeds. This is being amplified by the need for even
wider ranges of temperature operation (-40 to 85C transceiver case).
The decision to adjust reach in Fibre Channel came from the system
vendors when they were given the choice of component economics more in line
with 4G than with 10G. It was clearly indicated that the market need for 300m
distances was minimal, and that the vast majority of FC links are well
below 50m. So the decision to optimize on that solution to enable the
lowest cost optical components seems to be correct. One final point on
this is that the modeling for 8G fiber channel never assumed CDRs in the
module, and the link distance was significantly reduced to close the
jitter budget. I am keen to see how this plays out in the market for the
SFP+ adoption of 10GBASE-x, as the jitter budget is broken at TP3
without a CDR.
At 100G, the question really is market need. Can components be made that
will meet the requirements for 300m links? Of course. Will they be the
most economical choice? Most likely not. Is 300m the correct answer for
the market? I don't know, but that is the fundamental question, quickly
followed by at what cost increase.
Jim Tatum
Finisar Advanced Optical Components
-----Original Message-----
From: Jack Jewell [mailto:Jack.Jewell@PICOLIGHT.COM]
Sent: Tuesday, May 15, 2007 9:31 AM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: Re: [HSSG] 10 x 4 = 40
Scott,
I disagree with some points made. There are multiple 10G VCSEL
suppliers, and at least one of them meets the 10GBASE-SR spec without
undue difficulty and with little to be gained by the proposed spec
relaxation. [Won't mention any names, but it's a company with which I'm
quite familiar!] The CDR in the XFP modules has nothing to do with
over-spec'ing SR, and it resides in LR modules as well. Furthermore,
SFP+ modules operate under the 10GBASE-SR specifications and they will
be very cost-effective. CDR and XAUI chip costs, together with the
dot-com crash, presented a far larger barrier to 10G adoption than SR specs.
Let's be careful in drawing analogies with the 10G experience.
At 100G, it's appropriate to have a shorter reach objective than the 10G
one. The main reason is that the near-certain implementation is 10-12
channel parallel optical using fiber ribbon. Issues such as crosstalk,
power variation, etc. motivate a per-channel spec relaxation. The
"right" distance may be 100m, 220m, or something else; it will be
determined later. Implementing 100G with SFP+ isn't attractive, but
SFP+ could well be appropriate for 40G, especially if cost is the main
concern. The SFP is the most cost-effective OE transceiver vehicle on
the planet (in cost/Gb/s), and that is likely to be the case for quite
some time. While not as high-density as parallel optical MSA'd modules
such as SNAP12 and QSFP, the SFP (including SFP+) has advantages besides
cost, e.g. straightforward implementation of SMF products for 10km reach
and more. At 100G, the need for density is extreme, hence the need for
a parallel-optical module. At 40G, the need for bandwidth density is
not as high, the density advantage of parallel optics is reduced, and
4x10G implementations use only 8 of the 12 channels available. At 40G,
the viability of a (4x)SFP+ PMD implementation is real, and its
advantages may be compelling. Moreover, the SFP+ can cost-effectively
utilize the -SR and -LR specs defined in 802.3ae.
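For a rough sense of the density arithmetic, the sketch below compares
aggregate faceplate bandwidth per unit width. The widths and lane counts
are ballpark assumptions for illustration only, not MSA figures, and the
SNAP12 entry assumes a separate TX and RX module pair.

    # Rough faceplate-density comparison (Gb/s per mm of panel width).
    # Widths and lane counts are ballpark assumptions, not values taken
    # from the SFP+, QSFP or SNAP12 MSAs.

    modules = {
        #  name          (lanes, Gb/s per lane, assumed width in mm)
        "SFP+":          (1,  10, 14),
        "QSFP":          (4,  10, 19),
        "SNAP12 TX+RX":  (12, 10, 2 * 23),  # separate transmit and receive modules
    }

    for name, (lanes, rate, width_mm) in modules.items():
        agg = lanes * rate
        print(f"{name:13s} {agg:4d} Gb/s over ~{width_mm} mm -> ~{agg / width_mm:.1f} Gb/s/mm")

    # At 40G a 12-wide parallel part carries only a third of its lanes, so
    # its density advantage over 4 x SFP+ shrinks, as noted above.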
Jack
-----Original Message-----
From: Scott Kipp [mailto:skipp@BROCADE.COM]
Sent: Monday, May 14, 2007 4:04 PM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: Re: [HSSG] 10 x 4 = 40
Hugh,
I have reviewed your proposal regarding 40G = 4 X 10G and think there
may be a disconnect between 40GBase-SR and 4 X 10GBase-SR. The HSSG is
considering defining 100GBase-SR (and thus 40GBase-SR) to span only 100
meters instead of the 300 meters defined in 10GBase-SR. Each lane of
100G and 40G does not need to be defined with the difficult 300 meter
requirements of the 10GBase-SR standard. If each 10G lane of the 100G
or 40G PMDs were only required to span 100 meters, the cost per channel
of the HSSG solution would be lower than that of 10GBase-SR.
The supported distance of each 10G lane is important because it has
large cost implications. I would like to quote a Finisar proposal
regarding the manufacturability of 10GBase-SR transceivers that can be
found here: http://www.t11.org/ftp/t11/pub/fc/pi-4/06-036v0.pdf
On slide 5, the Finisar presentation states:
Practical 10G VCSELs to date have been small aperture MM devices.
- Yield VERY poorly even to largest allowed spectral width
- This is largest single cost driver for 10GBASE-SR
Then on slide 6, Finisar and Advanced Optical Components, the largest
VCSEL manufacturer in the world, states:
Practical TX design almost impossible. Requires high ER and expensive
yielding and testing
Finisar showed the difficulties of manufacturing 10GBase-SR transmitters
(5 years after standardization) because of the 300 meter requirement
over OM3 fiber. This presentation was pivotal in defining 8 Gigabit
Fibre Channel (8GFC) in a low cost manner that only spans 150 meters at
8.5 Gbps. If the 300 meter requirement is placed on 100G Ethernet and
thus on 40G Ethernet, then the cost of 40G and 100G will be tremendous
and adoption will be low.
Therefore, the 4 X 10GBase-SR = 40G argument is not necessarily true and
will depend on how the HSSG defines each 10G lane.
If the HSSG defines 10G lanes to span only 100 meters, then the PMDs may not
require clock and data recovery (CDR) chips in the transceiver like
those used in the XFP transceiver. The QSFP transceiver that supports
four lanes of 10G does not use a CDR and is expected to be significantly
lower cost than 4 XFPs. In other words, 4 XFPs do not equate to 1
QSFP.
The cost of a QSFP may equate to 4 SFP+s. Likewise, a low cost QSFP may
be made that is compliant to 40GBase-SR, but not compliant to 4 X
10GBase-SR.
One of the goals of 40G is not to have a 'perception of being "low-end"'
but to actually be low end in cost. If history repeats itself, cost
will be a determining factor in adoption of 100G and 40G.
An interesting example of high speed links can be seen in
telecommunications. OC-768 systems that run at 40G serial are state of
the art right now and should be for the foreseeable future. OC-192 is
seeing very low volumes and this could be a problem with 100GE if there
is an "early adopter premium". OC-768 is not economical now at 6x or 7x
the costs of 10G according to an article titled "What's holding up 40G?"
(http://lw.pennnet.com/display_article/279721/13/ARTCL/none/none/What%E2%80%99s-holding-up-40G.)
An interesting aspect of this article states:
"What's attractive about 10-GigE isn't necessarily the technology.
It's
that once a chip is manufactured, it gets shipped in volumes of
millions, not in volumes of a few thousand."
With fewer than a million ports of 10GE ever shipped worldwide, this
statement is false, and 10GBase-SR's lofty distance requirements are one
of the reasons why. The HSSG should look into examples of high data
rate connections (especially 10G) to get 100GE and 40GE right.
Regards,
Scott Kipp
QSFP Chair and Editor
Office of the CTO, Brocade
-----Original Message-----
From: Hugh Barrass [mailto:hbarrass@CISCO.COM]
Sent: Friday, April 13, 2007 3:53 PM
To: STDS-802-3-HSSG@listserv.ieee.org
Subject: [HSSG] 10 x 4 = 40
All,
There's been some discussion (!) around the existence of an MSA for a
40G module format. The module is actually based on 4 x 10G channels,
which leaves system implementors two choices:
1. Simply define the MSA to use LAG. The MAC & PCS are already defined
for 10GBASE-R, the PMD definitions are already available for SR, LR &
LRM. This could be incorporated into systems being developed
immediately, exploiting existing MAC and fabric silicon. From a
standards perspective, I would classify this as another 10G format (no
fundamental difference to X2, XFP or SFP+).
Additionally, a breakout device could allow compatibility with discrete
10G systems (using SFP+, XFP, X2 etc.) and also would allow the use of a
40G socket to connect to multiple 10G destinations (redundant
connections, multipath routing etc.); a sketch of how LAG distributes
traffic across the four members follows option 2 below.
2. Try to push through a new definition in 802.3 for 40G MAC and PCS.
This would almost certainly be tied to the same schedule as the 100G MAC
& PCS definition, so it might be available to start development in 3-4
years. It would require new MAC/fabric silicon, which would have to start
development after the standard is in its last stages of development.
A socket using the single 40G approach could not be connected to a
breakout for legacy compatibility.
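To illustrate how option #1 behaves, here is a minimal sketch of
hash-based distribution across four 10G aggregation members. The hashed
fields and the hash itself are generic examples of my own, not anything
mandated by 802.3ad; the point is simply that frames of one conversation
stay on one member, so the aggregate is 40G but any single conversation
is still capped at 10G, which is the real behavioral difference from a
native 40G MAC.

    # Minimal sketch of hash-based LAG distribution over four 10G members
    # (option #1: 4 x 10GBASE-R under link aggregation).  The hashed fields
    # and hash function are generic illustrations, not mandated by 802.3ad.
    import zlib

    MEMBERS = ["10g_member0", "10g_member1", "10g_member2", "10g_member3"]

    def select_member(src_mac: str, dst_mac: str) -> str:
        """All frames of one conversation hash to one member, preserving
        frame ordering -- and capping any single conversation at 10G."""
        key = f"{src_mac}->{dst_mac}".encode()
        return MEMBERS[zlib.crc32(key) % len(MEMBERS)]

    conversations = [
        ("00:11:22:33:44:01", "00:aa:bb:cc:dd:01"),
        ("00:11:22:33:44:02", "00:aa:bb:cc:dd:02"),
        ("00:11:22:33:44:03", "00:aa:bb:cc:dd:03"),
    ]

    for src, dst in conversations:
        print(f"{src} -> {dst}  carried on  {select_member(src, dst)}")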
Notes:
If #1 happens (almost impossible to "prevent" it) then confusion will
ensue as option #1 & option #2 products mix in the market. It will be
very difficult to distinguish or differentiate between the two. I don't
know how those who commit to the "proper" approach of #2 will be able to
recoup their extra development costs compared to those who get a 3-4
year head start by implementing #1. Additionally, option #2 based
products will hit the market at the same time as 100G products become
available. They will start with the perception of being "low-end" and
will not be able to command the "early adopter premium" that is often
relied on to recover leading edge development costs.
Frankly, looking at this, I would not recommend to my employer that we
should spend time (and money) to develop the silicon to support option
#2 vs option #1. Of course, others may feel free to spend their
development differently. Additionally, if I were a component vendor, I
know which option I would pursue.