I am picking up on a thread that is about
10 days old. I have changed the “Subject:” to better identify
the contents for future reference. To restate, the crux of the issue is to
define a power budget that is useful for data center environments. For single-mode
channels in data centers the attenuation of the fiber consumes a minor fraction
of the loss needed to properly support useful channel topologies, with a far
larger fraction of the loss budget devoted to overcoming connection insertion
loss. The bulk of what follows is therefore devoted to determining the
connection loss budget. Since the writing of the thread below, a
few of us have been exchanging thoughts on establishing loss budgets for
potential new single-mode solutions. We have been taking into account the
need to support at least a 500 m reach and sufficient numbers of patch panels
to support double-link channels made of structured cabling. We also
recognize that there is a fundamental difference between the number of
connections within a 2-fiber channel and a parallel fiber channel, as depicted
in the diagram below. In this diagram the foreground channel requires 4
MPOs to support a parallel fiber solution. The background channel
requires 4 MPOs plus 4 LCs because of the use of fan-out cassettes commonly
deployed for pre-terminated cabling. During our private discussions a
contribution by Jonathan King to P802.3ba, king_01_0508, was referenced as a
way to approach the problem of connection loss allocation. In his work
Jonathan modeled multimode MPO connection loss with a Rayleigh distribution
using a Monte Carlo simulation. The table below compares his Monte Carlo results with a normal statistical calculation:

| # conn | JK's Monte Carlo mean | JK's Monte Carlo std. dev. | Normal Stat Calc mean | Normal Stat Calc std. dev. |
|---|---|---|---|---|
| 1 | 0.22 | 0.134 | 0.22 | 0.134 |
| 2 | 0.421 | 0.187 | 0.44 | 0.190 |
| 3 | 0.631 | 0.221 | 0.66 | 0.232 |
| 4 | 0.842 | 0.264 | 0.88 | 0.268 |

The means and standard deviations of the two approaches agree to within 5%. However, it is not clear if
Jonathan’s simulation properly accounts for the fact that connection loss
does not increase linearly with lateral offset. In other words, it
accelerates with increasing offset. So while pure Rayleigh distributions
are good for modeling offsets, properly converting them to compute loss has the
effect of progressively stretching the distribution towards higher loss,
increasing the mean and the standard deviation. I’ll put this
aside, opting to keep it simple for now, knowing that the statistical
calculations I use below help to account for this. If we accept Jonathan’s premise that
1.5 dB is sufficient to support 4 multimode MPO connections with a resulting
failure rate of 1.6% (i.e. see page 11 where loss exceeds 1.5 dB for 1.6% of
the connections), then that can be applied as a benchmark for this work
too. In his simulation this represents the +2.5 standard deviation point in the distribution (mean + 2.5*std.dev. = 1.5 dB). For the normal statistical calculation this represents the +2.3 standard deviation point.
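As a quick cross-check, here is a minimal Python sketch of the normal statistical calculation behind the table above, assuming independent connections so that means add and variances add; it also reproduces the roughly 1.5 dB point at mean + 2.3 standard deviations for four connections.

```python
import math

def combine(n, mean1, std1):
    """Normal statistical combination of n independent, identical connections:
    means add linearly, variances add, so the std. dev. grows as sqrt(n)."""
    return n * mean1, math.sqrt(n) * std1

mean1, std1 = 0.22, 0.134   # single multimode MPO connection (table above)

for n in range(1, 5):
    mean_n, std_n = combine(n, mean1, std1)
    print(f"{n} conn: mean = {mean_n:.2f} dB, std. dev. = {std_n:.3f} dB")

mean4, std4 = combine(4, mean1, std1)
print(f"4 conn, mean + 2.3*sigma = {mean4 + 2.3 * std4:.2f} dB")   # ~1.50 dB
```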
Now all we need to calculate the loss quickly (i.e. without a Monte Carlo simulation) is the mean and standard deviation for single-mode 2-fiber connections and array connections. But determining
these values is not trivial. If you search the web you can find a variety
of performance claims, stated with different terminology like typical, average,
maximum, and sometimes mean and standard deviation. What’s even
more confounding is that these specs are stated under differing conditions,
such as random mate to like product, when mated to a reference-grade plug, or
when “tuned” (or various combinations of these). Tuning is
the process by which a cylindrical ferrule is iteratively clocked into optimal
performance position, a labor-intensive process. The bottom line is that
one can argue about specs from many different angles, each one backed up by
some evidence. I wish to cut through all that and get down to what will work
in practice for cabling that is commonly deployed in data centers. So
here goes. Connections should be modeled using random
mate statistics for un-tuned assemblies because random mate concatenations
represent actual deployment conditions and un-tuned terminations are lower in
cost and commonly used in data center environments. A good starting point
for single-mode LCs and SCs is mean = 0.2 dB, standard deviation = 0.15 dB.
A good starting point for single-mode MPOs is mean = 0.35 dB, standard
deviation = 0.25 dB. Using these values, the mean + 2.3 standard deviation loss for four LCs/SCs plus four MPOs is 3.54 dB. For four MPOs alone it is 2.55 dB. Add fiber attenuation at a rate of at least 0.5 dB/km to account for the use of indoor cabling (as opposed to outside plant's ~0.4 dB/km) and you have a useful solution.
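To make that arithmetic explicit, here is a minimal Python sketch of the calculation just described. The per-connection means and standard deviations, the 2.3 sigma criterion, and the 0.5 dB/km indoor attenuation are the values stated above; everything else is just bookkeeping.

```python
import math

def k_sigma_loss(connections, k=2.3):
    """connections: list of (count, mean_dB, std_dB) tuples, one per connector type.
    Treats connections as independent, so means add and variances add."""
    mean = sum(n * m for n, m, s in connections)
    std = math.sqrt(sum(n * s ** 2 for n, m, s in connections))
    return mean + k * std

lc  = (0.20, 0.15)   # single-mode LC/SC: mean, std. dev. (dB)
mpo = (0.35, 0.25)   # single-mode MPO:   mean, std. dev. (dB)

two_fiber = k_sigma_loss([(4, *lc), (4, *mpo)])   # 4 LCs/SCs + 4 MPOs -> 3.54 dB
parallel  = k_sigma_loss([(4, *mpo)])             # 4 MPOs             -> 2.55 dB

fiber = 0.5 * 0.5    # 500 m of indoor cabling at 0.5 dB/km = 0.25 dB

print(f"2-fiber:  {two_fiber:.2f} dB connections -> ~{two_fiber + fiber:.1f} dB budget")
print(f"parallel: {parallel:.2f} dB connections -> ~{parallel + fiber:.1f} dB budget")

# For comparison against the 6.3 dB LR4 channel budget mentioned below
print(f"{(parallel + fiber) / 6.3:.0%} and {(two_fiber + fiber) / 6.3:.0%} of the LR4 budget")
```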
As an aside, TIA's attenuation specs for single-mode cable at 1310 and 1550 nm are: 1.0 dB/km for indoor plant, 0.5 dB/km for indoor/outdoor, and 0.5 dB/km for outside plant. These are commonly used as the performance limits for structured cabling. Up to this point in the evolution of single-mode
systems defined for 802.3, the use of OSP specs from ITU has made sense
because they were focused on distances that were clearly meant to support
outside plant in the network. But the continued use of ITU specs is not a
wise choice for data center environments (i.e. primarily indoor with some mix
of indoor/outdoor deployment) where TIA and ISO structured cabling standards
prevail. To summarize, the minimum loss budgets
that I think are needed to properly support 500 m channels are these: ~2.8 dB for parallel fiber solutions and ~3.8 dB for 2-fiber solutions. For reference, the LR4 loss budget is 6.3
dB, so the above values represent 44% of the LR4 budget (3.5 dB lower) and 60%
of the LR4 budget (2.5 dB lower) respectively. These are substantial
reductions that should allow lower cost solutions. However, if the
budgets are trimmed further they will lose utility and the IEEE effort will be in vain. This would be the case if we followed 802.3's prior approach
of allocating only 2 dB of connection and splice loss for the channel. As
previously mentioned, this has worked for data centers in the past only because there was ample remaining loss budget in place to overcome the attenuation of long lengths of fiber, typically at least 4 dB to support 10 km. In those past cases, for shorter channels the fiber attenuation budget was routinely traded for extra connection insertion loss budget. We will not
have that luxury within the short-reach budgets we are considering. Also
it should be noted that the 3.8 dB value for 2-fiber solutions is not far from
the usual 4 dB loss budget commonly provided for short-reach central office
solutions. Hopefully this discussion (really an
on-line presentation) will help us converge our concepts for next week’s
meeting.

Regards,
Paul

From: Paul, The way to derive a
link budget for structured data center cabling is as you articulate below. This
works well for optics intended for structured cabling applications, for example
MMF SR and SR4 and the new SMF minimum 500m interface.
Chris

From: Kolesar, Paul [mailto:PKOLESAR@xxxxxxxxxxxxx]
Scott and Chris, The crux of the issue is to define a power
budget that is useful for data center environments. For single-mode
channels in data centers the attenuation of the fiber consumes a minor fraction
of the loss needed to properly support useful channel topologies, with a far
larger fraction of the loss budget devoted to overcoming connection insertion
loss. There can be a significant difference in
what is needed to support 2-fiber channels compared to parallel channels for
the same topology. This is because, for pre-terminated structured
cabling, there are twice as many connections for 2-fiber channels owing to the
deployment of fan-out cassettes that transition from array connections (e.g.
MPOs) on the permanent link cabling to multiple 2-fiber appearances (e.g. LCs
or SCs) on the front of the patch panel. For parallel solutions these
fan-out cassettes are not used because the fan-out function is not
needed. Instead, array equipment cords attach directly to array
pre-terminated permanent link cabling. Thus parallel solutions deploy
only array connections, one at each patch panel appearance, while 2-fiber
solutions employ a 2-fiber connection plus an array connection at each patch
panel appearance. For the long data center channels that
single-mode systems are expected to support, two-link (or greater) topologies
will be very prevalent. A two-link channel will present four patch panel
appearances, one on each end of the two links. For 2-fiber transmission
systems, that means we need enough power to support four 2-fiber connections
plus four array connections, a total of eight connections. For parallel
transmission systems, that means we need enough power to support four array
connections. To show an example, I’ll take a
simple approach of allocating 0.5 dB per connection. This translates into
a need to provide 4 dB of connection loss for 2-fiber systems and 2 dB for
parallel systems. Admittedly this isn’t correct because, among
other things, 2-fiber connections generally produce lower insertion loss than
array connections. But it serves to illustrate that the difference in useful
loss budgets between the two systems can be significant, being associated with the loss allocation for four 2-fiber connections.
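To make the connection counting concrete, here is a minimal sketch of the tally above; the 0.5 dB figure is the same illustrative per-connection allocation, not a specification.

```python
PATCH_PANELS = 4           # two-link channel: one panel appearance at each end of each link

# Connections per patch-panel appearance
CONN_PER_PANEL = {
    "2-fiber":  2,         # one LC/SC on the cassette front plus one MPO on the permanent link
    "parallel": 1,         # one MPO; the array cord mates directly to the permanent link
}

LOSS_PER_CONNECTION = 0.5  # dB, simple illustrative allocation

for system, per_panel in CONN_PER_PANEL.items():
    n = PATCH_PANELS * per_panel
    print(f"{system}: {n} connections -> {n * LOSS_PER_CONNECTION:.1f} dB")
# 2-fiber: 8 connections -> 4.0 dB
# parallel: 4 connections -> 2.0 dB
```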
In prior 802.3 budgets we typically allocated 2 dB for connection loss in single-mode channels. However, in
the past the length of fiber was much longer, typically at least 10km.
This meant that at least 4 dB of power was devoted to overcome fiber
attenuation. For such systems it is possible, and quite typical, to
exchange the fiber loss allocation for additional connection loss when the
channel is short. So 10km budgets supported the above structured cabling
scenario. For P802.3bm we are talking about supporting much shorter
channels, so the fiber loss allocation will be much smaller, making such a trade-off no longer very helpful. But the customer still needs to support channels
with at least two links. The bottom line is that a blanket absolute
connection loss allocation is not the best approach. Rather we should be
requiring support for a minimum number of patch panels, which I propose to be
four for the two-link channel. From this the number of connections will
depend on the system, and the connection loss allocation can be appropriated
accordingly. Hopefully this helps draw the disparate
views together.

Regards,
Paul

From: Scott
For structured links,
as explained to us by Paul Kolesar, the number of connections is known, for
example 2 for single-link channels and 4 for double-link channels (kolesar_02_0911_NG100GOPTX).
This works well when defining MMF link budgets, for example SR or SR4, and also
the proposed new 500m SMF standard, which can be viewed as a reach extension of
the 100m or 300m MMF application space. However, that is not
how 2km or 10km SMF interfaces are used. These have a variable number of
connections and often include loss elements. It is reasonable as a methodology
to start out with a nominal number of connections and fiber loss to determine a
starting point link budget. However, it is unreasonable to stop there and
ignore widespread deployment. So using the methodology outlined below to
determine a 2km loss budget gives the wrong answer.
From: Chris, I suggest that we
specify our single-mode links in a similar way that they have been defined in
IEEE 802.3-2012. For 10GBASE-LR, the
channel insertion loss is defined as follows (notes below Table 52-14):

c. The channel insertion loss is calculated using the maximum distance specified in Table 52-11 and cabled optical fiber attenuation of 0.4 dB/km at 1310 nm plus an allocation for connection and splice loss given in 52.14.2.1.

52.14.2.1 states:

The maximum link distances for single-mode fiber are calculated based on an allocation of 2 dB total connection and splice loss at 1310 nm for 10GBASE-L, and 2 dB for 30 km total connection and splice loss at 1550 nm for 10GBASE-E.

Even for 100GBASE-LR4 and 40GBASE-LR4, the connection insertion loss is specified in 88.11.2.1, Connection insertion loss:

The maximum link distance is based on an allocation of 2 dB total connection and splice loss.

So the usual standard for single-mode Ethernet links is 2 dB of connection insertion loss, and we should continue to specify single-mode links with this connection loss unless there is a good reason to change. The exception to
this way of defining connection insertion loss is 40GBASE-FR. Since
40GBASE-FR was designed to use the same module for VSR2000-3R2 applications as
well, the connection loss was increased to 3.0dB and the channel insertion loss
was defined as 4dB. This was a reasonable variation from the normal
specification methodology to interoperate with other telecom equipment. Since
the 100GBASE-nR4 solution does not have this requirement for compatibility with
VSR, the 802.3bm task force does not need to carry this costly requirement of a
3.0 dB connection loss and 4.0dB loss budget into 100GBASE-nR4. The standard even
calls out how 40GBASE-FR is defined differently from other Ethernet standards:

89.6.4 Comparison of power budget methodology
This clause uses the budgeting methodology that is used for application VSR2000-3R2 in ITU-T G.693 [Bx1], which is different from the methodology used in other clauses of this standard (e.g., Clause 38, Clause 52, Clause 86, Clause 87, Clause 88).

For 802.3bm, we should define the channel insertion loss as distance * loss/distance +
connection loss. For a 500m solution,
the channel insertion loss would likely be: 0.5km * 0.4dB/km + 2.0dB of
connection loss = 2.2dB. For a 2km solution,
the channel insertion loss would likely be: 2 km * 0.4dB/km + 2.0dB of
connection loss = 2.8 dB.

Kind regards,
Scott
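As a quick check of the arithmetic above, here is a minimal sketch of that channel insertion loss formula, using the 0.4 dB/km cabled fiber attenuation and 2 dB connection and splice allocation from the quoted clauses.

```python
def channel_insertion_loss(distance_km, atten_db_per_km=0.4, connection_loss_db=2.0):
    """Channel insertion loss = distance * attenuation + connection/splice allocation."""
    return distance_km * atten_db_per_km + connection_loss_db

print(f"500 m: {channel_insertion_loss(0.5):.1f} dB")   # 2.2 dB
print(f"2 km:  {channel_insertion_loss(2.0):.1f} dB")   # 2.8 dB
```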
From: I look forward to hearing the presentations on today's SMF Ad Hoc call. However, having previewed
the presentations, I continue to be disappointed that we are still discussing
reach as if that was the only important application parameter. For example,
shen_01_0113_smf only discusses 2km as if that was sufficient to describe the
application. There is no mention of loss budget. Further, by only focusing on
reach, the presentation perpetuates the myth that somehow the 10km reach is a
niche application, and that we have just discovered 2km as an overlooked sweet
spot. It is well
understood that there are few datacenter reaches that are 10km. What is
important about widely used 10km interfaces like 10GE-LR, 40GE-LR4, and
100GE-LR4 is their greater than 6dB loss budget. In most applications, the
reach is much less than 10km, but the >6dB loss budget is fully utilized. 2km reach has been
an important and widespread application, with both ITU-T and IEEE
standardizing on a minimum 4dB loss budget. This means that over the last
decade, end users have become accustomed to interfaces labeled 2km supporting a
4dB loss budget, and designed their central offices and datacenters around
this. It is not clear why we continue to reinvent the wheel and propose
reducing the established 4dB loss budget by fractions of a dB, for example as
in vlasov_01_0113_smf to 3.5dB. If we were to deploy such an interface, it would cause problems in existing applications that try to use the new interface and
find the supported loss budget less than expected. The new interface will
require datacenter link engineering, as opposed to the plug-and-play paradigm
that has made Ethernet so successful. When discussing
datacenter interfaces, it will be very helpful to always state both the reach
and loss budget, for example 500m/2dB, 2km/4dB, or 10km/6dB, or something else.
This way, there will be a clear understanding of what application is being
addressed.

Thank you,
Chris