Scott and Chris,

The crux of the issue is to define a power budget that is useful for data center environments. For single-mode channels in data centers, the attenuation of the fiber consumes a minor fraction of the loss needed to properly support useful channel topologies, with a far larger fraction of the loss budget devoted to overcoming connection insertion loss.

There can be a significant difference in what is needed to support 2-fiber channels compared to parallel channels for the same topology. This is because, for pre-terminated structured cabling, there are twice as many connections for 2-fiber channels, owing to the deployment of fan-out cassettes that transition from array connections (e.g. MPOs) on the permanent link cabling to multiple 2-fiber appearances (e.g. LCs or SCs) on the front of the patch panel. For parallel solutions these fan-out cassettes are not used because the fan-out function is not needed. Instead, array equipment cords attach directly to array pre-terminated permanent link cabling. Thus parallel solutions deploy only array connections, one at each patch panel appearance, while 2-fiber solutions employ a 2-fiber connection plus an array connection at each patch panel appearance.

For the long data center channels that single-mode systems are expected to support, two-link (or greater) topologies will be very prevalent. A two-link channel presents four patch panel appearances, one at each end of the two links. For 2-fiber transmission systems, that means we need enough power to support four 2-fiber connections plus four array connections, a total of eight connections. For parallel transmission systems, that means we need enough power to support four array connections.

To show an example, I'll take a simple approach of allocating 0.5 dB per connection. This translates into a need to provide 4 dB of connection loss for 2-fiber systems and 2 dB for parallel systems. Admittedly this isn't exact because, among other things, 2-fiber connections generally produce lower insertion loss than array connections. But it serves to illustrate that the difference in useful loss budgets between the two systems can be significant, being the loss allocation for four 2-fiber connections.
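To make that arithmetic concrete, here is a minimal sketch in Python (purely illustrative; the 0.5 dB per-connection figure and the four patch panel appearances are the simplifying assumptions stated above):

    # Connection-loss arithmetic for a two-link channel under the
    # simplifying assumption of 0.5 dB per connection (from above).
    LOSS_PER_CONNECTION_DB = 0.5  # simplifying assumption, not a measured value
    PATCH_PANEL_APPEARANCES = 4   # a two-link channel has four appearances

    def connection_loss_db(connections_per_appearance: int) -> float:
        """Total connection loss allocation for the channel."""
        return (PATCH_PANEL_APPEARANCES * connections_per_appearance
                * LOSS_PER_CONNECTION_DB)

    # 2-fiber system: a 2-fiber connection plus an array connection at
    # each appearance, i.e. eight connections in total.
    print(connection_loss_db(2))  # 4.0 dB
    # Parallel system: one array connection per appearance, four total.
    print(connection_loss_db(1))  # 2.0 dB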
In prior 802.3 budgets we typically allocated 2 dB for connection loss in single-mode channels. However, in the past the length of fiber was much longer, typically at least 10km. At 0.4 dB/km, this meant that at least 4 dB of power was devoted to overcoming fiber attenuation. For such systems it is possible, and quite typical, to exchange the fiber loss allocation for additional connection loss when the channel is short. So 10km budgets supported the above structured cabling scenario.
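As a rough sketch of that trade-off (assuming, per the figures used elsewhere in this thread, 0.4 dB/km cabled fiber attenuation and the 2 dB nominal connection allocation; illustrative only, not normative):

    # A budget sized for 10 km of fiber plus 2 dB of connections leaves
    # unused fiber margin on short channels that can be re-purposed as
    # additional connection loss.
    ATTENUATION_DB_PER_KM = 0.4  # cabled SMF attenuation at 1310 nm
    NOMINAL_CONNECTION_DB = 2.0  # typical prior 802.3 allocation
    SIZED_REACH_KM = 10.0

    TOTAL_BUDGET_DB = (SIZED_REACH_KM * ATTENUATION_DB_PER_KM
                       + NOMINAL_CONNECTION_DB)  # 6.0 dB

    def connection_headroom_db(channel_km: float) -> float:
        """Connection loss supportable after paying actual fiber attenuation."""
        return TOTAL_BUDGET_DB - channel_km * ATTENUATION_DB_PER_KM

    print(connection_headroom_db(10.0))  # 2.0 dB, the nominal allocation
    print(connection_headroom_db(0.5))   # 5.8 dB, ample for structured cabling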
For P802.3bm we are talking about supporting much shorter channels, so the fiber loss allocation will be much smaller, making such a trade-off much less helpful. But the customer still needs to support channels with at least two links.

The bottom line is that a blanket absolute connection loss allocation is not the best approach. Rather, we should require support for a minimum number of patch panels, which I propose to be four for the two-link channel. From this, the number of connections will depend on the system, and the connection loss allocation can be apportioned accordingly.

Hopefully this helps draw the disparate views together.

Regards,
Paul

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]

Scott,
For structured links, as explained to us by Paul Kolesar, the number of connections is known, for example 2 for single-link channels and 4 for double-link channels (kolesar_02_0911_NG100GOPTX). This works well when defining MMF link budgets, for example SR or SR4, and also the proposed new 500m SMF standard, which can be viewed as a reach extension of the 100m or 300m MMF application space.

However, that is not how 2km or 10km SMF interfaces are used. These have a variable number of connections and often include loss elements. It is reasonable as a methodology to start out with a nominal number of connections and fiber loss to determine a starting-point link budget. However, it is unreasonable to stop there and ignore widespread deployment. So using the methodology outlined below to determine a 2km loss budget gives the wrong answer.
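A small sketch of why stopping at the nominal assumptions can understate deployed loss (all per-element figures here are assumptions for illustration, not values from any standard):

    # Budget derived from a nominal connection count versus a deployed
    # channel with extra connections and an added loss element.
    ATTEN_DB_PER_KM = 0.4
    CONN_DB = 0.5  # assumed loss per connection

    def channel_loss_db(length_km: float, n_connections: int,
                        other_elements_db: float = 0.0) -> float:
        return (length_km * ATTEN_DB_PER_KM
                + n_connections * CONN_DB + other_elements_db)

    nominal = channel_loss_db(2.0, n_connections=4)  # 2.8 dB
    deployed = channel_loss_db(2.0, n_connections=6,
                               other_elements_db=0.7)  # 4.5 dB
    # A budget sized to the nominal case falls short of the deployed case.
    print(nominal, deployed)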
From: Scott Kipp [mailto:skipp@xxxxxxxxxxx]

Chris,

I suggest that we specify our single-mode links in a way similar to how they have been defined in IEEE 802.3-2012. For 10GBASE-LR, the channel insertion loss is defined in the notes below Table 52-14:

    c. The channel insertion loss is calculated using the maximum distance
    specified in Table 52-11 and cabled optical fiber attenuation of
    0.4 dB/km at 1310 nm plus an allocation for connection and splice loss
    given in 52.14.2.1.

52.14.2.1 states:

    The maximum link distances for single-mode fiber are calculated based
    on an allocation of 2 dB total connection and splice loss at 1310 nm
    for 10GBASE-L, and 2 dB (for 30 km) total connection and splice loss
    at 1550 nm for 10GBASE-E.

Even for 100GBASE-LR4 and 40GBASE-LR4, the connection insertion loss is:

    88.11.2.1 Connection insertion loss
    The maximum link distance is based on an allocation of 2 dB total
    connection and splice loss.

So the usual standard for single-mode Ethernet links is 2 dB of connection insertion loss, and we should continue to specify single-mode links with this connection loss unless there is a good reason to change.

The exception to this way of defining connection insertion loss is 40GBASE-FR. Since 40GBASE-FR was designed to use the same module for VSR2000-3R2 applications as well, the connection loss was increased to 3.0 dB and the channel insertion loss was defined as 4 dB. This was a reasonable variation from the normal specification methodology to interoperate with other telecom equipment. Since the 100GBASE-nR4 solution does not have this requirement for compatibility with VSR, the 802.3bm task force does not need to carry this costly requirement of a 3.0 dB connection loss and a 4.0 dB loss budget into 100GBASE-nR4.

The standard even calls out how 40GBASE-FR is defined differently from other Ethernet standards:

    89.6.4 Comparison of power budget methodology
    This clause uses the budgeting methodology that is used for application
    VSR2000-3R2 in ITU-T G.693 [Bx1], which is different from the
    methodology used in other clauses of this standard (e.g., Clause 38,
    Clause 52, Clause 86, Clause 87, Clause 88).
For 802.3bm, we should define the channel insertion loss as distance * (loss per km) + connection loss. For a 500m solution, the channel insertion loss would likely be: 0.5 km * 0.4 dB/km + 2.0 dB of connection loss = 2.2 dB. For a 2km solution, the channel insertion loss would likely be: 2 km * 0.4 dB/km + 2.0 dB of connection loss = 2.8 dB.
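In code form that calculation is simply (a minimal sketch; the 0.4 dB/km attenuation and 2.0 dB connection allocation are the values used in the examples above):

    # Channel insertion loss per the methodology above:
    # distance * attenuation-per-km + connection loss allocation.
    def channel_insertion_loss_db(distance_km: float,
                                  atten_db_per_km: float = 0.4,
                                  connection_loss_db: float = 2.0) -> float:
        return distance_km * atten_db_per_km + connection_loss_db

    print(channel_insertion_loss_db(0.5))  # 2.2 dB for a 500 m solution
    print(channel_insertion_loss_db(2.0))  # 2.8 dB for a 2 km solution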
Kind regards,
Scott

From: Chris Cole [mailto:chris.cole@xxxxxxxxxxx]

I look forward to hearing the presentations on today's SMF Ad Hoc call. However, having previewed the presentations, I continue to be disappointed that we are still discussing reach as if it were the only important application parameter. For example, shen_01_0113_smf only discusses 2km, as if that were sufficient to describe the application. There is no mention of loss budget. Further, by focusing only on reach, the presentation perpetuates the myth that the 10km reach is somehow a niche application, and that we have just discovered 2km as an overlooked sweet spot.

It is well understood that few datacenter reaches are 10km. What is important about widely used 10km interfaces like 10GE-LR, 40GE-LR4, and 100GE-LR4 is their greater-than-6dB loss budget. In most applications the reach is much less than 10km, but the >6dB loss budget is fully utilized.

2km reach has been an important and widespread application, with both ITU-T and IEEE standardizing on a minimum 4dB loss budget. This means that over the last decade, end users have become accustomed to interfaces labeled 2km supporting a 4dB loss budget, and have designed their central offices and datacenters around this. It is not clear why we continue to reinvent the wheel and propose reducing the established 4dB loss budget by fractions of a dB, for example to 3.5dB as in vlasov_01_0113_smf. If we were to deploy such an interface, it would cause problems in existing applications that try to use the new interface and find the supported loss budget less than expected. The new interface would require datacenter link engineering, as opposed to the plug-and-play paradigm that has made Ethernet so successful.

When discussing datacenter interfaces, it will be very helpful to always state both the reach and the loss budget, for example 500m/2dB, 2km/4dB, 10km/6dB, or something else. This way, there will be a clear understanding of what application is being addressed.
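One way to make that pairing explicit (a hypothetical sketch; the labels and figures are just the examples above, not definitions from any standard):

    # Reach/loss-budget pairs from the examples above (illustrative only).
    INTERFACE_CLASSES = {
        "500m/2dB": (0.5, 2.0),  # (reach_km, budget_db)
        "2km/4dB":  (2.0, 4.0),
        "10km/6dB": (10.0, 6.0),
    }

    def supports(label: str, channel_km: float, channel_loss_db: float) -> bool:
        """Check a deployed channel against a stated reach/budget pair."""
        reach_km, budget_db = INTERFACE_CLASSES[label]
        return channel_km <= reach_km and channel_loss_db <= budget_db

    # A 1.5 km channel with 3.6 dB of loss fits the 2km/4dB class...
    print(supports("2km/4dB", 1.5, 3.6))  # True
    # ...but would not fit if the budget were trimmed to 3.5 dB.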
Thank you
Chris

From:

Hi,

As previously announced, there is an SMF Ad Hoc meeting starting at 8:00 am Pacific today, Tuesday 8 January.

I have currently received two requests for presentations, so the draft agenda is:

* IEEE patent policy reminder: http://www.ieee802.org/3/patent.html
* Approval of the draft minutes from the 18 December call
* Presentations:
  - 100G-BASE-WDM4 optical budget constraints (Yurii Vlasov, IBM)
  - System vendor perspective to NG100GE SMF interface (Tek Ming Shen, Huawei)
* Discussion

Both presentations are now available on the SMF Ad Hoc web page.

Peter Anslow from Ciena has invited you to join a meeting on the Web, using WebEx. Please join the meeting 5-10 minutes early so we may begin on time.

+44-203-4333547
4438636577

Regards,
Pete Anslow | Senior Standards Advisor