Dan,

Sorry if this is a repeat to you, but you said you were not getting emails at your APM address, to which it was first sent.

Paul

---

So far I am not getting hung up on the discrepancies in the DC size distributions. Even if we knew them precisely, we would still not be able to tell the cabling topologies used within them. That is why I prefer the approach of looking at the cable lengths themselves. A decade ago this would not have been possible, but the advent and popularity of pre-terminated cabling now allows us that insight. The cabling data and analysis I presented represent a good view of how our customer base builds data centers. Those customers can generally be classified as coming from the “Fortune 1000”, so they are also very relevant as consumers of our next-generation output.

The survey and cabling distribution data
presented to 802.3 over the years do indicate that data centers are evolving. Denser servers have been accompanied by shorter cabling spans to their access switches and, with that, an increase in switch-to-switch cable lengths for the channels that carry the aggregated traffic. Whether 10G server access channels remain dominated by very short twinax copper or migrate to longer-reach UTP as 10GBASE-T becomes lower in power consumption remains to be seen, but I think it is clear that topologies that place the access switch close to the servers, such as ToR, MoR, and EoR, will make up a higher percentage than in the past, basically because there are more servers in close proximity to each other. So far these topologies have driven an overall increase in switch-to-switch reach requirements.

One of the other trends is the advent of
the cloud. This is somewhat synonymous with data center hosting, where the internet services for many customers are housed in larger and larger data centers. This is supported by the observation that mega data centers only came into existence in the past 5 years or so. These are the primary locations where traffic demand is so high that 100G services are essential.

You also bring up the
“containerized” data center, the growing trend where pods are built in trailer-sized containers and added as demand necessitates. If your assertion that smaller volume means more energy efficiency holds, these pods may have that advantage too. Yet it is not clear to me how they impact reach requirements. For example, these pods are delivered by truck, so for the equivalent of a large data center the container “parking lot” would require truck-sized access aisles that must coexist with cabling pathways. Such ingress and egress requirements may increase reach requirements even though the containers are relatively small compared to a big building.

All this leaves me with the view that
pre-term cabling data are likely the best means of assessing channel length distributions and their trends. Please realize that the data I presented included data from 2010, a year when the shorter reach limitations of 40GE and 100GE would have been known. Despite the fact that the reach shrank from 300 m for 10GE to 100–150 m for 40/100GE, the channel length distribution got longer compared to the 2005 data. Because of this I can’t simply agree that reducing the reach again will drive acceptable data center design changes, because the data are telling me that channel lengths longer than 100 m are common. So while I agree that a 100 m reach satisfies the large majority of channel needs (because the pre-term data analysis supports that view), we cannot determine whether that reach is optimal without knowing the cost of the solutions that will be required to satisfy the longer channels, for it is only in the context of the total solution set that total cost of ownership can be determined.
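To make the coverage point concrete, here is a minimal sketch of the kind of calculation the pre-term cabling data supports; the lengths below are invented placeholders, not the survey numbers.

```python
import numpy as np

# Hypothetical channel lengths in metres, standing in for a pre-terminated
# cabling data set (the real survey distribution is not reproduced here).
lengths_m = np.array([12, 25, 30, 45, 60, 75, 85, 90, 105, 120, 140, 160, 220, 310])

# Fraction of channels covered by each candidate reach objective.
for reach_m in (100, 150, 300):
    coverage = np.mean(lengths_m <= reach_m)
    print(f"{reach_m:>3} m reach covers {coverage:5.1%} of sampled channels")
```

Run against the real distribution, the same few lines show how much of the installed base each reach objective leaves to longer-reach, and presumably costlier, solutions.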
I also agree that this analysis should attempt to include not only the initial hardware cost but also the operational costs associated with power consumption. However, this operational cost is drawn out over long periods of time, so it cannot simply be added to the initial cost; strictly speaking, the complication of the time value of money comes into play. But as a first pass, without time value factored in, it may be the case that an extra couple
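As a rough illustration of how the time value of money would enter such a comparison, something like the sketch below could be used; every figure in it is an assumed placeholder rather than measured data.

```python
# First-pass way to fold operational power cost into the comparison: discount
# each year's energy spend back to present value and add it to the hardware cost.
# All numbers below are assumed placeholders, not measured data.
def npv_of_power(watts_per_port, years=5, kwh_price=0.10, discount_rate=0.07):
    """Present value of the electricity consumed by one port over its service life."""
    annual_kwh = watts_per_port * 24 * 365 / 1000.0
    annual_cost = annual_kwh * kwh_price
    return sum(annual_cost / (1 + discount_rate) ** y for y in range(1, years + 1))

capex_per_port = 400.0                      # assumed module/port hardware cost, USD
total = capex_per_port + npv_of_power(3.5)  # assumed 3.5 W per port
print(f"first-pass per-port cost of ownership: ${total:,.2f}")
```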
Lastly, you mentioned using FEC as a reach booster. FEC does provide additional reach for SM solutions because it translates into more power budget: it allows the receiver to operate below sensitivity, or with noisier signals, because FEC patches up the errors caused by noise.
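The usual way to express that budget relief is sketched below under a Gaussian-noise model; the pre-FEC threshold used here is an arbitrary example, not a figure from any particular code.

```python
import numpy as np
from scipy.special import erfcinv

def q_for_ber(ber):
    """Q-factor required for a given BER when BER = 0.5 * erfc(Q / sqrt(2))."""
    return np.sqrt(2.0) * erfcinv(2.0 * ber)

ber_no_fec = 1e-12   # raw BER the link must meet without FEC
ber_pre_fec = 1e-5   # assumed pre-FEC threshold of a hypothetical code, not a spec value

q1, q2 = q_for_ber(ber_no_fec), q_for_ber(ber_pre_fec)
relief_db = 20.0 * np.log10(q1 / q2)
print(f"Q without FEC: {q1:.2f}, Q at pre-FEC threshold: {q2:.2f}")
print(f"SNR relief: {relief_db:.1f} dB electrical "
      f"(roughly half that in optical power if the noise is signal-independent)")
```

With these placeholder numbers the relief is a few dB, and on a noise-limited SM link that margin is what turns into extra reach.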
However, FEC’s efficacy for MM optics, which are constrained not by noise but by slow VCSELs and detectors and by chromatic dispersion, is really questionable. These are not noise impairments that occasionally cause an error, but distortion impairments that insert structural pulse-shape problems affecting many bit patterns. Such distortions can swamp FEC capability and are much better compensated by equalization techniques, which are designed to tackle distortion problems rather than noise.
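By way of contrast, here is a toy illustration of why deterministic distortion responds to equalization; the channel response and tap count are invented for illustration only.

```python
import numpy as np

# Toy NRZ link with deterministic inter-symbol interference (ISI): each symbol
# leaks into its neighbours, mimicking a bandwidth-limited (distorted) channel.
rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=10_000)
channel = np.array([0.25, 1.0, 0.45])          # assumed pre-/main-/post-cursor response
received = np.convolve(symbols, channel, mode="same")

# Short feed-forward equalizer (FFE) fitted by least squares to undo the distortion.
n_taps, delay = 5, 2
X = np.column_stack([np.roll(received, k) for k in range(-delay, delay + 1)])
taps, *_ = np.linalg.lstsq(X, symbols, rcond=None)
equalized = X @ taps

def eye_opening(x, data):
    """Worst-case separation between the two symbol levels (a simplistic metric)."""
    return x[data > 0].min() - x[data < 0].max()

print(f"eye opening before FFE: {eye_opening(received, symbols):+.2f}")
print(f"eye opening after  FFE: {eye_opening(equalized, symbols):+.2f}")
```

The distortion here repeats identically for a given bit pattern, which is why a handful of taps recovers the eye; random noise would not yield to this, and that is the case where FEC earns its keep instead.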
So I agree that we should not get hung up on the DC size distribution data, but we should follow through with our present course of analysis before setting objectives.

Regards,
Paul

From: Dan Dove [mailto:dan.dove@xxxxxxxxxxxxxxxxxx]
Date: Thu, 08 Dec 2011 12:56:45 -0800

For some reason, this message is not going to the reflector, and I am not receiving messages at my APM address.