For some reason, this message is not going to the reflector, and I am not receiving messages at my APM address.
Date: Thu, 08 Dec 2011 12:56:45 -0800
To: 100G Group <STDS-802-3-100GNGOPTX@xxxxxxxxxxxxxxxxx>
Subject: Re: [802.3_100GNGOPTX] Emerging new reach space
Hi all;
I wonder if we are spending our time on the wrong question, i.e., "What is the current distribution of fiber in data centers?"
It's useful to see where fiber is being installed at 10G and to understand the reaches various applications have used; however, it seems to me that a few key principles are going to drive future installations and reach requirements. To an extent, what we produce will be a driving force rather than a response to the market.
Principles:
Data Centers are extremely sensitive to energy consumption and Total Cost of Ownership (TCO) factors related to it.
Energy consumption requires thermal management, and a number of papers have noted that cooling efficiency is inversely proportional to data center volume. This is driving toward higher-density, smaller-geometry clusters.
Initial investment in the IT equipment itself is based on performance demands and is thus likely to be independent of (or only loosely coupled to) reach.
Not to be simplistic, but this leads me to the adage "Build it, and they will come".
More specifically, if we find that X meters of MMF is what we can support without pushing past the bend in the cost curve, then as long as X is reasonable and usable, people will figure out how to optimize around it.
Between papers on "containerized data centers" and "energy efficient warehouse scale data centers" I see a trend growing: relatively small clusters of racks containing Top of Rack (TOR) or Middle of Rack (MOR) switches that tie servers together using very short (most likely copper) links. These switches are then tied to a cluster switch with a fiber run (probably two). The total geometry of such a cluster wants to be small because air flow defines the thermal efficiency. A large, spread-out cluster requires a lot of air to cool it, and since thermal performance depends on velocity, a smaller, faster air column is going to work better than a big, slow one.
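To put rough numbers on the "smaller, faster air column" point, here is a back-of-the-envelope sketch; the fan capacity, aisle cross-sections, and the v**0.8 turbulent-flow scaling are my own textbook-style assumptions, purely for illustration:

# Rough illustration of the "smaller, faster air column" argument.
# All numbers and the v**0.8 scaling are illustrative assumptions,
# not data from any 802.3 contribution.

volumetric_flow_m3_s = 10.0           # assumed fixed fan capacity for the cluster

def air_velocity(cross_section_m2):
    """Mean air velocity for a fixed volumetric flow through a given cross-section."""
    return volumetric_flow_m3_s / cross_section_m2

def relative_heat_transfer(velocity, reference_velocity):
    """Convective heat-transfer coefficient relative to the reference case,
    using the rough turbulent-flow scaling h ~ v**0.8."""
    return (velocity / reference_velocity) ** 0.8

wide_aisle_m2  = 8.0                  # assumed large, spread-out air column
tight_aisle_m2 = 4.0                  # assumed compact, contained air column

v_wide  = air_velocity(wide_aisle_m2)
v_tight = air_velocity(tight_aisle_m2)

print(f"wide aisle : {v_wide:.2f} m/s")
print(f"tight aisle: {v_tight:.2f} m/s, "
      f"~{relative_heat_transfer(v_tight, v_wide):.2f}x the convective heat transfer")

For the same fan capacity, halving the cross-section doubles the velocity and improves convective heat transfer by roughly 2**0.8, about 1.7x, which is the intuition behind keeping the cluster geometry tight.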
Even massive data centers are likely to segregate the air columns and maintain smaller clusters that are tied together with floor switches or meshes.
This all means that the future distribution of fiber may look very different from what has historically been the case. The size of the clusters will be set by energy efficiency and thermal management more than by the reach of the fiber within the cluster. The distance between cluster switches *may* be supportable by MMF, and a 100m reach covers a fairly large building before you even have to consider SMF as a link between them.
Below is an example of a 150,000 square foot data center layout.
With today's server technology, the number of servers in such a building starts to run into design limits other than cable reach: power distribution, thermal management, and ingress/egress data capacity become bigger challenges than the distance between switches.
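To put a rough number on the reach question for a floor of that size, here is my own arithmetic; the square footprint, Manhattan-style cable routing, and the slack allowance are assumptions, not details taken from the layout above:

import math

# Rough worst-case cable run on a 150,000 sq ft floor.
# Square floor, Manhattan (x + y) routing, and the slack allowance
# are my own assumptions.

floor_area_m2   = 150_000 * 0.0929           # 150,000 sq ft in square meters (~13,935 m2)
side_m          = math.sqrt(floor_area_m2)   # ~118 m per side for a square floor
routing_slack_m = 10.0                       # assumed allowance for vertical drops and service loops

def worst_run(zones_per_side):
    """Worst-case Manhattan run from a zone's central floor switch to its far corner."""
    half_zone = side_m / zones_per_side / 2
    return 2 * half_zone + routing_slack_m

print(f"floor: ~{side_m:.0f} m x {side_m:.0f} m")
print(f"single central floor switch : ~{worst_run(1):.0f} m worst case")
print(f"one floor switch per quadrant: ~{worst_run(2):.0f} m worst case")

With these crude assumptions a single, centrally placed floor switch would exceed 100m in the far corners, but distributing floor switches (here one per quadrant) keeps the longest run comfortably inside 100m, so a 100m PMD covers a building of this scale without resorting to SMF inside it.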
I am only saying this to question whether we are beating ourselves up over a non-issue. If we can get 100m of reach, as Jonathan King's presentation in November stated, I think we are going to satisfy everything from a 150,000 sq ft data center down. The only question that might remain, and it's more of a Task Force question, is whether we need two PMDs (one for cluster links, one for floor links); that depends on the cost delta between a 40m reach and a 100m reach. As Jonathan points out, the difference could be achieved simply by enabling/disabling FEC with a common PMD.
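For what it's worth, that FEC trade can be illustrated with a toy link budget; every dB figure below is a placeholder I made up to show the mechanism, not a number from Jonathan's presentation or from any channel model:

# Toy link budget showing how enabling FEC on a common PMD could buy back
# the extra penalty of a longer MMF channel. Every dB value is a made-up
# placeholder, not a measured or proposed number.

tx_power_dbm       = 0.0    # assumed launch power
rx_sens_dbm        = -6.0   # assumed receiver sensitivity, FEC disabled
fec_coding_gain_db = 2.5    # assumed net coding gain with FEC enabled

channel_penalty_db = {      # assumed total penalty (loss + ISI, etc.) per reach
    "40 m": 4.0,
    "100 m": 8.0,
}

def link_margin(reach, fec_on):
    """Remaining margin after subtracting channel penalty from the power budget."""
    sensitivity = rx_sens_dbm - (fec_coding_gain_db if fec_on else 0.0)
    return tx_power_dbm - channel_penalty_db[reach] - sensitivity

for reach in ("40 m", "100 m"):
    for fec_on in (False, True):
        state = "FEC on " if fec_on else "FEC off"
        print(f"{reach:>5}, {state}: {link_margin(reach, fec_on):+.1f} dB margin")

With these placeholder numbers the 40m channel closes without FEC and the 100m channel closes only with FEC enabled, which is exactly the kind of split that would argue for one PMD with optional FEC rather than two PMDs.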
Personally, I am ready to support a 100m objective and believe that we can show technical feasibility and market potential based on available information.
Some more work on economic feasibility would be appreciated, and additional supporting presentations on market potential and technical feasibility would be helpful, but we are very close if we can simply conclude that 100m is a reasonable target without getting bogged down in inconsistencies between sources of historical data.
Dan