Study Group Members,
I share the regret, expressed in several posts to this list, that large internet data center operators are unwilling to make their requirements known in an open, non-confidential manner. I would like to forward to the group a paper, recommended by a colleague, that may help close this information gap.
“The Datacenter as a Computer: An Introduction to the Design of Warehouse-Scale Machines”
Luiz André Barroso and Urs Hölzle
www.morganclaypool.com
ISBN: 9781598295566 paperback
ISBN: 9781598295573 ebook
The paper may be viewed, without charge, at the Morgan & Claypool site, but please be aware of the restrictive notices on page iv.
http://www.morganclaypool.com/doi/pdf/10.2200/S00193ED1V01Y200905CAC006
Copies may also be purchased at internet book sites.
The paper, 120 pages in all, describes the specific challenges that arise when applications implemented by internet content providers, such as Google and Microsoft, require many thousands, even tens of thousands, of servers. Indeed, the usage model of the data center changes from “a place to house servers” to “a building to host an application”.
The introduction, pages 1 to 11, gives insight into why these warehouse-scale computers differ from traditional data centers and how this affects the need for communication bandwidth within the data center.
The ideal system, as described by Barroso and Hölzle, would be one in which the cross-sectional communication bandwidth of the data center equals the aggregate bandwidth of the servers, i.e. a network without oversubscription. In such a system the application developer can freely locate functions throughout the network, optimally distributing load and minimizing computational and HVAC hotspots. The authors acknowledge that economic considerations cannot support such a model and that oversubscription levels of 5:1 are evident between racks of servers (80 servers per rack) grouped 10 racks to a group (800 servers). Using the terminology of kolesar_02_0911_NG100GOPTX, page 4, citing barbieri_01_0107.pdf, this oversubscription applies to the links between the “access” and “distribution” network layers.
http://www.ieee802.org/3/100GNGOPTX/public/sept11/kolesar_02_0911_NG100GOPTX.pdf
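As an aside, the arithmetic behind the 5:1 figure is easy to reproduce. The short Python sketch below uses the 80-servers-per-rack and 10-racks-per-group numbers cited above, but the per-server bandwidth and group uplink capacity are hypothetical values I have chosen purely for illustration; they are not taken from the paper.

# A minimal arithmetic sketch (illustrative assumptions only): the 5:1
# oversubscription figure expressed as aggregate server bandwidth divided
# by the bandwidth available out of a group of racks.

SERVERS_PER_RACK = 80        # figure cited above from Barroso & Hölzle
RACKS_PER_GROUP = 10         # 800 servers per group, as above
SERVER_NIC_GBPS = 1          # assumed per-server bandwidth (hypothetical)
GROUP_UPLINK_GBPS = 160      # assumed group uplink capacity (hypothetical)

aggregate_server_bw = SERVERS_PER_RACK * RACKS_PER_GROUP * SERVER_NIC_GBPS
ratio = aggregate_server_bw / GROUP_UPLINK_GBPS

print(f"Aggregate server bandwidth: {aggregate_server_bw} Gb/s")
print(f"Access-to-distribution oversubscription: {ratio:.0f}:1")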
Decreasing the relative cost of these access-to-distribution links would enable warehouse-scale computer builders to reduce the level of oversubscription and get closer to their ideal system. Throughout the paper the authors take a system-wide approach to finding the lowest cost: reducing cost in one area is not beneficial if it results in an increase in overall cost, since cost is merely shifted from one area to another.
Such considerations should play a part in the determination of a reach objective. As we increase the reach to cover an ever higher percentage of the links described in kolesar_02_0911_NG100GOPTX, we should be cognizant of the increase in relative cost required to achieve that reach, and evaluate whether, when considered at a network level with a distribution of link lengths as per Paul’s presentation, we are decreasing overall cost or not.
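To make that evaluation concrete, here is a minimal sketch of the kind of calculation I mean. The link-length distribution, relative PMD costs, and fallback cost below are entirely hypothetical placeholders, not figures from Paul’s presentation; the point is only that the per-link cost premium for longer reach must be weighed against the fraction of links that actually need it.

# A minimal sketch of a network-level cost comparison. All numbers here are
# hypothetical placeholders (the real link-length distribution is in Paul's
# presentation and is not reproduced here).

# Assumed fraction of access-to-distribution links in each length bin.
link_length_distribution = {"<=100m": 0.70, "100-300m": 0.20, "300-500m": 0.10}

# Assumed relative per-link cost for two candidate reach options in each bin;
# None means that option cannot serve links in that bin, so a more
# expensive alternative must be used for them instead.
options = {
    "short-reach option": {"<=100m": 1.0, "100-300m": None, "300-500m": None},
    "longer-reach option": {"<=100m": 1.3, "100-300m": 1.3, "300-500m": 1.3},
}
FALLBACK_COST = 4.0  # assumed cost of the alternative for out-of-reach links

def average_link_cost(per_bin_cost):
    """Cost per link averaged over the assumed length distribution."""
    total = 0.0
    for length_bin, fraction in link_length_distribution.items():
        cost = per_bin_cost[length_bin]
        total += fraction * (cost if cost is not None else FALLBACK_COST)
    return total

for name, costs in options.items():
    print(f"{name}: average relative cost per link = {average_link_cost(costs):.2f}")

With these made-up numbers the longer-reach option wins despite its per-link premium; with a distribution weighted more heavily toward short links the conclusion could reverse, which is exactly the network-level evaluation I am suggesting we perform with the real data.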
Best Regards
Andy