
Data Centers
June 15, 2009 10:30 PM

Data Center Overload. "Data centers are increasingly becoming the nerve centers of business and society, creating a growing need to produce the most computing power per square foot at the lowest possible cost in energy and resources."
posted by homunculus (32 comments total) 6 users marked this as a favorite

 
"Data centers worldwide now consume more energy annually than Sweden"
posted by smackfu at 10:49 PM on June 15, 2009


You know who else used to consume more energy annually than Sweden?
posted by b1tr0t at 10:52 PM on June 15, 2009 [3 favorites]


Grmph. Though mention is made of power consumption and where the datacenters are placed, no mention is made of where the electricity is coming from (whether there's a dam nearby, for instance). That's something you always want to know with datacenters.

Nice pics though.
posted by Artw at 11:03 PM on June 15, 2009


creating a growing need to produce the most computing power per square foot at the lowest possible cost in energy and resources.

When the objective is stated that way, the Microsoft server farm in the pictorial seems pretty conventional compared to something like the Sun Modular Data Center.
posted by George_Spiggott at 11:23 PM on June 15, 2009


OMG the cabling in that MS datacenter is INSANE! I thought you could consolidate a lot of those cables into optical fiber, but I guess copper is cheaper.

When I was a little kid, probably back in the '80s, my mom got me this NOVA book on computers. One of the shots showed a big mainframe or something with, as I recall, thousands of parallel cables (like old hard drive cables), each of them probably a direct electrical connection from one circuit board to another. The wires in that MS datacenter must be carrying at least a gigabit each. Crazy.
posted by delmoi at 12:14 AM on June 16, 2009


That cabling surprises me. I could have sworn Microsoft, like so many others, had long since moved to in-rack switches and out-of-band management devices, so that the rack unit becomes a self-contained silo of 20, 40, or even 80-92 servers with a single multi-gig cable (or a failover pair) going to the core switch. From the pictures, that looks like a more generic every-server-is-cabled-to-the-core-switch datacenter, which is not only a cabling nightmare but inefficient: most core switches only have ports in the hundreds.
posted by hincandenza at 12:46 AM on June 16, 2009


Related to that cabling mess is something James "I'm not just the President, I'm also a client!" Hamilton (formerly of MS Research, now of Amazon) writes on his blog about the sub-30W Dell server. The idea is tiny bundles of mini-servers that collectively exceed the computing power of a more traditional 2U server for the same power and space footprint.

I know MS has been heading in that direction for a while. Like Google and Yahoo, they have a computing footprint need that makes servers disposable commodities, and power and management costs the real issue. They've been pushing the modular shipping-container datacenter (something they've been talking about for years), where the idea is literally a set of datacenters around the world, with modular DCs in shipping containers rolling up on a flatbed, dropped into place, and brought online: moving thousand-CPU units around like Legos. Thus the cabling mess of those pictures is all the more surprising: it's like the Quincy DC was architected by an IT drone from 1993!
posted by hincandenza at 12:53 AM on June 16, 2009


So we just have to eliminate Sweden.

Maybe we could develop data centers that run on Sweden. That would be fine until we reach peak Sweden. I wonder how long that would be?
posted by pracowity at 12:54 AM on June 16, 2009 [1 favorite]


That cabling surprises me. I could have sworn Microsoft, like so many others, had long since moved to in-rack switches and out-of-band management devices, so that the rack unit becomes a self-contained silo of 20, 40, or even 80-92 servers with a single multi-gig cable (or a failover pair) going to the core switch. From the pictures, that looks like a more generic every-server-is-cabled-to-the-core-switch datacenter, which is not only a cabling nightmare but inefficient: most core switches only have ports in the hundreds.

So: in-rack switches are, for 95% of applications, horribly inefficient and generally totally impractical.

A standard server rack is 42RU; standard power load for a Cisco 4948 (3 per rack; you want something like this because it's non-blocking and has 10G uplinks) is 300 watts each. That's about $100k per rack in just network gear for just an access tier, horribly wasteful because you'll never get to that density in a rack anyway.

To explain why it's wasteful:
Assuming you are going with 48-port switches:

That will give you enough headroom on the physical port side for about 48 servers, give or take a couple. Assuming you stick with this metric, you have about 39 usable RUs (or still 42, or even 49 if you go with non-standard racks, but stay with me).

If you go with pizza boxes, say the HP DL380 or the Dell 1950, you're looking at 2 x 800W PSUs with an average draw of, say, 500W per RU. 500W * 39 = 19,500W per rack footprint; with the switches, powering those 39 servers comes to about 20,400W per rack footprint. And even if you do things the right way and have 1RU of cable management per 24 switch ports, you're still looking at only 35 servers... let's leave HBA and SAN connections out of the picture for now and assume raw processing power for the web tier.
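A back-of-the-envelope sketch of that rack math (every figure here is just the assumed number from above, not a vendor spec):

```python
# Rack power budget using the assumed figures above (illustrative only).
SWITCH_WATTS = 300      # assumed draw of one Cisco 4948-class switch
SWITCHES_PER_RACK = 3   # enough 48-port switches to cover the rack
SERVER_WATTS = 500      # assumed average draw per 1U pizza box
USABLE_RU = 39          # 42RU rack minus switch/cable-management space

server_load = USABLE_RU * SERVER_WATTS           # 19,500 W
switch_load = SWITCHES_PER_RACK * SWITCH_WATTS   # 900 W
total_load = server_load + switch_load           # ~20,400 W per footprint

print(f"servers: {server_load} W, switches: {switch_load} W, "
      f"total: {total_load} W per rack footprint")
```

Compare that ~20kW total against what most facilities are actually engineered to deliver per footprint and you can see why nobody fills the rack.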

The average datacenter in the United States is designed around 4-6kW per footprint; at the high end, in certain facilities, you can get as high as 10kW per footprint, but generally you're going to max out at 10kW or 12kW unless you want to do some fancy things like water-cooled racks, double-spacing your rows, etc.

The model that is generally deployed is collapsed access and distribution with a dedicated core, or a dedicated access tier with a collapsed core/distribution. With modern switches you can spend about $500k on switch gear, get redundancy where you need it, and be flexible enough to serve 20-25 racks of 20 servers each, which is more realistic in terms of power load.

But this is all sort of a silly discussion anyway, because the current migration trend is to VM infrastructures: buy big multi-core hardware, throw multiple virtual machines on it, and P2V all your existing physical boxes. That way, when the new fancy hardware comes out with 2000 cores, 2 terabytes of memory, and everything on an InfiniBand bus, you can just migrate the virtual machines and eBay the hardware with no downtime. This is the trend for probably 95% of what I see in the market now, the exception remaining databases, which are still generally found on big iron: IBM P Series for Oracle, occasionally HP, rarely Solaris or Linux.

So the reason you see all that cabling is that it's still the most cost-effective way to bring connectivity to servers. The three-tiered model Cisco has pushed for years scales incredibly well, and you pay through the nose for it. The more popular model is 2 x Cisco 6509E or equivalent (we'll leave the Nexus 7K out because it's just too new to consider for any real deployment), because the cabling is way cheaper and has a much longer lifetime than those $25-50k-a-pop in-rack access switches.

Environmental engineering and architecture in datacenters is an awkward dance between business requirements, systems requirements and technical evolution of products. You screw up any one of them and you'll find yourself with a rather expensive piece of infrastructure you can only partially use.
posted by iamabot at 1:21 AM on June 16, 2009 [15 favorites]


The above note oversimplifies a very complicated piece of infrastructure design, but generally articulates why, as a general statement, in-rack switches don't work terribly well for high-performance computing environments. There are of course situations where in-rack switches are appropriate, but it's highly unlikely people will encounter them more than once or twice in a career unless they specialize in designing environments that address that specific market.

What was not covered was all the backend infrastructure needed to even get the power to the racks themselves: things like power conditioners, UPSes, PDUs, power cabling infrastructure, monitoring and management of the power infrastructure, in-rack PDUs... and then all the surrounding environmentals for the cooling infrastructure: air movement, CFM monitors, temperature sensors, water, chillers, and staff to do all of the above.

It's hugely expensive... the mantra is to fit as much efficient and useful computing power per rack as possible. That does not necessarily mean as many processors as you possibly can, because the overhead of providing power to everything that comes with processors (motherboards, disks, RAM, fans, blinkenlights, etc.) adds up quickly.
posted by iamabot at 1:43 AM on June 16, 2009




Related to that cabling mess is something James "I'm not just the President, I'm also a client!" Hamilton (formerly of MS Research, now of Amazon) writes on his blog about the sub-30W Dell server. The idea is tiny bundles of mini-servers that collectively exceed the computing power of a more traditional 2U server for the same power and space footprint.

I don't quite get stuff like that. Why not just get a 1U box with 8 cores and run 8 VMs? I'm sure you could do it with less than 2,400 watts and a $3,200 base cost (well, it might be close there, depending on how much RAM you put in).
posted by delmoi at 2:28 AM on June 16, 2009


I'm pretty sure MSFT is moving towards larger self-contained modular units, including in the Quincy data center (CNET link), but those probably don't make for very exciting pictures compared to "ooh! cables and stuff!"
posted by rmd1023 at 4:33 AM on June 16, 2009


delmoi: because the primary challenge in web-scale computing isn't small applications that can be split up inside one big box. That problem was resolved years ago and is now little more than tweaks. IMO we're seeing things move in two directions (which divide roughly between webhosting and computing):

1. Applications that don't require a full computer will obviously be dumped into VMs; it's a pretty simple and efficient market already. You can buy virtual hosts for as cheap as $20 per month if you know where to look. These ISPs can run massive Xen installations and move VMs around at will between boxes, hosting thousands of different applications. Arguably this is just a slightly more powerful version of the "shell" accounts of old (where you get root). Most of these are just simple webhosts, storage, etc. When you hear "cloud computing" or whatever, this is what usually gets the most attention, because it's a solved problem and easy to sell and market.

2. The more interesting direction, which is not fully resolved (and which the mini-servers are attempting to address), is when you need servers that cannot be held in a 1U box with 8 cores, or even one entire rack (or 1,000 CPU cores in the more extreme cases); when you need to compute, rather than just store data. You're not distributing one box into many applications, you're distributing one application into many boxes. This is an important distinction, because when you're dealing with this kind of problem you ignore all the other BS with hosting and get down to pure number crunching per dollar. Big boxes are expensive and less redundant. Oftentimes these problems can be solved much, much cheaper by throwing a bunch of low-end boxes together and distributing the computing tasks over Hadoop MapReduce.

Note that these two movements aren't wholly orthogonal. Amazon EC2, for instance, is marketing their platform as an easy way to run a Hadoop installation: with the click of a mouse you can launch 10 8-core VMs with Hadoop, or 20 single-core VMs with Hadoop. The CPUs per box (and quantity of boxes) really don't matter to you, because you can distribute the tasks evenly between them. What matters is pure computation per dollar.
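To make the "one application into many boxes" split concrete, here's a toy sketch (plain Python, not actual Hadoop) of the map/reduce structure: each map task sees only its own chunk of data, so chunks can land on any mix of boxes and cores, and the reduce merges the partial results.

```python
# Toy word count in the MapReduce style. In a real Hadoop/EC2 run,
# each chunk would be shipped to a separate node; here the split is
# only simulated, but the structure (independent maps + a merge) is
# what makes CPUs-per-box stop mattering.
from collections import Counter
from functools import reduce

def map_task(chunk: str) -> Counter:
    # No shared state: a mapper only ever sees its own chunk.
    return Counter(chunk.split())

def reduce_task(a: Counter, b: Counter) -> Counter:
    return a + b  # merge two partial counts

chunks = ["data centers use power",
          "power per square foot",
          "computation per dollar"]
counts = reduce(reduce_task, map(map_task, chunks))
print(counts["power"], counts["per"])  # 2 2
```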
posted by amuseDetachment at 4:36 AM on June 16, 2009


1. Applications that don't require a full computer will obviously be dumped into VMs; it's a pretty simple and efficient market already. You can buy virtual hosts for as cheap as $20 per month if you know where to look.

I've had one of those $20 virtual hosts for a couple years :)

But what kind of application requires "a whole computer"?

As far as distributed computations like MapReduce, shouldn't they run about 8 times as fast on an 8-core machine as on 8 single-core machines? In fact, for some applications you would get a big performance boost, because the communication between those 8 cores would be far quicker; but you would have to design your program in a way that it could be divided into a series of "8-packs" that communicate quickly between themselves and less so with outside nodes. That would take a lot of work, and just throwing CPUs at the problem might be quicker. (Plus it would be obsolete when we ever get 16-core chips.)

It was really interesting to see that Google pushed the battery all the way into the individual node like that, rather than using a UPS.
posted by delmoi at 6:02 AM on June 16, 2009


What was not covered was all the backend infrastructure to even get the power to the racks themselves

We have a cage at a major datacenter, and we currently can't add any more servers because the building is at capacity power-wise. All that infrastructure you talk of means adding a little more power is very expensive.
posted by smackfu at 6:29 AM on June 16, 2009


That would be fine until we reach peak Sweden. I wonder how long that would be?

Many people think that the Ikeapocalypse will happen in their lifetimes.
posted by rokusan at 7:41 AM on June 16, 2009 [4 favorites]


As far as distributed computations like MapReduce, shouldn't they run about 8 times as fast on an 8-core machine as on 8 single-core machines?

Not if it's I/O bound. Typically individual MapReduce jobs are independent, at least in the mapping stage. If you have 8 cores on one system bus you won't have as much I/O bandwidth as 8 cores on 8 buses. I mean, it helps to an extent, but cores vs systems always comes down to whether the task at hand is I/O bound or CPU bound.
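A toy model of that cores-vs-systems point (the rates are invented purely for illustration): throughput is capped by whichever resource saturates first, so piling cores onto one shared bus stops helping once the bus is the bottleneck.

```python
# Toy cores-vs-systems model. cpu_rate and io_rate are made-up
# numbers; the point is only the min() structure.
def throughput(cores: int, buses: int,
               cpu_rate: float = 1.0,   # jobs/s one core can crunch
               io_rate: float = 3.0):   # jobs/s one bus can feed in
    return min(cores * cpu_rate, buses * io_rate)

print(throughput(cores=8, buses=1))  # 3.0 -- I/O bound: the bus saturates
print(throughput(cores=8, buses=8))  # 8.0 -- CPU bound: the cores saturate
```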

we currently can't add any more servers because the building is at capacity power-wise

I used to work for a virtual infrastructure tools company. That's the #1 reason driving virtualization - management features are nice, but the real issue is getting more cores in for the existing power connections.
posted by GuyZero at 8:16 AM on June 16, 2009


It was really interesting to see that Google pushed the battery all the way into the individual node like that, rather than using a UPS.
Reminds me of my first laptop (the first laptop, according to some folks), the Toshiba T1000. All power came through the battery, even when the machine was plugged in; the battery was in a constant cycle of drain and recharge. That wasn't very good for the early-generation rechargeable battery, though, so it didn't last very long. Plus, if the battery were to become fully discharged, I couldn't simply plug the unit in; I had to plug it in and wait for the minimum charge to build up. When the battery eventually failed, the machine was completely unusable. I ended up wiring a case for standard C-cell batteries to the back of the unit.
posted by MrMoonPie at 8:23 AM on June 16, 2009


It's certainly possible to optimize for big single machines, but for much of the computation needed on the web, that probably won't beat the savings from a bunch of cheap hardware.

The interesting thing with services like EC2 is that you're paying straight for computation per dollar. With a sufficiently large computation job (assuming it's distributable), time can be made negligible as a primary factor. For a large computational job, it costs roughly the same whether you run it on 1x 1-core, 1x 8-core, 20x 1-core, or 20x 8-core. It doesn't matter how many CPUs you throw at the problem; it will always be the same price. It costs roughly the same to rent 1 machine for 100 days as it does to rent 100 machines for 1 day.
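In other words, with pure per-instance-hour billing the only product that matters is machines x hours. A sketch (the $0.25/hour rate below is a made-up figure, not Amazon's actual price):

```python
# Machines x hours is all that counts under flat per-instance-hour
# billing. The rate is hypothetical, not an actual EC2 price.
RATE = 0.25  # dollars per instance-hour (assumed)

def job_cost(machines: int, hours: int) -> float:
    return machines * hours * RATE

# 1 machine for 100 days vs. 100 machines for 1 day: same bill.
print(job_cost(1, 100 * 24))   # 600.0
print(job_cost(100, 24))       # 600.0
```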

When the user has completely disentangled time from computational needs, and their billing matches that, then it's in the data center's interest to just look for pure computing bang for the buck.

It makes much more sense if you dissociate webservers from computation (the computation that populates the data in the databases behind the webserver). Examples: k-means clustering for recommendation services (Netflix, Amazon, Google ads, etc.), video encoding (YouTube converting videos to Flash), statistics (log analysis), etc.
posted by amuseDetachment at 8:25 AM on June 16, 2009


EC2 is a wholly different model and really unique. CaaS (Computing as a Service) for Amazon is a way to subsidize their peak-season hardware budget throughout the year and recover those capital costs. I doubt it will be profitable for them long term, but if it reduces ongoing capital expenditures for their core business, so much the better. CaaS is a race to the bottom in terms of pricing, so even when those profit margins erode, Amazon will still do it: as long as they get back some of their capital costs and cover expenses, it makes sense.
posted by iamabot at 8:39 AM on June 16, 2009


Wonderful post and discussion. I was especially intrigued by the location of the NJ2 datacenter to minimize latency (PDF).
posted by exogenous at 10:35 AM on June 16, 2009


CaaS (Computing as a Service) for Amazon is a way to subsidize their peak season hardware

Is that a stated goal or just a theory? Do they drop their QoS and service contracts come December? There are certainly a lot of users who depend on S3 and EC2 as "web dialtone" services with an expectation of perfect reliability.
posted by GuyZero at 11:12 AM on June 16, 2009


Here's the lowly computer that powers your Google searches.

They have a few of those I think. It's more than just that one.
posted by GuyZero at 11:13 AM on June 16, 2009


Is that a stated goal or just a theory? Do they drop their QoS and service contracts come December? There are certainly a lot of users who depend on S3 and EC2 as "web dialtone" services with an expectation of perfect reliability.

It's not a stated goal as far as I've read about Amazon specifically; I should qualify that statement and say it's probably a good hunch from what I've seen in the utility computing segment of the market, and I think it ties together really well.

Regarding peak and EC2: you don't have to drop your SLAs, but to handle peak you ramp up your physical footprint, which makes retiring old gear really easy and makes paying for that peak-preparedness capital outlay pretty easy.

This is all pretty much speculation, and they (Amazon) may view EC2 as a real product they will build a diverse business around, with its own profitability measurements; but if I'm looking at EC2 from an ongoing-operations perspective, it's pretty nifty as a capital-budget recovery technique. I still see utility computing as a race to the bottom, and the barriers to entry really aren't that high.

The physical and logical architecture isn't all that hard; utility computing isn't that new a concept, and you can go commercial with it using VMware's solutions. I think Amazon went with Xen for their hypervisor and built a toolkit around managing it, but I haven't really researched it specifically.
posted by iamabot at 11:44 AM on June 16, 2009


I agree it makes a lot of sense in that perspective. I just wanted to know whether they were doing it that way explicitly. Customers don't like to be told they're getting seconds, regardless of whether or not it's true. Certainly if they get fully or at least partially depreciated hardware to build out their cloud services it drastically reduces the capital costs for the service.

On the other hand, I expect Amazon has the same datacenter constraints as everyone else and would probably want to maximize their compute power per watt density like everyone else.
posted by GuyZero at 11:57 AM on June 16, 2009


On the other hand, I expect Amazon has the same datacenter constraints as everyone else and would probably want to maximize their compute power per watt density like everyone else.

Amazon, Google, and Microsoft (not so much Yahoo, unless they changed from Equinix) have the economies of scale to design and own their own facilities and ground-up solutions, as evidenced by the Google server design. Really, Google and Microsoft are the biggest designers of their own datacenters from the ground up; Yahoo leases space from Equinix and maybe some other providers.

The NJ2 facility referenced in the article is, I believe, a Savvis facility, engineered around 6kW per footprint (I'd have to check).

At the end of the day it comes down to SLAs and whether you believe them, I guess. Outsourcing of IT infrastructure and datacenter services is still growing dramatically, but in the last 3-4 years I've seen a lot of the push for more space and power that was very much bladecenter-driven (talk about ridiculous power consumption and heat loading: 12kW PER CHASSIS!?! (3-4 chassis per rack!?!?)) drop off quite a bit as customers move towards virtualization and multi-core systems.
posted by iamabot at 12:51 PM on June 16, 2009


"Data centers worldwide now consume more energy annually than Sweden"

Isn't the solution to just move all datacenters to Sweden?

And oh, ya - cool article. I liked the link to Google's datacenter stuff, too.
posted by Nauip at 1:46 PM on June 16, 2009


Echo the 'good article, great discussion' post.

If I were into this stuff, where would I go to keep up with new developments and such?
posted by anti social order at 1:58 PM on June 16, 2009


Really, Google and Microsoft are the biggest designers of their own datacenters from the ground up; Yahoo leases space from Equinix and maybe some other providers.

Having met a few guys who did datacenter design work for Dell, I got the impression that all facilities are custom-designed to some extent. A lot of companies still like to have everything done in house. Once you get down to the level of what's in the rack, it usually stops though. DC design typically revolves around (in my limited experience) figuring out airflow, laying out racks in the available space, and making sure your cooling & power budgets line up.
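The "cooling & power budgets line up" check is essentially arithmetic: every watt delivered to the racks comes back out as heat the cooling plant has to remove. A sketch, where the per-rack load and CRAC capacity are assumptions (the 3.412 BTU/hr-per-watt conversion is the standard factor):

```python
import math

# Does the cooling budget cover the power budget? All equipment
# figures here are assumed for illustration.
RACK_KW = 6.0              # assumed design load per rack footprint
RACKS = 25
CRAC_BTU_PER_HR = 180_000  # assumed capacity of one cooling unit
WATT_TO_BTU_HR = 3.412     # standard watts -> BTU/hr conversion

heat_btu_hr = RACK_KW * 1000 * RACKS * WATT_TO_BTU_HR
cracs_needed = math.ceil(heat_btu_hr / CRAC_BTU_PER_HR)
print(f"{heat_btu_hr:,.0f} BTU/hr of heat -> {cracs_needed} CRAC units "
      "(before any N+1 redundancy)")
```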

The GOOG designs stuff, as is evidenced in the link above, but I'm not sure if it's done anywhere else. Most places are just racking HP or Dell boxes along with storage and switches. Of course, my experience is with talking to people who buy commodity systems, so it's no surprise they use commodity components. Selection bias at work. But I think (though I'm not sure) that MSFT just racks commodity servers. Don't forget that since GOOG runs custom software throughout, they can make server design decisions that other people can't if they're using stock Windows in the DC.

Also, if you assume that Hadoop works the same way GOOG's stuff works, then storage is intermeshed with compute power, which is definitely not the case in many enterprise shops, especially ones that use VMware, which is very much designed around SAN storage and servers without storage. vMotion is totally based on SAN storage. I'd guess that that's how MSFT operates their DCs, and that it's almost the opposite of what GOOG does. Of course, the MSFT guys could be doing a lot of stuff and this is just me guessing. Certainly the average enterprise operates very differently from companies like Amazon or Google in terms of the demands on the DC.
posted by GuyZero at 2:12 PM on June 16, 2009


If I were into this stuff, where would I go to keep up with new developments and such?

There are lots of trade magazines about data centres, Data Centre Management for example. Or try CIO.com. Or the usual CNET, VNU, silicon sites or the websites of the people who make data centre tech such as HP (declaration - I work for HP's PR company).
posted by Summer at 2:15 PM on June 17, 2009


If I were into this stuff, where would I go to keep up with new developments and such?

I suggest going to Networkers (Cisco Live! (what a stupid name)) and Interop.

Really, the best way to keep on top of it is to do it and invite vendors in to talk to you, and then cut them loose; works great for free lunches.

For infrastructure you're looking at Liebert, APC, Chatsworth, Panduit, Wrightline, Cisco, HP, IBM, Dell, EMC, Hitachi for the big components.

Go geek out at Graybar's website for a couple of weekends; it's fun.
posted by iamabot at 4:26 PM on June 17, 2009






