High Tech's Dirty Little Secret
September 23, 2012 6:43 AM   Subscribe

"Of all the things the Internet was expected to become, it is safe to say that a seed for the proliferation of backup diesel generators was not one of them." Power, Pollution and the Internet [sl NY Times]
posted by nowhere man (88 comments total) 17 users marked this as a favorite
 
One of the abiding memories of the 2003 blackout was the stench of diesel in downtown Toronto. Every building had its backup generator running, and most of the vents were just above ground level.

I'm kind of hoping we can get past an always-on culture. There's no way that it can be supported by renewable energy sources, and consequently, it cannot be sustained.
posted by scruss at 7:00 AM on September 23, 2012 [1 favorite]


The article is interesting, but on a meta level, I saw just the headline of this post in Google Reader and thought "oh, this has to be the Times article from this morning." Sometimes Mefi can be predictable, I guess.
posted by The Michael The at 7:00 AM on September 23, 2012


This is not exactly a secret.

...this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.

No, the "information industry" has never been known for environmentalism. Back when chips were manufactured in the US, mostly around Silicon Valley, the use of harmful chemicals was well known. Even today, Silicon Valley has one of the highest concentrations of toxic waste sites.

BTW, I once worked as a contractor for a Fortune 10 company, one of the largest data processing companies in the world. Their site used a large network of mainframes to process high-speed financial transactions. They wanted to move their IT site from a downtown office into a new building in an industrial park, which would be much less expensive. They built an entirely new infrastructure, including a diesel generator the size of a small locomotive. The switchover was (mostly) well planned: one new mainframe was brought up at the new site, transactions were routed through the new computer, then the old one was halted and moved to the new site; repeat until all mainframes were relocated. This took weeks.

I was on site the day the final mainframe went online and the transition completed. Everyone was congratulating themselves on a job well done... until there was an unexpected total power failure. The new diesel generator was set to automatically kick in, but it also failed completely. It took hours to restore power and get the machines started again. The company lost millions during the outage, far more than it saved by moving to the new site.
posted by charlie don't surf at 7:02 AM on September 23, 2012 [14 favorites]


A moronic quote, considering the internet was first intended as a way to keep various military and government sites talking to each other during and after a nuclear war -- and where would the power it needed have come from then but from diesel generators?
posted by MartinWisse at 7:14 AM on September 23, 2012 [4 favorites]


This surprises me. I work for a company that sells a bit of DC space and everyone is trying real hard to limit power (well, cooling) draw. (mind you, we have a carbon tax)
Nobody wants to run gear at full power to serve 6%.
But just to give an idea, a standard 19in rack will draw about 2kW, or the same as a hot electric radiator. A rack of blade servers, closer to 6kW. That's a lot of aircon to keep it happy.
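
To put that in cooling terms, a rough sketch; the only inputs are the ballpark rack figures above plus the standard 3.412 BTU/hr-per-watt conversion:

    # Every watt of IT load becomes heat the aircon has to remove.
    # 1 W = 3.412 BTU/hr; 1 "ton" of cooling = 12,000 BTU/hr.
    def cooling_load(rack_watts):
        btu_per_hr = rack_watts * 3.412
        tons = btu_per_hr / 12_000
        return btu_per_hr, tons

    for watts in (2_000, 6_000):  # standard rack vs. blade rack, per the figures above
        btu, tons = cooling_load(watts)
        print(f"{watts} W rack -> {btu:,.0f} BTU/hr (~{tons:.1f} tons of cooling)")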
posted by bystander at 7:22 AM on September 23, 2012


Speaking of air conditioning...

My office happens to be in the basement of a building. Being underground, it is cold, so they heat it. But in the server room at the end of the hall it gets too hot, so they air condition.

Spending money on both heating and cooling adjacent spaces seems...kinda dumb to me.
posted by DU at 7:37 AM on September 23, 2012 [5 favorites]


Does anybody have a link to the smokestack running metafilter's host server? Because I am not feeling guilty enough about my western civilized carbon footprint yet.

One thing I like about laptops with the power right there under your hands is the heat; you actually have to try in order to ignore the resource consumption. I don't think I have ever seen a computer advertisement where they focused on how little power they consumed versus the competition.
posted by bukvich at 7:41 AM on September 23, 2012


Reading the rest of the article, I am now much less impressed by a Times year-long investigation. I can only assume that after the year the authors sat around and said "Well, all these servers use a lot of power. Can we make it look bad? Yes. When there are blackouts they run air-polluting diesel generators!" Then they cited a bunch of figures about the growth of data centers.
The example of waste they chose was Lexis-Nexis, which is not exactly an online powerhouse of cutting-edge tech, and companies that I know have programs to reduce power use (e.g. Google) were quoted only with a total consumption figure.
They even gave an example of somebody searching to find an Italian restaurant in Manhattan, as if the alternative (yellow pages, phone assistance, or driving around) uses less energy.
--not as described. Do not buy--
posted by bystander at 7:45 AM on September 23, 2012 [2 favorites]


The answer isn't to ask IT managers to sacrifice reliability in the name of energy savings. It just isn't going to happen. There are a few areas of needed improvement I can see. Some are being pursued, others not so much.
1: improved scaling/sharing of servers. I'm no expert, but I believe that this is what the newer blade servers aim to accomplish - by putting multiple servers virtually on one piece of hardware you can have a shared additional capacity instead of each server carrying extra all the time for its own occasional peaks. Increased capability to run server processes in parallel will make this even more valid going forward.
2: hardware improvement. Simply move and store more data using less power, an active area.
3: software improvement. Provide better user experience with less data bloat. It can be done, but right now the trend is to just carry everything and the kitchen sink in your data set all over the place. Improved tools for data optimization can work, but ironically put the user in even more need of that 100% uptime requirement that is derided in the article as the source of the energy consumption problem.
4: use the waste heat. This is the big nut which I think is being neglected in many cases. All that energy these facilities consume gets rejected as heat at the end of the process. It is fairly simple to capture that heat and pipe it to nearby locations. With the data center uptime requirements it would also be a more reliable source than just about any traditional building or district heating system. So, if you put your data center in a northern climate with a small town's worth of apartments nearby you could provide nearly free heat to all those buildings. But, this requires a much more complex design, investment and real estate pro forma, so it just isn't worth it in most cases. So, we pay to air condition the heck out of the data center and then to heat hundreds of other nearby buildings at the same time.
posted by meinvt at 7:46 AM on September 23, 2012 [1 favorite]


So from memory of some talk posts, Mefi runs on 3 or 4 servers. I guess total power consumption runs about 10 light bulbs (old skool ones, not CFL) on average.
posted by bystander at 7:47 AM on September 23, 2012


I don't think I have ever seen a computer advertisement where they focused on how little power they consumed versus the competition.

You have, but it's not phrased that way. In the world of laptops and smartphones, it's done by touting the battery life.
posted by hippybear at 7:48 AM on September 23, 2012 [8 favorites]


I'm kind of hoping we can get past an always-on culture. There's no way that it can be supported by renewable energy sources, and consequently, it cannot be sustained.

We don’t *currently* store renewable energy for a rainy day, but one day we will. We will never give up on always-on.
posted by davel at 7:57 AM on September 23, 2012 [1 favorite]


We will never give up on always-on.
It's 1am where I am, and I'm about to drop my house's load to c.5% of what it is at 6pm. Always-on doesn't mean always 100%.
posted by bystander at 8:10 AM on September 23, 2012


I'm also going with not impressed. The article mentions how much electricity data centers use (2%). This seems like a lot, but is it, compared to how much economic activity it supports? Moreover, a common figure used to judge energy use over time is per-capita electricity (or overall energy) use. That's not in the article either.

This isn't to say there aren't major efficiencies to gain in data center engineering and utilization, but saying "they use a lot of power!" without context of time or value is poor journalism. By that standard, I expect a piece tomorrow about how bad cars are that doesn't mention how bad public transit is in most places.
posted by R343L at 8:10 AM on September 23, 2012


If datacenters actually gave a shit about power consumption they wouldn't air condition the computers. You'd have to disallow datacenter customers from installing whatever piece of crap machines they want, but aircon is not necessary for high density. Google doesn't air condition their datacenters, for example.
posted by ryanrs at 8:11 AM on September 23, 2012


It seems like data centers would be the ideal environment to implement rooftop solar power, which we should be doing on more buildings anyway. Large buildings with wide roofs and a massive need for energy. Apple is already doing this as noted above, but I believe they have an enormous, acres-wide dedicated solar power plant. That's not exactly feasible in all places. Supplemental solar couldn't meet 100% of the requirements, but even a 10-20% reduction would be huge. Surely we could implement some more tax credits for this.
posted by T.D. Strange at 8:14 AM on September 23, 2012


aircon is not necessary for high density
The DCs my company sells are +/- 8 degrees on 21C. We aircon to hold 2kW per rack under 29C on a 30C day. I shudder to think what the temp on a core chip is if ambient is 30C - probably 100C+
If Google isn't using aircon, it's because they aren't using dense racking, so they are choosing higher real estate costs over energy use. (I support that choice, BTW)
posted by bystander at 8:18 AM on September 23, 2012


Unfortunately, solar on DCs is just a drop in the bucket. Solar energy runs about 1kW per m2, a single story DC burns about 2kW per m2. And solar PV is about 16% efficient at max. So at best you could cover 8% of load in the main daylight hours. Worth doing, but not a dramatic improvement.
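
Spelling out that arithmetic (same rough numbers as above, nothing more precise):

    # Peak insolation ~1 kW/m2, PV ~16% efficient, single-storey DC
    # load ~2 kW per m2 of floor area.
    insolation_kw_per_m2 = 1.0
    pv_efficiency = 0.16
    dc_load_kw_per_m2 = 2.0

    pv_output_kw_per_m2 = insolation_kw_per_m2 * pv_efficiency    # ~0.16 kW/m2
    fraction_covered = pv_output_kw_per_m2 / dc_load_kw_per_m2    # ~8%, and only around midday
    print(f"Rooftop PV covers ~{fraction_covered:.0%} of load at solar noon")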
posted by bystander at 8:21 AM on September 23, 2012 [2 favorites]


How can we judge what quantity of electricity used in data centers is "wasted?" If a website is actually more responsive because it has lots of idle servers standing by, isn't that responsiveness worth something?

In the IT field, and in the build-your-own-PC hobby, low power consumption has become a big feature. New CPUs are touted for their low idle power consumption, new power supplies sport efficiency badges, new low-speed disk drives are marketed as "green" versions.

It seems like data centers would be the ideal environment to implement rooftop solar power...

I've thought so too; there's no need to implement an expensive power-wasting storage system, since there is always a demand for all the power you could make right there. Google had a project like this on top of their headquarters. They used to have a site that showed how much juice the rooftop array had produced over the last week or so. I can't find it anymore; I don't know what happened.
posted by Western Infidels at 8:23 AM on September 23, 2012


Datacenters are hard.

One problem -- the software development mindset, which demands fast release and new features and rarely cares about efficiency. The answer to slow is "throw more hardware at it." Indeed, the only time I see efficiency come into play is when the developers say "We need more hardware" and you tell them "There is none. This is the biggest we can afford. There is no bigger."

There's a lot of work being done to handle varying loads. VMware's vSphere can, when load is low, move VMs off of a host and power it down, which drops almost all that load off (there's about 10 or so watts being burned by the management card that will restart the machine when it's needed.)
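
The same idea in miniature, as a toy sketch (this is a generic first-fit bin-packing illustration, not VMware's actual algorithm or API): pack the VM loads onto as few hosts as will hold them, and anything left empty is a candidate for power-down.

    # Toy consolidation: first-fit-decreasing packing of VM loads onto
    # hosts of fixed capacity; hosts left empty can be powered down
    # until load picks back up. Purely illustrative.
    def consolidate(vm_loads, host_capacity, n_hosts):
        hosts = [0.0] * n_hosts
        for load in sorted(vm_loads, reverse=True):
            for i, used in enumerate(hosts):       # first host with room wins
                if used + load <= host_capacity:
                    hosts[i] = used + load
                    break
            else:
                raise RuntimeError("not enough capacity for this VM")
        idle = [i for i, used in enumerate(hosts) if used == 0.0]
        return hosts, idle

    hosts, idle = consolidate([0.3, 0.1, 0.25, 0.05, 0.2], host_capacity=0.8, n_hosts=4)
    print("per-host load:", hosts, "-> power down hosts:", idle)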

IBM has been working on this in the very large realm. The BlueGene/Q series of supercomputers is built not just to maximize flops*, but flops per watt. Most of these clusters average about 2.1 GFLOPS per watt. And it's not like they're slow -- the #1 on the current TOP500 list is Sequoia, at LLNL, which is pulling 16.3 petaflops on 7.8 MW of power, driving 1.5 million CPUs and 1.6 petabytes of RAM. The current #2, K at the RIKEN Advanced Institute for Computational Science in Kobe, runs at 10.5 Pflops, but draws 12.6 MW for 700K cores and 1.4 PB of RAM. K was the first machine to top 10 Pflops.
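
Working the efficiency out from those quoted figures (just the arithmetic, no other data):

    # Flops per watt from the TOP500 numbers cited above (mid-2012).
    systems = {
        "Sequoia (LLNL)": (16.3e15, 7.8e6),    # flops, watts
        "K (RIKEN)":      (10.5e15, 12.6e6),
    }
    for name, (flops, watts) in systems.items():
        print(f"{name}: {flops / watts / 1e9:.2f} GFLOPS per watt")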

Apple has also, for a very different reason, been big on power efficiency. One, Apple is primarily a mobile device company now, with the vast majority of their installations relying on battery power. Two, for esthetic reasons, Apple did not want large, loud fans on their systems, and one way to avoid needing them is to simply reduce the power draw. Plus, on mobile devices, fans themselves take power -- power that isn't being used for other work.

Datacenters will continue to be power hogs. The trick, I think, is going to be increasing the work they do so we need fewer data centers. But, right now, there are a lot of market factors arguing against efficiency at the software level, which means that you need more hardware to keep up.

The biggest factor will be power costs. The reason western North Carolina has become a big data center hub has very little to do with Research Triangle Park. You can remote in from anywhere these days. No, the reason is simple -- electricity in that area is cheap.

Spending money on both heating and cooling adjacent spaces seems...kinda dumb to me.

Most of the big DCs are in their own buildings. There are a number of shared DC/Office buildings that attempt to use DC heat for climate control in the winter. The problem, of course, is what you save then, you lose in the summer.

Mefi runs on 3 or 4 servers. I guess total power consumption runs about 10 light bulbs (old skool ones, not CFL) on average.

I'd guess Mefi runs about 3KW -- several servers, and the disk. Don't underestimate the power draw of storage. By far, the warmest rack to stand behind in my DC is the one with the CX4-480 and some 240 disks spinning.

Storage systems are moving to 2.5" disks, which draw less power per spindle but may draw more per rack -- you can fit 15 3.5" disks in a 4U disk tray on an EMC Clariion, but you get 25 2.5" disks in 3U. So, in 12U, you get 45 3.5" disks, or 100 2.5" disks.
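
The rack-unit arithmetic behind that, for anyone counting along (tray sizes are the ones quoted above):

    # Disks per 12U for the two tray formats mentioned above:
    # 15 x 3.5" drives in 4U vs. 25 x 2.5" drives in 3U.
    u_budget = 12
    print('3.5" trays:', (u_budget // 4) * 15, "disks")   # 3 trays -> 45 disks
    print('2.5" trays:', (u_budget // 3) * 25, "disks")   # 4 trays -> 100 disks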

The other move we're starting to see is SSDs, which draw much less power. I'd love to be all SSD for a number of reasons, but right now, cost is the big factor. I'm moving to SSD for when I need lots of IOPS**, but I hope to eventually have most of my main tier on them, just for power reasons.

One of the reasons that virtualization has taken off so hard is power. If you need 100 servers, you have to power a minimum of 100 power supplies, 100 banks of DRAM, 100 network interfaces, etc. All of these need power all the time. With VMs, you can basically work fewer boxes harder and not have the parasitic load. Currently, I run about 200 VMs in production, and about 300 in test/dev. With 1U servers, that's 500 -- over 12 racks full to the gills. With our current hardware, we run all of that in 32U (not including network gear), on 16 2U servers. We're looking at Cisco UCS, which would fit all of that (and more, given the increased CPU speed) into 14U, including the UCS switches.

Dropping from 500 power supplies (or 1000 if you're redundant) to 32 (we are redundant) is a huge savings in power. Dropping from 500 banks of DRAM to 16, ditto. We work the CPUs in the hosts much harder than we would otherwise, but we're working over only 32 of them, not running 500 (or 1000, if you have dual cpus, which most servers do!) near idle.
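
A very rough sense of the parasitic-load savings (the per-box wattages here are illustrative guesses, not measurements from my racks):

    # Assumed figures: ~65 W for a near-idle 1U box, ~350 W for a
    # heavily loaded 2U virtualization host. Swap in your own numbers.
    IDLE_1U_WATTS = 65
    LOADED_2U_WATTS = 350

    physical = 500 * IDLE_1U_WATTS        # 500 mostly-idle standalone servers
    virtualized = 16 * LOADED_2U_WATTS    # 16 worked-hard VM hosts
    print(f"500 near-idle boxes:   ~{physical / 1000:.1f} kW")
    print(f"16 consolidated hosts: ~{virtualized / 1000:.1f} kW")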

Google doesn't air condition their datacenters, for example.

They, in fact, do. They do run theirs warmer than most -- they run the cold aisles at 80F -- and they are very careful to distribute the load evenly. They also don't dehumidify or reheat. They also use "free" cooling where they can -- in Finland, they pipe seawater into the heat exchangers; in Arizona, they use evaporative cooling extensively (with water recycling, of course). Of course, by running at 80F, there are many more days where they can use free cooling rather than conventional chillers.

The big reason most people have the DCs set to 60F or so is they don't truly know the actual heat loads, which do vary, and they're afraid of a spike in heat. Google has spent a great deal of effort knowing the exact heat profiles in the data centers, so they know that if they run at 80F, they're not going to see a sudden jump to 100+ and servers failing.

And that's the final trick. Google has built their systems to tolerate failure. If a node overheats, it shuts down, cools off, then restarts. They don't try to crash-proof the hardware, they make the software resilient to crashing, so Google can afford less reliable hardware.

So, we're back to it again. Fundamentally, the best place to save power in the DC? The software that's running on it. The more robust the software is, the less robust the hardware needs to be, and when you start stripping out things you formerly needed for redundancy, you save power.

Though better power supplies help *a whole bunch*. The worst source of pure loss, and thus pure heat, is converting 120/240VAC to the voltages needed in the server. You can't do that elsewhere, because of line losses of low voltage wiring, so you want to do that as efficiently as possible. Nothing will cost you money like a cheap power supply.
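
A quick illustration of what that conversion loss looks like (the efficiency figures are illustrative; the high end is roughly what a top-tier 80 Plus-rated supply claims at typical load):

    # Waste heat from AC-to-DC conversion at two plausible efficiencies.
    dc_load_watts = 400    # assumption: what the server itself draws

    for efficiency in (0.75, 0.94):
        wall_watts = dc_load_watts / efficiency
        waste = wall_watts - dc_load_watts
        print(f"{efficiency:.0%} efficient PSU: {wall_watts:.0f} W from the wall, "
              f"{waste:.0f} W straight to heat")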


* Flops = FLoating point Operations Per Second. You'll occasionally see it written as flop/s, that's FLoating point OPerations/second. Floating Point is a method of storing numbers that aren't integers, and flops has become a standard candle for measuring supercomputer work.

** IOPS= Input/output Operations Per Second. Storage has three key metrics. The first is commonly called bandwidth -- how much data you can move in a given amount of time. The second is IOPS, how many *different* IO operations can you perform in a given amount of time.

Most data we move is in small blocks. Sustained sequential reads and writes very rarely come into play (the biggest exception? Backups). Most operations are random, small, and a mix of reads and writes. Here, storage bandwidth is OK, but IOPS is far more important. Before, the only way to handle high-IOPS workloads was lots of spindles handling the data -- indeed, it wasn't uncommon to "short stroke" a drive. You'd stripe a number of big drives, but only use a fraction of the space on each drive and leave the rest blank. That way, the read/write heads on the disk only moved across a fraction of the disk. This reduced seek latency. Combined with higher spindle speed (15K RPM being the acme of the hard disk) reducing rotational latency, you'd get more IOPS per spindle and, thus, more IOPS total.
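
A rough model of why spindle count mattered so much (the latency figures are typical published ballparks, not measurements):

    # Per-spindle random-I/O estimate: one operation costs roughly an
    # average seek plus half a rotation.
    def spindle_iops(seek_ms, rpm):
        rotational_ms = (60_000 / rpm) / 2    # average half-rotation
        return 1000 / (seek_ms + rotational_ms)

    print(f"7.2K RPM drive: ~{spindle_iops(8.5, 7200):.0f} IOPS")
    print(f"15K RPM drive:  ~{spindle_iops(3.5, 15000):.0f} IOPS")
    # Short-stroking shrinks the effective seek distance, so the seek
    # term drops and per-spindle IOPS climbs, at the cost of capacity.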

With SSDs now coming into play -- which have no heads to move or disks to spin -- they're taking over in the high-IOPS workload space, and will eventually supplant hard disks in the main line completely. First high IO, then general IO, reducing HDDs to third-tier and backup storage. Eventually, I suspect, even in the backup realm, but for now, HDDs remain the king at the third storage metric: capacity, which is also called density. With 1.5 TB per platter common *now*, you can get a whole bunch of data onto them.
posted by eriko at 8:26 AM on September 23, 2012 [83 favorites]


I shudder to think what the temp on a core chip is if ambient is 30C

The CPUs run within a couple C of Intel's absolute max rating. Which is fine, of course.
posted by ryanrs at 8:28 AM on September 23, 2012


>> Google doesn't air condition their datacenters, for example.
> They, in fact, do

What I mean is they don't use refrigerant compressors.
posted by ryanrs at 8:29 AM on September 23, 2012


It was disingenuous of them to compare web data centers to a supercomputing cluster at a national lab. Of course the lab cluster will be running at full load.
posted by scose at 8:33 AM on September 23, 2012


While reading this, I was overcome by nostalgia for the California coverage of this a decade ago.

I was expecting more awareness of how much more efficient modern data centers are now, too, given how many years have been spent changing 100% duty cycle assumptions (i.e. the real reason to mention AWS), virtualization (aptly described above), direct DC power (avoiding less efficient AC/DC converters in each server), far better thermal engineering, passive cooling, etc.
posted by adamsc at 8:41 AM on September 23, 2012


This isn't something to shriek and panic about; even in the article's own words, this is only about 2% of the power consumed in the US, and considering just how dependent the economy has become on this stuff, that's really not so bad. Even if all the data centers instantly became perfectly efficient tomorrow, that would save maybe 1.5% on the total power bill, and then it would just start increasing again like normal.

And, while they talk about how much energy the paper industry uses (about 90% as much as data centers), they don't talk about the growth and/or decline of paper. Early IT stuff caused a huge increase in the amount of paper being used, because it made printing so easy, but I'm suspicious that many offices may be headed for a mostly- or all-electronic workflow, maybe without even meaning to. There are probably both offsets and additional costs in the advent of the Internet, and simply measuring the energy inputs of data centers is a pretty simplistic model.

There are many ways they can be made more power-efficient, but in general, more efficient is also less reliable. And when it costs you a million bucks a minute for downtime, you're not very worried about spending an extra 25 or 30 million bucks a year in additional power as an insurance policy.

As the technology keeps improving, you should see a steady reduction in waste. As a couple of posters upthread are talking about, virtualization is becoming much more prevalent, and it's easily possible to shrink 10 or more physical servers into just one. So the thing about 'nobody ever wants to unplug old servers' will become less and less true, because they'll get virtualized and put into low-load clusters, so they stay available, but share the hardware they're on with more active virtual machines.

Data centers are still very new things, overall. We've only really had BIG ones for, what, about fifteen years now? We're still in the 'wild expansion' phase, but before too much longer, the 'efficiency improvement' phase should become much more noticeable.
posted by Malor at 8:43 AM on September 23, 2012


Everyone was congratulating themselves for a job well done.. until there was an unexpected total power failure. The new diesel generator was set to automatically kick in, but it also failed completely.

My work was designated as one of the fail-over sites for business continuity in the late nineties, back when (almost) everyone was praying the world wouldn't end on New Year's Eve. I've watched over the last decade as a small hamlet of outbuildings has appeared outside my office window. I think we're now up to three failover generators (back-up to back-up to back-up) as well as two separate fuel storage buildings. It does seem like overkill, but turning on locomotive-sized engines coupled to medium-voltage power substations after they've been sitting for a few months doesn't always go as planned. Apparently they have had to fall back to number three during the routine fire-ups.
posted by bonehead at 8:51 AM on September 23, 2012 [1 favorite]


"Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants?"
That's hell lot of energy waste. When can the world expect a sustainable, eco-friendly digital warehouses?

"Even running electricity at full throttle has not been enough to satisfy the industry. In addition to generators, most large data centers contain banks of huge, spinning flywheels or thousands of lead-acid batteries — many of them similar to automobile batteries — to power the computers in case of a grid failure as brief as a few hundredths of a second, an interruption that could crash the servers."

This is horrific :-(

I lived under the illusion that the Internet was primarily responsible for a greener and cleaner world. But the reality is otherwise... An eye-opening article.
posted by molisk at 8:52 AM on September 23, 2012


What's wrong with flywheels and lead-acid batteries?
posted by ryanrs at 8:55 AM on September 23, 2012 [1 favorite]


What's wrong with flywheels and lead-acid batteries?
The fact that you have to take power off the grid to charge/spin them up and hold them there on the off chance the grid dies, losing a bunch of power to inefficiency in the process.
posted by Pink Fuzzy Bunny at 9:22 AM on September 23, 2012


Efficiency and reliability are opposites. An efficient system just makes do, a reliable one has to have spare capacity. You can have one or the other, but not both.
posted by bonehead at 9:27 AM on September 23, 2012 [11 favorites]


The NYT can take a bold step in practicing what it preaches by running its data center at 100% utilization.

We'll see how long that site stays up.

(Not to say there aren't issues to be solved, but overprovisioning is only sometimes the result of waste and other times the result of redundancy.)
posted by Noisy Pink Bubbles at 9:28 AM on September 23, 2012


Yeah, remember that lead-acid batteries are almost completely recyclable. Lead is terrible for the environment, but about 97% of lead-acid batteries in the US are successfully recycled.
posted by Malor at 9:29 AM on September 23, 2012


Perhaps one of the most irritating parts of 'datacenter articles' is the lack of understanding of the different types of datacenters and the power challenges they have. They just keep talking about chips and gigabytes and diesel and holy crap before you know it all those wacky nerds are just defending their servers with leatherman tools and sporks.

As an incredibly opinionated geek who has worked in the datacenter industry, let me rant about the delightful details between datacenter types. First up, we have what Facebook and Google and other big companies are (likely) doing:

1) YOUR OWN DAMN DATACENTER

A large company builds a large datacenter to their exact specifications and fills it with their stuff. Everything from the floor on up is their design, and their money directly paying for it. This is what we call AWESOME because you can control every single element of the whole datacenter - placement of CRAC units (large aircon units that blow air into the floor), overall airflow throughout the entire center, whether you want to run skinless servers (no cases on the server, changes airflow through a rack), etc etc. Basically, you know where everything is and are able to treat the datacenter holistically. You can even go for complete containment in little 'pods', or shipping containers - it's your datacenter! Yay!

2) Renting space from Someone Else

This is what a huge number of companies do. They rent space from a colocation facility, which is sort of like a KOA campground for servers. You rent a cabinet (or two, or three, or more), come in, plunk your servers in it, turn it on, hook up the Intertube and you are rocking and rolling as long as you pay the multi-thousand dollar bill that comes in the mail from the datacenter company each month. The datacenter company is responsible for providing power to your servers, cooling them, and ensuring connectivity to the internet and redundant network/power in the case of failure. Your responsibility is not to try to bring coffee into the datacenter like a goddamned idiot, writing that check, and not pissing all over other customers equipment.

The Someone Else situation has its own unique set of challenges, which I will approach as if I am a character called Depressed Datacenter Technician.

1) Customers vary in size, from a half cabinet to having their own special fenced off cage area. The small ones will hire a local technician to rack their gear, which will end up being backwards in the cabinet (there is a cool air intake aisle, the cold aisle, and an exhaust aisle, the hot aisle). There will be much handwaving and pointing and explanation of what 'hot' and 'cold' mean and eventually the salesperson who closed the deal will step in and tell DDT to let it slide just this once. So now you have some idiot with his servers exhausting into other customers intakes, and DDT didn't even get a commission.

2) If they're in a cage, the customer will rack their gear any goddamned way they want and you won't be able to do anything about it. Forget trying to do air containment for cooling efficiency, or even telling them that the reason one of their servers keeps dying is because they've got a jet of hot air coming out of their backwards-mounted firewalls directly into the intake on Perpetual Sickly Server. Cage customers write big checks, so shut the hell up.

Basically, you have limited control over what your customers do. And you CANNOT run your cold aisles at 80 degrees like Google, because the customers will have an absolute shitfit ("WHAT THE HELL AM I PAYING YOU FOR?? I HAVE HOT AIR AT MY OFFICE, WE NEED COLD AIR!!!") and any power savings you manage by turning down your AC will be offset by the motley crew of different customers servers panicking because of temperature and running their fans full blast in an attempt to cool themselves.

Oh, and about that 'motley crew' of customers servers, let's take a virtual walk and peek in the cabinet of your average mixed-use colocation datacenter:

Stop number One: The Amazing Commission Cabinet. This is an entire cabinet that is 90% empty, with a firewall mounted at the top and a single server. The salesperson got an amazing commission, the customer is paying a stupid high bill, and are locked into a minimum two year contract. Holy wasted space, batman. If you aren't doing hot/cold containment, this cabinet is just an easy way for all of that cold air to mix with the hot air on the other side and form shitty lukewarm air that won't cool anything.

Stop Two: Long-Time Customer, Possible Pornographer. He's got a huge data connection, but not much in the way of servers. They are all old beige box towers running virus-infested illegal copies of Windows 2000. So they're not even rack mounted; they sit on a shelf bolted into the cabinet, with their dusty 'PENTIUM 3' stickers silently observing the world. He pays his bill every month and occasionally asks for a reboot over the phone and a referral for a local geek he can hire part time.

Stop Three: Awkward Web Hoster. They weren't big enough for a cage, so they got one cabinet and have packed it full of slightly older 1U servers. They've got thirty-plus of these bad boys humming away, producing a wall of white noise and a profound amount of heat. They take up way more energy to cool than Possible Pornographer, but they pay the same amount for cabinet rent. Although Web Hoster DOES pay more for more power connections. But the offset.. hmm.. well, those numbers will have to be crunched by someone with a higher salary than DDT.

Stop Four: Not Moved In Yet. An empty cabinet. There are many of these as customers churn.

Stop Five: The power room. Yes, there are large banks of UPS batteries, an automatic transfer switch, and a diesel generator out back. The gen is tested once a month, the batteries are maintained by a contractor, and fingers crossed when the grid goes down everything will transfer over automatically.


-----------------

Anyways. I can go on forever about datacenters. If you're really intrigued by them, go to an Uptime Institute symposium and talk to the geeks there. They're pretty awesome and are willing to share. Or MeMail me and bring a pillow because this shit is seriously boring to most folks.

These feelings, I have them.
posted by Skrubly at 9:33 AM on September 23, 2012 [66 favorites]


scruss : I'm kind of hoping we can get past an always-on culture. There's no way that it can be supported by renewable energy sources, and consequently, it cannot be sustained.

Not sure what you mean by that... Yes, we need to move beyond using fossil fuels for everything, and we need to stop population growth in its tracks - But that has nothing to do with whether or not renewables can meet our energy needs.

The real limiting factor to supplying all our energy needs, today, boils down to "better batteries" (and to a lesser degree, distribution, but we know how to solve that, we just haven't had a good enough reason to front the expense of building a superconducting electric "pipeline" across the entire US yet). We could pave Death Valley with solar panels, but we have no* good way (yet!) to distribute that few-hours-centered-on-noon production out over the daily and weekly demand curve.

We don't need fewer generators. We need better batteries, and/or we need supercaps.


bystander : Solar energy runs about 1kW per m2, a single story DC burns about 2kW per m2. And solar PV is about 16% efficient at max. So at best you could cover 8% of load in the main daylight hours.

You don't need to have a 1:1 relationship between generating surface and consumption. You generate in places we can't conveniently use for other purposes (like Death Valley), and you use wherever. Yes, your 20,000ft2 DC may well require somewhere around 200,000ft2 of PVs (and more than that, since the sun doesn't shine 24/7). Not an issue.


ryanrs : What's wrong with flywheels and lead-acid batteries?

Flywheels of the size and speed needed for any real standby time have a "colorful" mode of failure. When you store 100 MWh, no matter how you store it, you have stored 100 MWh. Catastrophic failure of your storage medium will necessarily release that 100 MWh, very very quickly.
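
For a sense of scale, taking that 100 MWh figure at face value (1 MWh = 3.6 GJ; a ton of TNT is about 4.184 GJ):

    # What releasing 100 MWh "very very quickly" would amount to.
    stored_mwh = 100
    joules = stored_mwh * 3.6e9
    print(f"{stored_mwh} MWh = {joules:.2e} J, roughly "
          f"{joules / 4.184e9:.0f} tons of TNT equivalent")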

As for lead-acid, most places do use them, in the form of plain 'ol UPSs. They have a fairly low energy density, though, such that you need half your space taken up by batteries just to give you a few minutes of battery-sourced uptime. And, they wear out over time, requiring replacement every year or so.



* Yes, I've read about literally dozens of grid-scale storage methods, such as compressed air caves, reversible two-reservoir hydro turbines, etc. We don't seem very good at actually putting any of these into production, though.
posted by pla at 9:34 AM on September 23, 2012


For reasons I cannot possibly fathom, moderators removed a remark I made noting Apple's new data center being run on solar power and fuel cells. This is the largest solar power installation in the US.
posted by charlie don't surf at 9:40 AM on September 23, 2012 [1 favorite]


The flywheel goes in a vault and the batteries get recycled. These things are definitely not the worst parts of the system.
posted by ryanrs at 9:41 AM on September 23, 2012 [1 favorite]


Isn't there some way to capture the heat generated and put it to some other use? Heating water for residential use? Powering steam turbines that (re-)generate electricity? Cooking giant s'mores?
posted by RandlePatrickMcMurphy at 9:55 AM on September 23, 2012 [1 favorite]


For reasons I cannot possibly fathom

Likely had entirely to do with how you phrased your statement.
posted by hippybear at 9:56 AM on September 23, 2012


Isn't there some way to capture the heat generated and put it to some other use? Heating water for residential use? Powering steam turbines that (re-)generate electricity? Cooking giant s'mores?

It's not as easy as it would seem. It's hard to concentrate heat. You somehow need to take 85 degree F air and concentrate it up to hundreds of degrees to boil water to run the turbines. By the time you do all those energy conversions, you probably aren't going to extract any useful energy out of that system.

There are efficiencies to be gained if you are building something from scratch or replacing a system that is broken, but the cost of retrofitting (just to save energy) will almost always outweigh the cost savings.

The best way to do it as far as I know is geothermal. But at the scale of a datacenter, drilling all those holes is going to cost a lot of money.
posted by gjc at 10:31 AM on September 23, 2012


"Concentrating" heat requires energy. It's what refrigerators do. The waste heat from datacenters is not hot enough to do much of anything useful.
posted by ryanrs at 10:34 AM on September 23, 2012


It's because you were accusing MeFi of being full of Apple haters, charlie, when that just isn't true.
posted by Malor at 10:51 AM on September 23, 2012


Those photos of Apple's solar farm look pretty cool. It is new enough that it doesn't show up on the google map satellite view of 6028 Startown Rd, Maiden, NC. I would be curious to know what the payout timeline on the project looks like. I know of a man who put solar panels on his house which will pay for themselves after thirty years, including a large tax subsidy. Solar is tough to justify if you don't have as much money as Apple does.
posted by bukvich at 10:55 AM on September 23, 2012


Ctrl-F on that NYT link and on this thread reveals no instances in either of "coal", which is where the power comes from when it's not being run on short-term diesel generators. To get all excited about the diesel generators, which ideally get very little use if any at all, while ignoring the fact that the power that runs the datacenters under normal conditions isn't coming from sunshine and unicorn farts, suggests that a sincere, genuine concern for the environment isn't the animating impulse here.
posted by Pope Guilty at 11:00 AM on September 23, 2012 [3 favorites]


Malor: "Data centers are still very new things, overall. We've only really had BIG ones for, what, about fifteen years now? We're still in the 'wild expansion' phase, but before too much longer, the 'efficiency improvement' phase should become much more noticeable."

No, we've had really big ones for a long time. There are several large DCs in Tulsa that date back to the 70s. Designed for mainframes, of course, and partially buried to help survive the impending nuclear holocaust. There has been a lot of change since then. The biggest change, IMO, is the advent of excellent network connectivity that allows one to let a whole DC go down thanks to geographic redundancy. We're not fully there yet, but soon enough we will be.

T'would be nice to have 10-20 minutes of compressed air and another 10-20 minutes of battery and no generator at all. That gives plenty of time for VMs to be moved offsite... ;)
posted by wierdo at 11:00 AM on September 23, 2012


It seems pretty stupid to rely on air cooling in the first place- we don't do that with any car more recent than an old-school VW bug. Seems as though plumbing in liquid coolant (maybe a nonconducting mineral oil) is likely to be a more economical choice in just a few years.
posted by jenkinsEar at 11:14 AM on September 23, 2012


It seems pretty stupid to rely on air cooling in the first place- we don't do that with any car more recent than an old-school VW bug. Seems as though plumbing in liquid coolant (maybe a nonconducting mineral oil) is likely to be a more economical choice in just a few years.

Heat is heat. All liquid cooling does is allow the equipment to be more densely packed into its packaging. Once it leaves the machine and enters the building's systems, it still needs to be shed one way or another.
posted by gjc at 11:20 AM on September 23, 2012 [1 favorite]


Sure, but there's a ton of space outside the building that'll be cheap to radiate from, instead of relying on expensive aircon to aircool everything on the inside.
posted by jenkinsEar at 11:23 AM on September 23, 2012


Liquid cooled. The thing with the current big computers is they are a bunch of really cheap pieces of crap computers wired together.
posted by bukvich at 11:40 AM on September 23, 2012


Likely had entirely to do with how you phrased your statement.

Surely the moderators don't object to stating the obvious.
posted by charlie don't surf at 12:10 PM on September 23, 2012


The best would be if such digital warehouses were stationed in the open and not in a confined room... say on top of a mountain, supplied with wind energy... Why don't they think about it?
posted by molisk at 12:13 PM on September 23, 2012


Data centers are still very new things, overall. We've only really had BIG ones for, what, about fifteen years now? We're still in the 'wild expansion' phase, but before too much longer, the 'efficiency improvement' phase should become much more noticeable.

There is a story by Stanislaw Lem about this. A stellar civilization keeps running out of room for storage of records. Soon they are forced to deposit archives on more and more distant planets and star systems, vastly increasing the cost of storage. Eventually the entire civilization's economy is overwhelmed by the cost of seeking new locations for their archives, and basically all the matter in an immense galactic radius is converted to data storage. But still they keep sending exploration starships to seek new places to turn into archives. Then one day, they make first contact with another civilization. It's a starship coming from outside their galactic empire, on the same mission.
posted by charlie don't surf at 12:17 PM on September 23, 2012 [4 favorites]


"The best would be such digital warehouses are stationed in open and not in a confined room......say on top of a mountain and are supplied with wind energy....Why don't they think about it?"

It's not easy to find a large staff of sherpas who are skilled at datacenter maintenance. DCs are not static, installed-and-forgotten lumps of technology. When you're using thousands or millions of components, particularly components with parts that move at very high speed, a hardware failure rate of 2% - 3% means that a large number of components must be replaced every day.

That said, major companies like Google and Facebook DO attempt to build data centers on sites that provide some sort of geographic advantage. Google's first ground-up custom data center was built in The Dalles, Oregon due to the proximity of (comparatively) cheap and efficient hydroelectric power.
posted by drklahn at 1:03 PM on September 23, 2012


Google's first ground-up custom data center was built in The Dalles, Oregon due to the proximity of (comparatively) cheap and efficient hydroelectric power.

I think they're now located on the WA side of the Columbia River. At least, a couple of years ago when I last drove the highway which snakes along that side of the river (opposite of I-84, which is in OR), there was this random turnoff road with a large colorful Google sign close to a dam, which I assumed was a data center.
posted by hippybear at 1:20 PM on September 23, 2012


Uptime % = downtime per year

90% = 876 hours
99% = ~88 hours
99.9% = ~8.8 hours
99.99% = ~53 minutes
99.999% = ~5 minutes
99.9999% = ~32 seconds
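
Or, as a little converter, since the figures span hours, minutes, and seconds:

    # Downtime allowed per year for a given uptime percentage.
    def downtime_seconds_per_year(uptime_pct):
        return 365 * 24 * 3600 * (1 - uptime_pct / 100)

    for pct in (90, 99, 99.9, 99.99, 99.999, 99.9999):
        s = downtime_seconds_per_year(pct)
        if s >= 3600:
            print(f"{pct}% -> {s / 3600:.1f} hours/year")
        elif s >= 60:
            print(f"{pct}% -> {s / 60:.1f} minutes/year")
        else:
            print(f"{pct}% -> {s:.0f} seconds/year")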

BTW I found the article informative; I didn't know about the 6%. But isn't that how it's always been? The economic model is to oversell capacity. You have a pipe or computer that can serve X, but you are selling to X*1000 customers because not all of them are using it at the same time, and that difference is where you make money. The phone company did this for generations, as people discovered on Mothers Day with "all circuits busy". You can have 10 lines that service 100 paying customers.
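
A toy version of that oversubscription math (the 5% "off-hook" probability is an illustrative assumption, as is the higher Mothers Day figure):

    # If each of 100 customers wants a line independently 5% of the
    # time, how often do more than 10 of them want one at once?
    from math import comb

    def p_blocked(customers=100, lines=10, p_active=0.05):
        return sum(comb(customers, k) * p_active**k * (1 - p_active)**(customers - k)
                   for k in range(lines + 1, customers + 1))

    print(f"All-circuits-busy probability: {p_blocked():.3%}")
    # Mothers Day is the case where p_active jumps and this blows up:
    print(f"...if everyone is calling 15% of the time: {p_blocked(p_active=0.15):.1%}")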

I think the 6% number can be improved with more sophisticated power management. Computers that turn on and off as needed, like the Prius that turns off at a stop light. This would require all the servers be part of a cloud (built yourself) and not dedicated to a single application. SSDs. Takes more work upfront to design and build but easier to manage and maintain.
posted by stbalbach at 1:32 PM on September 23, 2012


It continues to boggle my mind that electricity is literally falling from the sky for free in the form of sunlight, and almost no one takes advantage of it.

There are several cases above where someone says "solar" and someone else says "it's not enough to run the entire data center." But it doesn't have to be enough to run the whole thing. If solar can take care of 25% of the power demands of a data center, that's an instant 25% reduction in cost and pollution.

Seriously, why wouldn't you? Once the array has paid itself off (which I believe is usually a matter of 5-10 years) it is literally free money.
posted by ErikaB at 1:47 PM on September 23, 2012 [3 favorites]


This thread has totally fucked the Favourites Per Watt figures.
posted by fullerine at 1:52 PM on September 23, 2012


As a former commission datacenter space salesman, allow me to take this moment to apologize to Skrubly on behalf of all of us. Although, in my own defense, I was the only sales guy the datacenter staff trusted enough to lead the tours myself. My favorite part was dragging the potential client outside, no matter how bad the weather, to show them the diesel backup generators. For all the client knew they had no fuel and had not run in months or years, but we had to show the damn things on every tour.
posted by COD at 2:35 PM on September 23, 2012 [1 favorite]


> This is the largest solar power installation in the US.

Not even close. Apple's installation is big, at 20-40 MW. Compare that to First Solar's Agua Caliente Solar Project, which is already at 250 MW and may make it to the planned 290 MW (if First Solar doesn't crash).
posted by scruss at 3:09 PM on September 23, 2012


Some of the more highly modded posts in Slashdot's discussion thread for this article insinuate that the NYTimes article is an alarmist hack job.

First, some argue that the computers an average user uses to actually make (for example) a Google query use more energy than the datacenters which return the results.

Second, it's not as if the NYTimes is a neutral observer on the question of datacenter usage, seeing as their primary business model has been upended by search.

Third, spare capacity is a feature, not a bug, even in the realm of digital media. For an "on the other hand," look at print distribution: not every subscriber is going to read every word of every article, let alone actively use that information (which generates its own waste) to contribute to reducing the carbon footprint of the media system that delivers it.

Fourth, so the NYTimes has published an article that proves it takes energy to keep datacenters running and . . . these datacenters are not always at full load . . . and the energy used by these datacenters is less than the energy used by users to query those datacenters . . . and much of this activity is economically useful . . . so what is it that I'm supposed to start wringing my not-quite-green-enough liberal hands about?
posted by mistersquid at 3:26 PM on September 23, 2012 [3 favorites]


Datacenters use a heap of energy (fairly efficiently), but the resulting heat energy has to be moved out of the building - inefficiently - by aircon, which means even more heat is added to the aircon output.

Here in water-hungry CA, we should kill two birds with one stone by using all that waste heat the aircon is pumping out to desalinate seawater.

It's common in industrial complexes to save $$$ and fuel by capturing waste heat and reusing it for another process (or selling it via pipe to the company next door.)
posted by anonymisc at 3:42 PM on September 23, 2012


I know of a man who put solar panels on his house which will pay for themselves after thirty years, including a large tax subsidy. Solar is tough to justify if you don't have as much money as the Apple does.

Today the price is down to ~7 years (or less if you live in an ideal place), and that's not counting that the future price of energy is more likely to rise than fall. Seems like a hell of a good investment to me, but yes, all the money needs to be upfront. That's a significant hurdle for non-government entities.
posted by anonymisc at 3:50 PM on September 23, 2012 [1 favorite]


I hear Mathowie powers Metafilter by burning kittens!
posted by blue_beetle at 4:18 PM on September 23, 2012


I hear Mathowie powers Metafilter by burning kittens!

I have no idea how these people got their cats wedged into their scanners, or why, but they burn amazingly cleanly.
posted by Pope Guilty at 5:24 PM on September 23, 2012 [2 favorites]


Interesting article, nothing really groundbreaking in it, but how they looked at this industry as a whole was interesting because I don't think I would have done the same - it is simply too varied an industry. I've been building/designing racks and custom customer cages for a decade or so and have worked in or currently have customers just about everywhere in the world, in all datacenter types: private, public, government, etc. I also have worked for one of the largest "cloud" computing vendors worldwide, as well as one of the largest datacenter providers, so my opinion may tend more towards what you see as a response from folks in the industry. I will try not to be dismissive; I think it's great that people understand the complexities of this stuff and how it is evolving.

So, the power thing: it's always tough. About 10-15 years ago, the datacenters being built were aimed at a few key metrics - basically, power per tile is what it boils down to. This is how much power the facility can cool per rack. Some time ago the key figure just about everyone had was 2KW/tile - some facilities have figures as high as 10KW. It's all funny fuzzy math, really: you submit a rack plan, the facilities engineering folks try to interpret it, you promise not to deviate from the plan, and contracts are written to ensure that the datacenter can make you move stuff around if it violates X, Y, or Z things you promised to do or not do... or you pay them more money. I've seen nearly 50KW per rack, with 24 racks in a two-row cluster and then 30 feet around them free of everything else, because of poor planning and very large checks being written. I've also seen a customer have literally one rack per server, with 250 racks, because of the same. I've seen datacenters literally catch on fire and customers stack racks of residential window air conditioners in open doorways to cool a room - so the service they are getting paid for doesn't go down. These customers and providers generally HATE these situations, all the way from the top to the bottom. However, 95% of this will be gone within 10 years due to virtualization of some flavor or another. In-house facilities will likely take longer, but they are likely going away as well, simply because of cost and scaling - unless of course there are legal hurdles, but the providers have been building customer government facilities for the last 5-10 years to deal with those hurdles.

You've heard it described as the cloud and the battle for the cloud, etc., but really it's physical machine virtualization, and it's totally fucking awesome even outside the marketing fervor. Think of virtualization as a technology shift as fundamental as the jump from land lines to current smart phones, or perhaps telegraph to direct-dial phones. It's a tremendous shift for things like physical compute consolidation, but the reach of it is going to end up being so much more than that. There are only hints of what it will be like in 10 years, but it is really accelerating now, and the core of it is going to be the more efficient use of computing power available to us now that sits idle. eriko's comment is excellent, and provides a good snapshot of what it is like in a traditional datacenter transitioning to virtualization as a practice and methodology.

Back to the article though ...

There is such a range of what people call datacenters - I have had customers call their wiring closets datacenters and then I have facilities that are disaster rally points for communities because they are built as survival structures and provide fuel and power via their back up power facilities. There are datacenters that are literally incredible fire hazards because they are converted facilities that somehow got permitted for ridiculous power and aren't inspected rigorously and then there are facilities engineered to use their waste heat for things like melting ice on runways and heating buildings.



A couple of comments from the thread:

My office happens to be in the basement of a building. Being underground, it is cold, so they heat it. But in the server room at the end of the hall it gets too hot, so they air condition.

This is for a couple of reasons - broadly speaking, the machines need a different humidity and are less tolerant of humans shedding things like skin cells than other humans are. It's also really challenging to engineer a consistent cooling plant onto an existing structure - using the earth as a heat sink works great, but you really need a big area underground to do it properly - miles of pipes, really spread out. The heat dump into the office space could work, but it would need to be brought to a better humidity, and then you'd need the fire suppression systems modified to ensure that in the event of a fire the ventilation was scrubbed for toxics.

If datacenters actually gave a shit about power consumption they wouldn't air condition the computers. You'd have to disallow datacenter customers from installing whatever piece of crap machines they want, but aircon is not necessary for high density. Google doesn't air condition their datacenters, for example.

Yes, airflow is necessary for high density. Google spent about a decade building a custom compute environment and failed a LOT on their way to where they are now. Google also doesn't run a compute platform suited to the majority of business needs (although someday they might, but I doubt it). Datacenter providers as an industry CARE A LOT about power consumption. I know that there is a perception that because Google or Apple or Amazon doesn't have a specific concern about something - because they spent 5 years engineering a specific solution to the crushing growth problems they've had - it applies everywhere else, but it doesn't. Google had a problem: they needed to cheaply and efficiently run a lot of parallel tasks, and doing it the traditional way would have eaten too much capital, so they built their own practice - before they built their own practice they ran in rack space. Amazon faced crushing peak-period loads for their site AND for their partner sites (Target, etc.); they needed their cloud infrastructure to handle that, and then they needed a way to offset the costs of having that infrastructure spun up but sitting idle - so they started selling compute. Apple is about as close to traditional as possible; they just don't want to be subject to the whims of an evolving energy market.

The best would be if such digital warehouses were stationed in the open and not in a confined room... say on top of a mountain, supplied with wind energy... Why don't they think about it?


Ambient cooling is a big deal (and possible sometimes) - but weather is a fickle monster, and the environmental impact of huge heat loads being dispersed can lead to problems with permitting, though it's done. A lot of people don't live on mountains, and mountains tend to be hazardous places for reliable back-up power and easy access to spare parts; and if you put a datacenter on a mountain, you're not just dealing with the local facility, you're dealing with the pipe to the local facility, which means that connectivity is going to be dramatically more expensive.

There are several cases above where someone says "solar" and someone else says "it's not enough to run the entire data center." But it doesn't have to be enough to run the whole thing. If solar can take care of 25% of the power demands of a data center, that's an instant 25% reduction in cost and pollution.

Maybe I can explain a bit of this myself? The reason people in this particular industry don't rush to embrace solar or wind to power their facilities is that they are not in the energy generation business. What they are doing is complex enough, and they can buy power from their utility more cheaply than they can install a solar or wind facility. It's not so much that solar, wind, or geothermal tech isn't advanced enough or couldn't meet the base needs of a full facility. It's more that they already get good pricing from the utility because of their size, so the payoff on that investment takes longer than it would for a standard residential install, and they already have enough complexity in managing the facility that the added liability of being responsible for a percentage of your own onsite generation is too much. That's my best reasoning, having observed the industry and worked in it.

So the article itself was kind of a half-assed attempt to understand an industry that is very broad and complex, but I don't blame them; it's a big industry, and it's really hard to understand how we've ended up where we are and where we're going unless you've lived it, and sadly that takes more than a year. My personal belief is that the compute-efficiency problem described in the article is in the middle of a big shift; maybe it will be a different problem down the road, but it's a byproduct of the technological age more than a problem with one segment of the industry.
posted by iamabot at 5:48 PM on September 23, 2012 [7 favorites]


Ctrl-F on that NYT link and on this thread reveals no instances of "coal" in either, which is where the power comes from when it's not being run on short-term diesel generators.

House Passes Extra-Terrible Pro-Coal Bill Before Heading Home
posted by homunculus at 5:54 PM on September 23, 2012


>that has nothing to do with whether or not renewables can meet our energy needs.

Yes, it does. Renewable energy is intermittent; there is no source that isn't, for a given place. Even run-of-river hydro can't be run full-on all the time in dry seasons. And we're already using supercaps and high-efficiency batteries in the industry for regulation. Supercaps handle the regulation from cycle level for phase and power factor control up to a few seconds for grid voltage support. Beyond that, they're too leaky.

Liquid metal superbatteries (like GE's Durathon) are in commercial use. Unfortunately, they're still expensive — having a couple of hours of storage might typically double the capital cost of the wind projects I work on.
posted by scruss at 6:16 PM on September 23, 2012


Diego Doval wrote an almost equal length rebuttal: blog.diegodoval.com/2012/09/23/a-lot-of-lead-bullets-a-response-to-the-new-york-times-article-on-data-center-efficiency/
posted by adamsc at 6:42 PM on September 23, 2012


anonymisc : Today the price is down to ~7 years (or less if you live in an ideal place)

Better than that. If you can handle doing the installation yourself, you can, today, buy 1.45 kW worth of panels and a 1.5 kW sustained / 3 kW peak grid-tie inverter for just under $2000 ($330 per pair of 145 W panels, x5, and $340 for the inverter) from Amazon. And it all qualifies for free SuperSaver shipping (not to mention a 30% renewable energy tax credit)!

This spring I set up a similar but smaller version of that (at almost double the total cost per watt) as a toy/demo to see how well solar would work in my area (fairly far north, with frequent overcast weather). I have yet to get hard numbers for winter, but based on the rest of the year, what I've seen other people report as the typical % drop in winter, and taking the tax credit into consideration, I expect to break even in a mere 4.5 years. And I definitely do not live in an ideal area for PV. :)

So, if someone bought that $2k / 1.5 kW system today, it would break even in under three years (presuming electric costs stay the same or go up, of course).

I feel pretty optimistic about the future of renewable energy in the US. As I mentioned upthread, the only real problem (not for home grid-tie use, but overall) involves buffering generation to match demand over a 24 hour period.

scruss : Yes, it does. Renewable energy is intermittent; there is no source that isn't, for a given place.

You have conflated "inconvenient" with "impossible". Even using tech available right now, a typical household could realistically stick 50-100 kWh worth of lead-acid (at around $100/kWh) in the basement and call it good. As a side note, you can already buy "appliances" that act as whole-house, off-peak-charging UPSs to game the utilities' pricing structure.

The logistics get harder for high-consumption customers like data centers, but the same tricks work just fine. We just need to get better at storing 12-18 hours' worth of production.
posted by pla at 7:47 PM on September 23, 2012 [2 favorites]
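
A rough sketch of the payback arithmetic above, for anyone who wants to plug in their own numbers. The system cost and 30% tax credit are the ones pla mentions; the capacity factor and electricity price are illustrative assumptions, which is exactly why payback estimates vary so widely:

```python
# Rough solar payback estimate for a grid-tie PV install. The ~$2000 system
# cost, 1.45 kW array, and 30% tax credit come from the comment above; the
# capacity factor and electricity price are illustrative assumptions and
# vary a lot by region.

def payback_years(system_cost, array_kw, capacity_factor=0.15,
                  price_per_kwh=0.12, tax_credit=0.30):
    """Simple payback period in years (ignores panel degradation and rate rises)."""
    net_cost = system_cost * (1 - tax_credit)        # out-of-pocket after credit
    annual_kwh = array_kw * capacity_factor * 8760   # 8760 hours per year
    annual_savings = annual_kwh * price_per_kwh      # value of offset grid power
    return net_cost / annual_savings

# With these placeholder inputs the $2000 / 1.45 kW system pays back in
# roughly six years; sunnier sites or pricier grid power shorten that a lot.
print(f"{payback_years(2000, 1.45):.1f} years")
```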


ErikaB writes "It continues to boggle my mind that electricity is literally falling from the sky for free in the form of sunlight, and almost no one takes advantage of it. "

Well, you need a fairly fancy bucket to catch that energy. It's a mistake to think that just because there are no input costs to solar, the energy is free.

anonymisc writes "Here in water-hungry CA, we should kill two birds with one stone by using all that waste heat the aircon are pumping out to desalinate the seawater. "

The waste heat Air Conditioners pump out is very low grade; certainly not enough to desalinate sea water.

iamabot writes " The reason people in this particular industry don't rush to embrace solar or wind as a way to power their facilities is because they are not in the energy generation business. "

To iterate from another industry: the mine I work at uses a boggling amount of electricity, something like 11 million dollars' worth annually. We also happen to be situated on a chunk of land that has decent wind; not awesome, but commercially viable. And we can grid-tie, so we wouldn't have to handle storage. Yet we're still only thinking about maybe installing wind turbines some time in the future, and the reluctance is mostly for the reasons iamabot gives: we're not really in the energy generation business. Even though we are much better off than a data centre for servicing power generation, in that we have industrial electricians, millwrights, welders, heavy equipment, etc. on site and experience building large industrial plant, it's still not a slam dunk for us. Really we should just scrape some capital together and contract it out, but even that would require years of planning. And there would be the horribly exhausting public and environmental review process that would distract from the core business (heavy industry generally likes to keep a low profile). It's much easier to just drop another line from Hydro as we need more power.

What's really needed is for solar or wind companies to develop a package to go after these big industrial clients with turnkey solutions. Someone to go to the board and say "Here's our proposal for a wind plant that covers 100% of your power needs on an annual basis.1 Here is a list of projects we've completed on time and on budget, so you know we aren't blowing smoke up your ass on the numbers. Give us a call when you want to proceed." Maybe that market is being serviced, but the dearth of co-development out there would lead me to believe it's underserviced at best.

1It's an interesting situation: because we can grid-tie our generation, the wind doesn't have to blow 100% of the time; on a long enough average we can have net zero purchased electricity. If the wind only blows 50% of the time, just install generation at 200% of average load and you'll sell the same amount as you buy over the year. As a bonus, when the mine ceases production you're left with an asset that keeps generating revenue, which will cushion the local and business economic blow of the mine closing.

posted by Mitheral at 8:13 PM on September 23, 2012
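
Mitheral's footnote boils down to a capacity-factor calculation; a minimal sketch, with purely illustrative numbers:

```python
# Net-zero sizing per Mitheral's footnote: with grid tie, you only need annual
# generation to equal annual consumption, so nameplate capacity is average
# load divided by the capacity factor. Numbers below are purely illustrative.

def nameplate_for_net_zero(average_load_mw, capacity_factor):
    """Nameplate MW needed so annual wind generation matches annual load."""
    return average_load_mw / capacity_factor

# A site whose turbines average 50% of nameplate and a plant drawing an
# average of 10 MW would need 20 MW of nameplate, i.e. 200% of average load:
print(nameplate_for_net_zero(10, 0.5))  # -> 20.0
```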


ErikaB writes "It continues to boggle my mind that electricity is literally falling from the sky for free in the form of sunlight, and almost no one takes advantage of it. "

As the person who said that a DC can't power itself on solar, I agree.
I'm a huge fan of solar (come look at my roof), but I posted in response to the idea that panels on a DC would make a huge difference, because they won't. But there is no reason not to have panels everywhere it makes sense.
posted by bystander at 10:18 PM on September 23, 2012 [1 favorite]


The heat dump in to the office space could work but it would need to be brought to a better humidity

Incorporating an air-to-air heat exchanger in the overall HVAC design ought to save a heap of energy without needing to compromise at all on humidity and particulate control.

Another approach that might work well is using the same heat pump to cool the data center and heat the office space, instead of giving the office space its own separate heater. For an underground installation I would expect the data center to always be producing more heat than the office space can use, so you'd split the hot-side refrigerant circuit and divert a thermostatically controlled amount of it from the outside radiator to one inside the office space. It would work basically the same way as the heater in your car, where some of the engine coolant gets diverted from the main radiator to the heater core inside the passenger compartment.
posted by flabdablet at 1:51 AM on September 24, 2012
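
For illustration, the thermostatically controlled diversion flabdablet describes is just a control loop over a diverter valve; a toy sketch, where the setpoint, gain, and the sensor/actuator hooks are all hypothetical:

```python
# Toy proportional controller for the heat-diversion idea above: send more of
# the heat pump's hot-side output into the office as it cools below setpoint,
# and dump the remainder to the outside radiator. read_office_temp_c() and
# set_diverter_fraction() are hypothetical hooks into whatever building
# controls actually exist; the setpoint and gain are assumptions.

import time

SETPOINT_C = 21.0  # desired office temperature
GAIN = 0.25        # diverted fraction per degree C below setpoint

def control_step(read_office_temp_c, set_diverter_fraction):
    error = SETPOINT_C - read_office_temp_c()      # positive when office is cold
    fraction = max(0.0, min(1.0, GAIN * error))    # clamp to [0, 1]
    set_diverter_fraction(fraction)                # 0 = all heat outside, 1 = all to office
    return fraction

def run(read_office_temp_c, set_diverter_fraction, period_s=30):
    while True:
        control_step(read_office_temp_c, set_diverter_fraction)
        time.sleep(period_s)
```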


> You have conflated "inconvenient" with "impossible".

Batteries are not an energy source. And the only thing that lead-acid batteries have going for them is the price: the energy density, the charge retention, and the sheer bloody inconvenience of the heavy bastards make them a pain to work with. Your insurer might also have words about having gallons of acid in containers which also evolve hydrogen lurking in your basement.
posted by scruss at 4:47 AM on September 24, 2012


scruss, do you have any experience with vanadium redox flow batteries?
posted by flabdablet at 4:51 AM on September 24, 2012


We looked at them, but the supplier (VRB Systems) crashed and burned before we could get any further. The costs were too high; an (admittedly high, possibly intended to scare) estimate by the transmission operator here had them economic at 80¢/kWh (2007, CAD). When power is bumping along at ~10¢, these are not even on the horizon for utility scale.

I re-ran the estimate I gave above, and I was a bit off. Using the BATEA, adding four hours of storage to a 100 MW wind farm would add about 45% to your capital cost.
posted by scruss at 6:58 AM on September 24, 2012
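
scruss's 45% figure is easy to sanity-check with back-of-envelope numbers; the unit costs in this sketch are assumptions chosen only to show how an adder of that size can arise:

```python
# Back-of-envelope check on how much bulk storage adds to a wind farm's
# capital cost. The $/kWh storage and $/kW wind figures are placeholders,
# not scruss's actual inputs.

def storage_cost_fraction(farm_mw, storage_hours,
                          storage_cost_per_kwh, wind_capex_per_kw):
    storage_kwh = farm_mw * 1000 * storage_hours            # energy capacity needed
    storage_cost = storage_kwh * storage_cost_per_kwh
    farm_cost = farm_mw * 1000 * wind_capex_per_kw
    return storage_cost / farm_cost

# 100 MW farm, 4 hours of storage, $250/kWh batteries, $2200/kW wind:
print(f"{storage_cost_fraction(100, 4, 250, 2200):.0%}")    # -> 45%
```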


The waste heat Air Conditioners pump out is very low grade; certainly not enough to desalinate sea water.

Pretty sure I could get it to do significant work. People tend to think that desalination requires enough heat to boil all of the water, campfire style and at sea-level pressure, but there are better ways. (And you're probably thinking of an air-con not built to provide useful heat.)
posted by anonymisc at 11:45 AM on September 24, 2012


So we are back to the same argument: datacenter providers are not in the desalination business. It is a matter of focus and controlling risk. Datacenters are a reasonably thin-margin business as it is, and it is only going to get thinner.
posted by iamabot at 12:14 PM on September 24, 2012


From my perspective it's not a criticism of datacenters or the people running them; I agree with you. In the future, we in developed countries are going to have to live on fewer natural resources in many key areas, and if we want to maintain or increase our standard of living, that means doing more with less, everywhere. I don't see the downstream use of waste heat as a responsibility datacenter operators should focus on or shoulder; it's its own resource. We (society) should be pushing for ways to smooth those paths sooner rather than waiting until we're out of options, so that if someone (or a local government) does want to do desalination (or runway de-icing, or whatever), the complications of accessing waste heat aren't terrifyingly daunting compared to just buying more fuel and betting the house that fuel will remain affordable.
posted by anonymisc at 12:41 PM on September 24, 2012 [1 favorite]


scruss : Batteries are not an energy source.

Okay, seriously, are you playing obtuse here?

You objected to renewables because they don't count as "always on" due to their "intermittent" nature.

I pointed out that we already have (albeit at much too high a cost and with low convenience) the technology to buffer their production to match our demand curve.

And then you complain that my chosen (proof of concept but far from ideal) buffering medium doesn't count as an energy source???

I don't quite know how else to take that other than as playing games... though I honestly hope you thought you were having that conversation with (at least) two different people...
posted by pla at 5:57 PM on September 24, 2012


FWIW, a large-scale buffering medium is already used in some areas: running the hydro dams backwards, so to speak, by pumping water back up into the lake when excess energy is available.
I'm not sure, but I think that in some cases the pumping stations are also used for irrigation when not refilling the lake.

It has its drawbacks (such as not working without a hydro dam plus a lake), but it's one more brick in the wall.
posted by anonymisc at 6:24 PM on September 24, 2012
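
A quick sketch of the pumped-storage arithmetic, with illustrative reservoir, head, and efficiency figures rather than data from any real plant:

```python
# Pumped-storage arithmetic for "running the dam backwards": you spend
# m*g*h / pump_efficiency to lift the water and get back m*g*h * turbine
# efficiency when it runs down again. All numbers are illustrative.

G = 9.81  # m/s^2

def pumping_input_mwh(volume_m3, head_m, pump_eff=0.85):
    """Electricity consumed to pump this volume of water up the given head."""
    joules = (volume_m3 * 1000.0) * G * head_m / pump_eff   # 1 m^3 of water ~ 1000 kg
    return joules / 3.6e9                                   # J -> MWh

def generation_output_mwh(volume_m3, head_m, turbine_eff=0.90):
    """Electricity recovered when that water flows back through the turbines."""
    joules = (volume_m3 * 1000.0) * G * head_m * turbine_eff
    return joules / 3.6e9

# 1 million m^3 over a 300 m head: roughly 960 MWh in, 735 MWh back out,
# i.e. a round-trip efficiency of about 77% -- a big but leaky bucket.
print(f"{pumping_input_mwh(1e6, 300):.0f} MWh in, "
      f"{generation_output_mwh(1e6, 300):.0f} MWh out")
```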


anonymisc writes "(And you're probably thinking of an air-con not built to provide useful heat.)"

A/Cs can't be built to provide useful heat without degrading the efficiency of the A/C. Generally speaking, you want to limit condenser temperatures to the minimum that will support efficient cooling; high condenser temps mean your compressor is working harder than it needs to, wasting energy. This is one of the reasons split A/C units are more efficient than window units: the larger condenser area means lower head pressure. It's also why central A/C systems are so much larger physically than equivalent models from 30 years ago.
posted by Mitheral at 6:05 PM on September 25, 2012
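
Mitheral's point about condenser temperature falls straight out of the ideal (Carnot) coefficient of performance; a small sketch using the textbook formula rather than any real unit's performance data:

```python
# Ideal (Carnot) cooling COP = T_cold / (T_hot - T_cold), with temperatures in
# kelvin. Real machines reach only a fraction of this, but the trend is the
# same: a hotter condenser means more compressor work per unit of cooling.

def carnot_cooling_cop(evaporator_c, condenser_c):
    t_cold = evaporator_c + 273.15
    t_hot = condenser_c + 273.15
    return t_cold / (t_hot - t_cold)

# Holding the evaporator at 10 C and pushing the condenser from 40 C up to
# 70 C (hot enough to be marginally useful heat) halves the ideal COP:
print(f"{carnot_cooling_cop(10, 40):.1f}")  # ~9.4
print(f"{carnot_cooling_cop(10, 70):.1f}")  # ~4.7
```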


http://wwwaste.fr/
posted by carsonb at 11:28 PM on September 25, 2012


> You objected to renewables ...

I wasn't objecting to them. I've just been working with them long enough to know that they are locally intermittent, always. Sure, you can temporarily store some of the peak output, but storage is no more a source than a small bucket is. And batteries are leaky buckets, too.
posted by scruss at 6:06 PM on September 27, 2012


Good rebuttal to this article at The Verge, among others.
posted by PercussivePaul at 11:59 AM on September 29, 2012


Wired: Data Center Servers Suck, but Nobody Knows How Much
On its surface, the issue is simple. Inside the massive data centers that drive today’s businesses, technical staffers have a tendency to just throw extra servers at a computing problem. They hope that by piling on the processors, they can keep things from grinding to a halt — and not get fired. But they don’t think much about how efficient those servers are.

The industry talks a lot about the power efficiency of data centers as a whole — i.e. how much of the data center’s total power is used for computing — but it doesn’t look as closely at the efficiency of the servers inside these computing facilities — how much of the time they’re actually doing work. And it turns out that getting a fix on this is pretty hard.
posted by the man of twists and turns at 6:35 AM on October 8, 2012
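
To put rough numbers on the Wired point, here's a sketch of how low utilization turns into wasted energy; the idle-power fraction, utilization figures, and PUE are assumptions for illustration only:

```python
# Rough estimate of the energy an underused server fleet burns. The idle power
# fraction, utilization figures, and PUE are illustrative assumptions, not
# measurements from the Wired piece.

def fleet_energy_kwh(servers, peak_watts, utilization,
                     idle_fraction=0.5, pue=1.8, hours=8760):
    """Annual facility energy, assuming server power scales linearly from
    idle_fraction * peak at 0% utilization up to peak at 100%."""
    per_server_w = peak_watts * (idle_fraction + (1 - idle_fraction) * utilization)
    return servers * per_server_w * pue * hours / 1000.0

# 1000 lightly loaded servers vs. the same work consolidated onto fewer,
# busier machines:
sprawl = fleet_energy_kwh(1000, 400, 0.08)
consolidated = fleet_energy_kwh(150, 400, 0.60)
print(f"{sprawl:,.0f} kWh vs {consolidated:,.0f} kWh per year")
```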



For the linked article:
Much of this is part and parcel to the big guys in the industry stunting the needed evolution so they can continue to sell legacy equipment that doesn’t address today’s problems.
I don't think this is a vendor-driven problem. If people were willing to wait even a few minutes to retrieve data they hadn't touched in a year, the amount of disk we've got spinning would go way down.
posted by Mitheral at 4:45 PM on October 22, 2012



