The world's fastest computers
October 2, 2011 9:58 PM

The top 500 supercomputers in the world, in rank order, as of last June. The top entry on the list uses 548,000 SPARC64 cores and burns 10 megawatts. No word on what the air conditioning plant looks like.
posted by Chocolate Pickle (55 comments total) 7 users marked this as a favorite
 
And yet none of them are being put to good use mining Bitcoins.
posted by mccarty.tim at 10:09 PM on October 2, 2011 [7 favorites]


Google has to be #1 by a long way, yeah?
posted by empath at 10:10 PM on October 2, 2011


Pics or it didn't happen.
posted by tumid dahlia at 10:18 PM on October 2, 2011 [1 favorite]


Yeah, but can it play Crysis?
posted by Mister Fabulous at 10:20 PM on October 2, 2011 [1 favorite]


Curious how that list would change if all the cooperative computers (like for World Community Grid, which gets my spare cycles), and all the botnets, were included.

Natch, the latter would have to be an estimate.
posted by IAmBroom at 10:34 PM on October 2, 2011


Google has to be #1 by a long way, yeah?

Google may well have the most raw horsepower, but it isn't much good for scientific computing. The interconnects between processors are much too slow. The data centers that drive the internet have very different performance goals from supercomputers.
posted by qxntpqbbbqxl at 10:38 PM on October 2, 2011


In twenty years my wristwatch will run at umpteen yottaflops and consume one point twenty-one jiggawatts, making these seem like so many antiquated Altair 8800s.
posted by iotic at 10:43 PM on October 2, 2011


Interesting how #1 is also one of the most power-efficient among the top ten, in terms of teraflops per kilowatt.

I bet there's some customary delineation between tightly coupled machines like these and loosely coupled machines like WCG or Google's farms— lots of problems aren't really amenable to being solved by loosely-coupled machines.
posted by hattifattener at 10:45 PM on October 2, 2011


> lots of problems aren't really amenable to being solved by loosely-coupled machines.

Exactly. Depending on the calculation, latency is king.

If the largest usable piece of data that can be handed to a processor is only 64K at a time, getting the next free processor working on it is of utmost importance, because by the time it finishes, there are eight more calculations waiting on that process's results.
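To put hypothetical numbers on it (these are illustrative assumptions I'm making up for the sketch, not specs of any machine on the list), a quick Python back-of-envelope:

# Rough model of shipping one 64 KB work unit to a node versus computing on it.
# Every number here is an assumption for illustration, not a measurement.
def time_per_unit(latency_s, bandwidth_Bps, flops_per_byte, flops_per_s, unit_bytes=64 * 1024):
    transfer = latency_s + unit_bytes / bandwidth_Bps    # time to get the data there
    compute = unit_bytes * flops_per_byte / flops_per_s  # time to chew on it
    return transfer, compute

# Tightly coupled machine: assume ~1.5 us MPI latency, ~5 GB/s per link.
tight_xfer, compute = time_per_unit(1.5e-6, 5e9, 100, 1e11)
# Loosely coupled grid over the internet: assume ~50 ms latency, ~10 MB/s.
loose_xfer, _ = time_per_unit(50e-3, 1e7, 100, 1e11)

print(f"tight: transfer {tight_xfer * 1e6:8.1f} us, compute {compute * 1e6:8.1f} us")
print(f"loose: transfer {loose_xfer * 1e6:8.1f} us, compute {compute * 1e6:8.1f} us")

On the loose link the transfer time swamps the compute time by orders of magnitude, which is the whole point.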
posted by mrzarquon at 11:01 PM on October 2, 2011


In twenty years my wristwatch will run at umpteen yottaflops and consume one point twenty-one jiggawatts, making these seem like so many antiquated Altair 8800s.

True. We're retiring a bunch of UltraSPARC III machines at work that would struggle to match the Xperia Mini Pro I got last week.
posted by rodgerd at 11:13 PM on October 2, 2011


Found this K Computer (the top entry) explorer. Each rack is water-cooled and has a whole server for a system disk.

I bet it still has latency issues on Flight Simulator.
posted by hypersloth at 11:15 PM on October 2, 2011


I looked at this list and immediately wondered what happened to that all-Mac G5 supercomputer array Virginia Tech put together...

Well, back in 2003 "Big Mac" was named the third fastest in the world... at 10.28 teraflops. This list shows #500 (an IBM BladeCenter at an unnamed Chinese engineering firm) running at 40.19 teraflops.

Progress is progress, but Moore's Law can be a bitch sometimes.
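The arithmetic, spelled out with the two numbers above:

import math

big_mac_2003 = 10.28     # teraflops, "Big Mac" at #3 in 2003
number_500_2011 = 40.19  # teraflops, the #500 entry on the June 2011 list

ratio = number_500_2011 / big_mac_2003
print(f"{ratio:.1f}x, about {math.log2(ratio):.1f} doublings in eight years")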
posted by m@f at 11:17 PM on October 2, 2011 [3 favorites]


I do wonder how much some of these must have cost. I'm sure this is a case where "If you have to ask the price, you can't afford it" but are we talking greater or less than $100 million for machines like the top tier?

And I bet the top one could crack 56-bit DES in about ten seconds.
posted by Chocolate Pickle at 11:34 PM on October 2, 2011


I do wonder how much some of these must have cost. I'm sure this is a case where "If you have to ask the price, you can't afford it" but are we talking greater or less than $100 million for machines like the top tier?

And I bet the top one could crack 56-bit DES in about ten seconds.


As a ballpark figure: a supercomputer of the previous generation* (Huygens) cost 70 million euro. Other sources speak of 30 million. I bet IBM gave them a bit of a discount on the bulk order.

*) Previous as in: installed in 2003, but still providing my colleagues with useful CPU time.
posted by swordfishtrombones at 11:43 PM on October 2, 2011


Oops, installed in 2007. Not that old.
posted by swordfishtrombones at 11:45 PM on October 2, 2011


Sounds like RIKEN's K Computer cost them about $1.2 billion.

It might take more than ten seconds— it can do roughly 2^56 floating-point operations in ten seconds, but it probably can't test one key per FLOP. That's more of an integer and bit-shuffling problem, and not what these guys are optimized for.
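The back-of-envelope version, using the peak figure from K's Top500 entry:

keyspace = 2 ** 56          # number of DES keys
peak_flops = 8_773.63e12    # K computer peak, flop/s, from its Top500 entry

print(f"2^56 is about {keyspace:.2e} operations")
print(f"at {peak_flops:.2e} flop/s that's about {keyspace / peak_flops:.1f} seconds")
# ~8 seconds, and only if testing one key cost a single flop, which it doesn't.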
posted by hattifattener at 11:46 PM on October 2, 2011


Interesting how #1 is also one of the most power-efficient among the top ten, in terms of teraflops per kilowatt.

Seems to me that the most power-efficient computer is simply going to be the most recently built one. Heat per transistor goes down as the fabrication process shrinks, so the supercomputer with the most recent chips naturally gets the lowest thermal output per transistor.
posted by -harlequin- at 12:39 AM on October 3, 2011 [1 favorite]


(It probably won't be a perfect correlation, since I assume there are other minor factors in play (cores per motherboard?), but in the bigger picture I would expect efficiency to track age very closely.)
posted by -harlequin- at 12:44 AM on October 3, 2011


The atomic energy people sure play a lot of games.
posted by pracowity at 1:38 AM on October 3, 2011


If the computers used at the National Security Agency were allowed to be listed, I suspect this list would need revision.
posted by Postroad at 1:40 AM on October 3, 2011 [2 favorites]


I'm just disappointed that once again I didn't make the list. I've got about 86 × 10^9 neurons with about 10^15 synaptic connections. I do incredibly intricate simulations just to get out of bed in the morning, but nobody respects old-school computers anymore.
posted by twoleftfeet at 2:27 AM on October 3, 2011 [6 favorites]


And yet none of them are being put to good use mining Bitcoins.

The largest supercomputer is doing 8,773.63 TFlops, or roughly 8.8 petaflops. According to this, the Bitcoin network is currently doing about 147.74 petaflops, so it's about 16 times as powerful as the largest supercomputer. I'm not sure how they calculate the number of flops, though; they may just be going on the total number of operations needed to compute a hash.
"If you have to ask the price, you can't afford it" but are we talking greater or less than $100 million for machines like the top tier?
If you're doing Bitcoin mining you can put together a rig that can do 8 TFlops for about $1,200. Multiply that by a thousand and that's just $1.2 million for 8 petaflops. That's for the cards and the motherboards; you're going to need an interconnect, though.

I would also bet the non-GPU systems cost considerably more than that per flop, though.
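Back-of-envelope, so you can see where those numbers come from (the Bitcoin figure is whatever hash-to-flops conversion that site assumes, and the $1,200/8 TFlops rig is my own rough estimate):

bitcoin_pflops = 147.74
k_pflops = 8773.63 / 1000          # K computer peak, TFlops -> PFlops
print(f"network vs. K: {bitcoin_pflops / k_pflops:.1f}x")

rig_tflops, rig_cost = 8, 1_200    # the hypothetical GPU mining rig above
n_rigs = 1000
print(f"{n_rigs} rigs = {n_rigs * rig_tflops / 1000:.0f} PFlops for ${n_rigs * rig_cost:,}")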
posted by delmoi at 2:40 AM on October 3, 2011


Am I reading this right? Is the #1 super computer really 4 times faster than 2nd place?
posted by sodium lights the horizon at 3:05 AM on October 3, 2011


10 Megawatts? Great Scott!

That's about half the peak output of a large nuclear power plant. That's some really big iron, even if it is really efficient per FLOP.
posted by loquacious at 3:34 AM on October 3, 2011


Aw, dammit... this just rubs in the fact that I can't get a 64-bit version of Mint running on my Phenom 1100T.
posted by Ahab at 3:41 AM on October 3, 2011 [1 favorite]


10 Megawatts of power is what the University I work for uses at any given time. That's freakin insane.
posted by deezil at 4:19 AM on October 3, 2011


uhh loquacious, you may want to revise your numbers. 10MW is the output of a small steam turbine. We get 2-3 times as much energy from the heat recovery system on a smelter. For reference, the smallest reactor at Fukushima is 460MW, with the biggest at 1.1GW.
/derail
posted by defcom1 at 4:20 AM on October 3, 2011


Speed is a funny thing, and surprisingly difficult to compare among computers. These rankings use a measure of "floating point operations per second", which is rapidly becoming as obsolete as MIPS (millions of instructions per second), as silicon becomes much more specialized to accommodate performance bottlenecks for a particular application.

Even then, raw FLOPS doesn't give you the complete story: to do useful work you need to move data to and from memory and spawn and retire threads, and that becomes harder and harder with massively parallel systems. You're going to run face-first into Amdahl's law at some point, and eventually latency becomes too large a drag on the entire system to see any gain from tossing on a few hundred more CPUs.
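Amdahl's law fits in a couple of lines, if you've never seen it (the 5% serial fraction below is just an example figure, not anything measured):

def amdahl_speedup(serial_fraction, n_cores):
    # Speedup is capped by the part of the job that can't be parallelized.
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_cores)

for n in (16, 256, 4096, 65536, 548352):
    print(f"{n:>7} cores -> {amdahl_speedup(0.05, n):5.1f}x")
# With 5% serial work the ceiling is 20x, no matter how many hundred thousand cores you add.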

If latency isn't an issue, it's cheaper to put the application on the grid, and put the grid in the cloud. If latency is a problem, too many cores and interconnects drag on performance.

To solve this, more and more supercomputers are specialized, with processors designed for very specific tasks - IBM's Blue Gene has cores optimized for the calculations required to simulate protein folding. In others, GPUs are heavily relied upon instead of general-purpose CPUs.

In any event, comparing flops is a reasonable rule of thumb for stacking your computing power up against other supercomputers, but it may not give an accurate picture of how speedy a machine is at the task it was designed for. The fastest computers in the next few years may never show up on this list, but will be able to do work a general-purpose MPP system just can't.
posted by Slap*Happy at 4:28 AM on October 3, 2011 [3 favorites]


The company that I work for has contributed major sub-components for quite a few systems on that list, including two in the top ten. That's really one of the main reasons that I stay working there and put up with the craziness of a start-up. Knowing that I'm contributing my tiny bit to the bleedingest bleeding edge of computing adds something to the job satisfaction.
posted by octothorpe at 4:45 AM on October 3, 2011


...burns 10 megawatts. No word on what the air conditioning plant looks like.

No AC needed. All 10 MW turn directly from electricity to information with no waste heat.
posted by DU at 4:51 AM on October 3, 2011


10 megawatts? That's hotter than the sun!
posted by samsara at 5:07 AM on October 3, 2011 [1 favorite]


I'd really like to see some sort of Massive Computer Comparison that did its best to convert the performance of as many computers as possible -- Apple II, Cray-1, etc. up through current machines -- to the performance of some standard machine. Like, a scatterplot of performance over time measured in terms of Pentium 90s, or switch to Atari 2600s, etc.
posted by ROU_Xenophobe at 5:11 AM on October 3, 2011


This is the Kingdom of Linux.
posted by CautionToTheWind at 5:50 AM on October 3, 2011


Yea, Linux totally dominates the HPC market. If you look at the OS list, more than 90% of them are running some form of Linux. Periodically, MS tries to push Windows HPC and the industry points and laughs at them and goes back to running RedHat or SUSE.
posted by octothorpe at 6:06 AM on October 3, 2011


I think it's pretty interesting that the #2 is listed as running X5670 2.93GHz 6C, NVIDIA GPU. I follow the distributed.net project, and their GPU clients are far and away faster than their CPU clients now. I don't pretend to understand why.
posted by Devils Rancher at 6:17 AM on October 3, 2011


The atomic energy people sure play a lot of games.

Wouldn't they prefer a nice game of chess?
posted by Kadin2048 at 6:22 AM on October 3, 2011 [3 favorites]


If Folding@home was on the list, it would be right up there at the top.
posted by Western Infidels at 7:04 AM on October 3, 2011


In twenty years my wristwatch will run at umpteen yottaflops and consume one point twenty-one jiggawatts phemtowatts, making these seem like so many antiquated Altair 8800s.

FTFY.
posted by IAmBroom at 7:05 AM on October 3, 2011


...burns 10 megawatts. No word on what the air conditioning plant looks like.

No AC needed. All 10 MW turn directly from electricity to information with no waste heat.


Can't tell if that's a geeky piece of sarcasm, or serious naivety, DU.
posted by IAmBroom at 7:07 AM on October 3, 2011


that list is inaccurate. the largest supercomputer in the world currently is conficker or a similar botnet. seriously, what's more powerful than 10,000+ home computers?
posted by krautland at 7:10 AM on October 3, 2011


The top entry on the list uses 548,000 SPARC64 cores and burns 10 megawatts

Yes. One of the biggest problems in computing today is heat -- the big supercomputers magnify it, but even on the notebook level, it is a real problem. When you hear about computers saving power, the real driver is waste heat -- yes, it's nice to not spend as much on electricity, but compared to the cooling costs, it's a bonus.

One of the reasons Apple's PCs cost more is the extra engineering effort they put in to minimize fan usage -- fans are noisy and fail.

In supercomputing, IBM has the Blue Gene project, which is an effort to attack the flops-per-watt issue. They attack it by trading per-processor speed for sheer numbers, or in simpler terms, using more processors that are slower and draw less power. The Blue Gene/L used 700MHz PowerPC 440 CPUs, the Blue Gene/P used 850MHz PowerPC 450s, and next year's Blue Gene/Q is using a PowerPC A2 running at 1.4GHz and consuming about 5W per core, and it looks like it'll offer over 2 GFLOPS/watt.

The first installation of Blue Gene/Q is the Sequoia computer at LLNL: 98,304 compute nodes, 1.6M cores and 1.6PB (yes, petabytes) of memory, with a 3,000 sq. ft. footprint in 96 racks, but only a 6MW draw from roughly three times as many cores as K, the current Top500 champ, which takes up 673 cabinets. In terms of power, Sequoia will run nearly 2,100 GFLOP/kW, compared to K's 824 GFLOP/kW.

And, yes, there's a list for this, the Green500 list, which is simply the list of supercomputers, ordered by total FLOPS divided by power consumed in watts.
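As a quick sanity check of the metric, using only numbers already in this thread (peak flops and the round 10MW figure, so it only lands in the same ballpark as the official number):

k_peak_gflops = 8_773.63e3   # 8,773.63 TFlops peak, expressed in GFlops
k_power_kw = 10_000          # the ~10 MW figure from the post

print(f"K: ~{k_peak_gflops / k_power_kw:.0f} GFLOPS per kW")
# ~877 with peak flops and rounded power; the sustained LINPACK number and
# measured power give the 824 GFLOP/kW quoted above.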
posted by eriko at 7:12 AM on October 3, 2011 [1 favorite]


hypersloth: Found this K Computer (the top entry) explorer. Each rack is water-cooled and has a whole server for a system disk.
Wow. The network controllers are water-cooled. Good grief.
posted by Western Infidels at 7:36 AM on October 3, 2011


In twenty years my wristwatch will run at umpteen yottaflops and consume one point twenty-one jiggawatts, making these seem like so many antiquated Altair 8800s.
no, in twenty years we'll all be fighting over water and it'll be apocalyptic, all the weak unworthy people will be punished and the world will have changed to suit my overall lack of optimism, deep-seated anger and untreated depression

--signed, The Internet
posted by This, of course, alludes to you at 7:53 AM on October 3, 2011 [1 favorite]


Weird:

Today's Wall Street Journal has a full-page ad on A9 touting the SPARC Supercluster vs. the IBM P795:

"SPARC Supercluster runs Oracle and Java twice as fast as IBM's Fastest Computer

SSC-T4-4 - $1.2M
IBM P795- $4.5M*

*Building planets is expensive."

First off, I have no idea what that "building planets is expensive" line is supposed to mean, and secondly, who the hell uses that stuff? I'm guessing a bank or brokerage firm that needs to keep a billion numbers churning 24/7? I thought the NSA was still using Crays.

Anyway, jebus. That's some computing power right there.
posted by timsteil at 8:08 AM on October 3, 2011


timsteil> *Building planets is expensive."

First off, I have no idea what that "building planets is expensive" line is supposed to mean,


I'm taking that as an inside-joke reference to Douglas Adams' Hitchhiker's series, wherein the Earth is nothing more than a huge and powerful computer.

and secondly, who the hell uses that stuff? I'm guessing a bank or brokerage firm that needs to keep a billion numbers churning 24/7? I thought the NSA was still using Crays.

Scientists. Not sure why you think the desire for more computing power peaked out sometime in the past, but demand has continued unabated.
posted by IAmBroom at 8:53 AM on October 3, 2011


1. Japan
2. China
3. United States


USA! USA! USA!
posted by goethean at 8:55 AM on October 3, 2011


You know or not...

United States: 255 systems (51.0%)
Japan: 26 systems (5.2%)
China: 61 systems (12.2%)
posted by bytewrite at 9:55 AM on October 3, 2011


Something something "five richest kings of Europe."
posted by Kitty Stardust at 11:25 AM on October 3, 2011 [1 favorite]


About the power, I'm kind of surprised. I have one machine that should theoretically be able to do 8 TFlops at peak, and it takes about 700 watts. Scaling that up linearly to 8 petaflops you get 700 kW, or a little less than 1 megawatt. It seems like these places are using more than 14x that amount per flop.
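Spelled out (the 8 TFlops / 700 W figure is my own rough estimate for that one box, nothing official):

rig_tflops, rig_watts = 8, 700
target_pflops = 8

boxes = target_pflops * 1000 / rig_tflops
scaled_kw = boxes * rig_watts / 1000
print(f"{boxes:.0f} boxes -> {scaled_kw:.0f} kW, vs. roughly 10,000 kW for the real machine")
# ~700 kW vs. ~10 MW: a factor of ~14, before counting interconnect, memory and cooling.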

I wonder why. Is it air conditioning?
posted by delmoi at 11:31 AM on October 3, 2011


The TSUBAME 2.0 is the most power-efficient of the top 10. Quite an amazing feat. Very cool green tech.
posted by nutate at 11:47 AM on October 3, 2011


I am a computational scientist and I staff one of the top 50 machines in the world, which was in the top 20 when it debuted. The Fujitsu machine made a big splash when it landed on the Top500 list in June (the list is updated twice a year: once at the International Supercomputing Conference, held in Hamburg in June, and once at Supercomputing, which is held every November, this year in Seattle). And yes, it is common for the top supercomputer to outpace the next-fastest by factors of 2-3 or more. We expect the installation of /Q at LLNL next year to be running at over 10 petaflop/s.

For all the people pointing out that there are larger distributed clusters or botnets, you are missing the point. These machines are running the LINPACK benchmark, which requires reasonably efficient coupling and coordination between the nodes. You cannot simply run LINPACK over your home or botnet network and expect good performance. The word for one of these machines is supercomputer, not supercomputers, because they are designed to work on *one* problem at a time (though they are frequently dynamically subdivided and support multiple jobs running on fractions of the machine).
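If you've never looked at what the benchmark actually does, here is a single-node toy version in Python (numpy's dense solve standing in for HPL; the real benchmark distributes the matrix across every node, which is exactly where the coupling matters):

import time
import numpy as np

n = 4000
A = np.random.rand(n, n)
b = np.random.rand(n)

t0 = time.time()
x = np.linalg.solve(A, b)               # LU factorization plus triangular solves
elapsed = time.time() - t0

flops = (2 / 3) * n ** 3 + 2 * n ** 2   # standard HPL operation count
print(f"n = {n}: about {flops / elapsed / 1e9:.1f} GFLOPS on this one box")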

The comments about energy efficiency coming from newer generations of hardware are partially right; however, significant effort goes into designing CPUs that have lower power consumption than the current general-purpose x86_64 chips. For example, IBM uses embedded CPU cores, the PPC 440 in the BlueGene/L machines and the PPC 450 in BlueGene/P. Embedded CPU cores are simpler, cheaper to design, and usually more power-efficient, but lack such niceties as out-of-order execution and high clock speeds. To make the PPC 440 more useful computationally, they designed a special floating-point coprocessor (called the Double Hummer originally, but they had to change the name due to trademark issues, I think).

It's very weird to be in a scientific field that is so publicity-driven. On one hand, we get tons of funding just to proclaim silly achievements like "fastest supercomputer in the Midwest", and there are very nice funding opportunities from the NSF and DOE to work in computational science. On the other hand, it is annoying to see how much bad hardware and bad science comes through, simply because it panders to the right vendor or supercomputing site.
posted by onalark at 1:18 PM on October 3, 2011 [7 favorites]


loquacious gets their power output numbers from SimCity 2000. Also note that nuclear power plants cost more than $15,000.
posted by modernserf at 1:28 PM on October 3, 2011


also.
posted by modernserf at 1:28 PM on October 3, 2011


delmoi, a fair amount of power goes into the network and memory, which explains your power budget mismatch. And where the heck are you getting that 8 teraflop number from? A Tesla GPU, which is one of the most power-efficient accelerators, does 0.5 teraflops in double precision (the Top500 numbers are all double precision). You almost never see more than two of these strapped to a workstation, for a total of a teraflop.
posted by onalark at 2:18 PM on October 3, 2011


Western Infidels writes "The network controllers are water-cooled. Good grief."

Once you've invested in the infrastructure, why not use it for everything?
posted by Mitheral at 5:12 PM on October 3, 2011

