Towards the exascale
May 31, 2010 3:03 PM   Subscribe

From the BBC, A graphical treemap of the top 500 supercomputers in the world, arranged by country, speed, OS, application, processor and manufacturer. posted by memebake (50 comments total) 13 users marked this as a favorite
 
It goes without saying that the National Security Agency was very helpful in supplying data on what they had on hand.
posted by Postroad at 3:04 PM on May 31, 2010


Interesting that, among industrial applications, telecommunications is represented by only a half-dozen systems.
posted by ardgedee at 3:11 PM on May 31, 2010


That's pretty cool, but I wish there were more ways to manipulate the data. Maybe we need a more super computer?
posted by Hoenikker at 3:13 PM on May 31, 2010


Wow, imagine a Beowulf cluster of these!
posted by klarck at 3:14 PM on May 31, 2010 [8 favorites]


Wow, "By OS" is nearly a useless tab. I didn't realize Linux was that dominant.
posted by Malor at 3:14 PM on May 31, 2010 [2 favorites]


It kinda goes without saying that this is only a list of the known supercomputers; none of the classified government hardware is listed. As the article says, "The spooks have got some pretty big machines".
posted by memebake at 3:15 PM on May 31, 2010


My buddy and I figured out that for about $6000-$10000, you could get a Tesla 1U rack that rivals the computational power of the fastest supercomputer circa 1997 (when measuring only TFLOPS). That's a drop of about four orders of magnitude in cost in just 13 years.
posted by spiderskull at 3:20 PM on May 31, 2010 [4 favorites]


If you want better breakdowns, Top500 has their own sublist generator.

I am mad curious as to the ownership of #206, #269, and #390-- all pretty identical Linux-running HP clusters, gig ethernet interconnect, nothing too radical, but all pop up under "Entertainment" in the United States. ILM, Dreamworks, Disney? Who knows.
posted by fairytale of los angeles at 3:22 PM on May 31, 2010 [2 favorites]


Then again, WETA Digital has their two-- same setup as the first three I mentioned, Harpertown HP platforms with GigE-- on the list at #282 and #413 under "Media." Damn taxonomies that overlap... now this is going to bother me.
posted by fairytale of los angeles at 3:28 PM on May 31, 2010 [1 favorite]


I suppose that as datacenters get bigger, the definition of what a supercomputer is starts to get a bit blurred. I presume that some of the Google datacenters could give some of these machines a run for their money, but because they are not officially single 'computers' they don't get on the list.
posted by memebake at 3:30 PM on May 31, 2010


I am mad curious as to the ownership of #206, #269, and #390-- all pretty identical Linux-running HP clusters, gig ethernet interconnect, nothing too radical, but all pop up under "Entertainment" in the United States. ILM, Dreamworks, Disney? Who knows.

Likely that one of them is the US World of Warcraft cluster.
posted by ten pounds of inedita at 3:31 PM on May 31, 2010


It's not just classified computers which aren't on the list -

It is also a voluntary list and therefore does not include all machines, such as those at the Oxford Supercomputing Centre and many classified machines owned by governments.


No idea what that means in terms of what's not on the list. But what I want to know is when we can hit the singularity.
posted by WalterMitty at 3:34 PM on May 31, 2010


I am mad curious as to the ownership of #206, #269, and #390-- all pretty identical Linux-running HP clusters, gig ethernet interconnect, nothing too radical, but all pop up under "Entertainment" in the United States. ILM, Dreamworks, Disney? Who knows.

I'm pretty sure one of them is the main render farm at Pixar. I used to live near their campus in Emeryville, CA, and have talked on a few occasions to some of the IT guys who keep it running. (The happy-hour watering hole of choice for my office also attracted a pretty good crowd from Pixar.) I don't remember specs off the top of my head, but I remember being impressed enough to think that their cluster's got to be on the top500 list.

The Lucas render farm is also on the top500, IIRC.
posted by deadmessenger at 3:36 PM on May 31, 2010 [4 favorites]


memebake: I suppose that as datacenters get bigger, the definition of what a supercomputer is starts to get a bit blurred. I presume that some of the Google datacenters could give some of these machines a run for their money, but because they are not officially single 'computers' they don't get on the list.
I should think if you could count the Google "app"/datastore that back-ends Search, GMail, et al, as a single supercomputer, it would dwarf all other contenders. Their core count has to be into the hundreds of thousands or possibly millions.
posted by hincandenza at 3:42 PM on May 31, 2010 [1 favorite]


This is probably a dumb question, but is China's 2nd place with the Dawn Nebulae supercomputer a Pyrrhic victory, considering its chips are American-designed and manufactured (Intel and Nvidia)?
posted by Skygazer at 3:48 PM on May 31, 2010


My buddy and I figured out that for about $6000-$10000, you could get a Tesla 1U rack that rivals the computational power of the fastest supercomputer circa 1997

I wonder what an old Cray-1 compares to at *googles* 136Mflops. bluray player maybe?
posted by ROU_Xenophobe at 3:51 PM on May 31, 2010 [2 favorites]


I don't remember specs off the top of my head, but I remember being impressed enough to think that their cluster's got to be on the top500 list.

I'd assume so too, if they ever bother to do the entire Linpack song and dance for Top500. I am not certain that Pixar's using HP hardware, though-- the last reference I can find to what's in their racks mentions Verari Systems' blade servers-- and I am certain that DreamWorks is (if you've ever seen an HP commercial full of Shrek, you know what I mean).

ILM had approximately 5700 processors in their render farm as of this article in June 2009. In 2002, they were using Dell hardware; that could have changed.
posted by fairytale of los angeles at 3:53 PM on May 31, 2010 [1 favorite]


Security expert Peter Gutmann once put forward the argument that, if you counted the raw amount of CPU power, Windows-based malware botnets would count as the world's most powerful supercomputers, easily beating conventional supercomputers.

Microsoft can thus claim to have produced the technology that powers the world's most powerful supercomputers, though it's not a claim they'd want to make.
posted by acb at 3:53 PM on May 31, 2010 [6 favorites]


This is probably a dumb question, but is China's 2nd place with the Dawn Nebulae supercomputer a Pyrrhic victory, considering its chips are American-designed and manufactured (Intel and Nvidia)?

Not dumb. China's developing their own silicon for the Dawning projects; the Intel/nVidia thing is temporary.
posted by fairytale of los angeles at 3:58 PM on May 31, 2010


Hmm, I don't think WoW works that way, ten pounds. They're not supercomputers, but rather little clusters of a few machines, each independent of the others.

Supercomputers have changed over the last decade or so. They used to have fast processors with insane bandwidth, but the big iron hit the same 3 GHz wall that everyone else did. So, because they don't have 1,000 GHz CPUs to put in them, they assemble them from hundreds or thousands of server-class x86 CPUs, in custom server chassis. They tie all the machines in the computer into one cohesive fabric, using advanced interconnection technologies. It's basically a whole hell of a lot of fairly standard computers that are heavily cross-linked.

Programs that can be well-parallelized are broken up (usually automatically by special compilers), and tiny pieces of a giant overall problem are sent to the individual CPUs all over the cluster. Eventually, they finish chewing on a piece, and report their results back, probably picking up more work to do.
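As a toy sketch of that scatter/compute/report-back pattern, using standard MPI calls (the problem size and the "work" here are made up, not any particular machine's code):

```c
/* Toy scatter/compute/reduce sketch of the pattern described above.
 * Build (assuming an MPI installation): mpicc -O2 work.c -o work
 * Run: mpirun -np 8 ./work
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 1000000  /* total problem size; hypothetical */

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int chunk = N / size;                 /* each node gets a slice */
    double *full = NULL;
    double *part = malloc(chunk * sizeof(double));
    if (rank == 0) {                      /* rank 0 owns the whole problem */
        full = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++) full[i] = (double)i;
    }

    /* hand each node its piece of the problem */
    MPI_Scatter(full, chunk, MPI_DOUBLE, part, chunk, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    double local = 0.0;                   /* chew on the local piece only */
    for (int i = 0; i < chunk; i++) local += part[i] * part[i];

    /* report results back to rank 0 */
    double total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("sum of squares = %g\n", total);

    free(part); free(full);
    MPI_Finalize();
    return 0;
}
```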

CPUs only run fast on local data, stuff that's in the same box. Those interconnects will let them get data from other servers much faster than normal networking would allow, but it's still very slow compared to local memory. Algorithms that need much non-local data tend to do very poorly on this architecture. However, the unified address space of all those machines lets you approach enormous problems that you just can't do on regular PCs.

Programs like WoW, where every actor in the game (be it PC or NPC) can potentially affect any other, require lots of cross-checking... the values aren't independent. Every tenth of a second or so, almost any state anywhere in the program can be changed by some bit of code, meaning that each process needs almost instant access to the entire running state of the whole machine. This problem domain is just about worst-case for a supercomputer.

WoW is just typical lightly-threaded programming, probably using standard tools, running on standard OSes in the standard ways. They use fast machines with lots of RAM and disk speed, but you could probably run a WoW cluster on cheapo $500 Dell PCs, if the load wasn't too high.

That's why they break up the whole system into realms, to keep the scaling issues down to something that ordinary PCs can handle. Supercomputers, even though they're made of PCs, have a batch-oriented architecture that's poorly suited to running highly interactive game code. Same physical stuff, but very different organization.
posted by Malor at 4:01 PM on May 31, 2010 [6 favorites]


I am mad curious as to the ownership of #206, #269, and #390-- all pretty identical Linux-running HP clusters, gig ethernet interconnect, nothing too radical, but all pop up under "Entertainment" in the United States. ILM, Dreamworks, Disney? Who knows.

Dedicated porn supercomputers, streaming blindness-inducing quantities of smut at high speed to the farthest reaches of the globe, perhaps. Or perhaps not.
posted by kersplunk at 4:11 PM on May 31, 2010


Not sure if you're correct, Malor: a couple of articles about WoW's supercomputers in China. I guess it's possible that The9 in China uses a different architecture than Blizzard in the US, but I would suspect otherwise.
posted by ten pounds of inedita at 4:13 PM on May 31, 2010 [1 favorite]


Cumulatively, this would probably be able to run Crysis 2 at next-to-highest res. Nice.
posted by turgid dahlia at 4:20 PM on May 31, 2010




Blizzard's WoW setup that they were willing to publicly discuss at GDC Austin in 2009 was along the lines of 13,250 blades with 75,000 cores (i.e., a quad-core in every blade) and 112.5TB of RAM across those, in ten different AT&T data centers worldwide with dedicated support via AT&T's specialized gaming operations group.

There are 13 US player realms in four US datacenters, per WoW Wiki. I've seen these 13 realms referred to as "clusters" before, but I am not certain that that's an accurate assessment of how they're rigged up infrastructure-wise.

If render operations-- which generally start from a centralized queue and fire off a bunch of instances of a rendering application across different nodes of a networked system-- count as supercomputers, I'm betting MMO operations infrastructure does too.
posted by fairytale of los angeles at 4:25 PM on May 31, 2010 [1 favorite]


Also, pardon my math; that works out to roughly 5.7 cores per blade on average, not four.
posted by fairytale of los angeles at 4:29 PM on May 31, 2010


Malor: "Wow, "By OS" is nearly a useless tab. I didn't realize Linux was that dominant."

Unless there's some political reason to use something else, there's very little reason to use anything but Linux (usually Red Hat) for computing clusters. It's cheap, runs on anything, and it's easy to find people to administer it.

It's neat to know that I worked on the storage components of more than a couple of these, including #3, which was the fastest a few years ago when it was built. I work in QA so none of my code is being run by the customer, but hopefully I contributed a little to the uptime.
posted by octothorpe at 5:07 PM on May 31, 2010 [1 favorite]


> My buddy and I figured out that for about $6000-$10000, you could get a Tesla 1U rack that rivals the computational power of the fastest supercomputer circa 1997 (when measuring only TFLOPS). That's a drop of about four orders of magnitude in cost in just 13 years.

I remember visiting the Bradbury Science Museum at Los Alamos National Labs with some friends in 1995. As we stood in front of the decommissioned Cray-1 on exhibit there, we read the notes on how huge and fast it was, the world's most powerful supercomputer of its time in 1976. Somebody did some quick math and determined that it had roughly the capabilities of a PowerMac 6100, Apple's then-new low-end computer.

Today, 15 years later, you can get a smartphone for $100 in a buy-one-get-one-free sale. It has a 32-bit, mumble-hundred MHz CPU capable of addressing gigabytes of storage. It is several times more capable than a PowerMac 6100 or a Cray-1, once the world's most ambitious supercomputer.
posted by ardgedee at 5:18 PM on May 31, 2010 [1 favorite]


I don't think it would, fairytale. What makes a supercomputer a SUPER computer is that all the machines are tied into a single large fabric; they look like one computer with a huge number of processors and ridiculous amounts of RAM.

I'm almost certain that Blizzard uses more normal technology, clustering, where each machine is logically separate at the CPU and memory level, but they cooperate in gangs. For example, when you zone into an instance, you're handed off to a server with free space in the instance cluster. You 'live' there, on that one computer, until you zone back out and return to your zone server for your realm. Your instance stays live on that machine for about a half-hour after you leave it, and then the memory is released and the zone is reset. If you're in a raid dungeon, as part of the shutdown process, it stores your current state into the database server(s).
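To picture the kind of clustering I mean, here's a toy sketch -- every name and number below is made up, and it's certainly not Blizzard's actual code:

```c
/* Toy sketch of "hand the player off to an instance server with free slots".
 * Purely illustrative; all names, numbers, and messages are hypothetical.
 */
#include <stdio.h>

#define NUM_SERVERS 4
#define MAX_INSTANCES_PER_SERVER 100

typedef struct {
    int id;
    int live_instances;   /* instances currently resident on this box */
} InstanceServer;

/* Pick a server in the instance cluster with spare capacity, or -1 if all full. */
int pick_instance_server(InstanceServer *cluster, int n) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (cluster[i].live_instances < MAX_INSTANCES_PER_SERVER &&
            (best < 0 || cluster[i].live_instances < cluster[best].live_instances))
            best = i;   /* least-loaded server that still has room */
    }
    return best;
}

int main(void) {
    InstanceServer cluster[NUM_SERVERS] = {
        {0, 97}, {1, 100}, {2, 42}, {3, 100}
    };
    int s = pick_instance_server(cluster, NUM_SERVERS);
    if (s < 0)
        printf("instance cluster is full, try again later\n");
    else
        printf("zoning player into an instance on server %d\n", cluster[s].id);
    return 0;
}
```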

I'm not sure whether world servers are singular or not, but I suspect they are. The three separate zones (old world, Outland, Northrend) crash separately. Every player in those broad zones is knocked offline at the same time, but players in the other two zones are often unaffected. If it were a supercomputer, everyone in the whole world would go down at once, and usually that doesn't happen unless the database goes down.

I think people are misusing the term, thinking that 'lots of networked computers' = 'supercomputer'. Blizzard has easily as many computers as most supercomputing facilities, with just as much raw processing power, but they're actually architected more like, say, Google or Facebook.

I'm speaking from a fair bit of expertise, plus lots of playtime, but of course Blizzard doesn't talk about their architecture, as it's their biggest trade secret. So I don't know for sure, and I don't think very many people outside Blizzard do. But, in my estimation from setting up vaguely similar (though much smaller) services, the heavily partitioned architecture, plus the terrible scaling problems they've tended to have, points very strongly at clustering, not supercomputing.

If it were a supercomputer, you wouldn't see failures like the inability to zone into an instance because the servers are full. You can function fine otherwise, there's plenty of power on your local machine to take you, but there's no room in the instance cluster. That's also why lag is local; everyone in a zone will be lagged at once, but if you zone out, you can be perfectly fine again. Only global lag is consistent with supercomputing.

Usually, the only global slowdowns will be database-related. Combat runs normally, but looting or trading can be very badly lagged, sometimes by minutes, and when this happens, it happens to everyone in the realm at once. Even auctions are slow, way over in Ironforge. That is, again, consistent with individual machines talking to a database. Local load is fine, and you can play normally, but as soon as you touch the database, you're stuck for a while.
posted by Malor at 5:21 PM on May 31, 2010


I wonder what an old Cray-1 compares to at *googles* 136Mflops.

This is only about 60 times faster than my phone, which is a last-generation Android model--the current ones are something like 25 to 30 times slower than the Cray-1 was, in a tiny, tiny fraction of the space.

Living in the future is awesome.
posted by Mr. Bad Example at 5:45 PM on May 31, 2010


Calling Google's system a "supercomputer" really stretches the definition. Supercomputer-style interconnects don't work across continents. Europe will always be >100ms away from the US, due to the speed of light.
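Back-of-the-envelope, assuming a ~9,000 km fiber path (very roughly US west coast to central Europe) and light travelling at about two-thirds of c in glass:

```c
/* Back-of-the-envelope minimum round-trip time across the Atlantic.
 * Distance and the speed-in-fiber figure are rough assumptions, not measurements.
 */
#include <stdio.h>

int main(void) {
    double distance_km = 9000.0;        /* assumed US west coast -> Europe path */
    double c_fiber_km_per_ms = 200.0;   /* ~2/3 of c: light in glass, per millisecond */
    double one_way_ms = distance_km / c_fiber_km_per_ms;
    printf("one-way: %.0f ms, round trip: %.0f ms (before any routing or queueing)\n",
           one_way_ms, 2.0 * one_way_ms);  /* ~45 ms / ~90 ms; real paths add more */
    return 0;
}
```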

From the outside, something like Google search looks like one unified application. But it can't possibly look like that from the programmer's point of view. The various scattered bits can talk to each other, sure, but that's true of anything on the internet.
posted by ryanrs at 5:47 PM on May 31, 2010


In terms of architecture, 74 of the machines on the June 2010 list are massively parallel boxes with some kind of sophisticated interconnect, while two are constellation configurations and 424 are more generic clusters using InfiniBand or Ethernet.

--The Reg, May 31st 2010, "Top 500 Supers: The Dawning of the GPUs"

Architecturally, it seems Top500 doesn't give a crap if you're a massively-parallel setup, a constellation, or a cluster-- all you have to show are appropriate and verifiable Linpack numbers.
posted by fairytale of los angeles at 5:51 PM on May 31, 2010


Yeah, but 100ms ping times trip up Linpack something awful.
posted by ryanrs at 6:10 PM on May 31, 2010


Yeah, but 100ms ping times trip up Linpack something awful.

It's true. I don't think whatever the WoW cluster setup might be is in the Top 500, though-- it was theorized upthread, but the geographic separation/13 US realms thing makes me think it's likely not so. I do think there are clustered render operations in there, though.
posted by fairytale of los angeles at 6:13 PM on May 31, 2010


It is really a shame that the five or six Windows clusters on that list spend most of their I/O and CPU cycles downloading security patches.
posted by Threeway Handshake at 6:18 PM on May 31, 2010 [2 favorites]


but there must one day come a computer whose merest operational parameters they are not worthy to calculate, but which it will be their fate eventually to design...
posted by randomkeystrike at 6:25 PM on May 31, 2010 [1 favorite]


(I should mention that I don't actually know how Linpack works. I assume it stresses the interconnect. I would be a really crappy supercomputer benchmark if it didn't.)
posted by ryanrs at 6:41 PM on May 31, 2010


clustering, where each machine is logically separate at the CPU and memory level, but they cooperate in gangs

This is effectively how modern supercomputers work as well. What makes the result a supercomputer rather than a cluster is a quantitative, not a qualitative, difference: much higher bandwidth and lower latency between the nodes.

Only global lag is consistent with supercomputing.

This is sort of true, but it's more shades of gray than black-and-white. Modern supercomputers are hierarchies of lag.

Just about the only latency-free operations are operations that you can perform on data that's already in registers. Hitting on-chip cache hurts, higher levels hurt more, and having to hit main memory hurts even more. You can try to mitigate these problems by changing your code. I saw a colleague's program get an order-of-magnitude speedup, primarily from reindexing arrays so that a critical inner loop would take contiguous steps through memory rather than long strides. You can try to mitigate these problems by changing your method. Higher order approximations are more expensive, but if the expensive computations can be done on little dense subsets of your data then they might be effectively free, because lots of codes are memory bandwidth limited and your CPU might as well do more work on this chunk of data while it's waiting for the next chunk.
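As a minimal illustration of that reindexing point (the array size is arbitrary): both loops below compute the same sum over a row-major array, but the first strides through memory while the second walks it contiguously, and on cache-based machines the second is typically far faster:

```c
/* Same sum, two traversal orders over a row-major array.
 * The strided version hops N doubles between consecutive accesses;
 * the contiguous version touches memory in the order it is laid out.
 */
#include <stdio.h>
#include <stdlib.h>

#define N 4096

int main(void) {
    double *a = malloc((size_t)N * N * sizeof(double));
    for (size_t i = 0; i < (size_t)N * N; i++) a[i] = 1.0;

    double s1 = 0.0, s2 = 0.0;

    /* strided: inner loop steps by N doubles -> a cache miss on nearly every access */
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            s1 += a[(size_t)i * N + j];

    /* contiguous: inner loop steps by 1 -> whole cache lines get used */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s2 += a[(size_t)i * N + j];

    printf("%f %f\n", s1, s2);  /* same result; time the two loops to see the gap */
    free(a);
    return 0;
}
```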

And there's more lag the further away you get. Hundreds of nanoseconds from main memory, but microseconds (even on the best supercomputers) from a different distributed memory node. That's why scientific computing has pretty much settled on message passing instead of trying to do distributed shared memory; we can't access remote memory as quickly as local memory, so we have to force/allow applications' programmers to recognize and work with the distinction. The MPI code you write is the same for a cluster as for a supercomputer; the only difference is that the MPI library you link against might be optimized for ethernet on one and infiniband on the other.
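Stripped to its bones, that message-passing style looks like this -- a toy neighbor exchange with standard MPI calls; real codes bury this inside a domain-decomposition layer:

```c
/* Minimal MPI point-to-point exchange: each rank trades a boundary value
 * with its neighbors around a ring. Toy payload, toy topology.
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;

    double send = (double)rank;   /* pretend this is a boundary/halo value */
    double recv = 0.0;

    /* Sendrecv avoids the deadlock you'd risk with paired blocking sends */
    MPI_Sendrecv(&send, 1, MPI_DOUBLE, right, 0,
                 &recv, 1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d got boundary value %g from rank %d\n", rank, recv, left);
    MPI_Finalize();
    return 0;
}
```

The same source links against an Ethernet-tuned MPI on a cluster or an InfiniBand-tuned MPI on a supercomputer; only the library underneath changes.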

The hierarchy is getting more complicated, though. Where there used to be two or four processors per node (allowing you to pretend that they were just two separate nodes without much efficiency loss), there are now 16 cores and growing fast. So you can't just do message passing; you want to do message passing between distinct nodes at the same time as shared memory between threads operating on cores on the same node. Oh, and depending on what computer you're using, not all messages may be passed equally; your domain decomposition may need to generate a virtual topology that is informed by your computer's interconnect topology, to try to ensure that the most data is traded between processors with the best links.
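The hybrid version is the same skeleton with threads added inside each node -- again just a sketch, with a made-up local workload:

```c
/* Hybrid skeleton: MPI between nodes, OpenMP threads across the cores of a node.
 * Build with something like: mpicc -fopenmp hybrid.c -o hybrid
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define LOCAL_N 1000000  /* per-node slice; hypothetical */

int main(int argc, char **argv) {
    int provided, rank;
    /* FUNNELED: only the main thread makes MPI calls */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local = 0.0;

    /* shared memory inside the node: threads split the local slice */
    #pragma omp parallel for reduction(+:local)
    for (int i = 0; i < LOCAL_N; i++)
        local += 1.0 / (1.0 + i);

    /* message passing between nodes: combine the per-node partial sums */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0) printf("global sum = %g\n", global);

    MPI_Finalize();
    return 0;
}
```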

Then on the last level you've got "grid computing", which is the overhyped catchall term describing attempts to standardize the "send distinct jobs to distinct clusters" work that Blizzard and kin are doing in-house. A dirty open secret about all those supercomputers is that they don't often run jobs that need that kind of fully-connected power. My biggest individual runs were 512-processor jobs for my dissertation, and that was higher end than most of the others I saw in queue then. Ranger, which looks to be in the top ten still, has over 60,000 cores... but the normal queue will only give you 4,096 at once, and anything over 16,384 requires a special request. Right now there's a half dozen 3,000-core jobs in queue, a half dozen 1,000-core jobs, and everything else is 512 or less.

It's fantastic to have the full machine available for huge jobs - Ranger was just in the news for pulling off a very rapid, very massive simulation of oil dispersion across the entire Gulf and Gulf coast, for example. But for much of their day-to-day work, I'll bet that the setups at Google etc. would be competitive.
posted by roystgnr at 6:45 PM on May 31, 2010 [1 favorite]


Some notes:
1) The test used in the Top 500 benchmark (LINPACK) is effectively a very large floating-point matrix operation. Systems that score well on the Top 500 tend to have processors capable of fast floating-point math, high-speed interconnects between systems, and large amounts of RAM.
2) There are two LINPACK numbers: Max and Peak.
* Peak is the maximum theoretical performance possible. If you have 10 processor cores that are capable of 2 floating-point operations per clock cycle and they run at 2 GHz, your maximum theoretical performance is 40 billion floating point operations per second (or gigaFLOPS).
* Max is the actual sustained performance, and is how these systems are ranked. Sustained performance relies on the ability to keep the processors actually working: the more local memory, the less data you need to request from other processors (which causes the processor to wait until that data is retrieved from the remote processor). And the faster the network, the less time the processor needs to wait.
* Efficiency (Max/Peak) is the really interesting number: how many floating-point cycles were wasted while waiting for memory access? Systems with gigabit Ethernet max out at about 55% efficiency, usually with all systems in the same data center and a fully non-blocking network fabric (any system can talk to any other system at full speed regardless of what the other systems are doing). Systems based on 40-gigabit InfiniBand and proprietary interconnects can achieve efficiencies of over 90%. Consider the highest-ranked gigabit-only system on the Top 500 list: GPC, at the University of Toronto, at #28. It has 30,240 cores with a maximum theoretical performance of 306 teraFLOPS. However, because it only has a gigabit network between nodes, it can only sustain 168 TFLOPS. Compare this to #25, the French government's InfiniBand-based system: it has 5,000 fewer processor cores than #28 and a lower theoretical maximum (247 TF), but outperforms #28 (180 TF vs. 168 TF). The division is worked through in a short sketch after these notes.
* Of course, if your application doesn't behave like LINPACK it may be more useful to add additional cores rather than allowing them to communicate among each other faster.
3) LINPACK has its flaws; many HPC system designers complain that it overstates the importance of fast, tightly clustered, floating-point-focused systems. Various alternatives have been attempted (HPC Challenge, among others), but none has approached LINPACK's mindshare.
4) Google (and Folding@Home, Condor, Amazon EC2, etc.) would likely score very well on theoretical maximum FLOPS, but would likely not be able to sustain the network performance required to place on the Top 500 list.
5) A significant majority (~80% by system count, 65% by performance) of these systems are not single-system-image machines that appear as one OS, but large clusters of machines, each running an independent OS image. Most teraflop-scale scientific computing applications are designed to run on such clusters. Cray and IBM's Blue Gene systems represent the majority of systems that present a single system image. The rest are mostly commodity x86 clusters that could run a generic server workload. WoW or most server applications would run reasonably well, with the understanding that any single process running on the system is not likely to run significantly faster than on a general server processor.
6) Scaling is a very, very hard problem. For example, some tightly-coupled scientific applications saw no speed-up above 32 cores. Actually building real-world applications that can use petascale systems is an unsolved problem, from both an application-design and a system-management perspective.
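To make the peak/efficiency arithmetic in note 2 explicit, using the figures quoted above (this is just the division spelled out):

```c
/* Peak vs. sustained (Max) arithmetic from the notes above.
 * Figures are the ones quoted in the comment, in teraFLOPS.
 */
#include <stdio.h>

int main(void) {
    /* example from note 2: 10 cores x 2 flops/cycle x 2 GHz */
    double peak_gflops = 10 * 2 * 2.0;               /* = 40 GFLOPS */

    /* GPC (gigabit Ethernet) vs. the French InfiniBand-based system */
    double gpc_eff    = 168.0 / 306.0;               /* ~0.55 */
    double french_eff = 180.0 / 247.0;               /* ~0.73 */

    printf("toy peak: %.0f GFLOPS\n", peak_gflops);
    printf("GPC efficiency:    %.0f%%\n", 100 * gpc_eff);
    printf("French efficiency: %.0f%%\n", 100 * french_eff);
    return 0;
}
```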
posted by theclaw at 8:06 PM on May 31, 2010 [3 favorites]


ryanrs: I would be a really crappy supercomputer benchmark [...]

That might be why top500 uses Linpack instead of ryanrs.

Linpack used to have much more interconnect stress, but these days it doesn't have anywhere near as much. My first big super, the nCube 2 in 1994, had 1024 nodes, each with 4 MB of memory and a 25 MHz CPU. My most recent one was the Cray XT4, with ~32K nodes, each with two to four 2.5 GHz CPUs and 8 GB of memory. Based on these two data points, we can see that the amount of data per node has increased 2048 times, while the speed of nodes has only increased 100 times. If we assume there is a linear relation between the time spent computing and the amount of memory available to store the matrix (fairly safe, since Linpack is dominated by the DGEMM BLAS call), then the ratio of computation to communication is on the order of ten times higher than it was before.
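Spelling out that back-of-envelope with the per-node figures above (the "compute time scales with memory" assumption is the rough one stated in the previous paragraph):

```c
/* The nCube 2 vs. Cray XT4 ratio argument, using the per-node figures quoted above. */
#include <stdio.h>

int main(void) {
    double mem_ratio   = (8.0 * 1024.0) / 4.0;   /* 8 GB vs 4 MB  = 2048x */
    double speed_ratio = 2500.0 / 25.0;          /* 2.5 GHz vs 25 MHz = 100x */

    /* under the linear compute-time-vs-memory assumption, each node now spends
       roughly this many times longer computing relative to its clock speed */
    printf("memory x%.0f, speed x%.0f, compute-to-communication shift ~x%.0f\n",
           mem_ratio, speed_ratio, mem_ratio / speed_ratio);  /* ~20, i.e. order ten */
    return 0;
}
```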

As a result, interconnect matters for Linpack, but nowhere near as much as it used to.
posted by autopilot at 8:08 PM on May 31, 2010


ROU_Xenophobe: I wonder what an old Cray-1 compares to at *googles* 136Mflops. bluray player maybe?

A standard 1080p, 24fps video stream contains about 50 million pixels of image data per second. When you take into account the complexity of decoding the H.264 codec, I'm fairly sure that playing Blu-ray movies would be considerably beyond the capabilities of a Cray-1, even if it had the necessary peripherals. DVD quality might be feasible, barely.
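The arithmetic behind that estimate, using the 136 MFLOPS figure quoted upthread:

```c
/* How many operations per pixel would a Cray-1 have to spare for 1080p24? */
#include <stdio.h>

int main(void) {
    double pixels_per_sec = 1920.0 * 1080.0 * 24.0;   /* ~49.8 million pixels/sec */
    double cray1_flops    = 136e6;                    /* 136 MFLOPS, quoted upthread */
    printf("pixels/sec: %.1f million\n", pixels_per_sec / 1e6);
    printf("ops available per pixel: %.1f\n", cray1_flops / pixels_per_sec);
    /* ~2.7 ops per pixel -- nowhere near enough for H.264 motion compensation,
       deblocking, and entropy decoding */
    return 0;
}
```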

The Cray-1, in turn, was more than a thousand times faster than the Apollo Guidance Computer, which topped out at a whopping 85,000 instructions per second.

Mr. Bad Example: This is only about 60 times faster than my phone, which is a last-generation Android model--the current ones are something like 25 to 30 times slower than the Cray-1 was, in a tiny, tiny fraction of the space.

I suspect you're looking at benchmark numbers from Java code running in the Dalvik VM, which has a huge amount of overhead. With native code, recent phones can probably do floating point math at least as fast as the Cray could.
posted by teraflop at 9:39 PM on May 31, 2010 [2 favorites]


I'ma believe a person named teraflop about these things.

But it's still weird to think that things people carry around or semidisposable pieces of consumer electronics pack more computational punch than that big beautiful Cray. Makes me want to render all the requisite scenes from The Last Starfighter on a PS3.*

*TLS was done on an X-MP, not a Cray-1
posted by ROU_Xenophobe at 10:27 PM on May 31, 2010


Needs a "By Cracky" tab for us older folk.
posted by pracowity at 12:58 AM on June 1, 2010


Also, some of these views would make nice tiled bathroom floors or walls.
posted by pracowity at 1:01 AM on June 1, 2010


You know what strikes me most in this graphic? If you go to the "By application" tab, you'll see that there's an IBM cluster doing "unspecified" government work at 28,000 teraFLOPS in Russia. Hey guys, what happened to export controls? Or was this sold when Bush looked into Putin's eyes and saw his soul?
posted by Skeptic at 1:41 AM on June 1, 2010


Hey guys, what happened to export controls?

The "IBM" guys who who did the final assembly were all humming "Back Door Man" while they worked.
posted by pracowity at 3:10 AM on June 1, 2010


"an IBM cluster doing "unspecified" government work at 28,000 teraFLOPS in Russia."

AHA. So that's where Conficker is hanging out.
posted by Twang at 4:05 AM on June 1, 2010


an IBM cluster doing "unspecified" government work at 28,000 teraFLOPS in Russia.
This is clearly used for chess.
posted by Threeway Handshake at 9:14 AM on June 1, 2010


Skeptic: If you go to the "By application" tab, you'll see that there's an IBM cluster doing "unspecified" government work at 28,000 teraFLOPS in Russia. Hey guys, what happened to export controls?

The thing they're not mentioning here is that IBM didn't necessarily make a supercomputer in Russia. Due to the nature of how supercomputers are now composed, it's possible to buy a large number of fairly standard (if nice) PCs and use them to construct a supercomputer after the fact. So the restrictions in this regard are nearly completely meaningless.

Full disclosure: A very large percentage of the machines on this list run software which I maintain.
posted by atbash at 10:30 AM on June 1, 2010 [1 favorite]


Uncle Sam cooperates quite closely with the Russians in quite a few endeavors. The Russian IBM machine may be somehow connected with the dual nation cooperative efforts underway in aerospace or nuclear technology.

Remember, we're going to be riding bitch with the Russians into space for a few years. It's possible that this was part of their bargain to let us ride along.
posted by Sukiari at 2:52 PM on June 2, 2010 [1 favorite]

