

"Don't say 'reflected acoustic wave'. Say 'echo'."
September 27, 2010 6:08 PM   Subscribe

Richard Feynman and The Connection Machine

Danny Hillis on Richard Feynman's time at Thinking Machines.
posted by A dead Quaker (28 comments total) 40 users marked this as a favorite

 
Hey, I used to work at Thinking Machines! Summer of 1992 and 1993. They hired a bunch of undergrads to work on sensible scheduling algorithms for parallel machines being given a lot of demands of unpredictable size at unpredictable times. The algorithm I wrote didn't work very well. Never knew there was a Feynman connection. But it was a great company, filled with really smart people given license to think big. My officemate was an MIT creative writing major who was a massively better coder than me.

You could type "coke" at the Unix prompt and a Coke would drop out of the vending machine in the hall, the 60 cents being deducted from your salary.
posted by escabeche at 6:20 PM on September 27, 2010 [22 favorites]


Got to like a post that starts, "One day when I was having lunch with Richard Feynman...."

People used to call Thinking Machines Corp "Thinko."

As I recall.
posted by cogneuro at 6:23 PM on September 27, 2010 [2 favorites]


Huh, I never knew Feynman ever did anything with computers.
posted by delmoi at 6:42 PM on September 27, 2010


You could type "coke" at the Unix prompt and a Coke would drop out of the vending machine in the hall, the 60 cents being deducted from your salary.

that. is. awesome. i work in kendall square and we don't even get coke in our vending machines, only pepsi.

great article, feynman was such a badass.
posted by pwally at 7:00 PM on September 27, 2010 [2 favorites]


Meant to add: via a comment in this HN thread.
posted by A dead Quaker at 7:15 PM on September 27, 2010


One of the things I recall from reading Danny Hillis's thesis (The Connection Machine), was that in some ways, he predicted the infrastructure of the web.

Not in the specific way that Tim Berners-Lee did, but in the sense of compute power being widely available. Hillis specifically said that we would be able to get computer power out of a wall socket similar to how we get electricity.
posted by CheeseDigestsAll at 7:21 PM on September 27, 2010


Ooooooh thank you. I love me some Richard Feynman! I even ordered the American Scientists commemorative stamps a few years back since he was featured on one of them.
posted by PuppyCat at 7:28 PM on September 27, 2010


One loves stories of smart people meeting other incredibly smart people. One being "me."
posted by Turtles all the way down at 7:30 PM on September 27, 2010


Classic Feynman:

He asked, "How is anyone supposed to know that this isn't just a bunch of crap?"
posted by PuppyCat at 7:38 PM on September 27, 2010 [2 favorites]


Oh, I'll tell you a good CM story. Back in the 80s, I used to sell subscriptions to an obscure service sold by Dow Jones News Service. It was a very complex WAIS server that cross-indexed business news and reports to create strategic intel, and it ran on one of the early CMs with huge disk storage (well, huge for that time). I was pretty good at selling the subs, because the system was complex to demonstrate and get to do anything useful.
So one day the DJNS rep told me how she went along with Danny Hillis to pitch a corporate CEO on buying a CM, giving examples of practical business applications. She described her app, and the CEO said, "Well, why the hell would I need a CM to run that? I can do that sort of thing on my IBM PC!" They couldn't believe a dinky IBM PC could do the same thing a CM could, so she asked him to demonstrate it.
They went to his office, and he fired up the DJNS CM WAIS server via modem.
The CEO didn't know what he was running and thought it was all happening inside his PC. They didn't sell any CMs to that company.
posted by charlie don't surf at 7:53 PM on September 27, 2010 [3 favorites]


So when Feynman tells you that you only need 5 buffers, I don't care what fucking fancy-ass-pants degree you have, you go with the 5 buffers.
posted by geoff. at 7:57 PM on September 27, 2010 [7 favorites]


The Long Now Foundation uses five-digit dates; the extra zero is there to solve the deca-millennium bug, which will come into effect in about 8,000 years.

Groan.
posted by Ratio at 8:14 PM on September 27, 2010 [1 favorite]


See also: The Rise and Fall of Thinking Machines and Daily WTF: Thinking Machines

These articles are in many ways pure schadenfreude, but I think they illustrate an important principle that I've seen described as the Great Hacker Tragedy - the notion that, in many cases, it really doesn't matter how many super-smart engineers you have working on solving a problem if it's not a problem that someone else is willing to pay you to solve. In other words, it's not sufficient for a product to be capable in order for a company to survive, it also has to be viable.

That's really the true function of marketing within a well-run company - marketing is not (simply) advertising, or branding, or collateral. Marketing means meeting needs, profitably, and if you're not doing the first bit, you can never do the latter. Furthermore, marketing is (or should be) the function that helps you determine what needs should be met by your product before you build the product, not the other way around.
posted by kcds at 8:22 PM on September 27, 2010 [5 favorites]


This was a great article! I know Feynman would be proud to see the work that has followed since his time.

One of the follow-up companies to Thinking Machines is the recently-dead SiCortex. Same general principle of massively parallel computing to achieve scalability and energy efficiency. As a grad student I helped administer a machine sold to our university. I was impressed by how much talent they had on their team; many of the older guys at the company had been the younger generation at Thinking Machines, and were now carrying the legacy forward for their part.

It never fails to amaze me how such talented people can fail so spectacularly when it comes to developing and selling computer technology, especially in high performance computing. I would like to put my finger on a specific thing that goes wrong or a specific trait that guarantees success, but I'm not sure there is one. I do know that taming these systems' complexity is a terrible monster.

These systems are stunningly complicated. From the custom processors, to the ASICs they're built on, to the stripped-down microkernels that operate them, to the blazingly fast toroidal networks that connect them, the hardware smokes with hostile complexity. The compiler writers who port their tools to these hostile architectures are geniuses, but frequently even they fall short of delivering theoretical performance on these machines.

For the last 20 years, a delivered supercomputer has been judged on one number alone: the speed at which it can run the LINPACK benchmark. LINPACK is fairly simple code, written in Fortran, that solves the dense linear system of equations Ax = b using Gaussian elimination (with partial pivoting). Every addition or multiplication performed on a pair of numbers counts as a floating point operation. Supercomputers back in the CM days were impressive for performing hundreds of billions of floating point operations a second (today a modern workstation can achieve similar performance).

The fundamental number that comes out of the benchmark is the sustained rate: the pace of the computer when it is performing nearly as many floating point operations a second as it can theoretically achieve based on its clock speeds, its memory pipeline, and its instruction set. Getting anything close to that theoretical peak requires perfectly functioning memory, CPUs, and network. On top of that, the code has to be perfect, making absolutely optimal use of the memory pipeline, caching, and CPU instructions. Modern CPUs can fetch data from memory and submit a fused, vectorized operation (multiple operands being simultaneously multiplied then added) into the floating point pipeline in the same clock cycle. To achieve near-theoretical performance, the programmer has to keep the floating point pipeline perfectly loaded, while making sure she doesn't miss the cache or trigger a TLB miss any more than she has to. Fortunately, the dense solve is full of dense matrix-matrix multiplies, an algorithm with high spatial and temporal locality in data reuse, giving the programmer some room to breathe.
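The flop accounting works out in a few lines. This is a rough illustration only, not the real HPL/LINPACK benchmark: it times a dense solve of Ax = b (NumPy dispatches to LAPACK's LU factorization with partial pivoting) and applies the standard LINPACK operation count of 2n³/3 + 2n² floating point operations.

```python
# Rough sketch, NOT the real benchmark: time one dense solve and convert
# the standard LINPACK flop count into an operations-per-second rate.
import time
import numpy as np

def estimate_flop_rate(n=2000, seed=0):
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)       # Gaussian elimination, partial pivoting
    elapsed = time.perf_counter() - t0
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # LINPACK operation count
    return flops / elapsed, x

rate, x = estimate_flop_rate()
print(f"roughly {rate / 1e9:.1f} GFLOP/s on this machine")
```

Even this toy number flatters the machine: a tuned HPL run has to sustain its rate across the memory systems and networks of thousands of nodes at once.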

Getting LINPACK to run efficiently on a supercomputer is mindblowingly difficult. This is how hundreds of engineer-years can be reduced to a single number: floating point operations per second. The current fastest machines in the world perform over 10^15 floating point operations per second. The exponential growth outpaces even Moore's law, because over the last decades not only have CPUs gotten faster but we've learned how to use hundreds, then thousands, and now hundreds of thousands of CPUs together efficiently. (The CM was a machine way ahead of its time.)

Yet the LINPACK benchmark is still just an incredibly small piece of the puzzle. All a LINPACK run tells you is that the machine itself has been built right, and that some geniuses have managed to get the relatively simple algorithms in the dense solve humming at optimal performance on it. There's a whole science beyond that, though, in getting other scientific models and simulations to run efficiently on supercomputers.
posted by onalark at 8:48 PM on September 27, 2010 [21 favorites]


Feynman stories are the best science stories, but man was I disappointed with Feynman's description of cellular automata. By no means do you have to go into finite state machines to explain cellular automata, but ball bearings do not Conway's Game of Life make. It really fails to capture their essence.
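For anyone who hasn't played with it, the rule Bobicus is alluding to really is tiny. A minimal, illustrative sketch (not from the article) of one Game of Life generation on a small wrapped grid:

```python
# One generation of Conway's Game of Life on a toroidal (wrapped) grid.
# A live cell survives with 2 or 3 live neighbors; a dead cell becomes
# live with exactly 3; everything else dies or stays dead.
import numpy as np

def life_step(grid):
    # Count the 8 neighbors of every cell via shifted copies of the grid.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    born = (grid == 0) & (neighbors == 3)
    survive = (grid == 1) & ((neighbors == 2) | (neighbors == 3))
    return (born | survive).astype(np.uint8)

# A "blinker": three cells in a row oscillate with period 2.
blinker = np.zeros((5, 5), dtype=np.uint8)
blinker[2, 1:4] = 1
assert np.array_equal(life_step(life_step(blinker)), blinker)
```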
posted by Bobicus at 9:24 PM on September 27, 2010


Bobicus: to be fair, I believe that Feynman was describing how a discrete model can apply to a continuous physical system with a conserved volume, not cellular automata the way that it is used now. In numerical partial differential equations, you are frequently considered with making sure that you end up with the same amount of water on the far end of the pipe as you started with on the near end because it is incredibly easy to lose conserved quantities like energy and mass in simulations. I found his ball bearings description apt for this, but then, maybe I spend a little too much time around discrete physical models :)
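The conservation point can be made concrete with a toy example (first-order upwind advection, purely illustrative, not any scheme from the thread): in a flux-form finite-volume update, whatever leaves one cell enters its neighbor, so the total amount of "water" is conserved to rounding error no matter how long you run.

```python
# 1-D advection in flux form with periodic boundaries and upwind fluxes.
# Because the update only moves material between adjacent cells, the sum
# over all cells is conserved exactly (up to floating point rounding).
import numpy as np

def advect(u, velocity=1.0, dx=0.1, dt=0.05, steps=100):
    c = velocity * dt / dx            # CFL number; needs c <= 1 for stability
    for _ in range(steps):
        flux = c * u                  # flux out of each cell's right face
        u = u - flux + np.roll(flux, 1)   # out the right, in from the left
    return u

xs = np.arange(100) * 0.1
u0 = np.exp(-((xs - 5.0) ** 2))       # a Gaussian blob of "water"
u1 = advect(u0.copy())
print(abs(u1.sum() - u0.sum()))       # conserved to rounding error
```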
posted by onalark at 9:30 PM on September 27, 2010


considered = concerned... time for bed!
posted by onalark at 9:43 PM on September 27, 2010


Feynman stories are the best science stories, but man was I disappointed with Feynman's description of cellular automata.

Cellular automata are shortcuts for lazy mathematicians who don't understand Turing's papers on how to describe morphogenetic processes as functions.
posted by charlie don't surf at 11:49 PM on September 27, 2010 [1 favorite]


the notion that, in many cases, it really doesn't matter how many super-smart engineers you have working on solving a problem if it's not a problem that someone else is willing to pay you to solve. In other words, it's not sufficient for a product to be capable in order for a company to survive, it also has to be viable.
That's because we have an economic structure for doing pure research with no (immediate) practical applications: academia. There's all kinds of research being done on parallel computation and other stuff that won't be useful for years (although with modern GPUs, you actually have mass-market examples of massively parallel machines).

And on top of that lots of companies do their own R&D on things that might not be ready in years or decades. But of course you can't start a company just to sit on your ass.
Feynman stories are the best science stories, but man was I disappointed with Feynman's description of cellular automata. By no means do you have to go into finite state machines to explain cellular automata, but ball bearings do not Conway's Game of Life make. It really fails to capture their essence.
Conway's Game of Life isn't the be-all and end-all of CAs, though. Feynman was probably talking about discrete simulation of the Navier-Stokes equations.
posted by delmoi at 6:39 AM on September 28, 2010


Cellular automata are shortcuts for lazy mathematicians who don't understand Turing's papers on how to describe morphogenetic processes as functions.
That doesn't even make sense.
posted by delmoi at 6:43 AM on September 28, 2010


And so my crush on Richard Feynman grows a bit larger. Thanks for this.
posted by Splunge at 6:51 AM on September 28, 2010


Really, really interesting article. Thanks. But reading about modeling connections in "20-dimensional lattices" of processors makes me feel like a member of an inferior species.

I think I'll go see if I can make fire :^(
posted by richyoung at 8:05 AM on September 28, 2010


richyoung, it's really not as bad as it sounds. The lattice just describes how many connections each processor has to its neighbors, from a network topology standpoint. The dimension of the lattice is just a mathematical way of describing the layout of the network. For all practical purposes, when hooking things up, a 20-dimensional lattice just means that every processor connects locally to 40 other processors. It helps to think of these things inductively:

Let's take a group of processors and connect them with a one-dimensional lattice. Easy: put them all in a line and have them 'hold hands', i.e. each processor connects to its neighbor on the left and right. If you want to simulate a system where the boundaries wrap around, you'll also connect the two processors on the ends of the chain to form an unbroken circle.

A 2-d lattice instead looks like a checkerboard. Each processor connects to its four neighbors. If you add wraparound connections, the connection graph looks something like the surface of a doughnut (a torus).

3-d lattice? Imagine a cake that has been cut horizontally, vertically, and once across the middle. Now you have connections above, below, left, right, front, and back: 6 neighbors. Wraparound here gives a 3-d torus, and this is the sort of network you see in many of the world's fastest supercomputers, because it is extremely useful for 3-dimensional discretized simulations of everything from flow across a wing to climate modeling.

4-d? This is the same idea as a hypercube. Even though we can't 'see' 4 dimensions, we still know how things connect! Every processor connects to 8 neighbors. Even though the processors live in 3-d space, we connect them as if they lived in the higher dimension. We intuitively do this sort of dimension reduction (a mathematician would say projection) when we draw a cube on paper.

Extrapolating, a d-dimensional lattice connects every processor to 2d others, so a 20-d lattice connects every processor to 2 × 20 = 40 other processors.
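The inductive picture translates directly into code. An illustrative sketch (a hypothetical helper, not how a real machine is cabled) that enumerates a node's neighbors on a d-dimensional torus:

```python
# Neighbors of a node on a d-dimensional lattice with wraparound (a torus):
# one step forward and one step back along each of the d axes, so 2*d total.
def torus_neighbors(coords, shape):
    neighbors = []
    for axis in range(len(shape)):
        for step in (-1, 1):
            n = list(coords)
            n[axis] = (n[axis] + step) % shape[axis]   # wrap around the edge
            neighbors.append(tuple(n))
    return neighbors

# 3-d lattice (the cut cake): 6 neighbors.
print(len(torus_neighbors((0, 0, 0), (4, 4, 4))))     # 6
# 20-d lattice: 2 * 20 = 40 neighbors, as above.
print(len(torus_neighbors((0,) * 20, (4,) * 20)))     # 40
```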
posted by onalark at 10:03 AM on September 28, 2010


“So we sent him out to buy some office supplies.”

I'm going to laugh at this for days.
posted by wobh at 12:50 PM on September 28, 2010


"if we had had any understanding of how complicated the project was going to be, we never would have started."

marriage, anyone?
posted by emhutchinson at 6:54 PM on September 28, 2010


Feynman Lectures on Computation
posted by neuron at 9:49 PM on September 28, 2010


That doesn't even make sense.

It would if you spent any time grappling with Turing's morphogenetic functions. I'm not even talking about Computable Functions as Turing postulated, I'm talking about his plain old math functions.

CA are quick and dirty digital approximations of analog functions, and they produce basically the same effects. Turing's functions are much more elegant, although more difficult (to the point of incomprehensibility). Procedural iterative computer code is not in the same league as a function that can map out the entire iterative process in one pass.
posted by charlie don't surf at 5:06 AM on September 29, 2010


“So we sent him out to buy some office supplies.”

P1: So how did your interview go?
P2: I couldn't go through with it dude. I got there, and I was too intimidated by how smart all these guys were, so I panicked and went home.
P1: That's crazy! You have a graduate degree from MIT, how much smarter could these guys be?
P2: Well let me put it this way. They have Richard Feynman buying paperclips.
posted by atrazine at 5:53 AM on September 29, 2010 [3 favorites]



