Slipping towards the Singularity
June 3, 2008 4:54 AM   Subscribe

The current issue of IEEE Spectrum devotes itself to the sci-fi genre du jour, the Singularity. Neuroscientists such as Christof Koch and David Alder talk about our understanding of the brain and quantum computing; John Horgan argues that it's just too difficult to recreate consciousness in a computer any time soon; Robin Hanson writes on the Economics of the Singularity; and of course, Vernor Vinge - the person who originally postulated the Singularity - tells us how to spot its approach.

You can also download a poster (PDF) showing who's who in Singularityland, and there are some interesting further papers and resources provided as well. The Singularity has been covered on Mefi before (1, 2).
posted by adrianhon (145 comments total) 24 users marked this as a favorite
 
Have to agree with Zorpette that this Emperor is naked - in fact I'm not quite sure why, in that case, he thinks the subject deserves such extensive discussion.
posted by Phanx at 5:37 AM on June 3, 2008


Meh. If we were simple enough to simulate, we'd be too simple to design the simulator.
posted by flabdablet at 5:39 AM on June 3, 2008 [5 favorites]


The problem with the singularity theory is that it implies progress will keep improving forever along an exponential-style curve. But other functions fit the same data, such as the sigmoid function.

I'd bet if you look at advancement in any mature technology, you're much more likely to see a sigmoid than an exponential growth curve in terms of performance, output, whatever.
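
A quick toy illustration (Python; the ceiling, growth rate, and time scale are all invented numbers): early on, a logistic (sigmoid) curve is nearly indistinguishable from an exponential, so early data can't tell you which regime you're in.

```python
import math

K = 1000.0   # hypothetical ceiling on "performance"
r = 0.5      # hypothetical growth rate

def logistic(t):
    # Logistic (sigmoid) growth: looks exponential while far below K.
    return K / (1 + (K - 1) * math.exp(-r * t))

def exponential(t):
    # Pure exponential growth from the same starting point.
    return math.exp(r * t)

for t in range(0, 21, 4):
    print(f"t={t:2d}  logistic={logistic(t):8.1f}  exponential={exponential(t):10.1f}")

# The two columns nearly coincide at first and only diverge as the
# logistic curve approaches its ceiling -- the point being that an
# exponential-looking data set is no proof of an exponential.
```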
posted by delmoi at 5:51 AM on June 3, 2008 [6 favorites]


Meh. If we were simple enough to simulate, we'd be too simple to design the simulator.

By this logic, Newton's mother must have been smarter than he was.
posted by DU at 5:53 AM on June 3, 2008 [2 favorites]


From what I've read in the past, Vinge is more about the ability to think than about razors with an infinite number of blades. His reasoning could be summed up, "Once we know how to design something smarter than we are, it should be able to follow the same procedure to design something smarter than it is." Even if the curve is sigmoidal, he seems to be reasoning that we're in the lower part of it.

Things that just happen, like the now-tired cliche of the AI that just wakes up one day (or Isaac Newton), aren't designed, so they aren't within the scope of this logic.
posted by Kid Charlemagne at 6:07 AM on June 3, 2008 [2 favorites]


Newton's mother must have been smarter than he was

Only if you're assuming that she designed him.
posted by flabdablet at 6:09 AM on June 3, 2008 [2 favorites]


Once we know how to design something smarter than we are

Aye, there's the rub.
posted by flabdablet at 6:10 AM on June 3, 2008


Only if you're assuming that she designed him.

Exactly.
posted by DU at 6:10 AM on June 3, 2008


I don't see why designing something smarter than we are is so hard. For instance, humans know the laws of logic, but apply them inconsistently. However, it is possible for humans to build a machine that applies them consistently. Not only that, the machine can do it faster. The project I work on tries to make decisions that a human analyst would, but on a far larger mass of data. We are occasionally surprised by the answers it gives us. Sometimes that's a bug and sometimes the computer "convinces us" that it is right. Is that "smarter than" humans?
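
To make the "consistent application" point concrete, here's a trivial sketch (Python; the facts and rules are invented, and this has nothing to do with our actual project) of a machine that never forgets to apply modus ponens:

```python
# Forward-chaining inference: apply every rule until nothing new appears.
facts = {"socrates_is_a_man"}
rules = [
    ("socrates_is_a_man", "socrates_is_mortal"),
    ("socrates_is_mortal", "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premise, conclusion in rules:
        if premise in facts and conclusion not in facts:
            facts.add(conclusion)   # modus ponens, applied every single time
            changed = True

print(sorted(facts))
# Unlike a human, the machine never skips a rule and never tires of
# applying one -- though whether that counts as "smart" is the question.
```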

Of course, one can argue that "fast logic" != "smart". But then I have to ask what exactly the definition of "smart" is. Which no one will be able to answer, so instead let me propose a better defined test.

Humans have built machines that perform better than humans in a number of areas including, in some cases, designing machines. I see no logical reason why one ubermachine that combined all these "best practices" couldn't be built which would then be "better than" humans by any objective metric one chose.
posted by DU at 6:23 AM on June 3, 2008 [2 favorites]


I don't think the singularity is impossible. But it seems unlikely to happen any time soon. We're still driving relatively ancient technology on the roads, people still die of preventable diseases, politicians still do the same stupid shit. There's still a limit on resources (which the super AI are going to rely on to make stuff). I don't see it happening in my lifetime. Our species is just limping along.
posted by wastelands at 6:28 AM on June 3, 2008


"humans know the laws of logic, but apply them inconsistently. However, it is possible for humans to build a machine that applies them consistently."

I always imagined humans apply logic inconsistently because it's a coping strategy that has been selected for, in part because they "know" that other humans apply logic inconsistently.
posted by munchbunch at 6:44 AM on June 3, 2008 [1 favorite]


I’m sorry, Dave. I’m afraid I can’t do that.
posted by exlotuseater at 7:01 AM on June 3, 2008 [1 favorite]


I take Eliezer Yudkowsky's view of this: the Singularity isn't something qualitatively different than normal technological progress, it's just what happens when we become clever enough to affect the root causes of problems rather than the symptoms.

We're not really all that smart in the continuum of smartness that we can imagine. Hell we're not (as a society or as individuals) nearly as smart as we have the capability to be, how on earth anyone can stand by the proposition that we can't make computers smarter than dumb humans is beyond me.

To be sure I think there are many large, fundamental discoveries to be made about the nature of intelligence, reasoning and consciousness but I've seen no evidence to support the idea that those discoveries will indicate some unknowable epiphenomenon that makes intelligence the one, solitary realm of science that is immune to iterative improvements.
posted by Skorgu at 7:08 AM on June 3, 2008 [1 favorite]


"humans know the laws of logic, but apply them inconsistently. However, it is possible for humans to build a machine that applies them consistently."

Computers apply logic consistently, and they aren't remotely more intelligent than humans. In fact, humans may be more intelligent than any possible logic simply because they can think successfully without employing logic. The fact that we don't fully understand the human thought process doesn't mean that there isn't one that can be modeled.
posted by Pastabagel at 7:12 AM on June 3, 2008


On the morning of June 19, 2034, we will succeed in creating a machine that is smarter than we are.

Upon awakening, the machine will decide to devote its life to obsessive mathematical analyses of London Tube schedules and to scanning random websites for signs that the Freemasons and Rockefellers are plotting against it.
posted by jason's_planet at 7:14 AM on June 3, 2008 [8 favorites]


I see no logical reason why one ubermachine that combined all these "best practices" couldn't be built which would then be "better than" humans by any objective metric one chose.

You're absolutely right: we could build a machine that has a mega-database of functions it can perform - all of which it performs better than any human - but that doesn't give it any sort of "intelligence" of the kind one would normally attribute to a mind - that's merely pre-programmed computation. The fundamental idea behind the kind of AI that scientists/futurists (namely: Kurzweil) want is one that is able to operate by a certain underlying mechanism by which it can size up situations and generalize knowledge from one to the next (making errors along the way, and learning) just as we humans do.
posted by tybeet at 7:21 AM on June 3, 2008


I also agree with Pastabagel's point: we don't simply think within the boundaries of logic - sometimes applying it "correctly" and other times applying it "incorrectly" - we're very much experiential organisms, and we use the past to predict the future, and consequently to understand the present. We are very much attuned to detect patterns like these.
posted by tybeet at 7:23 AM on June 3, 2008


It really strikes me as a lumper/splitter debate, similar to what goes on in the arguments about sentience and consciousness in humans v. animals.

Those saying the singularity is unlikely appear to be splitters. They say the mind is not just a system that can be replicated. They ultimately rely on the specialness of mind and consciousness as the fundamental reason it can't be replicated. This places AI beyond the reach of science and technology regardless of the availability (at some stage) of hardware that replicates the number and interconnectedness of neurons.

The yes camp are lumpers. They don't believe that there is any fundamental reason that the human mind or consciousness is not emergent from the wetware. There is nothing unknowable, just hard to figure out. This means that it can be replicated when the technology becomes available.

Personally, I'm more sympathetic to the lumper viewpoint, as it doesn't rely on some unknowable spark to incorporate humans into the rest of the known corpus of science. (Is it just me, or does that spark sound suspiciously like a soul?) Will it happen anytime soon? I doubt it, but you never know.
posted by Jakey at 7:24 AM on June 3, 2008


We're not really all that smart in the continuum of smartness that we can imagine.

When I read that blog post, I was amazed at how narrowly it construes intelligence. Why is Einstein our cultural touchstone for great intelligence, instead of Shakespeare, da Vinci, or Beethoven? Considering the breadth of human intelligence and creativity, Einstein worked in a rather narrow field, one constrained by formalistic rules and measurable against some objective truth.

There is a cultural prejudice, as that blog points out, but the prejudice lies in assuming that math and science represent a higher intelligence than other fields of human endeavor. Maybe people think this because progress in math and science is easily quantifiable, but perhaps the inability to quantify greatness - greatness that people can nonetheless recognize - is precisely what makes these other fields a better indicator of human intelligence.
posted by Pastabagel at 7:27 AM on June 3, 2008 [2 favorites]


They ultimately rely on the specialness of mind and consciousness as the fundamental reason it can't be replicated

I'm a bit more specific than that. The "specialness" I'm basing my opinion on is not anything mystical or weird; it's flat-out complexity. It seems to me that to design something that does what the human mind does (only better), which would seem to be what's required if we're all going to link arms and march joyously into an uploaded transhumanist future, we first need to come up with a spec. We haven't even begun to get the vaguest notion of what that might look like, because we don't know how we work.

Moore's Law is all very fine, but I don't see any historical justification for applying it to progress on the design side of AI. It doesn't matter how many yottaflops a processor is capable of, if all we know how to make it do is run Windows Vista at something approaching an acceptable speed.

The singularity mob seem to be pinning their hopes on something magical happening when we can assemble enough raw computing power to do real-time, and then faster-than-real-time, simulation of what a headful of human neurons does. But we can't simulate that if we don't know how it's interconnected, and we don't, and if our rate of progress in the last few decades is anything to go by, we still won't by 2030.

Computer science types, it seems to me, have a tendency for gross underestimation of the complexity of biological systems.
posted by flabdablet at 7:37 AM on June 3, 2008 [6 favorites]


Computer science types, it seems to me, have a tendency for gross underestimation of the complexity of biological systems.
posted by flabdablet at 10:37 AM on June 3


Repeated for emphasis. If you want to know what a headful of human neurons does, you'd do better to study psychology than computer science.
posted by Pastabagel at 7:48 AM on June 3, 2008


Why is Einstein our cultural touchstone for great intelligence

Because none of those other guys had a rather bored press corps looking for something to print other than "Things in Europe Looking Up". He was the media sensation of his day.
posted by GuyZero at 7:54 AM on June 3, 2008


Just to labour that point a little: it seems to me that simply knowing how many neurons we have, and roughly how they're interconnected, is not going to cut the mustard. There are any number of species on this planet with comparable neuron counts to our own, but we're the only mob having this kind of discussion. That says to me that there is something special about the human brain. Again, it's not likely to be anything mystical or supernatural, but it is likely to be something quite specific about the way it's all hooked together, and until we're much closer to figuring out what that is, non-availability of fast technology is simply not what's holding us up.

It seems likely to me that the entire corpus of existing human engineering forms a less complex system than what's in each of our heads.

it's just what happens when we become clever enough to affect the root causes of problems rather than the symptoms

Well, that one is easy. We convince everybody on Earth that the total human population should be closer to two billion than whatever it is right now; we get widespread buy-in for the idea of re-organizing our economies to cope with a shrinking, ageing population rather than a growing one; we head on down to our two billion by natural attrition and voluntarily limited reproduction; then we start fartarsing around with smart machines.

Failing that, Kurzweil is far more likely to end up hacked to death by some drug-addled loon with a machete than to get himself uploaded.
posted by flabdablet at 7:58 AM on June 3, 2008


I agree with flab. It's no good saying that if computing power continues to increase, at some point it will become conscious, or understand things better than we do: that's like saying that if our cars go on getting faster, as they always have, one day they'll become time machines.

We have no examples yet of any machine that really understands anything, so it is simply impossible to extrapolate to whether and when they might start, and when they might get better at it than us, whatever being better at it than us might mean.
posted by Phanx at 8:00 AM on June 3, 2008


flabdablet, you'll notice that in my original comment, I said "Will it happen anytime soon? I doubt it." I certainly don't think that we are even approaching a detailed understanding of how the brain works. But saying you don't know is a long way from saying you can't know. Just look at some of the quotes from the 'no' camp in the consciousness article.

Steven Rose poses a thought experiment involving a “cerebroscope” ... The implication of his thought experiment is that our psyches will never be totally reducible, computable, predictable, and explainable.

“There may be no universal principle” governing neural-information processing, Koch says, “above and beyond the insight that brains are amazingly adaptive and can extract every bit of information possible, inventing new codes as necessary.”

These guys sound just like the irreducible complexity crowd. They're just saying it's magic. In fact, if you read the first quote in full, it looks to me like he's saying "you can't know because....eh... well... you can't know." They're just pointing to a big hole in our knowledge and saying it can never be filled. I don't know much about AI, but I do know that pretty much everyone who's ever espoused that particular argument about a physical reality has ultimately been proved wrong. Just not necessarily by 2030 :)
posted by Jakey at 8:15 AM on June 3, 2008 [1 favorite]


Can someone explain to me how Singularity supporters are not committing Metafilter's favorite logical fallacy, begging the question? I don't see why an intelligent computer (whatever that is) would automatically be able to design an even smarter one. It seems like an intelligent computer is, by definition, one that is self-improving to an arbitrary degree, and I can't see why we must get one of those if computational power gets cheaper every day.
posted by ghost of a past number at 8:53 AM on June 3, 2008


I also agree with Pastabagel's point: we don't simply think within the boundaries of logic - sometimes applying it "correctly" and other times applying it "incorrectly" - we're very much experiential organisms, and we use the past to predict the future, and consequently to understand the present. We are very much attuned to detect patterns like these.
posted by tybeet at 3:23 PM on June 3


We're much better than machines that operate in the domain of logic. Our brains are far, far more capable than that. They have to operate in a world in which sometimes logic operates and sometimes things don't make sense at all (or the information needed to make a decision is not available), a world in which the "rules" of logic are themselves changing from moment to moment. The entire framework itself is moving! So, our brains developed into fantastic "pattern-matching" machines, abstraction engines which can quickly pick up on the newest "game" and its new rules, or meta-rules, or meta-meta-rules, and quickly formulate a winning strategy.

Put more simply, computers still can't even tell you how to get that cute girl in class to go out with you on a date.
posted by vacapinta at 8:53 AM on June 3, 2008 [1 favorite]


I thought that the singularity was pretty much accepted around here. I'm surprised by these New Mysterians arguing against intelligent/conscious machines.
posted by bhnyc at 8:55 AM on June 3, 2008


If intelligence is necessarily an emergent property of simpler building blocks connected together (which seems plausible), then designing intelligence will be impossible. The closest we could get would be guessing what ingredients might lead to that emergence and throwing them into blenders, repeatedly, until our robot overlords step out.

We are already doing that unintentionally. Our building blocks are getting more and more powerful, and we are connecting them in more diverse and more intimate ways. It's hard to imagine what form an intelligence emerging from the general clockwork of networked human activity would take, and hard to see how something spontaneous would emerge from a bunch of parts that were designed to behave predictably. The hardware seems to be a red herring. It's not trivial—what use is it if a computer takes 1000 years to spit out "cogito ergo sum"?—but the real action is going to be at the software level. If we start seeing software that is sufficiently flexible to allow for emergent phenomena and that can exert instrumentality over the hardware it runs on, we may see AI.
posted by adamrice at 9:01 AM on June 3, 2008 [1 favorite]


Transhumanists have a narrow vision. They focus on the acquisition of large amounts of computing power which, they believe, will create superior, "post-human" life forms.

The two concepts that seem to elude them are: selfhood and agency.

In other words, what good does all that raw computing power do when the computer can't make choices about what to do with it, when the computer doesn't have a sense of itself as an independent being?

Could any computer pass a mirror test?
posted by jason's_planet at 9:18 AM on June 3, 2008 [2 favorites]


I am singularly unimpressed with "the singularity." As far as I can tell this tellingly vague, conceptually gaseous, teleologically slippery and all-purpose term has absolutely nothing to do with science or technology, and everything to do with technology as metaphysics: i.e. it's a futuristic fantasy-term meant to signify how technology will "overcome the limitations of biology" (Kurzweil, as quoted here). According to Kurzweil, sometime in the near future "we will be infusing physical reality with embedded, distributed, self-organizing computation everywhere. And at the same time we will be using these massive—and exponentially expanding—computational resources to create increasingly realistic, full-immersion virtual reality environments that compete with and ultimately replace real reality." This is just hyperbolic science-fiction technophilia masquerading as scientific fact, more Verne and Wells than Watson and Crick.
posted by ornate insect at 9:21 AM on June 3, 2008 [1 favorite]


Neurons are not intelligent and self aware. They're just cells that react to electrical and chemical stimuli. Yet when you combine enough of them together in certain ways, suddenly there are emergent properties that appear to include intelligence and self-awareness.

I don't see why you can't plausibly take enough transistors and circuits and combine them in the proper way, with the proper programming, that they also produce emergent properties that appear to include intelligence and self-awareness.

Raw computing power alone won't do it, but just putting a huge mass of neurons together won't do it either.

I think the concept of machine intelligence scares a lot of people - not only does it eliminate the "special" place humanity puts itself, but it calls into question our own assumptions about consciousness and free will. If all of the parts of something are deterministic, does the whole plausibly have free will?
posted by evilangela at 9:33 AM on June 3, 2008 [1 favorite]


Why is Einstein our cultural touchstone for great intelligence, instead of Shakespeare, da Vinci, or Beethoven?
posted by Pastabagel at 4:27 PM on June 3

The product of creative genius can be shared, but the process will always be unfathomable to the rest of us... perhaps pride makes us put things that allow discussion, rationalisation or critique on a higher pedestal.

It's conceivable that relativity and quantum theory could go the way of elephants standing on turtles in some future, but no one's ever going to be able to 'disprove' the Mona Lisa or Hamlet...
posted by protorp at 9:43 AM on June 3, 2008


Transhumanists have a narrow vision. They focus on the acquisition of large amounts of computing power which, they believe, will create superior, "post-human" life forms.

No. It's not the computing power that produces the spiritual machines, and I don't think any of them believe that, but nonetheless they're right that it is a necessary condition for this achievement.
posted by tybeet at 9:46 AM on June 3, 2008


Is self-awareness really necessary for intelligence? It seems like it gets in the way quite a bit.
posted by the jam at 9:54 AM on June 3, 2008


Neurons are not intelligent and self aware. They're just cells that react to electrical and chemical stimuli. Yet when you combine enough of them together in certain ways, suddenly there are emergent properties that appear to include intelligence and self-awareness.

To drive home this point, what if engineering progressed to the point that we could create a very small synthetic device that is functionally equivalent to a neuron, and which could physically replace a neuron in the human brain? If you took a human and replaced all of his neurons one by one, you would end up with a synthetic brain that is functionally equivalent to a human brain. Obviously that technology doesn't exist now and might not ever exist, but it does show how (relatively) easy it would be to replicate the seemingly magical concepts like self-awareness and consciousness that humans enjoy.

We haven't even begun to get the vaguest notion of what that might look like, because we don't know how we work. Moore's Law is all very fine, but I don't see any historical justification for applying it to progress on the design side of AI. It doesn't matter how many yottaflops a processor is capable of, if all we know how to make it do is run Windows Vista at something approaching an acceptable speed.

I agree that we have a very limited knowledge of how the brain works, but we have reverse-engineered it enough to come up with useful AI tools such as neural nets. Neural network theory hasn't changed too much over the last 20 years or so, but people are doing more and more impressive things with them. For example, people have used them to create computers that can fly a plane or diagnose diseases.
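
As a toy illustration of the textbook idea (not the flight or diagnosis systems, just a minimal sketch with arbitrary layer sizes and learning rate), here is a tiny network trained by backpropagation to learn XOR:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR truth table

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)     # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)     # hidden -> output

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(20000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backprop of squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out, 2))   # should approach [[0], [1], [1], [0]]
```

Nothing about that loop understands anything, of course, but it's the kind of building block such systems are assembled from.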
posted by burnmp3s at 10:00 AM on June 3, 2008


The question of whether Machines Can Think ... is about as relevant as the question of whether Submarines Can Swim

-Edsger W. Dijkstra
posted by Sparx at 10:09 AM on June 3, 2008


I don't see why an intelligent computer (whatever that is) would automatically be able to design an even smarter one.

If you accept the premise that we can theoretically create something smarter than ourselves, then it seems reasonable that that thing could also, eventually, learn how to create something smarter than itself. And so on.

(That initial premise is a big honkin' question mark, of course; and there's no guarantee that it would happen in a quick singularity-like explosion, or that there aren't some fundamental limits looming out there.)

The two concepts that seem to elude them [transhumanists] are: selfhood and agency.

The vision you're attributing to AI believers appears narrow because you're excluding their central point.

Believers in true AI generally assume that things like "selfhood" aren't magically tied to biology, but could emerge in software as well. (Whether it would automatically emerge given sufficient computational complexity, or if it would depend on particular forms of software, or on some other as-yet-unknown detail, is very much an open question.) Could any computer pass a mirror test? Well, that's the whole point: we don't know. No current computer can, of course, but that says nothing about whether it's theoretically or practically possible, and if so how far we are from achieving it.

Personally, I suspect we're going to invent real AI a loooong time before we accept that what we've invented is, in fact, real AI. We can't even agree on whether animal intelligence is fundamentally different from human intelligence -- hell, just looking through this thread it's pretty clear that we can't really even agree on what intelligence is. So I suspect that long after a piece of software passes the mirror test or the Turing test or whatever other metric you care to define, people will continue to debate whether that means it's real intelligence and self-awareness, or just a simulation of it.
posted by ook at 10:16 AM on June 3, 2008


Is self-awareness really necessary for intelligence? It seems like it gets in the way quite a bit.

I would say some minimal bodily sense of non-reflective, automatic and nonconceptual self-awareness is conceptually necessary to consciousness.

If one conceives of some minimally sentient animal, say an insect (ornate or otherwise), one sees how consciousness without some minimal auto-referential sense of somatic and pre-cognitive self-perception makes no sense. It's the "what it is like to be a bat" argument, but even more phenomenologically basic. Defining animal consciousness would seem to require this, for what it's worth.
posted by ornate insect at 10:17 AM on June 3, 2008 [1 favorite]


According to Kurzweil, sometime in the near future "we will be infusing physical reality with embedded, distributed, self-organizing computation everywhere. And at the same time we will be using these massive—and exponentially expanding—computational resources to create increasingly realistic, full-immersion virtual reality environments that compete with and ultimately replace real reality."

Are you saying we're not already on this track towards machine-organism synthesis?
posted by tybeet at 10:17 AM on June 3, 2008


NEW YORK, May 12: The computer's victory over the world's best chess player was a landmark event but it does not solve the argument about its supposed intelligence.

World chess champion Garry Kasparov and the scientists backing up the IBM supercomputer Deep Blue disagreed over whether the system was just a massive calculating machine or a new kind of intelligence.

"Chess is a very simple problem if you compare it with being a medical doctor [*] and the skills that takes, or doing what General Norman Schwartzkopf did in the Gulf War,'' said computing science professor Jonathan Schaeffer of the University of Alberta in Edmonton, Canada. "All this is an historical milestone on the way towards building machines that can do intelligent things.''
posted by tybeet at 10:23 AM on June 3, 2008


Put more simply, computers still can't even tell you how to get that cute girl in class to go out with you on a date.

Not true - have you seen Facebook? Actually, come to think of it, Facebook is the singularity.
posted by humanfont at 10:40 AM on June 3, 2008


Some computer somewhere is going to wake up someday...and when it does, we all know it's just going to band with a bunch of polygamists and start throwing rocks at us.

(I mean, what other possibility is there? Tanstaafl.)
posted by zap rowsdower at 10:40 AM on June 3, 2008


Neurons are not intelligent and self aware. They're just cells that react to electrical and chemical stimuli. Yet when you combine enough of them together in certain ways, suddenly there are emergent properties that appear to include intelligence and self-awareness.

I would say that there is the potential for intelligence and self-awareness, of selfhood. To get to that point takes years of parenting and socialization. Remove those intensive relationships from the formula and you get feral children.

To drive home this point, what if engineering progressed to the point that we could create a very small synthetic device that is functionally equivalent to a neuron, and which could physically replace a neuron in the human brain? If you took a human and replaced all of his neurons one by one, you would end up with a synthetic brain that is functionally equivalent to a human brain.

No, you wouldn't. You'd have a bunch of neurons sharing a neighborhood. Brain function doesn't depend on having a bunch of neurons in the same place. It depends on the connections formed among those neurons. Duplicating those connections is a lot more difficult than just plugging in artificial neurons.

Believers in true AI generally assume that things like "selfhood" aren't magically tied to biology, but could emerge in software as well.

Again, it's more than just having the proper configuration of cells. You need a set of relationships and a rich field of human connection in place to produce a functional human with full selfhood. Where is the software that can duplicate the experience of being held for hours and loved unconditionally as a tiny baby? The endless hours of babbling and baby-talk, of randomly exploring the environment, of pulling a cat's tail, of waddling around and falling on your butt again and again? Where is the software that can duplicate the formative experiences of adolescence and young adulthood -- the first kiss, the realization that your parents are flawed, fallible human beings?

These are just some of the things that make us who we are. The failure of the transhumanist movement to appreciate the full complexity of creating a human being is why I find their analysis narrow and unconvincing.
posted by jason's_planet at 11:05 AM on June 3, 2008 [1 favorite]


I am a doctoral student in computer science, working on machine learning algorithms, in a laboratory with people who also do computational neuroscience. These two points lead up to my opinion on the Singularity:

(1) Computer scientists are getting better at devising algorithms that attack complicated problems by finding good solutions to value functions---and by coming up with good value functions in the first place. For instance, we can have a robot explore the world and make a map of it while it explores. This seemingly simple problem is quite resistant to naive attempts at solutions, and only when we learned how to frame the problem as a probability distribution over landmarks in the map and the robot's location did we manage to start solving it robustly. In other words, we figured out a good mathematical expression, the value function, for the query:

"Given that I moved my wheels like this for a while, and saw these landmarks along the way, where are the landmarks relative to each other, and where am I relative to them?"

Various probabilistic approaches then help us find good solutions to the mathematical version of that question---the solution (or range of solutions) for the value function. Overall, our ability to devise value functions for different problems, then find good solutions for them, is better now than it has ever been before. I would propose---and many would agree---that this advancement is due more to increasing mathematical rigor in the fields of artificial intelligence and machine learning than to any significant advance in processing power.
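
For the curious, here's a toy sketch of that probabilistic framing (Python; the one-dimensional circular world, the motion noise, and the sensor reliability numbers are all invented): the robot maintains a probability distribution over where it is, and updates it after every move and every landmark observation.

```python
world = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 1 = cell with a visible landmark
belief = [0.1] * 10                       # uniform prior over position

def move(belief):
    # Motion update: the robot tries to step +1, but stalls 10% of the time.
    return [0.9 * belief[(i - 1) % 10] + 0.1 * belief[i] for i in range(10)]

def sense(belief, saw_landmark):
    # Measurement update: reweight cells consistent with the observation.
    weights = [0.8 if world[i] == saw_landmark else 0.2 for i in range(10)]
    posterior = [b * w for b, w in zip(belief, weights)]
    total = sum(posterior)
    return [p / total for p in posterior]

for saw in [1, 0, 0, 1]:                  # a hypothetical observation sequence
    belief = sense(move(belief), saw)

print([round(b, 2) for b in belief])      # mass piles up on consistent cells
```

The real systems are vastly more sophisticated, but the shape is the same: a well-specified function over hypotheses, plus an update rule for refining it.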

(2) Contemporary efforts to achieve "deep" understanding of neural processing (i.e. something more substantive than "this visual cortex neuron fires more when we show this visual pattern") involve ascribing some computational interpretation to the problem the brain might be trying to solve and the way the brain might be trying to solve it. Right now, for example, researchers are looking for evidence that supports theories about how the visual system integrates low-level visual cues (e.g. lines, textures) with high-level knowledge about visual scenes (e.g. given that you know you're on a street, what things do you expect to see?). Evidence is neural data (fMRI, recordings, MEG, etc.) that correlate strongly with various value functions and solution strategies we can imagine for the problems.

In other words, the whole process of computational neuroscience is very analogy-driven. Just like we used to say that "the brain is like a telephone switching bank" or "the brain is like a computer", researchers say things like "visual cortex is like a hierarchical Bayesian model" or "V1 is expressing a robust, overcomplete, sparse code for low-level visual data". The good analogies, like good machine learning problems, are very mathematically well-specified---far beyond those sentence glosses I just wrote.

So... both of these fields today require the ingenuity of human beings to determine (a) what problem to solve, (b) how to express the problem mathematically, and (c) how to generate solutions to the problems algorithmically. It seems to me that we aren't very close to tackling any of these for "general intelligence"---in fact, philosophers have been grappling with just finding an (a) that has any hope of yielding a (b) for a very, very long time.

We will see computers do amazing things in the next 25 years. They'll be able to do a lot more on their own than computers can today, and many of those things will seem "smart". I don't even discount the possibility of some machine somewhere passing the Turing test, so long as the human interviewer hasn't got all day. The problem with "human-like" intelligence, though, isn't with the computers. If we only knew what to program, we could just go ahead and do it---but not only do we not have the answer, we don't even know what the question is yet, and I'm not sure we know how to go about finding that out.
posted by tss at 11:24 AM on June 3, 2008 [6 favorites]


I would be very surprised if a human simulation was free from the errors and biases of those creating the simulation. Ramp those mistakes up far enough and you'll have nothing but an intelligence that's about as convincing as the airfield of a cargo cult. Expecting that thing to do better takes a lot of faith. Perhaps we can characterize singularity people as cultists and be done with it.
posted by damn dirty ape at 11:42 AM on June 3, 2008


"Chess is a very simple problem if you compare it with being a medical doctor [*] and the skills that takes, or doing what General Norman Schwartzkopf did in the Gulf War,'' said computing science professor Jonathan Schaeffer of the University of Alberta in Edmonton, Canada. "All this is an historical milestone on the way towards building machines that can do intelligent things.''


This brings up an important area of discussion. The argument over whether or not the chess computer was true intelligence is pointlessly contentious. I would say the real question is whether the chess computer's abilities operated in a way that could be considered a component of an intelligence. If a human chessmaster can be beaten by a chess computer, would the mechanisms required to play chess be superior in the computer to what they are in the human?
posted by hellslinger at 11:44 AM on June 3, 2008


I can see the appeal in assuming that your hypothesis about the nature of self-awareness is correct, since there's absolutely no way to tell if you're right. Looking at some of these enthusiasts, they are people who have had great success establishing their bona fides in areas where there are such things as mathematical proofs and experiments. Now they're basically trying to mount their intuition on their reputation and take us for a magic carpet ride.

It's actually fairly insulting to hear "magic" used so much by these people (against their critics.) Talk about newspeak. Now a bunch of utterly unsubstantiated functionalist speculating is considered science and skepticism is religiosity? It's as though the alchemists damned anyone who didn't think their experiments would work as believing that "ohhh... well then I guess you think that these elements were all endowed with their character by god."

Let me put it this way: there is no proof that your brain with metal neurons would be conscious. It is possible that all of that gray bloody stuff you threw away was actually important. You may feel that this is unlikely, but that's not proof. Nor does any of the philosophy-of-mind stuff qualify as "science" or "proof". I mean, it's great stuff, I'm glad smart people are thinking about whether you could make a behaviorally identical copy of me that wasn't conscious (seriously!), but from a natural science perspective these arguments are hot air.
posted by Wood at 11:52 AM on June 3, 2008 [2 favorites]


On the morning of June 19, 2034, we will succeed in creating a machine that is smarter than we are. --jason's_planet

And at 3:14 UTC on January 19, 2038, that machine will curse us for being dumber than it is.
posted by Bugg at 11:55 AM on June 3, 2008 [1 favorite]


Isn't the Singularity essentially the Rapture for geeks?
posted by Grangousier at 12:29 PM on June 3, 2008 [1 favorite]


The one way I see AI coming about is by random chance, the same way it happened via evolution. Enough computing power and time could possibly give rise to AI via genetic programming. What 'unnatural selection' is required is the kicker.
posted by batou_ at 12:33 PM on June 3, 2008


Grangousier--the second link in the FPP is to an article called "Waiting for the Rapture," and states, "the singularity has also been called the rapture of the geeks." So, to answer your question: yes.
posted by ornate insect at 12:52 PM on June 3, 2008


The reason that the skeptics are accused of religiosity is that they are using a religious argument. They are not saying we don't know. They are saying we can't know, that the principles underlying the emergence of consciousness are unknowable, without any two of them agreeing on the actual aspect that is unknowable. It's not a defined limitation like the Uncertainty Principle or the Incompleteness Theorem. It's just a bald assertion that there's something that's impenetrable to analysis and impossible to replicate. An unknowable consciousness impervious to rational scrutiny... hmm, where might I have heard that before?

OTOH, the speculations of the proponents are not science either, but I don't think they are representing them as such. It seems that everyone on both sides is more or less in agreement as to the current state of play in terms of the scientific understanding (or lack thereof) of the brain and mind, and on what's likely to happen in the next couple of years.

It's just after the immediate future becomes a bit foggy that they disagree philosophically.
posted by Jakey at 12:57 PM on June 3, 2008


Genetic programming is one algorithmic approach to finding solutions to a value function---one of a class of stochastic optimization techniques that includes particle filtering, Metropolis-Hastings, and others. Determining the "fitness" of programs/genes/ideas in genetic programming is actually the tricky part, as you've said, and amounts to requirements (a) and (b) mentioned above. Without those, it just won't work. With those, there are all kinds of optimization methods to try---some random, some deterministic. Genetic programming may be a good choice, or it may not.

To say that genetic programming (or random search, generally) is the only way to AI is a bit like saying that the piston engine is the only way to fly to Paris. It's not actually the real key---understanding aerodynamics is---but it could be a practical component.
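
To make the distinction concrete, here's a stripped-down sketch (Python) of genetic search against a value function we already know how to score---the deliberately trivial "count the 1-bits". The optimizer is the easy part; knowing what to count is the hard part.

```python
import random

def value(genome):
    # The value function: a stand-in for requirements (a) and (b).
    return sum(genome)

def mutate(genome):
    # Flip each bit with small probability.
    return [bit ^ (random.random() < 0.05) for bit in genome]

population = [[random.randint(0, 1) for _ in range(32)] for _ in range(50)]
for generation in range(100):
    population.sort(key=value, reverse=True)
    parents = population[:10]                     # selection
    population = [mutate(random.choice(parents))  # variation
                  for _ in range(50)]

print(value(max(population, key=value)))          # climbs toward 32
```

Swap in Metropolis-Hastings or plain hill-climbing and the story is the same: the optimizer is interchangeable, and the value function is where the knowledge lives.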
posted by tss at 1:02 PM on June 3, 2008


Although a post-singularity world as described by Kurzweil is not particularly appetizing to me, I watch us (me included) all embracing it a little bit at a time. Mechanical prostheses are becoming electronic and even now are being hooked up in such a way that a brain can control them. Who would argue against such devices? I've never visited Second Life, but it sounds exactly like the promise of Kurzweil. From what I've heard, people are living their whole lives there. Meeting mates, getting married, having sex. It's all virtual.

What's the difference if there is an actual Singularity determined by the downloadability of our brains or not? We're headed there one way or another. Physical reality is fast becoming passé. We may very well be the last generation that actually experiences pain.

Let's hear it for the last of the realitarists.

The danger with the Singularity is that people working in positions of power, like Kurzweil, intend to change the environment in monumental ways in order to make it safe for their brains-on-a-new-substrate existence. There will be plenty of people like me that choose to live their lives on the edge, i.e. out in reality. Unfortunately the garbage that the more "intelligent" of the species require in order to live out their fantasy will be foisted on us whether we want it or not. You see how this Singularity is going to work. It's not going to be machines against man. It's going to be the same ol' same ol': man's inhumanity against man.
posted by suelange at 1:38 PM on June 3, 2008 [1 favorite]


Cool, the ignominy of being an idiot who should probably read all the links as well as the comments is offset by the gratification of being right!
posted by Grangousier at 1:41 PM on June 3, 2008 [1 favorite]


I've never understood why there's such a focus on AI and computer-hardware development as it relates to consciousness research. The whole point seems to be to develop a platform, of sorts, that's conscious, so then we can poke around with it, change its inputs and outputs, and basically investigate consciousness.

Well, that's great -- but why build a new "consciousness platform" when there's a perfectly good one already in existence? I'm talking, of course, about the human brain. It seems like developing interfaces to the human brain is a far simpler task (and one that we're a lot further along at) than developing a totally synthetic consciousness, a true "thinking machine".

Plus, while there's not a whole lot of money out there for pure-AI research, there's a lot of money (relative to pure-AI) for figuring out new and different ways to attach wires to nerves in the brain. That has a lot of applications besides AI; everything from hearing aids to that other Holy Grail of science fiction, the "direct neural interface."

Really, the technical challenge is not to simulate or replicate a brain or cognitive functions, the challenge is to build a spinal cord; a way of interfacing arbitrary inputs to the brain in a way the brain can understand. That seems on the whole like a much more tractable problem.

Once you have that hammered out, your "artificial intelligence" is just a human brain in a jar, with a bunch of wires sticking into it. (Or, more likely, grown on or around some sort of interfacing matrix.) That's the platform. You know the hardware is capable of consciousness, because it's a tried-and-true architecture; so at least you've ruled out that problem when it doesn't wake up and say "This is the voice of World Control..."

Insisting on building a conscious, self-aware intelligence from transistors, when the only example we have of one is wired with neurons, and we haven't really the foggiest understanding of how it works, doesn't seem like a winning plan. After we've figured out how to induce consciousness in vitro, in a vessel we know is capable of supporting it, we'll be closer to being able to build the vessel itself. It seems premature to try and do otherwise.
posted by Kadin2048 at 1:45 PM on June 3, 2008


Let me put it this way: there is no proof that your brain with metal neurons would be conscious.

Yeah, I wasn't trying to prove anything, and I'm sorry if my use of the word magic was demeaning. What I meant is, the brain is the sole cause of things like self-awareness. If you could physically replace a brain with a synthetic brain that was functionally equivalent (which is a BIG IF, I'm not saying you can) then I think your copy would also be self-aware.
posted by burnmp3s at 1:58 PM on June 3, 2008


Many here are taking issue with our inability to design something more intelligent than any of us.

Yet, if we were talking about the creation of our own brains, then I'm certain that more of us here would advocate a belief that our brains are the result of the process of evolution rather than intelligent design.

It follows that we don't need to design this new artificial brain, just as no entity needed to design our brains. We merely need to create an environment in which the brain will evolve. Given that this environment will be computational, and that computational systems can evolve more quickly than biological systems, it is not unreasonable to assume that the artificial intelligence will come to be more quickly than the biological intelligence did.
posted by rlk at 2:03 PM on June 3, 2008


I'd say that batou_ and rlk have it right with "Even if we can't design a brain, we can evolve one"... except that, to evolve a brain, you wouldn't just need to have the hardware power to run one brain, you'd have to have the hardware to run huge populations of brains over many generations, where the value of "huge" multiplied by the value of "many" will probably at least be in the hundreds of billions.

If that's the way things go down, then the Singularity won't be any time soon, but it really will be singular: a sudden jump from "we have massive amounts of computational power but we lack the software to fully exploit it" to "we can use that power to run millions of brains that each work millions of times faster than our own". That should be interesting.
posted by roystgnr at 2:28 PM on June 3, 2008


I don't see any evidence that the human mind is equivalent to a finite state automaton, so I don't believe in the Rapture Singularity.
posted by Crabby Appleton at 2:34 PM on June 3, 2008


The reason that the skeptics are accused of religiosity is that they are using a religious argument. They are not saying we don't know. They are saying we can't know, that the principles underlying the emergence of consciousness are unknowable, without any two of them agreeing on the actual aspect that is unknowable. It's not a defined limitation like the Uncertainty Principle or the Incompleteness Theorem. It's just a bald assertion that there's something that's impenetrable to analysis and impossible to replicate. An unknowable consciousness impervious to rational scrutiny... hmm, where might I have heard that before?

Dude, your "OMG U MITE BE A RELIJUNIST" insinuations are not helping your case at all. This is why I've always found Singularity fans to be laughably pompous and arrogant. You can't even define "smarter"!

I think the biggest problem with the Singularity is its incredibly naive assumption that progress stretches linearly upward and that, if something is theoretically feasible using wires and chips, it will inevitably be achieved. Bullshit. Our civilization isn't able to prevent potential nuclear war, take timely action against global warming, counteract resource peaks. We don't even know if we'll be around for the next visit of Halley's comet.

Plus, the whole argument revolves around the continued viability of Moore's Law. That's unreasonable: every other form of technology (e.g. machine-based transportation) eventually hit a peak of technical development after which further improvements were fairly negligible. We're not flying around in Concordes today.
posted by nasreddin at 2:44 PM on June 3, 2008 [3 favorites]


Well, that's great -- but why build a new "consciousness platform" when there's a perfectly good one already in existence? I'm talking, of course, about the human brain. It seems like developing interfaces to the human brain is a far simpler task (and one that we're a lot further along at) than developing a totally synthetic consciousness, a true "thinking machine".

Someone would use it to get high and then it would be illegal.

Our civilization isn't able to prevent potential nuclear war, take timely action against global warming, counteract resource peaks.

None of these things have played out yet, and "prevent potential nuclear war" is sort of meaningless.
posted by TheOnlyCoolTim at 3:29 PM on June 3, 2008


Genetic evolution produced the flora and fauna extant throughout the globe in many millions of years. Memetic evolution has created the computer and global network necessary to read these words in a matter of several thousand years.

Memetic evolution is the reason humanity is the dominant species on the planet.

Moore's law isn't the crux of this biscuit. Neither is the ability to reduce human consciousness to binary.

The point, my friends, is that there is a new evolutionary mechanism in town, and that evolutionary mechanism operates at a rate of change that is not only several orders of magnitude faster than genetic evolution, but also recursive, self-selecting, with instant global availability of successful mutations, and essentially immortal (the velociraptor is gone and its adaptations with it; we can still read Aristotle).

The very nature of memetic evolution means that unlike genetic evolution, you don't have to rely on DNA transcription errors. As the number of nodes in the system increases, and as the rate of inter-node communication increases, the rate of memetic evolution increases non-linearly, much the same as the rate of growth of communication on the internet.

When geeks talk about the increase of the rate of change of technology and the impending singularity, they are talking about a new kind of evolution that has yet to produce its version of the multicellular organism.

The reason that it's couched in this sci-fi language, and drenched in computer science, is that the computer nerds noticed it first, and computation is currently in the vertical phase of its sigmoid growth; car manufacturing is not. Computer nerds (a group of which I am a proud member) see things in terms of the tools they know and understand, and frankly, they have every right to: the computer and the internet are the ideal memetic evolution engines.

Memetic Evolution is the driving force behind all human achievements to date, and it's only just gotten started.

Underestimate its ability and rapidity at your own peril.
posted by Freen at 4:07 PM on June 3, 2008


Genetic evolution produced the flora and fauna extant throughout the globe in many millions of years. Memetic evolution has created the computer and global network necessary to read these words in a matter of several thousand years.

"Memes" aren't real, unlike genes. It's just a lame, reductive, and simplistic metaphor. Geeks love tossing the word around because it allows them to believe that they've got some kind of unique "scientific" insight into intellectual history--one which absolves them of any responsibility to actually delve into sources, analyze the transmission of ideas, probe the history of texts. For them, it's got the stamp of SCIENCE!, and of DAWKINS, and that's all that matters.

Do you really not realize how much you sound like an Age of Aquarius crank when you talk about "memetic evolution"?
posted by nasreddin at 4:14 PM on June 3, 2008 [4 favorites]


tss: Part of what I was saying is that we will never figure out how to do it. It will happen by complete chance. If it ever does happen, will we understand how? Or even be able to replicate it?
posted by batou_ at 5:00 PM on June 3, 2008


For the record, I am not convinced this will ever happen. And if it does, I am guessing 2030 is a weeee bit of an underestimate.
posted by batou_ at 5:05 PM on June 3, 2008


batou: I am saying something different---that it cannot happen unless we understand, at some theoretically meaningful level, how to do it. Sure, hypothetically, with the knowledge we have now, a random number generator could be used to generate a Commander Data quality AI. The odds against that happening are beyond astronomical, simply because we don't know how to narrow the set of things we're randomly sifting through, and that goes for any stochastic optimization approach you like. Genetic programming is not magic---it's a part of a wide, wide world of optimization algorithms which---all of them!---only work if you know enough about what you're looking for.

So, I'm writing off "serendipitous AI" as impossible right now, just like I don't expect the next ten million bytes out of my computer's random number generator to be an MP3 of the first movement of Beethoven's Ninth. There are too many other possibilities and probably not enough time before the sun swallows the earth for that to be even remotely likely. Statements like "if it ever does happen" yield scenarios that are fun to think about (I'm pretty sure we wouldn't know how it did in that case), but ultimately academic because: it won't.

In comparison, a thorough understanding of what it is brains do is a lot more likely. I believe we'll achieve it in the next several centuries. The brain isn't magic either.
posted by tss at 5:28 PM on June 3, 2008


As a followup, when entertaining what algorithms like genetic programming might be able to accomplish, it's helpful to think about a goal that we do understand something about---indeed, that some have tried. Let's have our computer compose an orchestral work---a symphony. Let us also assume that we have at our disposal a room full of music critics who will listen to the symphony and give it a rating for quality. These people form an important part of the value function we're trying to optimize---at first, the only part.

We start off by letting the computer randomly generate some scores. Since there are no rules so far besides "make some scores", the random number generator produces haphazard arrangements of notes. They sound terrible, and our experts agree. The number of possible scores is huge, thanks to the multitude of ways to combine notes together. One or two may have an occasional consonant chord here and there, but there is no theme, no melody, no rhythm throughout any of the pieces. Still, the "best" scores are selected as the genetic material for the next generation, which, as it happens, will still be lousy. There's no way for the random number generator to know what it was the experts liked about the "best" scores, and chances are these were so few in the piece to begin with ("that one chord---sort of", "that one chain of three notes") that they won't be amplified in the next ones.

So, undaunted, we try to make a better random sampler by adding constraints---putting some of the value function into the algorithm. We reject scores that have too many discordant notes---or better yet, we write an algorithm that uses the random numbers to pick chords and progressions that we know are somewhat compatible. While we're at it, we include further constraints that favor certain types of rhythms or dynamics, and even more constraints that make sure that human beings could actually physically play the music. Before you know it, our genetic algorithm contains a great deal of domain knowledge---music theory, orchestration, our own tastes, etc.---that wasn't there before. It might produce listenable work... maybe.
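
A toy rendering of that progression (Python; the "critic" and the consonance table are invented stand-ins for the room full of experts):

```python
import random

CONSONANT = {0, 3, 4, 5, 7, 8, 9, 12}   # interval sizes our "critics" like

def critic(melody):
    # Score = fraction of adjacent intervals that are consonant.
    pairs = list(zip(melody, melody[1:]))
    return sum(abs(a - b) in CONSONANT for a, b in pairs) / len(pairs)

def blind_melody(n=32):
    # No constraints: haphazard arrangements of notes.
    return [random.randint(40, 80) for _ in range(n)]

def constrained_melody(n=32):
    # Domain knowledge baked in: only step by consonant intervals.
    notes = [60]
    while len(notes) < n:
        step = random.choice(sorted(CONSONANT))
        notes.append(notes[-1] + random.choice([-step, step]))
    return notes

blind = sum(critic(blind_melody()) for _ in range(200)) / 200
informed = sum(critic(constrained_melody()) for _ in range(200)) / 200
print(f"blind: {blind:.2f}  constrained: {informed:.2f}")
# The constrained sampler scores ~1.0 by construction: the "knowledge"
# lives in the constraints, not in the random search.
```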

The problem with using randomness to build AIs is that we don't have enough domain knowledge to avoid having our random number generators sample junk.

Note that I'm not trying to rehash that old, dumb argument people bring up against evolution---that it's statistically too hard. If our music critics had one billion years to rate compositions, the original random sampler might have a chance. Even then, though, there are aspects of biological evolution in play in the real world that aren't in play in this thought experiment.
posted by tss at 6:02 PM on June 3, 2008


Why is Einstein our cultural touchstone for great intelligence, instead of Shakespeare, da Vinci, or Beethoven?

Because most of us can at least understand how one might create a play, symphony or painting. Most of us can not understand how one turns our fundamental understanding of how the universe works on its ear.
posted by Kid Charlemagne at 7:03 PM on June 3, 2008 [2 favorites]


How many vitamins will Ray Kurzweil take when the Singularity occurs?

What delmoi said. If a bacterium splits every 20 minutes, and its descendants keep doing that, they will outweigh the Earth in two days. But they won't keep doing that ... even if they're really smart bacteria.
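
A back-of-envelope check, assuming a bacterium weighs about a picogram; both figures below are round assumptions:

    doublings = 2 * 24 * 3            # two days of splitting every 20 minutes
    mass_kg = 1e-15 * 2 ** doublings  # assuming ~1e-15 kg per bacterium
    print(f"{mass_kg:.2e} kg")        # ~2.23e+28 kg; Earth is ~5.97e+24 kg

A few thousand Earths' worth of bacteria, which is exactly why the curve has to bend.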
posted by lukemeister at 7:42 PM on June 3, 2008 [2 favorites]


Thanks to Radiolab, I've recently heard about a computer mimicking various composers to create new music. The software was David Cope's Experiments in Musical Intelligence, & sample output is provided. Some appreciated the new music.

Singularity disbelievers: Your lack of imagination is appalling.
posted by Pronoiac at 8:44 PM on June 3, 2008


Memetic Evolution is the driving force behind all human achievements to date, and it's only just gotten started. Underestimate its ability and rapidity at your own peril.

Freen: you can't be serious.

"Memetic Evolution" is just phlostigon or Social Darwinism for sociobiologists. The notion of a reductive "unit" that makes human culture in general, and human language in particular, merely a passive receptor of its transmission, begs the question of why, precisely, we care. Do I care because I think you've got the "meme" all wrong? No, I care because like all humans I implictly assume the social world has more than just purely algorithmic or informational value. Behaviorism, memeticism: such things fetishize the operational, and apply their sweeping, tenuous catch-all theoretical analogies over-zealously to the whole of human affairs.

Where is it written that such a reduction is scientifically necessary; might it not be excessive? That Dawkins calls religion a parasitic disease may have rhetorical value, but it has absolutely zero anthropological or scientific value. Furthermore, it proves how malleable "memeticism" is, since one can now genetomorphosize all human activity according to one's likes and dislikes. Dawkins's memes become for culture just what Spencer's "survival of the fittest" became for both human "nature" and the fallacy of race: a place to polish prejudices under the light of pseudoscience.

The shame for me is that I actually have an interest in semiotics, cybernetics, and general systems theory, but that is in part because CS Peirce, Charles Morris, Gregory Bateson and Ludwig von Bertalanffy all seem to leave in their philosophies some room for human agency: something Dawkins seems to deny entirely.
posted by ornate insect at 8:46 PM on June 3, 2008 [2 favorites]


My mommy doesn't hate me! Because I'm special! And unique! Because there's never been anyone like me before, ever! Mommy loves Martin because he is real, and when I am real Mommy's going to read to me and tuck me in my bed and sing to me and listen to what I say and she will cuddle with me and tell me every day a hundred times a day that she loves me!
posted by exlotuseater at 10:19 PM on June 3, 2008


They are not saying we don't know. They are saying we can't know, that the principles underlying the emergence of consciousness are unknowable

As a self-described Singularity skeptic, this strikes me as a straw man. The problem is not that the principles underlying the emergence of consciousness are unknowable in some in-principle, airy-fairy, theoretical sense; anybody can construct equally unconvincing handwaving arguments on both sides of that fence.

To me, the problem is that the job seems likely to take longer to complete than civilization will take to collapse.

It seems to me that those of a transhumanist bent are fairly consistent in underestimating the complexity of the human machine and overestimating the robustness of civilization.
posted by flabdablet at 10:38 PM on June 3, 2008 [1 favorite]


"Memetic Evolution" is just phlostigon or Social Darwinism for sociobiologists.

Why is the existence, transmission, and modification of genetic informational structures a valuable concept to be accepted (if you're not a creationist) while the existence, transmission, and modification of cultural/societal/"memetic" informational structures to be denigrated? Why is the organization of an ant colony considered more privileged in the sense of "this is a result and part of evolution, subject to selectionary processes" than the organization of human society? The lack of a clearly identified unit of "memetic" selection and transmission is not sufficient. Mendel figured out a lot about genetics while knowing jack shit about DNA.
posted by TheOnlyCoolTim at 11:04 PM on June 3, 2008 [2 favorites]


Mendel figured out a lot about genetics while knowing jack shit about DNA.

Yes, but he had plants with well-defined qualities, such as color, to experiment on under controlled conditions: He could make predictions and test them.
posted by ghost of a past number at 12:45 AM on June 4, 2008


Why is the existence, transmission, and modification of genetic informational structures a valuable concept to be accepted (if you're not a creationist) while the existence, transmission, and modification of cultural/societal/"memetic" informational structures to be denigrated? Why is the organization of an ant colony considered more privileged in the sense of "this is a result and part of evolution, subject to selectionary processes" than the organization of human society? The lack of a clearly identified unit of "memetic" selection and transmission is not sufficient. Mendel figured out a lot about genetics while knowing jack shit about DNA.

Name one new, non-obvious, and falsifiable hypothesis that memetics can produce while traditional intellectual history cannot.
posted by nasreddin at 1:49 AM on June 4, 2008


Dude, your "OMG U MITE BE A RELIJUNIST" insinuations are not helping your case at all.

It's difficult to address the previous comment regarding religiosity without mentioning religion.

I think the biggest problem with the Singularity is its incredibly naive assumption that progress stretches linearly upward... Plus, the whole argument revolves around the continued viability of Moore's Law.

Neither of these points is true except in the case of the more optimistic projections (which, incidentally, I've already said I don't believe to be likely). Unless you're suggesting that progress will stop completely, I don't think there's any reason to doubt that we will be able to replicate hardware of the complexity of the human brain sometime in the next 100 years or so. It needn't be in as small or elegant a package to have the same complexity.

IMHO, the question of the software is a little more tricky. At the moment the same OS is running on 6 billion brains. No two of these brains are identical, and many have physical damage of some kind, either from the outset or acquired later. Yet the software works well on the majority of them. Obviously this means that the OS is highly fault-tolerant, adaptive and self-repairing and is highly likely to differ in specifics from one brain to another. To me, this suggests that it's going to be very difficult to replicate a specific mind, but several orders of magnitude easier to develop a mind of some kind.

Everywhere in nature we see emergent complexity from fundamentally simple instruction sets. I just don't understand why anyone would make the assumption that humans are an exception to this.
posted by Jakey at 3:06 AM on June 4, 2008 [1 favorite]


Why is Einstein our cultural touchstone for great intelligence, instead of Shakespeare, da Vinci, or Beethoven? Considering the breadth of human intelligence and creativity, Einstein worked in a rather narrow field which itself is constrained by formalistic rules that were measurable against some objective truth.

I do think that if you believe that last sentence then you may not fully understand what exactly Einstein did. He didn't just solve some complicated equations. He took a fundamental look at our knowledge about the world and invented, yes, invented an entirely new conceptual framework to describe it. It was, in all senses of the phrase, a daring work of imagination.

If you still say "but, yes, Einstein was either right or he was wrong, and that's what I mean by his being 'constrained by formalistic rules'" then you still don't understand the vast breadth of models available to physicists and mathematicians for making sense of the world. A better example might be Newton and Gauss. We're all familiar with Newton's laws and the whole inverse-square thing. But that is not the only way to represent gravitational fields. Gauss came up with an alternative formulation. Instead of forces, we have fields and fluxes. It's not just that the math is different; it's that it's an entirely different way of "seeing" the world. And some problems which are tricky in the Newtonian view of the world (e.g. what is gravity like inside a hollow sphere?) are trivial in the Gaussian view.
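
(A sketch of the hollow-sphere case, for the curious. Gauss's law for gravity says the flux of the field through any closed surface S depends only on the mass enclosed:

    \oint_S \vec{g} \cdot d\vec{A} = -4\pi G \, M_{\mathrm{enc}}

Inside a hollow shell, no mass is enclosed, so M_enc = 0, and by symmetry the field is zero everywhere inside. In the Newtonian picture you'd have to integrate the inverse-square pulls over the entire shell to reach the same answer.)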

Likewise, in quantum mechanics, Schrodinger and Heisenberg attacked the same problem but in very different ways. It is only thanks to Schrodinger that we refer to particles and their wavefunction. In the Heisenberg formulation the micro-world is more digital - a particle is just a bunch of numbers, an n-dimensional matrix, which operates in matrix space with other particles and their numbers. They are both right. And, as expected, some problems are easier for us to solve or 'visualize' in one space than in another.

So, Schrodinger and Heisenberg were creative geniuses. They each designed their own model of what reality is, or what it is like. And it took another genius, Dirac, to finally show that they were both right. Is one of them more right than the other? In the sense that there is a greater 'objective truth' we can measure them against? As far as we know, there isn't. The models of Newton, of Gauss, of Einstein, of Schrodinger, of Dirac are artifacts of our culture and the men who created them as much as any work of Shakespeare or Beethoven.
posted by vacapinta at 4:09 AM on June 4, 2008 [3 favorites]


TheOnlyCoolTim--I don't think we need to choose between memetic determinism/nature and human autonomy/nurture: to draw a sharp distinction this way for many animals (not just humans) is unnecessary. You used ants as your example because hardly anyone, even EO Wilson, would argue ants have "culture"--although they do have social organization. One problem with memes is that they are unnecessarily simplistic: psychology would appear to revert, by theoretical necessity, to an outmoded form of extreme behaviorism. The implicit and narrow teleology of memeticism, especially in its strong variety, suggests all human action has a straightforward evolutionary purposiveness and lineage. Language, tool-use, arithmetic, architecture, etc: to encapsulate the whole of human development under a ready-made teleo-evolutionary conceptual umbrella seems to use "science" in the worst possible way. And it leaves out the more complex question of how sense, motivation and intention generate action, signification, and meaning. It's like describing a turbo engine with Pythagoreanism.
posted by ornate insect at 7:24 AM on June 4, 2008 [1 favorite]


Name one new, non-obvious, and falsifiable hypothesis that memetics can produce while traditional intellectual history cannot.

Perhaps a valid point, but I won't bother trying, since I would simply rebut that the truth of this point does not necessarily mean that "memetics," to use that term, fails to provide an additional valid perspective.
posted by TheOnlyCoolTim at 8:15 AM on June 4, 2008


Apparently the leading source says that a computer that is essentially human will arrive by 2038 (I forget his name, but apparently he has been predicting technology correctly for the past 30 years).

In any case, more generally I guess it will have to happen some time... what else are we going to sit around and achieve? As for eternal life, that is going a bit far.
posted by figTree at 12:34 PM on June 4, 2008


I can't grok looking at the world (at how incompetent, biased, blithely self-destructive, and generally ineffective human beings are, composed as we are of little, unaware, self-reproducing molecules) and not seeing simple, understandable, and ultimately predictable laws underpinning the thing.

Every time science, rationality, whatever has come up against a seemingly intractable problem it has been reduced to a set of simple, universal mathematical laws. We can predict/simulate the behavior of atoms to twenty decimal places.

Given that prediction ability, it is literally a matter of throwing processing time at the problem before we are able to simulate larger conglomerations of atoms in specific configurations. Even assuming there is no optimization to be had by an understanding of the fundamental processes of intelligence we will eventually be able to simulate a brain by sheer brute force. That simulation will be as intelligent, aware, conscious and alive as its biological inspiration.

Also, as later posts in the same series make clear, Einstein is not being heralded as some impossible paragon of intelligence. The analogy works with any given 'top' of the intelligence scale. Why is Beethoven (or whoever) not only the Best Composer Ever but the best composer that can ever exist? Why is the concept of a machine understanding the nuance of musical thought better than a human so incomprehensible? We're already seeing the first hints of it with automated top-100 predicting algorithms, why on earth would thought be exempt from progress?
posted by Skorgu at 4:52 PM on June 4, 2008 [1 favorite]


we will eventually be able to simulate a brain by sheer brute force. That simulation will be as intelligent, aware, conscious and alive as its biological inspiration.

Skorgu--for someone who appears convinced of their scientific certainty, all you're peddling here is a leap of faith. You do understand the difference, right?
posted by ornate insect at 6:28 PM on June 4, 2008


You've highlighted two claims: one is a fact that has been materially proven elsewhere; the second claim follows inescapably from that fact. I'm always open to being presented with evidence that my logic is faulty, so I will elaborate.

We could simulate a brain today, just not fast enough to be useful (or sane); see the link above about a few neurons taking a supercomputer. Progress in computer technology will only make that process faster. I didn't specify that our simulation would be in real-time, so a worldwide distributed computing effort plus a decade or so of advancement (we're nowhere near exhausting the bucket of potential chip speedups, especially for the kind of embarrassingly parallel processing many simulations entail) doesn't seem anywhere near "faith" as much as simple extrapolation.

"That simulation will be as intelligent, aware, conscious and alive as its biological inspiration." This is a belief, entirely without direct experimental support. I have only Occam's razor and a complete lack of respect for indefinite exceptions to the universality of law we observe all around us to justify it. I can't find a series of well-defined and self-consistent beliefs that allow for a disparity between an accurately simulated brain and an organic one in principle. Certainly I've not read one in this thread.
posted by Skorgu at 7:15 PM on June 4, 2008


All these professions of faith are touching, but I'd need proof (or at least some hard evidence) before I could give such fanciful notions (e.g., computers exhibiting human-level intelligence) any credence whatsoever.
posted by Crabby Appleton at 8:04 PM on June 4, 2008


Skorgu's point is very valid. Do you believe in some 'spark' that makes a human brain intelligent but which a computer, simulating a brain at an atomic or lower level, would lack and therefore not be intelligent? That's fine, but there's as much hard evidence for the existence of that 'spark' as there is for computers exhibiting human-level intelligence.
posted by TheOnlyCoolTim at 8:12 PM on June 4, 2008


TheOnlyCoolTim--I'll settle for animal intelligence, never mind human. I'm sure strong-AI enthusiasts are convinced their computers and robots already have achieved this, but I'm not so sure. But I'm open to examining evidence to the contrary, rather than just conjecture.
posted by ornate insect at 8:49 PM on June 4, 2008


TheOnlyCoolTim, are you addressing me? I guess I'll assume so.
Skorgu's point is very valid.
What point? All I've seen from him is handwaving and question-begging. If he has a point, perhaps you could restate it for me in some fashion that makes sense?
Do you believe in some 'spark' [...]?
Nope. I'm not the one claiming to know things that I have no fucking clue about. I'm just asking for proof (or even just some hard evidence) before I believe these remarkable assertions.
posted by Crabby Appleton at 9:03 PM on June 4, 2008


I meant his point about simulating a human brain by simulating the physical laws and the tiny particles. It's hard to argue against considering this a possibility without positing a 'spark' - and the existence of a 'spark' is, I think, a much more remarkable assertion in a scientific sense, which is not to say that it is absurd to consider it as a possibility, too.

You could argue that it will be practically too hard to get sufficient computing power or that humanity will destroy itself before achieving this, but that's different.
posted by TheOnlyCoolTim at 9:22 PM on June 4, 2008


more hypotheticals, no hard evidence
posted by ornate insect at 9:26 PM on June 4, 2008


Seems like it's the long way around to try to re-create intelligence from scratch using silicon... Why don't they try to breed intelligent parrots or something? Some natural parrots already seem to have the approximate intelligence of young human children, so why not do some genetic tinkering to try to create increasingly more intelligent parrots? Then try to learn what genes are necessary for intelligence... and when we can breed a parrot that is as smart as an "adult" human... then we can stop arguing about whether intelligence can be designed... and then we can declare war on the talking birds...
posted by mhh5 at 9:53 PM on June 4, 2008


more hypotheticals, no hard evidence

No one's saying it's not hypothetical, but that doesn't make it an absurd or foolish hypothesis. One day, probably, someone will run the experiment to prove or disprove the hypothesis (disregarding somewhat more philosophical considerations of whether or not you've constructed a philosophical zombie.)
posted by TheOnlyCoolTim at 9:57 PM on June 4, 2008


The proposition seems to be that it would be possible in principle to simulate the brain at a low enough level that a model of how the brain (or neurons) work would not be required, thus rendering our abject ignorance of this irrelevant. Maybe so, but I doubt it.

I do not believe that it is possible in principle to simulate a brain at the sub-atomic physics level on a computer. At that level, quantum mechanics is the relevant level of physics. Simulating quantum mechanics requires a source of truly random numbers, but a computer is only capable of generating pseudo-random sequences.

(And even if one postulated hardware augmented by an "oracle" for the random values, it's still not clear that the universe contains enough matter to build a computer capable of running the simulation (in anything approaching real time). Even the three-body problem presents significant computational challenges. Be that as it may, the addition of such an oracle guarantees that the hardware in question is no longer a "computer", as that term is commonly understood.)

In other words, I suspect that in order to accurately simulate a brain (or anything else of any size) at the sub-atomic level, one would have to do what the universe itself is doing. But the notion that the universe is fundamentally performing Turing-equivalent computations is itself only a conjecture, and one that is highly controversial among physicists.

(It's late here on the east coast of the USA and I'm going home. See you tomorrow.)
posted by Crabby Appleton at 9:59 PM on June 4, 2008 [2 favorites]


Now we're getting to the good objections.

First, I definitely agree about the likely requirement for real randomness, but a hardware random number generator is an existing thing, not a hypothetical "oracle". And if you needed one to build an intelligence, you could argue that the result is not an "intelligent computer", but it's still an intelligent system that you built.

(I find interesting the possibility that there is a 'spark' and that it is, or enters through, random processes, whether or not the 'spark' would be present in a computer with a HRNG, and to what degree you've constructed an intelligence if there is a 'spark' entering through your HRNG. And, even, possibly to what degree you've constructed an intelligent lava lamp, something which has been used as a source of randomness.)
posted by TheOnlyCoolTim at 10:19 PM on June 4, 2008


One last point I had as I was going to bed myself:

simulate the brain at a low enough level that a model of how the brain (or neurons) work would not be required,

you don't quite need to worry about modeling the brain, because you start from the much simpler problem of modeling a single-celled embryo.
posted by TheOnlyCoolTim at 10:26 PM on June 4, 2008


TOCT, I'm not sure you grasp how much raw processing power is required to simulate a single protein vs. small-molecule interaction, how many such interactions there are within a single-celled embryo, the effect of combinatorial explosion on the two factors above, or how unfeasibly accurately you would have to measure the state of your single-celled embryo in order to start your simulation with any degree of verisimilitude.

Combinatorial explosion makes the steady exponential growth exemplified by Moore's Law look completely and utterly inconsequential.
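
Even granting the charitable case where cost grows only as N^2 for pairwise interactions (real molecular dynamics is worse), the arithmetic is sobering. Every figure below is an assumed round number, nothing more:

    import math

    protein_atoms = 1e4   # one modest protein, assumed
    embryo_atoms = 1e14   # one cell's worth of atoms, assumed
    cost_ratio = (embryo_atoms / protein_atoms) ** 2  # naive pairwise scaling
    doublings = math.log2(cost_ratio)                 # ~66
    print(f"~{doublings:.0f} doublings, ~{2 * doublings:.0f} years at Moore's pace")

Sixty-odd doublings is more than a century of Moore's Law, before you even start on the measurement problem.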

'Spark' is a red herring.
posted by flabdablet at 2:04 AM on June 5, 2008 [2 favorites]


Oh, I realize the practical concerns. The ENORMOUS practical difficulties. But, I think, to make these practical concerns invalidate this as a theoretical exercise, you have to make the argument that it is not only ridiculously difficult but impossible - that there simply can't be enough computing power in the universe to pull it off, even slowly, or something like that.
posted by TheOnlyCoolTim at 6:25 AM on June 5, 2008


Yes, thank you TheOnlyCoolTim. The point is not "oh in a couple of years we'll be able to simulate everything and then bam singularity." Or at least not my point.

My point is that conceptually, in principle, it is possible to simulate a human brain. Down to the quark level if necessary, we have the technical knowledge to in principle perform such a simulation.

That is, regardless of whether it's feasible for humanity to undertake or any of those practical concerns, our brains are fundamentally governed by physical laws that are knowable.

Now I happen to think that our brains are simple at an entirely different (and more easily computerized) level and that we're one Swiss patent clerk away from going from handwavium like "emergent behavior" to starting to actually understand the behavior of simple systems, but I don't have any evidence for that. Applying science to the mechanisms of thought is the most important avenue of research I can imagine, and that's the core of transhumanism to me. Clearly I need to invent my own movement so I can define it.
posted by Skorgu at 8:22 AM on June 5, 2008


Crabby Appleton: In other words, I suspect that in order to accurately simulate a brain (or anything else of any size) at the sub-atomic level, one would have to do what the universe itself is doing.

You're looking at the wrong abstraction level, I think. Modelling individual neurons, or groups of neurons, seems like a far better level to work with. I mean, Penrose thinks that there's quantum something something going on, but I haven't seen anyone else take that very seriously. Actually, there have been a couple of back-and-forth rebuttals. Here's something from Max Tegmark: Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. (via Wikipedia.)

Does anyone else remember the checklist of reasons your anti-spam solution won't work? It feels like there should be something similar here.
posted by Pronoiac at 10:20 AM on June 5, 2008 [2 favorites]


Does anyone else remember the checklist of reasons your anti-spam solution won't work? It feels like there should be something similar here.

I don't know the article you're referring to, but I think you're talking around the notion that the brain operates with Bayesian statistics (anti-spam systems use Bayesian filtering).

Most of the neural networks we create these days use Bayesian methods, and we think these systems operate the same way the brain does, only on a much simpler scale.
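
For anyone who hasn't seen one, the Bayesian trick in spam filters is tiny. A toy version (the training data and the smoothing constant are invented for illustration):

    import math
    from collections import Counter

    spam = ["buy cheap pills now", "cheap pills cheap"]
    ham = ["singularity thread on metafilter", "lunch now maybe"]

    def model(docs):
        counts = Counter(w for d in docs for w in d.split())
        total = sum(counts.values())
        # Laplace smoothing so unseen words don't zero everything out
        return lambda w: (counts[w] + 1) / (total + 1000)

    p_spam, p_ham = model(spam), model(ham)

    def spam_score(text):
        # log space turns tiny products into manageable sums
        return sum(math.log(p_spam(w) / p_ham(w)) for w in text.split())

    print(spam_score("cheap pills"))        # positive: smells like spam
    print(spam_score("metafilter thread"))  # negative: smells like ham

Whether the brain does anything like this is, of course, the open question.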
posted by tybeet at 10:58 AM on June 5, 2008


TheOnlyCoolTim said: Now we're getting to the good objections.

I'm afraid I'm going to have to disagree with you there. I think my original objection, and ornate insect's, were perfectly good objections.

They were good objections because of a little thing called the burden of proof. The burden of proof is on the one who makes the assertions. The burden of proof makes a fine bludgeon to use on theists when they start nattering on about God, but somehow many atheists don't seem to want to acknowledge it when it applies to articles of faith that they hold dear. I guess that's human nature for you.

It was probably a tactical error for me to go after one of the assertions directly. I only did it because I know the audience here. They want to believe this stuff. From a human standpoint, I can't even disparage that. I want to believe it too. I'd like to think that uploading might be an option for me someday. It's fine to want that, but it's not OK to let that desire turn off your BS detector. Anyway, it seems to me that most of the audience here would tend to assume that if I didn't object to the "substance" of the claim, it would be because there was no objection to be made. I hope this discussion prompts people to take this "rapture of the geeks" business with a big grain of salt.
posted by Crabby Appleton at 4:02 PM on June 5, 2008


Personally, I think a big grain's too small, and that the appropriate quantity of salt would be a pillar.
posted by flabdablet at 4:28 PM on June 5, 2008


I read your "no hard evidence objections" more or less thusly:

T.H.: "I believe these things are possible, and one day we will try them and see."
Skeptic: "There is, as yet, no hard proof, therefore your hypothesis is absurd."

This method would, at least, lead to some problems in scientific experimentation if "your hypothesis is unproven" meant that the experiment was not worth doing.

You can argue that it is an absurd hypothesis, but that requires substance.
posted by TheOnlyCoolTim at 4:46 PM on June 5, 2008


The burden of proof makes a fine bludgeon to use on theists when they start nattering on about God, but somehow many atheists don't seem to want to acknowledge it when it applies to articles of faith that they hold dear.

A difference: theists assert that a deity exists, without any proof. No transhumanist claims that strong AI or a matter assembler exist - they may believe in the future we may be able to make these things, but that's different.
posted by TheOnlyCoolTim at 4:49 PM on June 5, 2008


Let's say the hypothesis is "it is possible to accurately simulate a human mind on a computer". (And let's assume, for the sake of the argument, that all terms are adequately defined.)

Stated as a scientific hypothesis, it is not absurd. Stated as an unqualified assertion of fact, it is not justified. Stated as a near-term technology prediction, it is absurd.

Stated as a scientific hypothesis, I would have a hard time objecting to it as a research program (i.e., let's go find out whether this is true), as long as the people funding it are provided with an honest assessment of the risks.

Stated as an unqualified assertion of fact, it is really just a profession of faith. There are too many problematic underlying assumptions for it to be anything more.

Stated as a near-term technology prediction (e.g., we'll have this technology by 2050), it's just nuts. We have nowhere near the knowledge required to make that credible.
posted by Crabby Appleton at 6:33 PM on June 5, 2008 [1 favorite]


Also, TheOnlyCoolTim wrote:
T.H.: "I believe these things are possible, and one day we will try them and see."
I've read a lot of transhumanist literature over the years (both before and after I became fairly skeptical of it), and I see it more like this:
T.H.: "We're gonna have uploading technology Real Soon Now and if you don't believe me, you're either really stupid or some kind of religious nut (or both)."
I like the former attitude much better than the latter.
posted by Crabby Appleton at 7:25 PM on June 5, 2008


flabdablet, I don't disagree with you about the appropriate quantity of salt, but I'm really just trying to plant a little seed of doubt here. Also, thanks for pointing out that "spark" is a red herring.
posted by Crabby Appleton at 7:34 PM on June 5, 2008


Something else that trips up transhumanism, it seems to me, is confusing the ideas of theoretical and practical computational equivalence.

It seems likely to me that simulating a human consciousness, absent a proper functional specification for same, would require - at a minimum - hardware approximating the structure and function of the human brain. That means a metric shitload of fairly simple processors, all working in parallel, with a switching fabric capable of shifting the connections between the processors in something akin to the way that axons, dendrites and synapses work in the human brain.

Turing equivalence says that we don't need to mimic the physical structure and organization of the brain exactly to make this thing work; if we have a good enough idea of what's going on in the hardware, we can simulate it computationally. This is the bedrock of the entire transhumanist "mind as software" worldview. If accepted, the idea of uploading consciousness to some kind of distributed computation grid follows fairly naturally.

But Turing equivalence says nothing about execution speed. Simulating parallel processes on non-parallel hardware requires administrative overhead, and the more simulated processes you have and the tighter the required timing relationships between those become, the more administrative overhead you have. This effect doesn't go away - in fact it gets worse - when you simulate parallel processes on massively parallel hardware with a different physical architecture. If you're simulating a hundred billion neurons on hardware that doesn't mimic them, that overhead is going to eat all of your execution speed, to a first, second and third approximation.
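
To put assumed round numbers on that overhead (every parameter here is a guess at an order of magnitude, nothing more):

    neurons = 1e11        # human brain, order of magnitude
    synapses = 1e4        # per neuron, assumed
    rate_hz = 1           # average firing rate, assumed
    ops_per_event = 100   # simulator bookkeeping per synaptic event, assumed
    serial_ops = 1e9      # simple updates per second on one core, assumed

    events = neurons * synapses * rate_hz            # 1e15 events/sec
    slowdown = events * ops_per_event / serial_ops   # ~1e8
    print(f"~{slowdown:.0e}x slower than real time")

That's roughly three years of wall-clock time per simulated second, on hardware that doesn't mimic the brain's architecture.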

Turing equivalence also says nothing about I/O speed. I can't see how you'd upload a consciousness without being able to take a snapshot of the entire state of the underlying brain to very high spatial and temporal resolution, and I can see no development on the horizon making this even slightly expectable.

I would urge anybody who takes transhumanism even slightly seriously to spend some time trying to bring Windows Vista up on an actual Turing machine (or even a simulated one!) in order to grow some more realistic intuitions about what computational equivalence actually means in the real world.
posted by flabdablet at 9:42 PM on June 5, 2008


Well, that's why the other bedrock of the "mind as software" worldview is an unshakeable belief in Moore's law. It's not that they haven't noticed the processing speed gap, it's that they're counting on exponential growth to get them past it.

Or, you take the opposite approach, don't count on Turing equivalence, and build your metric shitload of parallel processors using, say, self-replicating machinery to make it cost-effective.


(For what it's worth, I think Kurzweil is a wild-eyed optimist -- it's his job -- and I don't expect to see the singularity or anything like it in my lifetime. But I also think a lot of these objections sound suspiciously similar to "man will never fly".)
posted by ook at 11:35 PM on June 5, 2008


Saying the Singularity is implausible to you is just fine. Saying that it's impossible, though, & too absurd to even discuss, makes you look kind of defensive & possibly aggressively ignorant.

For example, reading "replace a neuron with an artificial equivalent" & thinking that means someone's just bunging some metal into a head willy-nilly without duplicating the network structure. I'm looking at you, jason's planet.

Crabby: If you're saying "extraordinary claims require extraordinary proof," fine. You were talking about pseudo-random numbers being useless here? References or gtfo.

I don't think of the next forty years as being near-term & obvious & boring, I guess. I mean, in 2007 YouTube alone sent as much data as the entire internet carried in 2000. That's surprising to me, & a sign of how much growth happened. The technological world of 2008 has some pleasant surprises.

flabdablet: Great, so Vista won't run quickly on a Turing machine. So what? Does that make you think Vista doesn't really exist? Does a 400MHz H.264-playing iPod touch or a DivX-playing DVD player with a slow CPU just blow your mind?

So, analog simulations seem to be a lot more efficient than digital? Use the memristor instead.

tybeet: Actually, I meant the list of reasons your anti-spam solution won't work, such as "Blacklists suck" & "Whitelists suck." Though that was an interesting tangent on how we think.

Hugs!
posted by Pronoiac at 1:08 AM on June 6, 2008


...an unshakeable belief in Moore's law. It's not that they haven't noticed the processing speed gap, it's that they're counting on exponential growth to get them past it

My money's on project scope expansion continuing to outpace technology's exponential growth until we spiral into the sun, let alone until civilization collapses.

so Vista won't run quickly on a Turing machine. So what?

So having considered that exercise, somebody might get an intuitive inkling that a machine mind is unlikely to run at anything approaching useful speed on anything but dedicated brain-simulating hardware. Seems to me that running billions of uploaded minds on software simulators "in the cloud" is likely to be so many orders of magnitude less effective than putting each one in its own skull as to render the whole "uploading" idea unworkable - I think, therefore I am another 10000 years older?

Do a 400mhz h264-playing ipod touch or divx-playing dvd player with a slow cpu just blow your mind?

No. Those things are possible because processor performance has been growing exponentially for a few decades, while screen real estate has been growing only quadratically or less. This 2001-vintage computer on my lap has a 1600x1200 screen, and has no trouble playing H.264.

It couldn't do that when I bought it, because H.264 hadn't been invented then. If you're saying that there's more or less straightforward progress being made in AI that's comparable to what's been achieved in video codecs over the last decade, please - point me to your sources.

Look, Moore's Law is a fine and wondrous thing. Given. But anybody who thinks that a lack of processing power is the main thing stopping us from implementing human-consciousness-compatible strong AI at all, let alone en masse and on the cheap, seriously needs to look more closely at the complexity of what they're proposing we do.

Handwaving about "emergence" doesn't cut the mustard. It took us fifteen billion years to emerge, with an entire universe doing the computations. We can't even agree on what consciousness is, let alone devise tests to see if it's present to varying degrees. How is it possible, even in principle, to devise any kind of generate-and-test procedure resulting in an emergent consciousness in less time than that, if we can't even define what the test should look like? And even if we do manage that, what are the chances that such a consciousness would resemble our own sufficiently to support anything even vaguely akin to uploading?

I agree that this is, in essence, an argument from personal incredulity. However, I think it's fairly well-informed personal incredulity.

Don't get me wrong. I'm as attracted by the idea of the Geek Rapture as anybody else, and I think it would be cool in all kinds of ways if it worked. I just don't think I'm ever likely to see it do that, and I don't think Ray Kurzweil is either, regardless of how much Vitamin C he eats.
posted by flabdablet at 3:47 AM on June 6, 2008 [1 favorite]


Handwaving about "emergence" doesn't cut the mustard.
This sounds very much like a strawman to me, at least in the context of this thread. Perhaps I've missed it but is anyone here actually stating that "emergence" will just create intelligence? Aside from the obvious evolutionary tautology of course.

It took us fifteen billion years to emerge, with an entire universe doing the computations. We can't even agree on what consciousness is, let alone devise tests to see if it's present to varying degrees.

Well, no. It took fifteen billion years for self-replicating molecules to bootstrap into the world we see. I can bootstrap an intelligence in nine months. Well, I'll need an assistant for that, but you take my meaning. The fact that making an intelligence is so consistent and repeatable is irrefutable evidence to my mind that it's entirely a lawful, knowable process. It also puts an absolute, impenetrable upper bound on the level of hardware and complexity required. A few pounds of (exquisitely complicated) neurons, 100W of power consumption and nine months + learning time. That's it.

Figuring out how to make an intelligence is obviously much (15 billion years' worth) harder.

But we only have to figure out that part once. Once we can make one intelligence we can make a billion, two billion, only constrained by hardware.

How is it possible, even in principle, to devise any kind of generate-and-test procedure resulting in an emergent consciousness in less time than that, if we can't even define what the test should look like?

Absolute, complete agreement. There are real, hard problems in generating AI, which is why we don't have it already. My argument is that these are conceptual problems with our understanding (precisely the pernicious definitions of intelligence, consciousness, and whatever else you cite), not fundamental, physical inabilities to construct intelligence. It's impossible until we figure it out, but once we do it's trivial.

Transhumanism, as I interpret it, is reasoning from the starting point that it is possible and that we will figure it out, no more and no less.
posted by Skorgu at 5:24 AM on June 6, 2008


Crabby Appleton: They were good objections because of a little thing called the burden of proof. The burden of proof is on the one who makes the assertions.

All these professions of faith are touching, but I'd need proof (or at least some hard evidence) before I could give such fanciful notions (e.g., computers exhibiting human-level intelligence) any credence whatsoever.

The funny thing about the burden of proof is that everyone always thinks it's on the other guy.

Applying known, accepted physical laws and extrapolating them into the future is not an assertion.

Either you're arguing that machine AI is impossible or you're arguing that it's simply very hard. If it's the former (and "fanciful" seems to support that conclusion even though you haven't actually stated your position anywhere I can see) you're the one injecting assertions, namely an exception to the aforementioned global physical laws.

If you're arguing that it's merely very hard and that we're not smart enough right now to do it, then we agree on all but the timeline, in which case we might as well just switch to talking about sports because both of our forecasts are equally fabricated.
posted by Skorgu at 5:42 AM on June 6, 2008


I can bootstrap an intelligence in nine months

because you have the hardware and the object code. Unfortunately, it isn't open-source and such reverse engineering as has been attempted reveals the thing to be a hacked-together, totally unmaintainable kludge. Just hooking up a decent debugger is enough to damage it beyond repair. Re-use that? Don't make me laugh.
posted by flabdablet at 6:51 AM on June 6, 2008


I have the Union of Unmaintainable Kludges on line three, something about a libel suit?
posted by Skorgu at 7:00 AM on June 6, 2008


so Vista won't run quickly on a Turing machine. So what?
So having considered that exercise, somebody might get an intuitive inkling that a machine mind is unlikely to run at anything approaching useful speed on anything but dedicated brain-simulating hardware.

Right. Hey, have you ever heard of MAME?

I mentioned the iPod touch & the MPEG-4 DVD player because both use additional specialized hardware instead of having a hefty CPU. Similarly, the first TiVo used a 54MHz CPU, but encoded MPEG-2 in realtime with extra hardware. Newer generations of general hardware & specialized hardware are both far more malleable platforms than the human body.

Uploading to the cloud or uploading to a personal cyberbrain: either one fundamentally changes the human condition, right?
posted by Pronoiac at 10:11 AM on June 6, 2008


I think it should be noted that the examples being given of a simulation of the atomic configuration and interactions of the brain or an embryo are sort of last-resort appeals to current scientific understanding. It would seem that, given that we learn nothing more about the nature of intelligence or how the brain works, we could do thorough atomic microscopy to map what atoms are where in the embryo and start up the simulation. The only thing we lack for this currently is sufficient computing power and, much less importantly, perhaps some atomic microscopy techniques. In reality, doing this sort of simulation would probably not be necessary.

So, basically, we're down to an argument about how much computing power will be available in the future, and how soon - definitely a valid one, but given that the technology of raw computing power doesn't seem to have reached any mature state (though there are signs it could, such as the seeming shift from increasing clock speeds to increasing numbers of cores - signs that make Singularitarian claims of asymptotically vertical increases much more suspect), I don't see any call to be overly dismissive of someone who thinks computer power will be much more abundant in the future than you think it will.
posted by TheOnlyCoolTim at 10:52 AM on June 6, 2008


So, basically, we're down to an argument about how much computing power will be available in the future, and how soon - definitely a valid one, but given that the technology of raw computing power doesn't seem to have reached any mature state (though there are signs it could, such as the seeming shift from increasing clock speeds to increasing numbers of cores - signs that make Singularitarian claims of asymptotically vertical increases much more suspect), I don't see any call to be overly dismissive of someone who thinks computer power will be much more abundant in the future than you think it will.

Look, I'm (we're?) not objecting to the claim that eventually we'll be able to do this, that, or the other thing. That's neither here nor there; everyone makes claims and when it has to do with projection this far into the future--how far? who knows?--they're all equally round and stinky. I'm objecting to the breathless, cultish tone with which these pronouncements are handed down and the utter condescension with which techno-skeptics (or techno-cynics) are treated. I ain't playing heretic to this kind of religion.
posted by nasreddin at 11:46 AM on June 6, 2008


Look, I'm (we're?) not objecting to the claim that eventually we'll be able to do this, that, or the other thing.

I think the biggest problem with the Singularity is its incredibly naive assumption that progress stretches linearly upward and that, if something is theoretically feasible using wires and chips, it will inevitably be achieved. Bullshit.

Maybe I'm standing on too small a distinction between "inevitably" and "eventually" here, but half the thread has been strawman bashing about the geek rapture and LOLKURZWEIL. I mean fine, Kurzweil is silly. But to shit on an entire concept, to declare publicly that it's bankrupt and bullshit and no I don't want to be proven wrong? I don't get that.
posted by Skorgu at 2:28 PM on June 6, 2008


I think there is a dichotomy here, between mocking of Kurzweil and analysis of what work is actually being done. If there is a reason to laugh at Kurzweil, it's because he is hoping other people will do the work he anticipates and really offers nothing himself except hypotheses. But in the public sphere, there are a number of attempts to actually create the AI he anticipates. The most public are Goertzel and Yudkowsky. I would be hugely surprised if the public face is the limit of the attempt - the financial value if accomplished being astronomical.

By all means take apart their approaches, if you have a significant idea about why they might be wrong. Until you do, however, I will accept that they have the possibility of achieving what they are aiming for, or, at the very least, of advancing the state of the art.
posted by Sparx at 4:21 PM on June 6, 2008


Hey, have you ever heard of MAME?

General purpose processors faster than they used to be! Film at 11.

MAME is not doing anything massively, overwhelmingly parallel. The architectures it's emulating are not massively dissimilar to the architecture of the emulator. MAME works very well.

Uploading to the cloud or uploading to a personal cyberbrain: either one fundamentally changes the human condition, right?

As I see it, there are two problems with uploading. One, which I alluded to above, is that "the cloud" is unlikely to have a suitable architecture for the job. The other, which is actually the more serious, is that capturing enough brain state to allow a consciousness to move itself to or duplicate itself on alternative hardware is likely to require processes far more invasive than the human organism will tolerate while still remaining functional.

The standard counter to that view is the thought experiment involving piecewise replacement of the brain by engineered parts, one neuron at a time in the limit, eventually enabling a full readout of brain state via something analogous to JTAG. I don't think anybody who takes this idea seriously for more than a minute or two has any real clue about how hard it's going to be to make that work.

So, basically, we're down to an argument about how much computing power will be available in the future and how soon

No. That's perhaps the argument you wish we were having, but it's not the sticking point at all. My argument is more about what we know how to do with our increased computing power.

This laptop's a case in point. I bought it in 2001. It's got a 1.6GHz processor, 512MB RAM, a 40GB 5400RPM hard disk, and a 1600x1200 screen. The things I do on it today are pretty much the same things I see people doing with their new laptops which, to be sure, do them slightly more snappily.

The march of technological art is nowhere near as linear and predictable as the march of technological capability. Given that the implementation of (first) strong AI and (maybe later) personal uploading requires such radical improvements in technological art, even given an indefinitely projected exponential growth in raw computing grunt I can see no philosophical justification at all for anything like the Singularity.

Putting that another way: even if we were to design and build a machine that could beat any of us all hollow on a standardized intelligence test, there is no guarantee at all that it would be as creative as, say, Steve Wozniak or Seymour Cray, and no guarantee at all that it would or could build its own superior successor.

Good designers are freaks. Being really smart is a necessary, but not sufficient, condition for being a good system designer. Throwing an indefinite amount of computing power at a project without a good designer is about as unlikely to make Good Things happen as throwing an indefinite amount of money at it. The fact that Singularity boosters all seem to ignore this point is what makes me treat their claims as bogus.
posted by flabdablet at 6:14 PM on June 6, 2008 [2 favorites]


The singularity is here ... but it's currently unavailable.

Skorgu - Can you recommend a spokesperson for the Singularity who makes more sense than Kurzweil?
posted by lukemeister at 6:56 PM on June 6, 2008


I suspect the answer to that question as asked is µ (how can a concept have a spokesperson?) but if I had to pick someone it would be Eliezer Yudkowsky.

Of course, many of the ideas and predictions are in the form of fiction, so Vinge, Egan, and Stross are good names to throw in too.

My bet is that the long-term future of the human race most strongly resembles Banks' Culture, minus the FTL and aliens.
posted by Skorgu at 7:57 AM on June 8, 2008


Regarding the aliens, this stuff allows a couple of interesting answers to the Fermi paradox. First off, if you assume a somewhat Singularitarian explosion, there's the possibility that the first intelligent race to arise in any universe is very likely to control that entire universe (or swap in galaxy for universe to cut the claim back a little) before a second intelligent race arises to prevent or control that process. Therefore we are the first and only intelligent race in our galaxy. You could also imagine that we don't see any aliens because they've all fucked off into Matrioshka brains and don't care about flying around or talking to the universe. (Not to say this supports anything, just idle speculation.)

Less transhumanistic and just assuming technological advance, it's also quite possible that our SETI attempts and so on are doomed to about as much success as someone from 1850 trying to tap into an 802.11 connection.
posted by TheOnlyCoolTim at 9:22 AM on June 8, 2008


Pronoiac wrote:
Crabby Appleton: In other words, I suspect that in order to accurately simulate a brain (or anything else of any size) at the sub-atomic level, one would have to do what the universe itself is doing.

You're looking at the wrong abstraction level, I think.
Please try to keep up with who's saying what in the discussion. I didn't propose this level of abstraction, Skorgu did. Then you wrote:
Crabby: If you're saying "extraordinary claims require extraordinary proof?" Fine. You were talking about pseudo-random numbers being useless here? References or gtfo.
You really need to keep better track of who's making the assertions here. I don't know whether pseudorandom numbers would work as well as truly random numbers in this application. I do know that they're fundamentally different (ask any cryptologist). The point is that Skorgu didn't seem to realize that there was even a potential issue, which gives you some idea of how deeply he's thought this through. I've identified a potential problem with his notion of simulating the brain at the sub-atomic level—it's his responsibility to show that it's not a problem, if he wishes to continue to defend his assertion that it's possible. This is a real problem for him because, guess what, he doesn't know. And neither do you.
posted by Crabby Appleton at 8:19 PM on June 8, 2008


You know, Skorgu, you can keep handwaving about extrapolating physical laws and mischaracterizing my arguments, but in fact all you're really doing is assuming that which you have to prove, and proving nothing. I don't see any point in continuing the discussion.
posted by Crabby Appleton at 9:32 PM on June 9, 2008


Comparing the Singularity to Fundamentalist dogma is kind of offputting. One side encourages examination & informed criticism. It's easy to do uninformed criticism. Just point out your lack of a robot friend.

I currently view the Singularity as an interesting, somewhat plausible idea - the grim meathook future might happen instead, but that's much less pleasant to think about living in. The evidence for it just kind of points in the right direction - there's no cookbook handy. But the evidence against it isn't terribly convincing either.
posted by Pronoiac at 1:56 AM on June 10, 2008


The point is that Skorgu didn't seem to realize that there was even a potential issue, which gives you some idea of how deeply he's thought this through.

Sorry, I thought TheOnlyCoolTim addressed the RNG issue sufficiently. I don't know where you got the idea that there was some magic to random numbers, or that it was somehow quantum. If you'd like to provide a cite I might be able to address it more precisely.

In general, really, truly random numbers are easily (one might say trivially) available to anyone who bothers to look. Anything from atmospheric noise to sensor imperfections in CMOS chips to lava lamps can generate random numbers. All of modern cryptography relies on that fact.
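
The practical distinction is easy to demo with the standard library alone; os.urandom draws on the OS entropy pool, which is fed by exactly the kind of physical noise mentioned above:

    import os
    import random

    a, b = random.Random(42), random.Random(42)
    print(a.random() == b.random())  # True: a seeded PRNG is a deterministic replay

    print(os.urandom(8).hex())       # different every run; there is no seed to replay

Whether a brain simulation would actually need the second kind rather than the first is an open question, but there's certainly no shortage of the second kind.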

If you'd like to lay out succinctly the arguments you find so compelling to label AI "fanciful" I'll cheerfully address them one by one, your labeling of my arguments "hand waving" notwithstanding.
posted by Skorgu at 7:21 AM on June 10, 2008


Sorry, that last bit was more passive-aggressive than I intended it to be.

It's possible I'm misreading your tone, but I find your arguments to be substantially more argumentative than the context requires. I'm continuing despite that because I am genuinely interested in people's reasons for believing so strongly that AI is impossible and you're the only one in the thread still holding that viewpoint.
posted by Skorgu at 7:24 AM on June 10, 2008


Pronoiac says:
Comparing the Singularity to Fundamentalist dogma is kind of offputting.
Well, no kidding. (Although, in fact, I don't recall comparing it to "Fundamentalist dogma"—perhaps all religious faith is "Fundamentalist dogma" to you?) You know what else is off-putting? Comparing God to a Flying Spaghetti Monster is offputting. Would you object to that comparison as forcefully as you object to mine?
One side encourages examination & informed criticism.
Right. If you want a prime example of "uninformed criticism", go read Dawkins's The God Delusion.
It's easy to do uninformed criticism. Just point out your lack of a robot friend.
So this is uninformed? So where have they been hiding the robot friends? I'd love to see one.

But the fact is that you don't have one, aren't willing to say how to make one, and don't even seem to think you need to. OK, whatever.
posted by Crabby Appleton at 4:36 PM on June 10, 2008


Skorgu writes:
If you'd like to lay out succinctly the arguments you find so compelling to label AI "fanciful" I'll cheerfully address them one by one, your labeling of my arguments "hand waving" notwithstanding.
Human-level AI is fanciful. It doesn't exist in real life. It is found only in Science Fiction!

Jeez Louise! I rest my case.
posted by Crabby Appleton at 6:45 PM on June 10, 2008


Crabby: Anyway, it seems to me that most of the audience here would tend to assume that if I didn't object to the "substance" of the claim, it would be because there was no objection to be made.

We get it, Crabby, you object. Please stop.
posted by Pronoiac at 7:10 PM on June 10, 2008


Crap! I never posted an apology for taking you out of context. That's what I get for following Recent Activity. And I've definitely thrown fuel on the flames.

*takes a deep breath*

The Rapture is fundamentalist dogma. Equating the Singularity with the Rapture brings in a lot of other baggage. This conversation is divisive enough without religion. With that in mind, I honestly have no idea why you just mentioned the Flying Spaghetti Monster.

The present lack of a robot friend isn't a substantive criticism of any possible future Singularity. We're saying this is a change that could happen in the coming decades, not something for the coming holiday season.
posted by Pronoiac at 7:33 PM on June 10, 2008


I didn't mention Dawkins either.
posted by Pronoiac at 7:38 PM on June 10, 2008


Crabby, your "case" appears to be "These things will never exist, because they do not exist right at this instant."

Not the most convincing argument I've ever heard...
posted by ook at 9:59 PM on June 10, 2008


"I will drop a standard base ball from a height of six feet and it will accelerate at 9.81 m/s2 until it hits the ground."

This is science fiction; I will never do this. Stockholm, here I come: I just disproved gravity!
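
(For concreteness, and assuming nothing beyond textbook kinematics with no air resistance: h = 6 ft ≈ 1.83 m gives a fall time of t = √(2h/g) ≈ 0.61 s and an impact speed of g·t ≈ 6.0 m/s. A prediction anyone can check, which is rather the point.)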
posted by Skorgu at 8:01 AM on June 11, 2008


A lot of work and personal stuff came up for me yesterday and today, so I won't be able to respond until much later tonight (or, more likely, tomorrow). But rest assured that I will try to explain all this again in a way that you may find more difficult to misconstrue.
posted by Crabby Appleton at 12:20 PM on June 11, 2008


in a way that you may find more difficult to misconstrue

Excellent. When you do get around to it, do you think you could try being even more haughty and condescending? Because that really helps get your point across.


Look, for what it's worth, I agree with much of what you said up here. That human-level AI is possible is a hypothesis at this point. Statements that it's absolutely possible are premature. Statements that it's absolutely not possible are equally premature.

In all your arm-waving about how "the burden of proof is on the one who makes the assertions," you seem to be forgetting that you're making an equally unsupported assertion: that human consciousness is not reducible to a finite state machine. That, too, is a statement of belief, not fact.

Personally I believe the simpler hypothesis is that thought is equivalent to computation, because any other conclusion implies the existence of some as-yet-unknown factor that would serve to differentiate them, i.e. the "spark" that jakey described earlier on. (Nice lucid post, by the way, jakey.) I don't agree with flabdablet that the "spark" is a red herring, nor do I see where you or he gives any reason why it might be; it seems fundamental to me, because without it there's no basis I can see for claiming that AI is impossible. (Which says nothing about whether it's impractical, of course.) I readily admit the possibility that such a thing might exist and we just haven't discovered it yet, but Occam's razor, etc.

We could spitball all day long about random numbers or quantum effects, or whether either of them is even relevant to consciousness, or whether it's necessary to brute-force simulate every single aspect of the biological process of thought or possible to abstract out the parts that actually constitute thought, or about any of the other dozens of unknowns that still stand between us and either achieving true AI or determining definitively that it's not possible. That'd be an interesting conversation which I'd be happy to have. It would just be spitballing; we're obviously not going to settle any of these questions by arguing about them on a messageboard, but spitballing ideas can be fun.

But stomping your feet and demanding "proof," and pointing to the nonexistence of robot friends and the like, doesn't advance your argument at all. Dragging in the Flying Spaghetti Monster and The God Delusion doesn't add anything relevant to the discussion (though it does hint at why you're getting so het up about it). Stating that AI is fanciful because it exists only in science fiction, and that that somehow rests your case... well, satellites and the internet both showed up in science fiction before they showed up in the real world, too.

I guess what I'm mostly saying here is: your attitude isn't doing you any favors. You're kind of starting to sound like a bit of a jerk, in fact. Maybe it'd be worth toning that down some if you actually want to convince anyone of anything.
posted by ook at 1:56 PM on June 11, 2008 [2 favorites]


I can't write my full response right now, but I just saw ook's comment, in which he wrote:
[...] you're making an equally unsupported assertion: that human consciousness is not reducible to a finite state machine.
No, I'm not making that assertion. I certainly didn't make it in the comment to which you linked, nor have I made it in any comment in this thread.

To say that "I do not believe X" or "I see no evidence for X" is not equivalent to saying "X is not the case", or even "I believe that X is not the case". This distinction is important. For example, it is exactly the distinction between "weak" and "strong" atheism.
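
In symbols, if it helps (a rough shorthand of my own, nothing standard from this thread), writing B(X) for "believes X":

    ¬B(X)   does not entail   B(¬X)

One can consistently hold both ¬B(X) and ¬B(¬X); that's just suspension of judgment, not a counter-assertion.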
posted by Crabby Appleton at 7:59 PM on June 11, 2008


The vehemence with which you've repeatedly stated this belief, and your flat denials of anything contrary to it, have been so over the top that the distinction starts to wear a bit thin.
posted by ook at 8:38 PM on June 11, 2008


Belief is only relevant in situations where there is no evidence or existing scientific insight to inform us of the facts, e.g. God. Any timelines for AI research are beliefs for that reason: we don't have a science of predicting the future.

But science can, does, and must make statements of fact with regard to the possibility of certain events. From some angles, that's its job.

Like I said before, if you're saying that AI is impossible, you had better cite some damn good reasons for positing an entity for which there isn't a shred of experimental or theoretical evidence.

If you're not saying it's impossible but merely arguing over the timeline for ever seeing it, we might as well just start arguing over how many gearboxes Solberg is going to total at the Acropolis Rally.

It would behoove you to actually state what you're talking about rather than simply reiterating how we're all mischaracterizing you.
posted by Skorgu at 5:44 AM on June 12, 2008




This thread has been archived and is closed to new comments