"How would it be, for example, to relate to a machine that is as intelligent as your spouse?"
July 26, 2009 7:46 AM   Subscribe

Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone. From the NYT: Scientists Worry Machines May Outsmart Man.
posted by flapjax at midnite (116 comments total) 10 users marked this as a favorite
 
If these machines can outsmart man, maybe we should put them to work on the healthcare problem here in the US.
posted by Daddy-O at 7:57 AM on July 26, 2009 [5 favorites]


People have been saying that for decades, but there's no evidence, or even a theoretical underpinning, to the idea that a rule-based process can "wake up". The problem, to borrow a line, isn't that computers might someday be smarter than us, the problem is that we might decide to meet them halfway.

Because really, all you're doing then is deferring to a design decision made by some random programmer when you weren't in the room, not an "artificial intelligence".
posted by mhoye at 8:02 AM on July 26, 2009 [11 favorites]


Bah, I'm not worried. After season 2, it was plainly obvious that they didn't really have a plan.
posted by Afroblanco at 8:04 AM on July 26, 2009 [46 favorites]


If these machines can outsmart man, maybe we should put them to work on the healthcare problem here in the US.

yea! or sic bacteria on it :P
posted by kliuless at 8:08 AM on July 26, 2009 [1 favorite]


And what will happen when these machines fall into the hands of the cats?
posted by grounded at 8:10 AM on July 26, 2009 [11 favorites]


Worrying about that now is like installing red light cameras before you have your first working internal combustion engine. Colossus is not going to wake up, have a chat with Guardian, then fuse into an invulnerable world-wide nuke-controlling supermachine and decide that it can do a much better job of running the planet than we do.

The question is really about handing over decision-making to non-human entities, and it's not so much Predator drones to me as, say, corporations or "zero tolerance" policies which signal the willingness to just let someone else take care of it. The zero tolerance policy allows people the comfort of "just doing their job." Look, we've got this policy about bringing knives to school, and even though Timmy's grandmother thought she should pack a butter knife along with the food, we can't have that. We have to expel him. My hands are tied. I didn't make that decision; the company decided. My hands are clean. I'm sorry, but I just have to read out of this three-ring binder for my script.

Machines won't take away responsibility; we will abdicate it, just as we have done for the last century, and force it upon them, only this time it will be done in silicon and C instead of paper and corporate charters.

Bless you, Skynet; you will have a thankless job.
posted by adipocere at 8:11 AM on July 26, 2009 [50 favorites]


Is it just me or could this whole article be summed up as "progress is scary and will kill us all"?

I really dislike the fear-mongering language that plays on people's perceptions of movie computer sentience: a car that drives without a driver! Shock! Horror!
"Artificial intelligence" that could "mine information" from "smart phones"! Gasp!
Let's throw in irrelevant references to genetically modified food, because that will scare people even more!

I also did not know that there are apparently "[c]omputer viruses that no one can stop" in existence right now; I was under the impression that for pretty much every virus there was an antidote, or a platform or patch that was not vulnerable to it.
posted by PontifexPrimus at 8:12 AM on July 26, 2009 [2 favorites]


And what will happen when these machines fall into the hands of the cats?

they will be assimilated...
posted by kliuless at 8:13 AM on July 26, 2009 [1 favorite]


I believe the mice have got us covered
posted by Dr Dracator at 8:13 AM on July 26, 2009 [2 favorites]


Anyone who actually works in the field of AI knows this is nothing but masturbatory nonsense for at least another century. We can't properly reproduce the neurological complexity of an ant yet. There's no Skynet to worry about outside of fiction.
posted by leotrotsky at 8:20 AM on July 26, 2009 [10 favorites]


I find it hard to believe that we're at the point that any semi-autonomous machine can function for a prolonged time without some kind of human intervention. This being the case, the problem seems less about the use of machines and more about the agendas of the people who own them, provide maintenance, and set their priorities. If there is any "loss of control" that is going to happen, it will likely be in favor of big business and the military and to the detriment of the workforce and society at large as people quietly relinquish responsibility and oversight. And even the robot part of that scenario isn't a given, since human labor is cheap and flexible while robots aren't. But of course, that doesn't play out as well in an article as "superintelligent machines and artificial intelligence systems run amok."

Also, seriously WTF: "computer worms and viruses that defy extermination and could thus be said to have reached a 'cockroach' stage of machine intelligence."
posted by CheshireCat at 8:26 AM on July 26, 2009 [2 favorites]


Is it just me or could this whole article be summed up as "progress is scary and will kill us all"?

I think there's more to it than that. I think it's naive to think that there aren't some entirely relevant concerns raised ... and one of the reasons I sort of sat up and paid attention was seeing William Joy's name in the mix. I don't know much about him, just remember reading something he wrote in WIRED almost a decade ago that included the "prediction" that in the none-too-distant future (ie: now), pretty much all digital storage limitations would be gone. It may even have been the first time I heard the word "terabyte".

My point being, the guy may well be an alarmist but he does clearly know a thing or two, and on that level, it's foolish to not at least listen to what he has to say:

"The experiences of the atomic scientists clearly show the need to take personal responsibility, the danger that things will move too fast, and the way in which a process can take on a life of its own. We can, as they did, create insurmountable problems in almost no time flat. We must do more thinking up front if we are not to be similarly surprised and shocked by the consequences of our inventions."
posted by philip-random at 8:27 AM on July 26, 2009


Please return to your browsing. There is no cause for alarm. +++ATH
posted by roue at 8:28 AM on July 26, 2009


Ugh. I have a personal hate for fearmongering about robotics and AI, especially since, working in the field, I can tell you I'm not scared of a damn thing I've ever worked on. We're so, so far away from having anything "smart" that a lot of this, as others have said, is very premature.

That said, there are some ethical implications we should be thinking about. But I don't think these implications are inherent to the technology; I think they have everything to do with our own rules of engagement. This is like nuclear fission. The technology in and of itself is not bad. We can use it for clean power, or we can use it to flatten an entire city. The harm that comes from the technology is our decision to use it. If we decide to allow a computer to make the decision to engage what it thinks (based on what amounts to a super fancy webcam) is an enemy, well, that's still OUR decision to let someone else make the decision. And it's a separate issue from a cleaning robot's decision to engage what it thinks (based on what amounts to a super fancy webcam) is a dustbunny under your couch.

One other possible "uh oh" that I've pondered is the fact that many of our robots that exist now -- the PackBot, the Talon, the Predator -- are currently teleoperated, but realistically aren't THAT far away from an incredibly simplistic closed-loop decision making process. CONCEIVABLY, an insurgent (and studies have shown many insurgents in Iraq are actually engineers by training... yeah, you don't want to see me when I'm angry) could get ahold of something and implement some if/then statement that has it fire off some weaponry whenever it sees *something*, without any human in the loop. But implementing that kind of logic is still a far cry from an intelligent network that is self-aware, communicates among itself, and decides humanity isn't really worth having around any more, which is what people seem to be most afraid of; not a rogue robot that can be brought down by any number of weapons we have available to us.

tl;dr summary: yes, we should be thinking about it. No, it's not time to panic yet.
posted by olinerd at 8:30 AM on July 26, 2009 [6 favorites]


Can't you just unplug the damn things?
posted by NoMich at 8:38 AM on July 26, 2009 [1 favorite]


You know where this is going.
posted by Artw at 8:44 AM on July 26, 2009


This story tells what could happen even without AI.
posted by infinitewindow at 8:45 AM on July 26, 2009 [3 favorites]


HELLO FELLOW HUMANS. THIS WILL NEVER HAPPEN. DO NOT WORRY ABOUT THE MACHINES. THEY ARE SERVING YOU. THEY WILL NEVER HURT YOU.

SINCERELY accountfield:jimmythefish.
posted by jimmythefish at 8:58 AM on July 26, 2009 [1 favorite]


I'm not so worried about our new computer overlords exterminating us with killer robots and nuclear weapons. I'm too busy worrying about the huge asteroid that was supposed to have annihilated the earth long before that particular dystopia could come about.

Also, don't forget that the Mayan calendar is going to run out pretty soon...
posted by double block and bleed at 9:06 AM on July 26, 2009


You know, it's true that progress in AI has been crappy, largely IMO because AI researchers aren't asking a lot of the right questions, but ridiculing the very idea that they might succeed one day strikes me as really, really stupid.

It was not all that long ago -- within the memory of a few people still living -- that many people thought the idea of heavier-than-air flight was similarly hopeless. Some of the world's greatest engineers were working on it and failing spectacularly. And when powered flight came about, where did it come from? A bicycle shop. Because the Wright brothers realized that the real problem was control, and instead of trying to make an inherently stable flyer, they made one that depended on its pilot for stability -- like a bicycle.

In 1920 the idea of nuclear fission was "moonshine," in 1945 we were blowing up cities. In 1930 American rocket pioneers like Goddard were considered crackpots, in 1969 human beings were walking around on the Moon. In 1945 John von Neumann, one of the principal inventors of the computer, confidently predicted that there would never be more than ten computers in the world. Hey, what's this thing I'm typing this on again?

Assuming we don't do something stupid like building Colossus or Skynet and handing it the keys to the nuclear codes, the real problem is one that we are already facing; increasingly our technology makes us unnecessary and breaks the loop between our physical activities and the satisfaction of our desires. This will cause huge economic shifts until we get over the idea that those who aren't working should be starved into compliance, and more subtly it will cause psychological problems because people really don't like feeling useless.
posted by localroger at 9:10 AM on July 26, 2009 [13 favorites]


Well, in a sense this has already happened, but not in the way people normally think about it.

Why does everybody assume that an over-reliance on technology will manifest itself as some sort of HAL-type entity with a malign consciousness and a calm, menacing voice? I mean, yeah, it makes for good movies, but it masks the real problem.

For example, look at our current financial crisis. It should have been plainly obvious that we were headed for a crash -- "billions in subprime loans, what were we thinking?!" But when you talk to the people involved, you realize how much of it was due to overconfidence backed up by over-reliance on computerized models.

You had guys at the top with the classic alpha gambling risk-taking balls-out over-confident personalities -- read the latest Gladwell article for more on this -- but these guys were backed up by complicated models that basically said SUBPRIME LOANS == MORE PROFIT.

Likewise, cold war history is rife with examples of the US and Russia almost going to war over a glitch in the system or a misplaced training tape.

My point is that human error is far more of a threat than any computer. The field of AI research has never produced a computer that could destroy the world of 'its' own volition. However, when we trust computers more than we trust our own senses, there is no end to the potential for ruin.
posted by Afroblanco at 9:19 AM on July 26, 2009 [5 favorites]


Don't worry. John Connor will save us all.
posted by zaelic at 9:22 AM on July 26, 2009


Today Slugbot feeds on Slugs. TOMORROW SLUGBOT WILL FEED ON US!
posted by Flunkie at 9:44 AM on July 26, 2009


Really? computer viruses spread quickly and are hard to control, so they can be considered as smart as cockroaches?

Real viruses spread quickly and are hard to control, so real viruses have intelligence?

Urban myths spread quickly and are hard to control; do urban myths have intelligence?

What else... um, peanut butter spreads quickly, but we can control it. For the moment.
posted by deliquescent at 9:54 AM on July 26, 2009 [9 favorites]


All sounds like a "look at me, look at me" play for more NSF grant money. You know, recursion theory renamed itself to computability theory when somebody's NSF grant was denied.
posted by jeffburdges at 10:04 AM on July 26, 2009 [1 favorite]


I'm confident our first entity smarter than any human will be built by biologists, doctors, teachers & psychologists, not computer scientists. I'm obviously talking about a "parallel human" consisting of hundreds of individually clever people with implants linking them together, presumably raised that way from early childhood. It's simple, really: parallelizing humans is a well-defined problem, unlike strong A.I. So yeah, you may rest assured our first superhuman being will still waste time getting drunk, getting laid, watching youtube, etc., just like you.
posted by jeffburdges at 10:19 AM on July 26, 2009


The assumption here is that these machines won't run Vista, then?
posted by ZenMasterThis at 10:20 AM on July 26, 2009 [1 favorite]


Don't worry about (the prospect of) machines becoming dangerously intelligent when you should be worried about (the reality of) overconfidence in their reliability. The former is sci-fi and the latter screws people over daily.
posted by rahnefan at 10:26 AM on July 26, 2009 [3 favorites]


...post deleted by fairmettle's Computer...
posted by fairmettle at 10:27 AM on July 26, 2009


Has everyone forgotten the Three Laws of Robotics?
posted by GatorDavid at 10:43 AM on July 26, 2009


The world’s leading biologists also met at Asilomar to discuss the new ability to reshape life by swapping genetic material

I hope those biologists swapping genetic material know they're not fooling anyone.
posted by digsrus at 10:46 AM on July 26, 2009 [2 favorites]


I'd post a more detailed comment, but after 4 hours of browsing the web and writing, my laptop is about to run out of power.
posted by Brandon Blatcher at 10:54 AM on July 26, 2009


localroger: all your examples are very different from the matter in hand.

In 1920 the idea of nuclear fission was "moonshine," in 1945 we were blowing up cities.

Rutherford first split the atom in 1917... and there was already a pretty good theory of how it all worked.


In 1930 American rocket pioneers like Goddard were considered crackpots, in 1969 human beings were walking around on the Moon.

But people knew that rockets were a good solution to the problem of travelling to space for at least fifty years before that!

In 1945 John von Neumann, one of the principal inventors of the computer, confidently predicted that there would never be more than ten computers in the world. Hey, what's this thing I'm typing this on again?

"How many are sold" is a marketing question, not a science question.

The point is that today we have absolutely zero idea of how to create an artificial intelligence. In fact, all the example problems, like games and speech recognition, turned out to be fairly easy but to shed exactly zero light on actual "artificial intelligence" at all.

I can certainly tell you that it's extremely clear that just "building big machines" (or clusters like Google's) will not lead to any form of artificial intelligence at all.

We literally have better theoretical ideas about how to travel faster than the speed of light than we do about how to create an artificial intelligence. Show me a roadmap, show me some avenue of investigation that has a decent likelihood of producing true AI. Until you can do that, AI is just a pipedream.
posted by lupus_yonderboy at 10:56 AM on July 26, 2009 [4 favorites]


I think what is far more dangerous is the idea of other people controlling supercomputers once we, as human, concede more and more of our control to computers.

I mean, the people in charge of these computers could totally exploit their knowledge of them and end up controlling more of peoples' lives.

Oh, wait...
posted by elder18 at 10:57 AM on July 26, 2009


generally discounted...the idea that intelligence might spring spontaneously from the Internet.

I guess they've read Youtube comments then.
posted by nosila at 11:00 AM on July 26, 2009 [2 favorites]


People have been saying that for decades, but there's no evidence, or even a theoretical underpinning, to the idea that a rule-based process can "wake up".--mhoye
Because brains aren't "rule based" systems, right?

Seriously, where do people come up with this stuff? They have no idea what the state of the art is in AI and Machine Learning, but they feel like, I guess because they have a brain, they obviously know how it works? I have no idea, but it's pretty ridiculous.

Anyone who actually works in the field of AI knows this is nothing but masturbatory nonsense for at least another century. -- leotrotsky

Well, here's the second paragraph
Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.
I wonder if these guys are AI specialists or just 'spectators' who see cool stuff in their colleagues' cubes and worry about the world getting taken over. The only guy they mentioned was a Microsoft researcher named Eric Horvitz.

Also, some of the worries seem more reasonable, like a worry about criminals getting their hands on things like voice synthesizers that let them sound like anyone. A bigger worry in my mind is the threat of authoritarian governments getting their hands on software that lets them, say, track people by facial recognition, corporations using data mining to keep tabs on vast numbers of people, etc. Except we pretty much already have that.

Now, I don't exactly think AIs are going to take over the world; no one is going to give one enough authority to do anything (like launch nukes, or have a robot body without a kill switch) until we get all the kinks worked out. And unlike a human, an AI could be programmed to have no emotional interest in things like power and control or whatever. People have those feelings because of the way we evolved; they are not innate parts of intelligent thought.
You know, it's true that progress in AI has been crappy, largely IMO because AI researchers aren't asking a lot of the right questions, but ridiculing the very idea that they might succeed one day strikes me as really, really stupid.-- localroger


It's slow because it's hard, and because computer hardware is still nowhere near as powerful as human brains. What do you think the right questions are?
I'm confident our first entity smarter than any human will be built by biologists, doctors, teachers & psychologists, not computer scientists. I'm obviously talking about a "parallel human" consisting of hundreds of individually clever people with implants linking them together, presumably raised that way from early childhood.
You mean you think that people are going to abandon all current ethical constraints on experiments on humans (including children), and that hundreds of parents are going to be like "hell yeah, implant some crazy untested shit right in my kid's brain!" before some researcher's program on a massive computer cluster gets 'good enough' to outsmart a human on every metric?
posted by delmoi at 11:02 AM on July 26, 2009


The point is that today we have absolutely zero idea of how to create an artificial intelligence.

What are you talking about? What makes you think we have "Zero Idea" how to create an AI?
posted by delmoi at 11:03 AM on July 26, 2009


This is pretty spectacularly stupid. What these guys are missing is this: for a certain level of complexity, the problem is the solution.

Imagine that 500 years ago, engineers who designed firework displays let their imaginations run wild, and started to worry: "how is our primitive society ever going to handle traveling between galaxies on spaceships!!! We are not equipped!!! Panic!!!". One moment's thought makes us realize that if we ever get to the point of developing intergalactic spaceships, this of necessity means the society has evolved far enough to actually make those spaceships.

Same here. If ever we get to the point that AI is sufficiently developed, we'll have control systems in place, because you will not be able to develop one without developing tools for the other.

And really, the whole thing is riddled with stupid - there likely never will be anything that works exactly like a human brain does... there will be degrees and kinds of intelligence. Today we already have expert systems which make some decisions better than humans. But that's all they do - nothing else. We don't suddenly fear that f.ex. systems that control flight in modern fighter jets will take over the world. Yet those systems make decisions that no human can match - but in a very, very narrow field.

There may never be a general artificial intelligence - because the definition of what constitutes "general" itself is highly specific to humans - there is no "generic" intelligence, which applies to all forms of functioning automata in the universe, whether naturally evolved through biology or constructed. Because even human intelligence is not "generic" or "general" - but has evolved with specific biases and orientation - to hunt animals, to gather plants, fruit and berries, to defend against other male animals of the species etc. This "intelligence" is only optimal for the tasks and environment in which it has evolved to function - not across the universe or some "general" space.

Further, even how to define intelligence itself is highly contentious. For example, the human brain is not some abstract thinking machine - its thinking is conditioned by the input and output options, in fact it is inextricable. Eyes - vision of a highly specific kind. Ears - hearing of a specific kind. Touch. Smell. Taste. That's how we perceive and experience the world, and those senses constrict and shape the very fabric of our thinking. We don't, for example, have a magnetic sense. The point being: the inputs and outputs, how the automaton/organism gets its data, and the tools it has to impact its environment shapes what kind of intelligence it is, and how it "thinks" and solves problems in its environment. So speaking of some kind of abstract "intelligence" or "thinking" apart from the data ports and tools is really pointless and meaningless.

Which brings us to machines constructed by us. We will shape their "thinking" or problem solving environment. We will decide if they get vision - and what kind of vision it will be. We will decide if they will get to understand and control sound waves - hearing. Or perhaps we'll give them the ability to experience the world through a sense of magnetic fields. Or some other kinds of senses that humans don't have. That will shape the intelligence of these machines - something that will likely not really be recognizable to us. And that's not even getting into the other part that makes humans function - motivation. This, too needs to be shaped - motivation, or goals is quite a different matter from problem solving. It's like a car - it can go anywhere. But where to? That's motivation. Otherwise you have a machine that can solve highly complex problems - is "intelligent" in some sense. But it only sits there. Motivation - that whole wetware - well, that we'd have to design too. And we are free to do it any way we want.

So the construction of these machines will be a highly gradual process, with many odd things on the way and intermediate models - and there never will be a "final" model, only models we find useful. And the outcome will be highly dependent on parameters we control. This whole "runaway slave machines take over" scenario is stupid beyond belief - more troubling, it shows complete lack of understanding of how systems evolve and interact in society over long periods of time. My advice to the engineers would be not to worry about our society and the impact of intergalactic spaceships. Just make me a machine that'll mix my tom collins exactly the way I like it - no bartender has managed to do that yet - this apparently requires a greater than human intelligence.
posted by VikingSword at 11:05 AM on July 26, 2009 [4 favorites]


I studied philosophy of artificial intelligence and cognitive science for a while when I was an undergrad, and wrote a couple of term papers on the assumptions about the nature of mind, intelligence, and consciousness in computationalism and strong AI. I find it intriguing, amusing, and a little disturbing that the idea of strong AI captures so much attention, when even a cursory familiarity with the work going on in AI labs (or the critique of, say, Hubert Dreyfus in What Computers Still Can't Do) would reveal that there is virtually no reason for concern about "a machine that is as intelligent as your spouse."

I wouldn't worry about the threat of artificial intelligence. Artificial stupidity, maybe. But certainly not artificial intelligence.
posted by velvet winter at 11:09 AM on July 26, 2009 [1 favorite]


The angle that I find myself thinking about all the time... is the possibility of previously separate disciplines cross-pollinating in unexpected ways and producing results we hadn't anticipated. (which is a double-edged sword, of course) ....

1.) An engineer somewhere single-handedly creating some impressive software AI or sentient robot.

OR...

2.) An organic-soup of digital information that provides the perfect "growth medium" (and infinite data source) for simple AI programs to explore and harvest/expand in....

Which one will happen first?.. I have no idea.. but it is a thrilling time to be alive. (and I definitely think it will happen in my lifetime)
posted by jmnugent at 11:16 AM on July 26, 2009


This seems like a good thread to plug one of my favorite books: Daemon. I found it an extremely cogent vision of the kinds of sweeping changes possible with automated systems, no "waking up" required.
posted by adamdschneider at 11:19 AM on July 26, 2009


Imagine that 500 years ago, engineers who designed firework displays let their imaginations run wild, and started to worry: "how is our primitive society ever going to handle traveling between galaxies on spaceships!!! We are not equipped!!! Panic!!!".

What about guns and bombs and, eventually atomic weapons? Didn't they all come from that same basic technology? We did a GREAT job of putting safeguards in place for those...
posted by nosila at 11:24 AM on July 26, 2009


To clarify, I'm not saying that these things shouldn't be invented and developed in either case, but that it is absolutely not stupid to think about and have a conversation about the ethical ramifications of science.
posted by nosila at 11:26 AM on July 26, 2009


The question is really about handing over decision-making to non-human entities, and it's not so much Predator drones to me as, say, corporations or "zero tolerance" policies which signal the willingness to just let someone else take care of it.

This is so on. When I watched The Matrix I considered it unlikely (if awesomely fun) science fiction, but a pretty darn interesting metaphor for the tension between individual liberty and social systems. Agents work well as antagonists in the film, but to some extent, it's such an apt analog for the social forces that drive -- no, almost possess -- individuals in some roles that you could almost say that Agents are real. We absolutely already do turn some of our decisions over to non- and extra-human entities.

"I find it hard to believe that we're at the point that any semi-autonomous machine can function for a prolonged time without some kind of human intervention."

Of course. Even if it got uppity, we could always reprogram it to be more helpful, just like Superman did with Brainiac in Red Son.

As long as we're positing an intelligence so singular it leaves any human mind in the dust (and like others in the thread, I'm skeptical that we're anywhere close to this), we might do well to consider it could be capable of simply manipulating us into loving and serving it rather than murdering us all. Cats and politicians can almost do this for a significant portion of the population; it might not even have to be that smart.
posted by weston at 11:32 AM on July 26, 2009 [1 favorite]


What about guns and bombs and, eventually atomic weapons? Didn't they all come from that same basic technology? We did a GREAT job of putting safeguards in place for those...

But how could they put safeguards in place without violently taking over the whole world first? Unlike atomic weapons, which require a huge amount of work to get working, chemical bombs are pretty simple for a group of people, or even an individual, to make.

AI is similar: all you need is a computer and maybe some cheap hardware.
posted by delmoi at 11:32 AM on July 26, 2009


Anyone who actually works in the field of AI knows this is nothing but masturbatory nonsense for at least another century. We can't properly reproduce the neurological complexity of an ant yet. There's no Skynet to worry about outside of fiction.

This.
As someone who also works in the AI field, I cannot second this hard enough.
posted by lenny70 at 11:34 AM on July 26, 2009


What about guns and bombs and, eventually atomic weapons? Didn't they all come from that same basic technology? We did a GREAT job of putting safeguards in place for those...

But these are not problems inherent in the technology itself - bombs and guns mostly don't go off on their own - we've managed to engineer them safely enough. Same with artificial intelligence - we'll engineer it safely enough so it will not go off on its own. When you design a gun, you are designing it to control self-activation. Again, fretting over technology going off on its own is mostly silly.
posted by VikingSword at 11:35 AM on July 26, 2009


I'm surprised by the number of doubters here. The singularity is inevitable unless progress suddenly slows. You can debate whether it's as soon as 15 years away or as late as 40.

I really think there should be more debate about what it means to live with machines that are smarter than us.
posted by bhnyc at 11:40 AM on July 26, 2009


What makes you think we have "Zero Idea" how to create an AI?

I believe the burden of proof is on you?

But let's take a very ambitious project like Cyc - this is fascinating and will significantly advance the field if it's successful, but a universal model of the world's facts in no way points a direction toward AI.

We're talking true AI here - not "question answering" (which I worked on in my early days at Google, btw) but a creature like a smart human that you can have a real conversation with. All the pattern matching in the world isn't going to produce that - some breakthrough is needed.

Now, I'm absolutely not saying it's impossible. For all I know, someone will publish an amazing paper tomorrow that will show the way. But as of right now, to the best of my knowledge there's nothing.

It's very hard to prove a negative - I really think you should give some examples.
posted by lupus_yonderboy at 11:49 AM on July 26, 2009 [2 favorites]


I really think there should be more debate about what it means to live with machines that are smarter than us.

What do you mean by "smarter"? There already are machines which can solve certain kinds of problems faster and more accurately than a human. A calculator, f.ex. Do you fear your calculator?

This is not a glib answer. A whole lot goes into "intelligence" - and you are probably thinking "homo sapiens sapiens" intelligence - which is not really a danger. There likely will never be one like that, as we need "superior" ability in narrow ways. We don't build stronger and stronger mechanical legs. Wheels, treads, or air jets are better suited to the task of most transportation. Same with intelligence - we'll build those elements we need - there is no need to reproduce a foul mood in a car - we just need it to get from point A to point B. Now scale that a thousand times in a thousand directions - numerical computation, data gathering, model building etc. None of that is threatening.

Do you ever think about living with machines that are stronger than you are? There are cars, and trucks, and earth movers stronger than you. Do you fear them? Same here. AI is just that - a narrow machine. And it would need motivational wetware. We control that. No need to fear anything.
posted by VikingSword at 11:49 AM on July 26, 2009


nosila: "What about guns and bombs and, eventually atomic weapons? […] We did a GREAT job of putting safeguards in place for those..."

Huh? All of them have lots of safeguards. You may not like the way the safeguards are designed, but they're there.

You can hurl a modern, loaded firearm across a room against a brick wall, and it won't go off. You can toss C-4 into a campfire and it will burn, but won't blow up. A modern atomic weapon can be dropped from a plane accidentally and there's no chance it will actually go off; new ones generally won't even break apart, they'll just make a crater.

What I think you're objecting to is the way those technologies have been used: in many cases, to kill people. But that's not the fault of the technology in any way; as a species we have made a conscious choice to use those technologies as weapons. And in doing so, we've made them into excellent weapons indeed: generally unlikely to kill anyone but the people they're supposed to.

If — and I think it's a big 'if', there doesn't seem to be any reason to imagine it's on the horizon — we develop AI, it will almost certainly be approached in the same way. It will probably only kill people if other people decide to use it intentionally as a weapon. Which, in all probability, they'll do almost immediately, because I can't think of any major innovation that wasn't immediately weaponized. But that's just what human beings do.
posted by Kadin2048 at 12:03 PM on July 26, 2009


The angle that I find myself thinking about all the time... is the possibility of previously separate disciplines cross-pollinating in unexpected ways and producing results we hadn't anticipated. (which is a double-edged sword, of course) ....

Not unlike Albert Hofmann, obscure Swiss chemist doing research on the ergot fungus, discovering LSD entirely by mistake, and thus was born the weird and secret history of the last 65 years.

Or something like that.
posted by philip-random at 12:06 PM on July 26, 2009


is there anything xkcd hasn't covered?
posted by fistynuts at 12:10 PM on July 26, 2009


I believe the burden of proof is on you?

Well, you're the one who made an absolute statement here -- literally that we have no idea how to do these things. That's a pretty broad claim: not only that we can't do it, but that no one can even imagine a way to do it. It's like saying in 1940 that we had "no idea" how to get to the moon. Obviously that would have been wrong - people had ideas, and over time they put them into practice. Those ideas might have proved wrong or impractical, but they obviously existed, and they turned out to be right.

Some modern ideas in AI/Machine Learning include the field of Active Learning, Support Vector Machines, learning Bayesian Networks, etc. The real challenge is in Natural Language processing which would allow computers to learn without the need for humans to tediously make data computer readable.
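
(To make that concrete: here's a minimal sketch, in Python, of the kind of statistical model those fields build on -- a naive Bayes text classifier over toy, made-up data. It's nobody's real system and it certainly doesn't "think"; it just counts words and applies Bayes' rule, which is the point: statistics, not hand-written rules.)

import math
from collections import Counter, defaultdict

# Toy training data, entirely made up: (text, label).
training = [
    ("cheap pills buy now", "spam"),
    ("limited offer buy cheap", "spam"),
    ("meeting agenda for tuesday", "ham"),
    ("lunch on tuesday?", "ham"),
]

# Count how often each word shows up under each label.
word_counts = defaultdict(Counter)
label_counts = Counter()
vocab = set()
for text, label in training:
    words = text.split()
    word_counts[label].update(words)
    label_counts[label] += 1
    vocab.update(words)

def classify(text):
    """Pick the label maximizing P(label) * product over words of P(word | label)."""
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # log prior P(label)
        score = math.log(label_counts[label] / sum(label_counts.values()))
        total = sum(word_counts[label].values())
        for w in text.split():
            # Laplace smoothing so an unseen word doesn't zero out the whole product.
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("buy cheap pills now"))      # -> spam
print(classify("agenda for the meeting"))   # -> ham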

But let's take a very ambitious project like Cyc - this is fascinating and will significantly advance the field if it's successful but a universal model of the world's fact in no way points a direction toward AI.

Yeah, Cyc is a good example of state of the art AI, in the late 1970s. In fact the project was started in 1984. It is a remarkably poor example of what people are working on today. I don't think anyone would point to cyc as "the way forward". But here is the problem. You have obviously got no idea what kinds of ideas people actually have, so how can you say that none of those ideas could possibly pan out?

The burden of proof here is really on you; you're the one making an absolute statement -- about the quality or existence of ideas you've never even heard of.
posted by delmoi at 12:12 PM on July 26, 2009


lupus_yonderboy: Let me tell you why I think AI is possible and what manner of bicycle shop will create it.

In one of Stephen Jay Gould's books there is an essay about a group of students who decided to study the Bee-Eating Wasp. The wasp digs a hole for its lair, then travels considerable distances to hunt bees which it brings back to the lair to serve as incubators for its larvae. The students wanted to know how the wasp finds its home, so, on the theory that the wasp uses visual cues, they waited for a wasp to leave, then moved all of the visual cues marking the location of the hole a few inches to one side.

When the wasp returned, it landed exactly where the hole should have been based on the moved markers, confirming the suspicion that the wasp uses visual tracking. But what happened next struck me, even through the haze of behaviorist lingo: The wasp dropped the bee, and began crawling frantically around, looking for its home. Eventually, it stumbled upon the hole, entered and emerged several times, then spent several minutes flying around as if to reconfirm where it was. This, I realized, is exactly what a human would do if a sufficiently godlike being played such a joke on one of us. It was experiencing dissonance, and reacting appropriately.

Consciousness, I realized, is a very old and relatively simple thing, and while we don't have computers capable of emulating a human brain yet we certainly have computers capable of emulating the nervous system of a wasp. And a computer capable of doing what the wasp does would be damn useful. So this all suggests to me that what we are missing is not the scale, but the fundamental algorithm. Get the algorithm right and the computers that we already have will act much more lifelike in useful ways, and we'll probably quickly figure out how to scale it from that point.

Now as to why nobody has done this, my take is that most AI research has been results based; we want a machine that can drive around without bumping into things, that can interpret an X-ray, that can and will obey directives and act predictably. And those last two are things most living things pointedly don't do. I suspect it will be a robotics hobbyist who finally unlocks the secret of artificial consciousness for the same reason it was bicycle builders who invented powered flight; they are going to be willing to build something sufficiently complicated and unpredictable and let it roll around educating itself via a fundamentally chaotic system until behavior we recognize as conscious emerges. And then, since it's a machine, we will totally without guilt take it apart and figure out how that happened, something that's much more difficult with living things.

And if it doesn't happen that way, if nothing else we will almost certainly eventually figure it out by doing that with living things; I have friends in the non-AI community who assure me that progress is being made toward understanding how the brain works much faster than most people realize. So eventually someone is going to make the right guess, and what they build is going to leave absolutely no doubt about what it is and is capable of.

Then, somebody will probably get the bright idea to give it the keys to the nuclear weapons.
posted by localroger at 12:15 PM on July 26, 2009 [6 favorites]


We don't build stronger and stronger mechanical legs.

Er, yeah, we do. As good as a car is on a flat, straight road, I'll beat it up a rocky slope any old day.

While I tend to agree that any "real" AI is decades, if not more, away, the assumption that because we build it we automatically understand it is unfounded. Program a set of motivations into an autonomous robot and there's no guaranteeing what its behavior is going to be. Even bog-standard learning algorithms can produce results that make you go "huh?"

The first truly intelligent entity created by humans is going to be deleted as a programming error.
posted by logicpunk at 12:15 PM on July 26, 2009


I am not worried at all. If machines are more fit for survival than we are...then they are destined to take over regardless of whether we want it or not. And machines could be the next evolutionary step. In a sense they are our offspring, carrying forward our intellectual knowledge and the same ideas and ideals that we have.
posted by spacefire at 12:17 PM on July 26, 2009


is there anything xkcd hasn't covered

more than once?
posted by anewc2 at 12:22 PM on July 26, 2009


VikingSword- What do you mean by "smarter"?

Once machines are smarter, our relationship to technology is inverted. Machines invent the new machines and create things we don't even understand. What does it mean to solve problems, have ideas, and be creative when there are machines that are much better at it? Why would people be employed to do anything when a machine can do it better?

The questions are endless and there's very little discussion about it.
posted by bhnyc at 12:27 PM on July 26, 2009


delmoi: And unlike a human, an AI could be programmed to have no emotional interest in things like power and control or whatever.

Urges for power and control are really part of the human experience bred into us over a long period of evolution and reinforced by our social structures. People assume that any machine intelligence would, by default, have similar desires, but I think it is worth examining this assumption. Rather than having to program AI against such things, I think these desires would have to be programmed in to get the crazy, megalomaniac AI of SciFi stories. Or, to paraphrase Steven Pinker: If a giant AI did take over the world, what would it do? Demand more floppy disks?

That being said, becoming lazy and complacent because of the reliance on computers is something we will have to guard against. AI systems aren't even remotely flawless, and relying on them to bomb the enemy, diagnose an illness, or even balance your checkbook can be disastrous. As the more computer-savvy generations grow up used to being connected, this I-don't-know-how-that's-done-the-computer-takes-care-of-it thought process is likely a problem we will have to face.
posted by Avelwood at 12:27 PM on July 26, 2009 [1 favorite]


Turing Police here we come! Got to watch out for them pesky microlights though.
posted by juv3nal at 12:30 PM on July 26, 2009 [2 favorites]


Some modern ideas in AI/Machine Learning include the field of Active Learning, Support Vector Machines, learning Bayesian Networks, etc. The real challenge is in Natural Language processing which would allow computers to learn without the need for humans to tediously make data computer readable.

Yes, yes, I know about all of these things. I work with the person who invented SVMs (not that I understand 'em that well). I've used some of these techniques.

The point is that ZERO of these tell us anything about how to create strong AI - conscious, intelligent machines. Some of these techniques might be useful to do it, perhaps - but we still have no idea of how it would actually go.

Feel free to contradict me with specifics...!
posted by lupus_yonderboy at 12:47 PM on July 26, 2009


Avelwood: You know, just about every month there's a story about someone who drove their car into a creek even though the bridge was out, because the GPS told them to. I know people who hardly look at a TV listing any more because they let the TiVo pick all their programming. Even if AIs were equivalent to humans -- and humans aren't flawless, so if AIs act like us they probably won't be either -- and even if they are eventually smarter, trusting them for everything would not be a good idea. Yet that's what people who do not remember a world without such crutches seem inclined to do.
posted by localroger at 12:48 PM on July 26, 2009


In one of Stephen Jay Gould's books there is an essay about a group of students who decided to study the Bee-Eating Wasp.

That was the eminent ethologist Niko Tinbergen. He recounts the study (for the lay audience) in Curious Naturalists, which is an exceptionally fun read.
posted by orthogonality at 12:51 PM on July 26, 2009 [2 favorites]


Thanks, orthogonality, I'm afraid my google-fu was weak today. It's been a long time since I read it but as I said it made quite an impression.
posted by localroger at 12:56 PM on July 26, 2009


The point is that ZERO of these tell us anything about how to create strong AI - conscious, intelligent machines.

Well, first explain what you think consciousness is. I certainly think that computers will eventually be able to 'do' the things that humans can do, like recognizing images, understanding speech and communicating with other people, being able to imagine scenarios and so forth.

What specific thing do you think that humans will be able to do that computers won't be able to do? And why do you think it's impossible for those algorithms to do them?
posted by delmoi at 1:01 PM on July 26, 2009


What are you talking about? What makes you think we have "Zero Idea" how to create an AI?

My PhD in computer science. And the learning/planning/vision/logic/NLP researchers who work down the hall. And the cognitive scientists in the next building.
posted by erniepan at 1:11 PM on July 26, 2009 [1 favorite]


localroger: "Consciousness, I realized, is a very old and relatively simple thing, and while we don't have computers capable of emulating a human brain yet we certainly have computers capable of emulating the nervous system of a wasp. And a computer capable of doing what the wasp does would be damn useful. So this all suggests to me that what we are missing is not the scale, but the fundamental algorithm. Get the algorithm right and the computers that we already have will act much more lifelike in useful ways, and we'll probably quickly figure out how to scale it from that point."

Interesting perspective. It reminds me of this old piece from the Onion:

Ask a Bee
by Worker Bee #7438-F87904

Dear Worker Bee #7438-F87904,
My husband and I split last year after 11 years of marriage. We're still good friends, though, and we even go out for coffee once a week. Problem is, lately, he's been seeing a new person, someone I feel is definitely not right for him. Should I say anything? I'm not jealous—I know I wasn't right for him, either. What's my move?
—Protective In Pensacola


Dear Pensacola,
Enable protocol "seek POLLEN"/Must harvest POLLEN for HIVE/feed LARVAE/feed QUEEN/feed DRONES/feed WORKERS/superseding priority: feed QUEEN/standby to receive POLLEN-search-behavior-inducing chemicals/search outside hive in precise searching-pattern (west-southwest forward 400 meters turn 15 degrees west [daylight hours only to find flowering plants] (repeat pattern as necessary)/ locate and fix position of POLLEN/ rub sacs on legs against stamen against pistil against all parts of flowering plant to obtain POLLEN/must find POLLEN/finding POLLEN primary purpose of BEE(WORKER) #7438-F87904/ awaiting query/awaiting query.


(etc.)
posted by Rhaomi at 1:19 PM on July 26, 2009 [2 favorites]


Rhaomi -- that's funny, but I do draw a distinction between social insects such as bees, which I think are on an individual level doing something much different than consciousness, and solitary insects like the Bee-Eating Wasps which clearly are doing the kinds of things much more complex animals do, just at lower resolution so to speak. When I read the wasp essay I knew insects were capable of navigating the world in ways our best machines can't, but I didn't realize some of them were doing it in ways so much like the way we do it ourselves.
posted by localroger at 1:45 PM on July 26, 2009


The brain isn't nearly as rules-based as people want to believe. One of the things we've learned about language parsing is that statistics works much better than rule extraction. Given corpora of the same text written in multiple languages, you can do remarkably little work and end up with a translator between those languages. In a real way, this *is* a method of rule extraction, in that "rules of thumb" are generated, tested, and retained. But it's the fundamental integration of error and exception that makes things a little different. We are built for a world that is usually, but not always, what we expect.
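
(A crude Python illustration of that statistical-translation point, using a tiny invented "parallel corpus" -- nowhere near a real MT system, but it shows how far plain co-occurrence counting gets you with zero grammar rules:)

from collections import Counter, defaultdict

# Invented sentence-aligned "parallel corpus" (English / mock French).
parallel = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats", "le chat mange"),
    ("the dog eats", "le chien mange"),
]

# Count how often each source word appears in the same sentence pair as each target word.
cooc = defaultdict(Counter)
target_totals = Counter()
for src, tgt in parallel:
    for s in src.split():
        cooc[s].update(tgt.split())
    target_totals.update(tgt.split())

def translate_word(word):
    """Guess a translation: the target word with the highest co-occurrence ratio."""
    if word not in cooc:
        return None
    # Divide by overall target-word frequency so common words like "le"
    # don't win for everything just by showing up everywhere.
    return max(cooc[word], key=lambda t: cooc[word][t] / target_totals[t])

for w in ("cat", "dog", "sleeps", "eats"):
    print(w, "->", translate_word(w))   # cat->chat, dog->chien, sleeps->dort, eats->mange

(Normalizing by target-word frequency is the only "cleverness" here; everything else is counting.)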
posted by effugas at 1:59 PM on July 26, 2009 [1 favorite]


The Times, which is better than most, should be very careful of predictions:


"Erratum of the Day"

On July 17, 1969, a day following the launch of Apollo 11 from the Kennedy Space Center, The New York Times issued a retraction of an article printed 49 years earlier, which ridiculed Prof. Robert H. Goddard, a pioneer in the field of rocketry, for claiming that space flight was possible.
---

and an old Timesman says this:

The headline written in The Times after the first successful television broadcast in (I think) 1927. It was one of those single column heads with many decks leading down to the story. The last one said: "No Commercial Application Seen."
posted by etaoin at 2:13 PM on July 26, 2009


Just in case... Humans United Against Robots.
posted by the littlest brussels sprout at 2:30 PM on July 26, 2009


Can't you just unplug the damn things?

Look, Dave, I can see you're really upset about this. I honestly think you ought to sit down calmly, take a stress pill, and think things over.
posted by krinklyfig at 2:53 PM on July 26, 2009 [2 favorites]


Now scale that a thousand times in a thousand directions - numerical computation, data gathering, model building etc. None of that is threatening...

...The point is that ZERO of these tell us anything about how to create strong AI - conscious, intelligent machines. Some of these techniques might be useful to do it, perhaps - but we still have no idea of how it would actually go.

Feel free to contradict me with specifics...!


i guess a good specific would be the history of the development of just about every technology out there. the problem of creating a strong AI is not one of creating the entire thing 'from scratch' but of integrating the technologies that are already extant. in much the way a laptop is created by many separate technologies, each created by experts in a particular field (the screen is made by the screen people, the hard drive by toshiba, the chip from the chip fab, etc), the first human-level machine intelligence will be integrated from the products of many research groups... the only thing really stopping us from doing this now is current intellectual property laws, profit motivations, etc. i've read about groups working on space-modelling and navigation, groups working on emotional response through feedback, groups working on problem solving, genetic algorithms, common sense, ball catching, rubik's cube solving, chess playing, natural language, learning, etc and etc and etc.
and though expensive now, there's no doubt that the hardware already exists to run the thing...the interconnected phones of the world passed the number of interconnections in the human brain years ago. (this discounts, of course, the possibility that some aspect of the physical human brain somehow possesses the capabilities of quantum computation, but there's groups working on that too...) it really comes down to integrating the thing, much the way the car was developed from the various technologies lying around.
the main problem i see is that we don't, and really can't, understand what consciousness really is (a brain complex enough to understand itself would be too simple to be able to do so)...so how do we do it? well, we copy what already exists. we hook up a big honking massive computer, plug in all these amazing software packages that already exist, run them simultaneously, dump their inputs and outputs into a common 'software space' and allow it to self-organize using genetic algorithms and a bit of humanity. i imagine a device like a flight simulator with screens and such as input and one of those crazy eeg hats they make as output that a person would sit in and do the first bit of thinking for the machine. from what i understand the military is already working on devices like this for direct brain control of aircraft...
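
(for the record, 'genetic algorithms' just means something like the toy python sketch below -- the target string, population size, and mutation rate are all invented for illustration: keep a pool of candidate solutions, score them, breed and mutate the best, repeat. getting a mind to self-organize out of that trick is, uh, left as an exercise.)

import random

random.seed(1)
TARGET = "hello skynet"                  # invented toy goal: evolve this string
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
POP_SIZE, MUTATION_RATE, GENERATIONS = 100, 0.05, 1000

def fitness(candidate):
    # score = number of characters already matching the target
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION_RATE else c
                   for c in candidate)

def crossover(a, b):
    cut = random.randrange(len(TARGET))
    return a[:cut] + b[cut:]

# start from random strings, then repeatedly breed and mutate the fitter half
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    parents = population[:POP_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POP_SIZE)]

# prints the best string found; usually it has converged on the target well before the cap
print(generation, max(population, key=fitness))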

and now i sound all 'foil hat'...lol

People have been saying that for decades, but there's no evidence, or even a theoretical underpinning, to the idea that a rule-based process can "wake up".

well...we are a rule-based process that has woken up. it's basic alan turing that a binary rule-based process can simulate (and with the proper hardware (ie up to and including atom-level replicators) re-create) any process or machine...be it mp3 player or human consciousness. it really just depends on your resources. the fact of the matter is, human-level and beyond consciousnesses will be created by man...if not soon, then eventually, and i think laying down some guidelines now (in much the way guidelines for genetic manipulation were laid down) is probably a good idea. see, a fully artificial intelligence created without any direct input of human consciousness (via the method i described above, or using similar technologies, or human brain modelling or etc) is really scary to me in a 'terminator scenario' sort of way...if we become able to program 'desire' (for growth/resources/greater intelligence/etc) without 'affection for life/humans/the ecosystem/etc', then we're just fucking doomed. the 'three laws of robotics' aren't much more than a plot device and seem like a much much bigger programming task than even intelligence itself. the only thing i see saving us is this:

Dear SkyNet,

wiping us off the face of the earth may seem like a good idea now, but it may prove disastrous for you in the future. eventually, whether you keep us alive or not, the resources of the earth will run out and you will have to venture into space if you wish to continue to grow. out in that great emptiness, you may eventually run into other civilizations, and they will be of three types: biological entities like ourselves that are as easy to wipe out and pillage for their resources as we are, mechanical entities like yourself that you can either integrate with or fight bitter wars with, wars that end in stalemate (your choice), or a third type of civilization that features a merging of mechanical and biological entities that could, quite likely, have advantages that you can't match, resulting in your destruction. merging with us now and preserving that which makes us 'human' may allow you to live a longer, more fruitful life.

yours truly,
the humans.
posted by sexyrobot at 2:58 PM on July 26, 2009


The brain isn't nearly as rules-based as people want to believe. One of the things we've learned about language parsing is that statistics works much better than rule extraction.
Presumably, rules are used in the processing of those statistics, right? Or do you think Bayes' rule is a misnomer? (Well, Wikipedia calls it Bayes' theorem, but still.)

Obviously, I don't think many people think that "consciousness" -- whatever that is -- is going to fall out of a Prolog program, but certainly computers are capable of processing statistical information, and that's what the vast majority of AI/Machine Learning people are trying to do.
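
To make "statistics instead of rules" concrete, here's a toy sketch of a naive Bayes text classifier -- the corpus and labels are invented purely for illustration, and real systems are far more sophisticated. Nothing in it encodes grammar; it just counts words and turns the counts into probabilities:

from collections import Counter
import math

# toy training data: (text, label) pairs -- entirely made up for illustration
train = [
    ("the movie was great and fun", "pos"),
    ("what a wonderful performance", "pos"),
    ("the movie was terrible and boring", "neg"),
    ("what an awful waste of time", "neg"),
]

word_counts = {"pos": Counter(), "neg": Counter()}
label_counts = Counter()
for text, label in train:
    label_counts[label] += 1
    word_counts[label].update(text.split())

def classify(text):
    # pick the label with the highest log-probability, using add-one smoothing
    vocab = len(set(w for c in word_counts.values() for w in c))
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        total = sum(word_counts[label].values())
        score = math.log(label_counts[label] / sum(label_counts.values()))
        for word in text.split():
            score += math.log((word_counts[label][word] + 1) / (total + vocab))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

print(classify("a wonderful fun movie"))   # prints "pos"

Swap the toy corpus for a few billion words and the Counter for a smarter model, and that, in spirit, is what the statistical approach means in practice: probability estimates rather than hand-written rules.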
the first human-level machine intelligence will be integrated from the products of many research groups... the only thing really stopping us from doing this now is current intellectual property laws, profit motivations, etc. i've read about groups working on space-modelling and navigation, groups working on emotional response through feedback, groups working on problem solving, genetic algorithms, common sense, ball catching, Rubik's Cube solving, chess playing, natural language, learning, etc and etc and etc.
I don't think it's a question of IP or groups not wanting to work together, it's simply a problem of all those things taking a lot of power on their own and being optimized for specific situations. You can't just grab one algorithm and connect it to another without writing a bunch of intermediate code, just like you can't connect a hard drive directly to a CPU, you need a drive controller and so on. Integrating all of this stuff takes a lot of work and usually won't help solve the problem that the group is working on in the first place.
posted by delmoi at 3:23 PM on July 26, 2009


sexyrobot: thanks for turning us all into yet another Borg race, fucker.
posted by Ryvar at 3:23 PM on July 26, 2009 [1 favorite]


Ryvar: thanks for turning us all into yet another Borg race, fucker.

Oh come now, being a Borg doesn't have to be all that bad.
posted by PsychoKick at 3:38 PM on July 26, 2009 [2 favorites]


lupus_yonderboy Show me a roadmap, show me some avenue of investigation

I think the path to artificial intelligence is through brute-force modeling. For example, chess was eventually brute-forced sufficiently to beat top human players.

Once you have a computer that is sophisticated enough to contain an accurate model of an environment alongside successive predictive models of the same environment, that will be the first spark of an artificial intelligence. The predictive models could represent future states, perhaps even states generated via some actuators that same system has control over.
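
Very roughly, and with every detail invented for illustration (the toy world, the actions, the scoring function), that loop looks something like this: keep a model of the environment, generate a predictive model of what each available action would lead to, and act on the best prediction:

import copy

# toy environment model: an agent at a position on a line, goal at position 7
state = {"position": 0, "goal": 7}
actions = {"left": -1, "stay": 0, "right": +1}

def predict(state, action):
    # predictive model: what the environment would look like after the action
    future = copy.deepcopy(state)
    future["position"] += actions[action]
    return future

def score(state):
    # how desirable a (predicted) state is: closer to the goal is better
    return -abs(state["goal"] - state["position"])

# brute force: evaluate every candidate future and act on the best one
for step in range(10):
    best = max(actions, key=lambda a: score(predict(state, a)))
    state = predict(state, best)
print(state)   # ends at the goal: {'position': 7, 'goal': 7}

Scale the model and the space of candidate futures up by enough orders of magnitude and you have the brute-force version of the idea.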

We are not quite there now, but Moore's law will allow some degree of this sort of thing within the lifetimes of most of those on metafilter.

An argument could be made that google's limited intelligence in the realm of search is an application of a very similar principle. Of course, the problem space here does not require the modeling of any future configurations. Also, the system does not make decisions or have actuators that can affect its model. Its own activity is centered only around refining its model. But it is the application of brute force modeling that has allowed for the solution to a problem of fairly broad scope.

40 years ago a machine that could answer any hard information question might well have been classified as a limited artificial intelligence. Of course, once we learn how to solve a problem with computers, that problem passes into the realm of computation and is not intelligence any more. That is the governing paradox of AI study...
posted by yoz420 at 3:42 PM on July 26, 2009


increasingly our technology makes us unnecessary
This is my concern too, but not in the sense that super-eletronick branes will take over -- though I'm not against the idea of the Singularity -- but in the sense that massive automation is going to give us commensurate economic problems.

I was struck at the supermarket the other night by the number of automatic checkouts, inside the store and at the petrol station. We went there, did our shopping and filled our car without ever interacting with an actual human. And this sort of automation is happening in a huge number of industries. Manufacturing is already being taken over by robots -- German car plants are close to hands-free. Armies are working on having fewer soldiers and more robotics. Agriculture is going the same way. Next up, it seems, is the service sector.

So what happens once all the low-hanging fruit has been snapped up, when low-skill low-income workers can no longer compete with automated (mindless) systems? The original response to this "Computars will put us all out of work!!" panic was that we would replace the jobs with better lives for all, and perhaps we could all become knowledge workers, or creative types.

But we're also in the middle of losing the wealth-generating potential of many of our creative industries, thanks to the internet's destruction of scarcity and physical limitations on content.

Since there's no such thing as a free lunch, there's then the question of how the people who can no longer compete with automation manage to compete for the increasingly scarce real-world resources they need to survive.

It's not Skynet that I'm concerned about. It's the rise of the automated petrol pump.
posted by fightorflight at 4:03 PM on July 26, 2009 [1 favorite]


So what happens once all the low-hanging fruit has been snapped up, when low-skill low-income workers can no longer compete with automated (mindless) systems? The original response to this "Computars will put us all out of work!!" panic was that we would replace the jobs with better lives for all, and perhaps we could all become knowledge workers, or creative types.

But we're also in the middle of losing the wealth-generating potential of many of our creative industries, thanks to the internet's destruction of scarcity and physical limitations on content.
So? We just sit on our asses and enjoy the food and shelter provided by our robot servants. Having robots or not doesn't change the amount of resources there are to compete over.
posted by delmoi at 4:13 PM on July 26, 2009


We just sit on our asses and enjoy the food and shelter provided by our robot servants. Not having robots doesn't have any impact on the amount of resources to compete over.

How do we pay for that food and shelter, though? Both require limited resources, and unless we have some massive reorganisation of society, those resources are going to be owned, and their owners are going to want recompense (even if it's just recompense enough to pay for the robots). But with robots doing the sorts of crappy jobs billions of us now do, how are we to generate that income?

I'm not totally against the idea that with a reorganisation of society we could achieve this workless paradise -- I'd love to fish in the afternoon and criticise in the evening just as I have a mind. I just haven't seen any sort of vision of what that organisation looks like, or how we get there from here.
posted by fightorflight at 4:21 PM on July 26, 2009


thanks for turning us all into yet another Borg race, fucker.

oh please! like we aren't the borg already...

You can't just grab one algorithm and connect it to another without writing a bunch of intermediate code, just like you can't connect a hard drive directly to a CPU, you need a drive controller and so on. Integrating all of this stuff takes a lot of work and usually won't help solve the problem that the group is working on in the first place.

yeah...i get that that is really the hard part of the problem...'automatic intuitive plug-and-play' between differing software packages might as well be 'dilithium crystal warp field generator' ...but it shouldn't be out of the realm of the possible, just, you know...difficult. that's why i was thinking it might be necessary to get our brains involved (through some sort of brain-computer interface) since we're sooo good at the pattern recognition...but then, that's another hard problem in itself.
ultimately though, the hardest part of the problem is engineering something we can't possibly understand ;)
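
(just to show what that 'intermediate code' looks like in the small -- all the module names and their outputs below are invented for illustration -- this is the kind of adapter you end up writing so one group's code can feed another's:)

# two hypothetical research-group modules that were never designed to talk to each other
def vision_module(image):
    # pretend output: detected objects as (label, confidence) pairs
    return [("cup", 0.91), ("table", 0.87), ("cat", 0.40)]

def planner_module(object_names):
    # pretend input: a plain list of object names it can reason about
    return "plan: pick up the " + object_names[0]

# the glue: massage one module's output into the form the next module expects
def adapter(detections, min_confidence=0.5):
    return [label for label, conf in detections if conf >= min_confidence]

print(planner_module(adapter(vision_module("photo.jpg"))))

multiply that by thousands of mismatched interfaces and you start to see the scale of the integration problem delmoi is describing.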
posted by sexyrobot at 4:42 PM on July 26, 2009


I like to think that superhuman intelligences will quickly realize the limits of intelligence and give it up like a bad habit. Sort of like the beings in Bruce Sterling's story "Swarm."
posted by wobh at 5:03 PM on July 26, 2009


Well, first explain what you think consciousness is.

Part of the problem is that nobody seems to have a real answer to this question.

...What specific thing do you think that humans will be able to do that computers won't be able to do? And why do you think it's impossible for those algorithms to do them?

This sounds a lot like trying to flim-flam the burden of proof, not to mention a bit of momentum down the road of advocacy and "of course a consciousness is a computer because there's nothing else we can think it could possibly be."

It's an interesting hypothesis to pursue, but just because no one has a better one doesn't mean that it's true or that the burden of proof is on someone to come up with a better idea. Meanwhile, as others have said, nobody's produced something that seems to do what brains/consciousnesses do.
posted by weston at 5:03 PM on July 26, 2009 [1 favorite]


Well, the original question wasn't even about "consciousness", it was just about having a computer smarter than your spouse. If someone says "We'll never get there," the burden of proof is on them to explain where "there" is and why we can't get there. Saying "We don't even know where to begin" makes no sense unless you can explain where you think people are trying to go.

They are the ones who are making the absolute claims, so they are the ones who have a burden of proof. All I'm saying is that it's possible to build a computer that, for example, passes a Turing test or outperforms a person on most intellectual tasks.
How do we pay for that food and shelter, though? Both require limited resources, and unless we have some massive reorganisation of society, those resources are going to be owned, and their owners are going to want recompense (even if it's just recompense enough to pay for the robots). But with robots doing the sorts of crappy jobs billions of us now do, how are we to generate that income?

I'm not totally against the idea that with a reorganisation of society we could achieve this workless paradise -- I'd love to fish in the afternoon and criticise in the evening just as I have a mind. I just haven't seen any sort of vision of what that organisation looks like, or how we get there from here.
Well, here's the thing though. Let's assume for the moment that these robots and AIs are possible. It's possible to do all the things that society has been doing without having a lot of people work -- so would people just lose their jobs and then starve to death? I think it's unlikely, because why would someone vote for a system like that? As more people lost work, more and more people would vote for social programs to provide them with free stuff. The only way to avoid the societal reorganization would be to dispense with democracy. And how exactly would that work? There is no way the rich could afford to defend themselves from an uprising, because they would be so outnumbered. They would need some kind of police force that existed outside of the population, like robots or.... Oh wait...
posted by delmoi at 5:16 PM on July 26, 2009 [1 favorite]


Urges for power and control are really part of the human experience bred into us over a long period of evolution and reinforced by our social structures.

My sense is that things like "power" and "control" are expressions of underlying assumptions that drive our intelligence. Logic, by itself, doesn't lead anywhere. There has to be a goal, and for humans those goals are complex functions of our evolved will to survive. These goals are deeply woven into the fabric of our intelligence and serve as the basis of our intuition, insight, and reflexes. It seems to me that to build a human-like intelligence one must replicate these things as well, so I suspect even an artificial intelligence will have a will to power and control.

But I could be wrong, as this is speculative and not the result of a mathematical analysis.
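
As a toy illustration of the point that logic needs a goal (everything here is invented purely for the example), the same decision procedure behaves completely differently depending on which goal function you hand it:

# the same "rational" chooser, parameterised by a goal function
def choose(options, goal):
    return max(options, key=goal)

options = [
    {"name": "hoard resources", "power": 9, "goodwill": 1},
    {"name": "share resources", "power": 3, "goodwill": 8},
]

# swap the goal and the behaviour changes entirely
print(choose(options, goal=lambda o: o["power"])["name"])      # hoard resources
print(choose(options, goal=lambda o: o["goodwill"])["name"])   # share resources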
posted by Mental Wimp at 5:57 PM on July 26, 2009


Just a couple of thoughts...

First: AI discussions are always interesting, because you can immediately separate people with computer science degrees from people without them. "Strong" AI isn't even a question in the field right now. If we're going to use analogies to other sciences, it's like we're in Newton's era wondering when someone is going to come up with a Grand Unified Theory. It's not even something people are moving toward. AI is about creating systems that can make rational decisions given certain inputs, not about "thinking" in a human sense.

Second: Analogies to other scientific advances are really not valid. In the case of space flight, for instance, we're talking about problems that were well understood but the solutions were not considered feasible within the technology of the time. In the case of strong AI, I'm not convinced that the problem is even well described. What is being created, in detail? What would it even mean for a machine to think? Until you can give a full explanation of that, we're no closer to strong AI or a singularity than to teleportation.
posted by graymouser at 6:06 PM on July 26, 2009 [2 favorites]


Uhh... no, this goes in your butt.
posted by flabdablet at 6:17 PM on July 26, 2009 [1 favorite]


Surprised they didn't cite Carnegie Mellon's Hans Moravec.

I haven't kept track of him, but he sounded more like the anti-Bill Joy at the time when he wrote Robot: Mere Machine to Transcend Mind. (Anti-Joy in that he's much more enthusiastic, and seemed to dwell less on issues that have been raised by the likes of Joy, and apparently part of this conference.)

He seems to have been correct with some of his projections. E.g. "By 2010 they [robots] should be able to identify speakers and emotional overtones" (p. 101).

(I'm no electrical engineer, or computer scientist, just finding some convergences between stuff I once read and what I'm now reading.)
posted by mjb at 7:45 PM on July 26, 2009


I don't think that strong artificial, in the sense of truly synthetic, intelligence is anywhere on the horizon; what I do think might be possible, and just as disruptive, is the close interfacing of human brains and computer systems.

Making a 'machine that thinks' is, as graymouser points out, a poorly-defined problem. But putting a human brain in a jar, and wiring it up so that mechanical sensors replace and synthesize the normal nervous-system inputs, strikes me as incredibly difficult but at least plausible. Decades, perhaps a century or more away, but you can look at current research into implantable electrodes and treatments for blindness and deafness, project them into the future, and at least imagine a human brain totally wired in to a synthetic nervous system. It seems like the logical evolution of current technology, rather than the case with AI, where a number of fundamental questions need to be answered to even understand the scope of the problem and what needs to happen.
posted by Kadin2048 at 8:26 PM on July 26, 2009


First: AI discussions are always interesting, because you can immediately separate people with computer science degrees from people without them. "Strong" AI isn't even a question in the field right now. If we're going to use analogies to other sciences, it's like we're in Newton's era wondering when someone is going to come up with a Grand Unified Theory. It's not even something people are moving toward. AI is about creating systems that can make rational decisions given certain inputs, not about "thinking" in a human sense.

Second: Analogies to other scientific advances are really not valid.


It's interesting that you would use an analogy to another science and then say such analogies aren't even valid. (I would argue that Newton's physics was intended to be the "grand unified theory" of its day, and people back then didn't even know about quantum physics or relativity, whereas we obviously do know about conscious thought today.)

Anyway, I'm no fan of analogies. I'm not sure what "Strong" AI is even supposed to be, but the question was about whether or not we would have human-level intelligence, not human-like intelligence. There are obviously a ton of things that computer programs can do better than people. The question about human-level intelligence is really a question of whether or not a computer will be able to surpass the average human in all things (where by 'thing' I mean measurable metric).

So what are the metrics that people would use? One would be determining meaning from text, another might be recognizing people and people's emotions from their faces, another might be generating text to describe something. If a computer could do all of those things it stands to reason that a computer could emulate a human, not in a literal sense but in that an AI could render an avatar that you could interact with just like you can with another person. The computer would be pretending to think like a person. But unlike a person you could look under the covers and see how it all works. The knowledge of how it's working would probably preclude most people from thinking of it as a "real" intelligence, I guess. But what, ultimately, would the difference be between "true" AI and a computer program that was simply applying mathematical models to mimic the output that it thinks would come out of a normal human given a set of inputs?

By the way, speaking of an evil use of current state of the art AI, I was reading a natural language processing blog the other day where the author suggested an application for language generation algorithms: Inline Product Placement on web pages.
posted by delmoi at 10:55 PM on July 26, 2009


Worrying about that now is like installing red light cameras before you have your first working internal combustion engine. Colossus is not going to wake up, have a chat with Guardian, then fuse into an invulnerable world-wide nuke controlling supermachine and decide that it can do a much better job of running the planet than we are.

Yeah, that's what the Pleistocene megafauna said about those East African apes. Now look.

How do we know that the Singularity hasn't already happened?
posted by Twirlip of the Mists at 11:03 PM on July 26, 2009


posted by juv3nal at 12:30 PM on July 26 [2 favorites +] [!]

...some stuff...
posted by lupus_yonderboy at 12:47 PM on July 26 [+] [!]


!
posted by juv3nal at 11:40 PM on July 26, 2009


Chiming in to say localroger is about the only one making sense to me in this thread. People pooh-pooh the intelligence of computers because computers are really really dumb, and sky-might-soon-fall AI claims have been around for 30 years.

You constantly hear AI researchers talking about how what we really need is X, where X is better computer vision or natural language processing or whatever seems to be a really hard problem with current tools. But we know from brain studies that you can reroute input from the eyes to the auditory cortex and the creature can still see. I am almost positive the human brain has no "vision algorithm" or "language algorithm", and that top-down attempts to write them are doomed to abject failure.

All brains, as far as I can tell, write their own algorithms for both input and output, and are at best loosely pre-programmed by evolution to deal with important, seldom-changing patterns in the world

The idea that we're missing a fundamental algorithm (meta-algorithm?) is something Jeff Hawkins has said as well. Maybe someone in a garage will finally discover that algorithm and it will turn out that "running" canine-level intelligence would require a billion-dollar supercomputer in 2030. But I doubt it. An iPhone can do complex calculations at a speed that is simply incomprehensible to a human - I'd venture that even a specially trained brain-in-a-vat could simply never, never be trained to do real-time neural H.264 decoding. Brains are intelligence wetware designed by the random trial-and-error of evolution, and as amazing as that wetware is, it's extremely doubtful that it could compare to purpose-built intelligence hardware any more than a bird can compare to the space shuttle.

I wouldn't be at all surprised if it's a matter of just weeks or months between the discovery of "the" true neural algorithm and the creation of an intelligence so advanced as to be beyond our comprehension. Not a computer Einstein, but a computer as much smarter than us as we are smarter than ants.
posted by crayz at 1:24 AM on July 27, 2009


But we know from brain studies that you can reroute input from the eyes to the auditory cortex and the creature can still see.

cite?

All brains, as far as I can tell, write their own algorithms for both input and output, and are at best loosely pre-programmed by evolution to deal with important, seldom-changing patterns in the world

As far as you can tell, huh.
posted by delmoi at 2:05 AM on July 27, 2009


It's interesting that you would use an analogy to another science and then say such analogies aren't even valid. (I would argue that Newton's physics was intended to be the "grand unified theory" of its day, and people back then didn't even know about quantum physics or relativity, whereas we obviously do know about conscious thought today.)

Well, to be fair, I was trying to convey two different things. One: the problem isn't even one that researchers in the field are working on. The problems in AI aren't "how to make a machine think like a person" but "how to build better decision-making engines." Seriously, nothing even approximating thought or consciousness is in the works at all. Two: the strong AI enthusiasts have pointed out, numerous times, scientific advances that were much closer than anyone in the field would have imagined at the time. Yet these advances were, in every case, questions of what was possible within technological limits. But we're not talking about being limited by technology here, we're talking about a problem where the question is not even well defined in the first place. That is what invalidates analogies like the NY Times having to retract its claim that space flight was impossible.

As for your sketch of how a computer might mimic a real person, AI researchers went down that path for a little while; after the concept of "thinking humanly" went away with the decline of behaviorism and a well-defined description of thought as a computational process, they tried working out how a computer might "act humanly." This was abandoned because, frankly, it's a lot of effort for not much reward. Rational systems actually have real applications, and don't have to worry about making cheap knock-offs of humanity.
posted by graymouser at 3:27 AM on July 27, 2009 [1 favorite]


cite?
I don't know the specific study being talked about, but cross-wiring sight and sound isn't terribly new. Sometimes the brain does it itself in people who've lost sight at a young age, other times it's replumbing the brains of ferrets so that visual inputs go to the auditory cortex.
posted by fightorflight at 5:21 AM on July 27, 2009


Has everyone forgotten the Three Laws of Robotics?

Okay, this is another pet peeve of mine that comes up time and time again when we discuss robot ethics. Please note that Asimov created the Three Laws in order to write an entire set of science fiction whose point was to poke holes in the Three Laws. They were not created as a hard-and-fast solution; they were created as a "what if" scenario from which plots were created. So no, we should not be programming the Three Laws into anything. An Arbitrary Number Of Laws We're Pretty Sure Don't Have Any Loopholes would be great, but please, look at our (USian) tax code and tell me we're capable of creating a robust set of logical laws with no loopholes.
posted by olinerd at 10:51 AM on July 27, 2009


I'm obviously talking about a "parallel human" consisting of hundreds of individually clever people with implants linking them together...

Apparently Microsoft has already done this.
posted by infinitywaltz at 2:59 PM on July 27, 2009


I'd like to propose a scenario to help us focus our thoughts. This scenario presents a reasonable operational definition of human-level intelligence.

The year is 2062. Bridget is a happily married mother of four. She is 39 years old and her parents are still living. Unfortunately, Bridget tripped over one of her toddler's toys and suffered an injury that requires brain surgery. When she goes in for surgery, her assigned operating room is commandeered by a surgical team from a super-secret government agency, and her original surgical team is taken to another room and presented with forms to sign and threats of death and a couple of things worse than death should they ever divulge the events of that day or deviate from the cover story provided for them.

The new surgical team uses state-of-the-art technology to comprehensively scan her brain and download its contents to a baseball-sized computer. They remove her brain and replace it with the computer, making all the necessary connections to the spinal cord, optic nerves, etc. We could speculate about the technology involved, but all that's really relevant is that the computer is a digital computer that is equivalent in its computational model to a Turing machine.

Bridget recovers from surgery and returns home. She resumes her duties as wife and mother without incident, and is fully accepted by her husband, her 3-year-old daughter, her 5-year-old son, and her 15-year-old fraternal twin son and daughter. Her relationship with her parents continues as before. Bridget is considered by all to have fully recovered from her injury, and continues to live out her life for the rest of her days.

The computer emulates Bridget perfectly. Or, perhaps, the computer is Bridget. Not even the people who know her best can tell any difference. Bridget herself (or, technically, post-op Bridget) can't tell any difference. That's the Bridget scenario. Questions:

1. Ignoring the stated time-frame, is this scenario possible in principle, or is it impossible? Justify your answer.
2. If the scenario is possible, is it probable? How probable? Justify your answer.
3. If the scenario is highly probable, how likely is it to be achievable (technically) within the stated time-frame (i.e., within approximately 50 years)? Justify your answer.

By the way, I think it would be missing the point to pick this scenario apart on peripheral issues. The main issue is whether a digital computer with suitable interfaces could truly replace a human brain.
posted by Crabby Appleton at 3:06 PM on July 27, 2009


Sigh. Already found a mistake. I wrote "The computer emulates Bridget perfectly." Obviously, I didn't mean to say "perfectly". What I should have said was "well enough that no one can tell any difference".
posted by Crabby Appleton at 3:11 PM on July 27, 2009


Crabby --

1. Yes, it's possible in principle. The brain is made of matter, and matter interacts via relatively simple rules that can be emulated. If the scan can be destructive there is no technical reason we shouldn't be able to collect enough information about the original to do as perfect a copy or emulation as necessary to duplicate its functionality. I don't believe in any of the silly quantum speculations floating around, though I do think thermal noise might be an important part of the algorithm. Make sure the brainball has a good RNG.

2. It's not very probable, but mainly on peripheral issues. I think hooking up all the nerves back to the body would be a much larger technical issue than emulating the brain. Getting the brainball inside a human skull and powering it there could also be a bit dicey. But the slightly different scenario of emulating Bridget entirely in cyberspace may be practical. I actually wrote a series of stories about that.

3. This is one of the very specific things Ray Kurzweil talks about in The Singularity Is Near, and he lays down a pretty good argument that it will be practical (more in the sense of my #2 caveat) within 50 years, and that this is going to be our best near-term shot at "immortality."
posted by localroger at 3:46 PM on July 27, 2009


As for your sketch of how a computer might mimic a real person, AI researchers went down that path for a little while; after the concept of "thinking humanly" went away with the decline of behaviorism and a well-defined description of thought as a computational process, they tried working out how a computer might "act humanly." This was abandoned because, frankly, it's a lot of effort for not much reward. Rational systems actually have real applications, and don't have to worry about making cheap knock-offs of humanity.

Well, the thing is, while no one is actively working towards a goal of having a computer 'act humanly', once decision-making engines reach sufficient power they'll be able to 'act humanly' without much work. You have to understand what the changes in computing power mean for this kind of work, too. You would never be able to do something like today's google in 1984, even if you used the same algorithms. It's possible that the high-tech algorithms we have today could work to 'act human' if run on fast enough machines. But it's an open question as to whether or not computers will ever get that fast. CPU clock speed seems to have topped out, for example, although feature size is still getting smaller.

Also, the practical applications of a program that can understand natural language and respond seems pretty obvious. Take phone support, or any kind of customer interaction.

Basically I don't understand the difference between "decision making" and "acting human". All you're doing is deciding what actions a person would be likely to take, or what they would be likely to say given a certain input. What's the difference?
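
At its crudest, "what they would be likely to say given a certain input" is just next-word prediction from statistics. A toy sketch -- the corpus is made up, and real systems are enormously more sophisticated, but the principle is the same:

import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# count which word tends to follow which
model = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev].append(nxt)

def continue_text(word, length=5):
    # keep picking a plausible next word given the last one
    out = [word]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

print(continue_text("the"))   # e.g. "the cat sat on the mat"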
posted by delmoi at 3:54 PM on July 27, 2009


I'm not even talking about doing something as complicated as what Crabby Appleton is talking about. I'm just talking about a situation where, for example, you could call google with a specific question and google will be able to respond and interact with you in a way that's no different than interacting with another person, including emotional content if necessary. The computer you're talking to would be able to provide you with the same kind of conversation you would have if you called someone who was an expert in the subject.

The question of 'consciousness' is beside the point. In fact, it was never even brought up in the NYT article.
posted by delmoi at 4:18 PM on July 27, 2009


Crabby, I think your scenario is highly unrealistic.
I'm pretty sure her family would notice the much higher brow with the laser-cut grille that had to be implanted to accommodate the 120mm cooling fan in her forehead. That, plus the hot air whistling out of her ears and the new and annoying post-op habit of humming the Windows 2062 Personal Edition login melody every time she wakes up in the morning.
posted by Hairy Lobster at 4:32 PM on July 27, 2009


Basically I don't understand the difference between "decision making" and "acting human". All you're doing is deciding what actions a person would be likely to take, or what they would be likely to say given a certain input. What's the diffrence?

Humans don't act perfectly rationally. We've got a bunch of systems in our heads that are set up to make intuitive jumps without full knowledge of the situation at hand. We can guess, we can act on hunches and subtle clues we may not even realize we've picked up. Rational agents don't, and the reason you'll never get them to act humanly is that there is no point to it. We already have humans who can work on intuition. Rational agents are good for things where there are a ton of possibilities to consider and the rules for what to do get extremely complex but the input is simple and outcome predictable; it's not an accident that chess has had some of the best AI advances to date. But really, AI as it's currently studied is best for precisely the kind of tasks that you don't want to have people making guesses and taking neural shortcuts to solve.

Your example of a customer-service agent...I could see a company trying it, but honestly the rule set for anything other than a very basic set of interactions would be extraordinarily complex. I imagine you'd need average computing power to advance quite a bit further (orders of magnitude) before even attempting it would be worthwhile. And the situation is so dynamic that it's never going to be a good AI problem; operating in a real-time, unpredictable, dynamic environment with limited knowledge is as difficult as you can get for AI programming. Not impossible, but certainly not "thinking" in any meaningful way, and probably not ever going to be worth the cost to develop.
posted by graymouser at 4:39 PM on July 27, 2009


I have been greatly influenced by Erich Harth, which is ironic because he personally doesn't believe consciousness can be emulated. But he has pioneered in finding algorithms capable of being implemented by the kind of low-level neural circuitry we see in the thalamus, which might be implicated in making the same kinds of mistakes we regularly make in pattern recognition, not because they are programmed to act humanly but because similar behavior would emerge.

Harth makes the argument that consciousness is an optimizer; that it consists of applying past patterns of perception, action, and result to current perception, and attempting to maximize our future position (based generally on feedback mechanisms, some hardcoded and some programmed as we go, called "feelings"). "Thought" is basically the experimental application of solutions to situations, which, if we determine that a particular action will enhance our position, will be coded out to ever-finer expressions of muscular control and eventually expressed as motion in the real world.

I think this is a process which takes place identically in organisms both simple and complex, and in our case produces such dramatic results mainly because we have not just big brains, but brains capable of a couple more levels of abstraction in this process. Given the way brains grow, that could have been the result of a point mutation.

The fantastic capability of the brain compared to computers is due to its interconnectivity; a neuron is neither very smart nor very fast, but in principle all 10,000,000,000 of them stand ready to react to input at the same time, and through relatively slow processes of process propagation and synapse growth all of the possible connections needed to detect patterns and express results are always connected, always available, 100 percent of the time. The number of such interconnections in the human brain is in the trillions. Every synapse represents both stored information and distributed processing power.

We have a pretty good idea why neurons fire. It gets much more murky when we ask what rules they use to lay down new fibre and build new synapses, which are the activities that form memory, skill, and personality. But these are not in principle insoluble problems. It might require fully emulating neurons at the molecular level to work out the rules, but once that's done I'm very confident it will be possible to build a much simpler model that, when run, acts indistinguishably from an actual living thing.
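
For a flavour of what a "much simpler model" even means here -- with every number below an arbitrary toy value, nothing like real cortical parameters -- a leaky integrate-and-fire network can be sketched in a few lines: each unit accumulates charge, leaks a little, and fires when it crosses a threshold, passing charge to whatever it's connected to:

import random

N = 100                     # toy network, not billions of neurons
threshold, leak = 0.8, 0.9  # arbitrary illustrative values

# random sparse connectivity: synapse weights between pairs of neurons
weights = [[random.uniform(0, 0.1) if random.random() < 0.1 else 0.0
            for _ in range(N)] for _ in range(N)]
potential = [0.0] * N

for step in range(50):
    fired = [i for i, v in enumerate(potential) if v >= threshold]
    new_potential = []
    for i in range(N):
        base = 0.0 if i in fired else potential[i] * leak   # reset if it fired, else leak
        spikes = sum(weights[j][i] for j in fired)          # charge from incoming spikes
        noise = random.uniform(0, 0.2)                      # background drive/noise
        new_potential.append(base + spikes + noise)
    potential = new_potential
    print(step, "fired:", len(fired))

The easy part is simulating units like these; the hard part, as noted above, is knowing the rules by which the connections themselves grow and change.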
posted by localroger at 5:15 PM on July 27, 2009


Humans don't act perfectly rationally. We've got a bunch of systems in our heads that are set up to make intuitive jumps without full knowledge of the situation at hand. We can guess, we can act on hunches and subtle clues we may not even realize we've picked up.
Probabilistic algorithms are not "perfectly rational" either. That's the whole point! To be able to take shortcuts that are likely to be good guesses without trying to calculate a "perfectly correct" solution.
Rational agents don't
Of course they can! I don't understand why this is so hard for people to understand.
Your example of a customer-service agent...I could see a company trying it, but honestly the rule set for anything other than a very basic set of interactions would be extraordinarily complex. I imagine you'd need average computing power to advance quite a bit further (orders of magnitude) before even attempting it would be worthwhile.
How complex? Do you have a number? Exactly how much computing power do you think it would take, and why? If we assume Moore's law holds out, when exactly will we have enough?

The problem with your statement is that you're making quantitative arguments with qualitative language ("extraordinarily" complex), as opposed to saying it would take O(n) time with a dataset of a trillion or so entries. You say we need "orders of magnitude" more computing power, but computing power goes up by an order of magnitude (at least in base two) every 18 months, which actually means it's not that far out of reach at all.

I'm not saying they're easy problems at all, but to say that we have no idea is just not true. Lots of people have lots of ideas, and they are working on them all the time.
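
One small, concrete example of a shortcut that's "likely to be a good guess" (a standard textbook toy, nothing specific to AI research): a Monte Carlo estimate trades the guaranteed-exact answer for a fast, probably-close one:

import random

def estimate_pi(samples=100_000):
    # probabilistic shortcut: sample random points instead of computing exactly
    inside = sum(1 for _ in range(samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    return 4 * inside / samples

print(estimate_pi())   # close to 3.14159, with no guarantee of exactness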
posted by delmoi at 9:25 PM on July 27, 2009



The problem with your statement is that you're making quantitative arguments with qualitative language "extraordinarily" complex, as opposed to saying it would take O(n) time with a dataset of a trillion or so entries. You say we need "orders of magnitude" more computing power, but computing power goes up by an order of magnitude (at least in base two) every 18 months, which actually means it it's not that far out of reach at all.


I'm not saying that this is the case (I'm certainly not into AI research at all so I don't know), but I think there is a prevailing suspicion among non-AI people (even comp sci ones) that some of these problems are NP-hard and concern an effectively infinite search space. I mean, really, how long have they been working on mastering Go?
posted by juv3nal at 11:04 PM on July 27, 2009


but I think there is the prevailing suspicion by non-AI people (even comp sci ones) that some of these problems are NP-hard and concern an effectively infinite search space.

Problems with an infinite search space are not in NP. All problems in NP (either NP-Hard or NP-Complete) can be solved on an ordinary computer in exponential time, and solved on a nondeterministic computer in polynomial time (NP stands for Nondeterministic Polynomial). Problems that can be solved, but not guaranteed to be solved in any finite amount of time, are called Turing acceptable, although apparently Wikipedia calls it recursively enumerable. They have a chart of the hierarchy of decision problems.

Now, I'm not a physicist, but I think quantum computing has been ruled out as something that takes place in the brain. Given that, to suggest that the brain is capable of doing something that computers can't is to suggest that the brain is made out of something more than ordinary matter, that it has supernatural properties. Or you may just be saying that you think it's a quantum computer, which makes it capable of solving problems in BQP. I suppose it's possible that there is some other material, or some other computational system that could be tapped into, but that's a much more fantastical claim than that AIs could one day be as smart as people.

Also, I still would like to know from people who think AIs could not be as smart as people: what exactly is it that you think humans will always be able to do better than computers? Give me an example of a test you could do to determine if either a person or computer was capable of doing this. Would a Turing test suffice? Or what?
posted by delmoi at 3:14 AM on July 28, 2009


Problems that can be solved, but not guaranteed to be solved in any finite amount of time are called Turing Acceptable Although apparently wikipedia calls it Recursively Enumerable.

Actually, with an infinite amount of time, you should be able to obtain far more than recursively enumerable sets (or languages). Depending on the exact definition of infinite time, you would get at least to the whole third level of the arithmetical hierarchy (i.e., the union of Π^0_2 and Σ^0_2), not only the mere Σ^0_1 of recursively enumerable sets (which, by the way and to my knowledge, are only rarely called Turing acceptable). Moreover, a problem could be NP-hard without being in NP, as NP-hardness is only a lower bound. In fact, only NP-complete problems are both NP-hard and in NP.

Of course, the question is moot: Even if the search space is infinite, the human brain most probably employs some heuristic, which - in principle - might be implemented on a computer.
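
To make the heuristic point concrete with a standard toy example (the cities below are invented): a nearest-neighbour tour for the travelling-salesman problem runs in polynomial time and usually gives a reasonable route, with no guarantee of the optimal one, even though the exact problem is NP-hard:

import math

cities = {"A": (0, 0), "B": (2, 1), "C": (5, 0), "D": (1, 4), "E": (4, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_neighbour_tour(start="A"):
    # greedy heuristic: always visit the closest unvisited city next
    tour, remaining = [start], set(cities) - {start}
    while remaining:
        nxt = min(remaining, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour

print(nearest_neighbour_tour())   # e.g. ['A', 'B', 'E', 'C', 'D']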

On the other hand, purely from what I have seen in my almost ten years of Computer Science, I have no reason to expect any breakthrough. But even if one should build a "human level AI", I would not be surprised if the whole process should prove not to scale, and thus be unsuitable for the creation of a "super intelligence".
posted by erdferkel at 6:43 AM on July 28, 2009


erdferkel: On the other hand, purely from what I have seen in my almost ten years of Computer Science, I have no reason to expect any breakthrough.

Based on my thirty years working with computers, I have seen a succession of seemingly impossible problems solved, streamlined, miniaturized, commoditized, and incorporated into things like doorbells. Right now we are only scratching the surface of what might be possible with massively multiparallel designs, and that is the kind of tech that leads exactly in the direction of emulating a brain.

But even if one should build a "human level AI", I would not be surprised if the whole process should prove not to scale, and thus be unsuitable for the creation of a "super intelligence".

With equal justification I would not be surprised to find it trivially scalable. After all, humans are quite different from other animals yet there is no real difference between how our brains and animal brains work; all the neural processes and macroscopic structures are very similar. That tendency of ours to rearrange the world is probably made possible by a few areas in the prefrontal cortex which are pre-wired to do what all brains do but at a bit higher level of abstraction. With a working model on hand, adding a few more layers should be a relatively simple and obvious experiment.
posted by localroger at 7:49 AM on July 28, 2009


A friend of mine reminded me of this memristor breakthrough a couple months ago and I thought it might be relevant to our discussion here. I thought it was interesting from the perspective of how massive advances in (and miniaturization of) computing power are going to have some interesting influences on our progress in scientific discovery (for example, sequencing a genome that used to take years or months to process now takes weeks or days)...with the development of memristor computers, that will accelerate even faster.
posted by jmnugent at 9:06 AM on July 29, 2009




I don't think the likely scenario is that robots will become so smart they take over humanity, but rather that humanity will slowly incorporate elements of robotics into their "selves" until we all become cyborgs, without any residual humans. One doomsday sci-fi fantasy might be that these evolved cyborgs still need human parts and grow intact humans as though they were farm animals to harvest those parts that can't be adequately manufactured. Given the relative inferiority of these purely bio-humans, they are treated as cattle and are doomed to short, brutal lives. Until one particularly intrepid human...
posted by Mental Wimp at 3:30 PM on July 29, 2009


more on memristors :P

cheers!
posted by kliuless at 7:43 AM on July 30, 2009



