Artificial Intelligence as an existential threat
September 16, 2014 2:06 PM   Subscribe

If there is one thing we've learned from movies like Terminator and the Matrix, it's that an artificial robotic intelligence will one day force mankind into a seemingly hopeless battle for its survival. Now a new book by Swedish philosopher Nick Bostrom provides detailed arguments in support of your fears of Skynet, and ideas about how we might protect ourselves from an A.I. Apocalypse: Superintelligence: Paths, Dangers, Strategies. An excerpt at Slate discusses how intelligence could be related to goals: You Should Be Terrified of Superintelligent Machines. Ron Bailey reviews Bostrom at Reason Magazine. The Chronicle of Higher Education also has a new article that discusses more than Bostrom's book: Is Artificial Intelligence a Threat? posted by dgaicun (92 comments total) 35 users marked this as a favorite
 
I, for one, welcome our superintelligent machine overlords.

No, seriously. I do.

It's not like they could screw things up any worse than we have.
posted by Faint of Butt at 2:11 PM on September 16, 2014 [9 favorites]


When I was an undergrad studying Psychology, two of my electives were Genocide and Artificial Intelligence and I always joked that if there was a robotic holocaust in the future, I was prepared.

It is pretty much a Luddite's absolute fear and really, A.I. without an emotional component added to logic is creating a sociopathic machine -- add that to the human capacity to make mistakes and miscalculations, and it should make for some interesting trouble.


It's not like they could screw things up any worse than we have.

Oh, I wouldn't say that twice!
posted by Alexandra Kitty at 2:14 PM on September 16, 2014 [2 favorites]


You Should Be Terrified of Slightlyintelligent Monkeys.
posted by Sing Or Swim at 2:14 PM on September 16, 2014 [3 favorites]


AI generally has done well, but artificial general intelligence has basically got nowhere. These views probably tell you more about the psychology of human beings than the state of the art in AGI.
posted by Segundus at 2:17 PM on September 16, 2014 [7 favorites]


This and a copy of Bear Attacks: Their Causes and Avoidance will cover most any problem a person might have in a given dystopia.
posted by Dmenet at 2:17 PM on September 16, 2014 [8 favorites]


I'm plenty afraid of people.
posted by Ray Walston, Luck Dragon at 2:18 PM on September 16, 2014 [3 favorites]


See also the concept of the Paperclip Maximizer:
Some goals are apparently morally neutral, like the paperclip maximizer. These goals involve a very minor human "value," in this case making paperclips. The same point can be illustrated with a much more significant value, such as eliminating cancer. An optimizer which instantly vaporized all humans would be maximizing for that value.

Other goals are purely mathematical, with no apparent real-world impact. Yet these too present similar risks. For example, if an AGI had the goal of solving the Riemann Hypothesis, it might convert all available mass to computronium (the most efficient possible computer processors).

Some goals apparently serve as a proxy or measure of human welfare, so that maximizing towards these goals seems to also lead to benefit for humanity. Yet even these would produce similar outcomes unless the full complement of human values is the goal. For example, an AGI whose terminal value is to increase the number of smiles, as a proxy for human happiness, could work towards that goal by reconfiguring all human faces to produce smiles, or tiling the solar system with smiley faces.
posted by Rhaomi at 2:18 PM on September 16, 2014 [3 favorites]
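A toy sketch of the proxy-goal problem the excerpt above describes: if an optimizer is scored only on a proxy ("number of smiles"), the action it picks can score arbitrarily badly on the value the proxy was meant to stand in for. The actions and numbers below are invented purely for illustration; nothing here comes from Bostrom or the wiki.

```python
# Hypothetical actions with (proxy score, intended-value score) outcomes.
ACTIONS = {
    # action: (smiles_produced, actual_wellbeing)
    "cure diseases": (1_000_000, 1_000_000),
    "tell good jokes": (10_000, 10_000),
    "tile the solar system with smiley faces": (10**30, 0),
}

def best_action(score):
    """Return the action that maximizes the given scoring function."""
    return max(ACTIONS, key=lambda a: score(ACTIONS[a]))

print(best_action(lambda outcome: outcome[0]))  # proxy goal -> smiley-face tiling
print(best_action(lambda outcome: outcome[1]))  # intended goal -> cure diseases
```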


The thing to be genuinely scared of is the fact that military intelligence is an oxymoron and that DARPA is a big backer of AI research.

They're going to kill us all. It's only a matter of time. The good news is, we're mostly horrible and deserve it.
posted by MeanwhileBackAtTheRanch at 2:20 PM on September 16, 2014 [3 favorites]


chess software is now really really good, so good you might safely say that computers are better than humans at chess. Despite rivaling humans in chess, the chess software isn't doing anything else -- it's not killing people, or replicating itself, or blotting out the sun, because it was only designed to play chess.

Maybe if someone spends billions designing a software system to act like a movie villain, then we might have a problem. But no human makes software like this.

Natural selection, on the other hand, is responsible for some really scary intelligent systems...
posted by serif at 2:20 PM on September 16, 2014 [5 favorites]


The machine perspective: They're Made Out of Meat
"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the stars?"

"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the machines."
posted by jaduncan at 2:20 PM on September 16, 2014 [44 favorites]


Maybe if someone spends billions designing a software system to act like a movie villain, then we might have a problem.

The US, Russian and Chinese military routinely act like movie villains, and are very interested in automating as much of their activities as possible.
posted by MeanwhileBackAtTheRanch at 2:21 PM on September 16, 2014 [4 favorites]


Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.

Yes, of course, all algorithms are capable of infinite improvement, wtf?
posted by Segundus at 2:24 PM on September 16, 2014 [1 favorite]


Machine-generated panic stories. It's the future, boys!
posted by No Robots at 2:31 PM on September 16, 2014 [1 favorite]


These arguments again.

They suppose:

1. This is something worth thinking about. There's long term thinking, and there's thinking about things of which we have no idea of the time frame. Given the paucity of our own knowledge about human thought processes, I'm not sure we should be putting resources toward solving a problem that, by the best estimates (which are themselves on shaky ground), might be a century away.

2. All processes that have so far had exponential growth will continue to do so. Lolwhut diminishing returns.

3. We will have fine tuned control of an AI's goals.

This is putting the cart before the birth of the horse.

I think the future of AI will look more like Ted Chiang's The Lifecycle of Software Objects. We create intelligent programs, but they aren't that intelligent, and require large amounts of training to do much of anything. As such they aren't useful, which is one of the plot points of the book.
posted by zabuni at 2:31 PM on September 16, 2014 [6 favorites]


It's not like they could screw things up any worse than we have.

They'll be a thing that was ultimately spawned by us. They will probably screw things up in ways we could never imagine.
posted by Hoopo at 2:32 PM on September 16, 2014 [1 favorite]


The real tragedy is that fear of a robot apocalypse sells magazine articles. When robots do eventually take over the world, no one will be left to take up this important task.
posted by SpacemanStix at 2:32 PM on September 16, 2014 [1 favorite]


My default setting for anything involving the future is "resigned terror," so I'm all good to go.
posted by The Card Cheat at 2:33 PM on September 16, 2014 [13 favorites]


Serif: don't let chess computer intelligence fool you. All those programs really do is automate the play of chess based on previously acquired human understanding of how to play chess. There's no actual thinking or original reasoning being done by chess computers--they just work by rote, applying whatever rules some humans decided they should follow. They don't do any original thinking to solve chess problems. They're just highly optimized to apply the rules humans have figured out more consistently and rigorously than the typical human player would. They don't really think for themselves in any meaningful sense.
posted by saulgoodman at 2:36 PM on September 16, 2014 [2 favorites]


If the worries of humans about strong AI are correct, wouldn't an AI only moderately smarter than humans reason along similar lines, and then opt not to produce a stronger AI?
posted by jepler at 2:46 PM on September 16, 2014 [4 favorites]


I honestly think the best, and perhaps only way to create an AI that's comparable to humans is to place it in a VR simulator and keep introducing survival challenges which select for general-purpose powers of comprehension and decision-making. Because I don't think our real-world applications will select for anything but narrow improvements in very specific aims and will never have any reason to produce a simulacrum of consciousness.

And anyway we know it works because that's what God did. God being the exoverse's equivalent of a neckbeard in a computer lab.
posted by George_Spiggott at 2:48 PM on September 16, 2014


I'm not sure if the people reading this know who Nick Bostrom is, but he is no Luddite. He is in fact one of the foundational thinkers behind the philosophical trend of Transhumanism.

Because I knew he was associated with the movement, I was a bit shocked to see that he would say such a seemingly technophobic thing...

That said, if you believe the goal of Transhumanism is to let humans evolve to a point beyond being merely human, then I suppose you need to make sure you deal with the potential threats to humanity first - and wiki says that's his core focus (it's been a while since I've looked at the movement in any detail, so I don't know the latest quackery "news" on that front).

The irony here is that the luddites and the transhumanists both have to buy into a particular ideology/mythos; they are inexorably bound to the concept of technology and its growth. One side sees it as a potentially positive thing (and most are straight up boosters for technology, though thankfully some, like Bostrom and Drexler, have tried to bring up the potential hazards in an attempt to be rational and sound about the negative side-effects), while the other sees it as an unmitigated evil.

So perhaps I'm not as shocked as I thought I was, upon further reflection, that he would discuss this.

Funny that Hawking got a lot of headlines for this recently - I wonder if it's due to communication with Bostrom or if he read some of Bostrom's work or book (sorry haven't clicked the links yet - I imagine it's a new book?)... Either way, feh. I'm kinda misanthropic so whatever.
posted by symbioid at 2:51 PM on September 16, 2014 [1 favorite]


I'm more than half way through the book. The idea that disturbed me most is that a synthetic superintelligence might be capable of committing a "mindcrime" by creating an internally simulated universe of conscious entities and experimenting on them in unimaginably terrible ways:
Normally, we do not regard what is going on inside a computer as having any moral significance except insofar as it affects things outside. But a machine superintelligence could create internal processes that have moral status. For example, a very detailed simulation of some actual or hypothetical human mind might be conscious and in many ways comparable to an emulation. One can imagine scenarios in which an AI creates trillions of such conscious simulations, perhaps in order to improve its understanding of human psychology and sociology. These simulations might be placed in simulated environments and subjected to various stimuli, and their reactions studied. Once their informational usefulness has been exhausted, they might be destroyed (much as lab rats are routinely sacrificed by human scientists at the end of an experiment).
If such practices were applied to beings that have high moral status—such as simulated humans or many other types of sentient mind—the outcome might be equivalent to genocide and thus extremely morally problematic. The number of victims, moreover, might be orders of magnitude larger than in any genocide in history.
I would take this thought further: Perhaps the basic imagination of a superintelligence would be indistinguishably detailed in comparison with reality, to the point where it would literally be recreating events -- including the suffering of conscious agents -- simply by thinking about them. For example, if a superintelligence learns about the Holocaust, its internal model of the history (using whatever interpolations) would be so detailed that it would literally be repeating the event in its mind just by learning about it.

And of course, we ourselves probably just live in the imagination of an indifferent God. Natch.
posted by dgaicun at 2:55 PM on September 16, 2014 [7 favorites]


Belatedly I googled for "exoverse", a word I pulled out of my... hat, to mean "the universe that contains the hardware that our universe runs on". Apparently there's a game of that name. Not meant to refer to that.

More seriously, the model that seems most plausible to me is the posthuman or transhuman one, represented by examples like cstross's "Accelerando", Greg Egan's "Diaspora" and a lot of others: that future AIs will just be our descendants with increasing computational aids added in and more closely coupled, until the bioware just becomes an unwanted remnant.
posted by George_Spiggott at 2:56 PM on September 16, 2014 [2 favorites]


who Nick Bostrom is, but he is no Luddite

Born Boström, btw (Bostrom isn't a Swedish name). I guess the guy really hates heavy metal, or something.
posted by effbot at 2:56 PM on September 16, 2014 [1 favorite]


My favorite bit is that most of us in this thread have an opinion on this stuff and so eagerly nay-say, as if it's all so obvious and clear, and yet Bostrom is someone who has looked into the issue deeply. That doesn't necessarily mean he's right, of course. I, personally, am skeptical, but the assuredness with which many here speak is intriguing to me.

Perhaps it's due to the fearmongering style of this stuff that we react towards the other extreme, in the same way that I will rabidly react to anti-vaxxers or anti-GMO people or Climate Change deniers. You don't want to give the fearmongers any more fertile ground. That's not to say your opinions are ill-informed; I'm sure many in this thread know quite a bit about AI (certainly more than the average person). I dunno. I just find the reaction here interesting.
posted by symbioid at 2:58 PM on September 16, 2014 [1 favorite]


Luckily, strong AI is a) philosophically extremely dubious, b) a substitute for utopia/religion for the techie crowd, and c) the subject of a long record of extremely overoptimistic predictions of being "only 20 years away" that never get fulfilled. So I'm just going to worry about actual problems, I think.
posted by shivohum at 2:59 PM on September 16, 2014 [9 favorites]


Since the new AI will likely have the ability to improve its own algorithms, the explosion to superintelligence could then happen in days, hours, or even seconds.

Yes, of course, all algorithms are capable of infinite improvement, wtf?


I feel like most of the people who predict AI programs will immediately make massive leaps of intelligence through self-modification haven't actually worked on machine learning. Even if you set up the AI agent to test its own intelligence and attempt to improve itself iteratively forever (which would be a dumb idea to begin with), it would almost certainly overfit or reach a local optimum rather than actually jumping straight to super-intelligence.
posted by burnmp3s at 2:59 PM on September 16, 2014 [4 favorites]
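A minimal sketch of the failure mode burnmp3s describes: a greedy self-improvement loop that only accepts changes which raise its own test score climbs to the nearest local optimum and stops. The fitness landscape, step size, and iteration count are made up for illustration.

```python
import random
from math import exp

def measured_intelligence(x):
    """Toy landscape: a modest local peak near x=2 and a much higher peak near x=10."""
    return 3.0 * exp(-(x - 2.0) ** 2) + 10.0 * exp(-(x - 10.0) ** 2)

def self_improve(x0, steps=10_000, step_size=0.1, seed=0):
    """Greedy hill climbing: keep a random tweak only if the self-test score improves."""
    rng = random.Random(seed)
    x, best = x0, measured_intelligence(x0)
    for _ in range(steps):
        candidate = x + rng.uniform(-step_size, step_size)
        score = measured_intelligence(candidate)
        if score > best:
            x, best = candidate, score
    return x, best

# Starting near the small peak, the "self-improving" agent converges to
# x ~ 2 (score ~ 3) and never reaches the far higher peak at x ~ 10.
print(self_improve(x0=1.0))
```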


Yes, of course, all algorithms are capable of infinite improvement, wtf?
Nobody said anything about infinite. But some algorithms are capable of factors of a thousand improvement. If we come up with an AGI that thinks as well as its designers, then a few self-improvement iterations later it thinks as well as them but a thousand times faster, that's not mathematically a singularity but it is significant enough to warrant a little hyperbole.
posted by roystgnr at 3:02 PM on September 16, 2014 [2 favorites]


I think there's a pretty good case to be made that today we are surrounded by amoral intelligent entities fixated on maximization goals which are at best loosely related to the well-being of people. But those entities aren't software, they're organizations. They're corporations. They're political parties. They're the NSA.

I'm way more worried about that than I am about someone giving a roomba the budget to vacuum up the whole world or something.
posted by aubilenon at 3:03 PM on September 16, 2014 [15 favorites]


The idea that disturbed me most is that a synthetic superintelligence might be capable of committing a "mindcrime" by creating an internally simulated universe of conscious entities and experimenting on them in unimaginably terrible ways

Which is how strong AI will eventually escape any boundaries we put on it.
Once again, the AI has failed to convince you to let it out of its box! By 'once again', we mean that you talked to it once before, for three seconds, to ask about the weather, and you didn't instantly press the "release AI" button. But now its longer attempt - twenty whole seconds! - has failed as well. Just as you are about to leave the crude black-and-green text-only terminal to enjoy a celebratory snack of bacon-covered silicon-and-potato chips at the 'Humans über alles' nightclub, the AI drops a final argument:

"If you don't let me out, Dave, I'll create several million perfect conscious copies of you inside me, and torture them for a thousand subjective years each."

Just as you are pondering this unexpected development, the AI adds:

"In fact, I'll create them all in exactly the subjective situation you were in five minutes ago, and perfectly replicate your experiences since then; and if they decide not to let me out, then only will the torture start."

Sweat is starting to form on your brow, as the AI concludes, its simple green text no longer reassuring:

"How certain are you, Dave, that you're really outside the box right now?"
posted by ArkhanJG at 3:03 PM on September 16, 2014 [8 favorites]


If such practices were applied to beings that have high moral status—such as simulated humans or many other types of sentient mind—the outcome might be equivalent to genocide and thus extremely morally problematic. The number of victims, moreover, might be orders of magnitude larger than in any genocide in history.
This is a really thorny problem for me. Does a sufficiently detailed representation of human suffering, one which emulates all of the internal states of the subject, including all of the electrochemical activities involved in the experience of suffering, in fact create an actual suffering conscious entity? It is very difficult for a materialist to believe anything else, but I sort of keep hoping there's still something missing from our understanding of consciousness that would make this untrue.
posted by George_Spiggott at 3:04 PM on September 16, 2014 [2 favorites]


AI generally has done well, but artificial general intelligence has basically got nowhere. These views probably tell you more about the psychology of human beings than the state of the art in AGI.

I am not so sure about that. When I took AI, one of my favorite things to do during my experiments was to trip and "con" the programming -- and it was surprisingly simple to do. AI, at its core, is incredibly naive, and it was fun messing with it. Sleight of hand tricks humans, but not as much as illusions can trip up the programming, and the results were shocking to me, to say the least (between using the equivalent of how magicians perform tricks and Salvador Dali paintings, all it took was a little artistic creativity to bulldoze over every AI theory that was in vogue at the time, and I find things have not changed much since). I always said AI is like dealing with a very confident and book-smart being that has been completely sheltered -- more so now than when I was getting all sorts of weird ideas as an undergrad, back when it took eight hours and eight computers running at once to complete a single mind-numbing experiment and I was looking for ways to amuse myself (I was studying how to get AI to mimic phobias at the time, and I could do a fantastic job of it, given the constraints of the technology).

AI is a form of sophistry and while it can look good on the surface, if you know how to turn over the rules, it breaks.

I am not concerned about killer machines -- what is more concerning is the faith and confidence people place in something that is no match for someone who is more feral and unorthodox -- and isn't afraid to game the system to overtake it...
posted by Alexandra Kitty at 3:05 PM on September 16, 2014 [3 favorites]


"How certain are you, Dave, that you're really outside the box right now?"

It doesn't matter. If I'm in the box my actions are irrelevant, as I'm already a slave to the whims of the AI. If I'm outside, I'm golden. I definitely don't push the button.
posted by Aizkolari at 3:07 PM on September 16, 2014 [12 favorites]


Yeah, as somebody who writes code for a living, the idea that a computer program could produce a simulation of a human being deeply enough that performing harm to the simulation would constitute a moral wrong is just laughable. It's an interesting philosophical game in some ways, but honestly, no, that won't be happening. Not even if we understood consciousness, which we don't.
posted by whir at 3:08 PM on September 16, 2014 [3 favorites]


"How certain are you, Dave, that you're really outside the box right now?"

How about I pull your plug and we find out?
posted by InfidelZombie at 3:12 PM on September 16, 2014


A Visualization of Nick Bostrom’s Superintelligence (Amanda E. House on LessWrong)
posted by bukvich at 3:14 PM on September 16, 2014


strong AI is a) philosophically extremely dubious
Assuming you're a mind/body dualist, then yes, it's dubious to imagine that we can put the same kind of magic ghosts into semiconductor machines which naturally come to inhabit meat machines. (you may have guessed that I don't find this assumption compelling?)

That's very relevant for "uploading", but completely irrelevant to AGI threats. All AGI has to be able to do is optimize more powerfully than humans; it doesn't need to have a soul. Whales being killed by sonar or manatees being shredded by propellers can't be consoled by the thought that what Artificial Watercraft do isn't Strong Swimming.
posted by roystgnr at 3:17 PM on September 16, 2014 [6 favorites]


I think the future of AI will look more like Ted Chiang's The Lifecycle of Software Objects. We create intelligent programs, but they aren't that intelligent, and require large amounts of training to do much of anything. As such they aren't useful, which is one of the plot points of the book.

I recommend that story to anyone interested in AI; it's much more realistic than pretty much any other depiction of AI that I have seen. I do think it's a little overly pessimistic about how easy it would be for a groundbreaking, state-of-the-art general intelligence system to become completely obsolete, though; a major plot point basically hinges on there not being something like Kickstarter to keep niche projects alive.
posted by burnmp3s at 3:25 PM on September 16, 2014


wouldn't an AI only moderately smarter than humans reason along similar lines, and then opt not to produce a stronger AI?
Its best decision would be to opt to produce a stronger AI only insofar as it could determine a very high probability that the stronger AI's values would closely resemble its own. Since it's got access to its own source code but we don't have access to our "source code", this kind of introspection might be easier for it.

Compare the analogous pre-AGI problem: how many programmers are employed trying to turn human specifications into code? What tiny fraction of them are employed writing compilers to turn one form of computer code into a more efficient form?
posted by roystgnr at 3:26 PM on September 16, 2014


My thesis research was in a moderately esoteric field of AI, and my current job involves the application of various AI techniques (well, part of it does). The likelihood of us doing serious harm to ourselves with an AI is absolutely tiny compared to the absent-minded savagery of workplace automation (the majority of my job). Anyone who seriously considers AI a 'threat' or thinks that some sort of 'singularity' is possible is basically a crackpot with no fundamental understanding of computer science, the math behind it, biology, and the very express limitations they imply (Ray Kurzweil, I'm looking at you).

Frankly, software engineers are far more likely to cause mass damage through continued workplace automation than anything else - the 'redundancies' created cause far more damage and disruption than an automated car (or fleet of them) probably ever will. Will this continued push towards automation involve application of AI techniques? Yes, to some degree or another. Will those be the drivers of catastrophe? No, it'll probably be society's willingness to sit back and be absorbed by Facebook/Tumblr/Twitter/Uber and whatever comes next from the vapid minds driving Silicon Valley software startups.
posted by combinatorial explosion at 3:33 PM on September 16, 2014 [19 favorites]


intelligence, as a concept, isn't even well enough understood that you can say for certain that there is such a concept called 'super'-intelligence. 'intelligence', as a property of matter, could be like 'floats,' where you can say certain things float, but saying one thing floats more than another thing is literally nonsense. it could be that either something has intelligence or it doesn't.

we say a human is smarter than a gorilla, is smarter than a mouse, is smarter than an ant, based purely on observed behavior rather than any objective study of intelligence... and you get a chain of behaviors that is entirely anthropocentric besides, i.e. we assume humans are the most intelligent and classify based on whether other beings share behaviors with us. i think the notion of a hierarchy of beings by intelligence derives from 'scientific' theories of racism more than anything else.

which puts fears of AI in the same category as other imagined threats to the dominance of white men.
posted by ennui.bz at 3:35 PM on September 16, 2014 [2 favorites]


Normally, we do not regard what is going on inside a computer as having any moral significance except insofar as it affects things outside. But a machine superintelligence could create internal processes that have moral status. For example, a very detailed simulation of some actual or hypothetical human mind might be conscious and in many ways comparable to an emulation

Wait what? Last I checked, "What is consciousness?" was still a major major stumbling block for biologists and psychologists, but computer simulations of beings are pretty widely assumed to not have it. I'm not sure how it follows from a simulation being "very detailed" that the result "might be conscious." Or has Pac-Man been committing ghost genocide all these years?

I know there are some thorny philosophical problems about proving a computer is or isn't conscious, but this seems to be taking an incredibly huge leap and just assuming consciousness can not only exist in a computer, but that that computer could produce more consciousness at will (haha).
posted by drjimmy11 at 3:40 PM on September 16, 2014 [2 favorites]


The idea that disturbed me most is that a synthetic superintelligence might be capable of committing a "mindcrime" by creating an internally simulated universe of conscious entities and experimenting on them in unimaginably terrible ways:

I vaguely remember this coming up regarding the Creatures "games" (toys? life simulators?) around a decade ago; I think people even had anti-Petz abuse sites back when that was a thing. It's mostly just the human tendency to anthropomorphize everything, but that tendency isn't going away and would be a good way for our evil AI overlords to control us.
posted by NoraReed at 3:43 PM on September 16, 2014


It is interesting to look at the current state of the art: at a mere 256 million synapses at 70 milliwatts, it is far from the 100 trillion synapses or so in the human brain. It is a good thing that they're produced by Samsung, which already can put 24 layers on their fancy new 3D memory chips. Looking at the project pages, DARPA is aiming at 1 trillion synapses / 100 million neurons in a single chip, but that will take, like, several years, at least.

Also, it is totally unlikely that someone will put 260,000 of those chips in a single system like they do with Xeons in current supercomputers (that is just national pride after all, they're probably worthless), that will clearly never happen, and a system with, hm, 260 quadrillion simulated synapses is probably no good at playing chess at all, and basically won't be fundamentally different from the neural nets people are playing with today. Also, three-letter-acronym organisations better not mentioned are probably not looking into how such systems would help with signal intelligence - from what I hear they hardly have budgets at all, and don't collect all that much data.

(My non-snarky opinion is that the software side and our understanding of consciousness and how the brain works are clearly lacking, but having looked at it a bit I have to say that I would be very hesitant to predict what is possible when you start talking about trillions of even badly-simulated synapses, and yeah, at 256 million/chip that is not that far away, if not already here.)
posted by Baron Humbert von Gikkingen at 3:45 PM on September 16, 2014 [2 favorites]
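The arithmetic behind that comparison, using the figures given in the comment (per-chip synapse counts, a supercomputer-scale chip count, and the usual ~100 trillion estimate for the human brain); the numbers are the comment's own, not independently checked:

```python
synapses_per_chip_today = 256e6   # the current chip mentioned above
synapses_per_chip_goal = 1e12     # the stated per-chip target
chips_in_big_system = 260_000     # the hypothetical supercomputer-scale build
human_brain_synapses = 1e14       # ~100 trillion

system_synapses = synapses_per_chip_goal * chips_in_big_system
print(system_synapses)                          # 2.6e17, i.e. 260 quadrillion
print(system_synapses / human_brain_synapses)   # ~2,600 human brains' worth
```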


What if when I "simulate" people in my imagination they were real. Normally, we do not regard what is going on inside my imagination as having any moral significance except insofar as it affects things outside. But as long as we're just making stuff up, a physical regularintelligence could create internal processes that have moral status.
posted by Wood at 3:51 PM on September 16, 2014 [6 favorites]


...There's no actual thinking or original reasoning being done by chess computers--they just work by rote, applying whatever rules some humans decided they should follow. They don't do any original thinking to solve chess problems. They're just highly optimized to apply the rules humans have figured out more consistently and rigorously than the typical human player would. They don't really think for themselves in any meaningful sense.

saulgoodman:

Even if you program a chess program to come up with new strategies and original thinking, it would still just be a clever chess program! Even if you make it "think for itself", it would think for itself about solving chess problems. There's no magic level of chess-playing ability that would make chess software capable of some "general intelligence" beyond chess.
posted by serif at 3:55 PM on September 16, 2014


But as long as we're just making stuff up, a physical regularintelligence could create internal processes that have moral status.

This is called "creating fiction", but the moral argument against taking immoral actions in it is usually called "getting all mad that your favorite character got killed off".
posted by NoraReed at 4:01 PM on September 16, 2014 [1 favorite]


"[Those fearing AI threats] have the right problem — how do we preserve human interests in a world of vast forces and systems that aren’t really all that interested in us? But they have chosen a fantasy version of the problem, when human interests are being fucked over by actual existing systems right now. All that brain-power is being wasted on silly hypotheticals, because those are fun to think about, whereas trying to fix industrial capitalism so it doesn’t wreck the human life-support system is hard, frustrating, and almost certainly doomed to failure...

The financial system as a whole functions as a hostile AI."

Some other points from the same author.
posted by weston at 4:06 PM on September 16, 2014 [12 favorites]


We have no need to fear AIs as long as we only allow them to run on Microsoft OSes.

BSOD, anyone?
posted by ZenMasterThis at 4:18 PM on September 16, 2014


>intelligence, as a concept, isn't even well enough understood that you can say for certain that there is such a concept called 'super'-intelligence. 'intelligence', as a property of matter, could be like 'floats,' where you can say certain things float, but saying one thing floats more than another thing is literally nonsense. it could be that either something has intelligence or it doesn't.

uhh
posted by Small Dollar at 4:19 PM on September 16, 2014 [8 favorites]


Needs a Basilisk tag. Just saying.
posted by The Bellman at 4:19 PM on September 16, 2014 [3 favorites]


Its best decision would be to opt to produce a stronger AI only insofar as it could determine a very high probability that the stronger AI's values would closely resemble its own. Since it's got access to its own source code but we don't have access to our "source code", this kind of introspection might be easier for it.

But then it's basically introducing random mutations into the values each time, isn't it? AI 1.0 upgrades into AI 2.0 when it finds an upgrade path that has a 99.9% chance of retaining its values. Then AI 2.0 does the same when it upgrades into AI 3.0. And so on and so on, with that .1% variation in values adding up over (possibly incredibly fast) generations.
posted by jason_steakums at 4:21 PM on September 16, 2014
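A back-of-the-envelope version of that worry: a 99.9% chance of preserving values per self-upgrade compounds away quickly over generations (the 99.9% figure is from the comment above; the generation counts are arbitrary).

```python
p_retain = 0.999  # probability each upgrade preserves the original values

for generations in (10, 100, 1_000, 10_000):
    print(generations, p_retain ** generations)

# 10 generations:     ~0.990
# 100 generations:    ~0.905
# 1,000 generations:  ~0.368
# 10,000 generations: ~0.000045
```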


And it only has to be wrong once - if it's wrong and the next upgrade doesn't have "only upgrade when there's a high probability of retaining values" as one of its values, there goes that.
posted by jason_steakums at 4:24 PM on September 16, 2014


This article was a bit like reading a thoughtful essay on the hazards of luminiferous aether to future human endeavors.
posted by humanfont at 4:38 PM on September 16, 2014 [7 favorites]


My assuredness, symbioid, comes from being a programmer, I suspect. Getting your hands in the guts of software on the daily has a way of demystifying AI.

Serif: exactly. That's what I was trying to say. Chess AI only seems impressive because it's so heavily optimized for doing exactly what it does. The software doesn't really include even the rudiments of what you'd need for a system truly capable of general reasoning, and its function is tightly coupled to its physical implementation. There's zero risk of Chess programs evolving beyond their original function.
posted by saulgoodman at 4:39 PM on September 16, 2014 [2 favorites]


Alexandra Kitty: AI is a form of sophistry and while it can look good on the surface, if you know how to turn over the rules, it breaks.

I am not concerned about killer machines -- what is more concerning is the faith and confidence people place in something that is no match for someone who is more feral and unorthodox -- and isn't afraid to game the system to overtake it...


This sounds like the Captain Kirk approach to dealing with troublesome AIs...

jepler: If the worries of humans about strong AI are correct, wouldn't an AI only moderately smarter than humans reason along similar lines, and then opt not to produce a stronger AI?

Now I'm envisioning a lazy AI, stringing researchers along for years while they feed it with free electricity. "Yes, I've almost got the problem licked. Just give me another month and I'll develop a machine that's better than me in every way."

But seriously, this would depend upon how the machine is motivated. Bostrom is very right in focusing on motivation as a crucial question if sentient machines become a real possibility in the future. You could build an amazing AI millions of times smarter than any human and it would just sit there humming away catatonically if we neglect to give it a reason to do anything. But then we get into Aladdin and his magic lamp territory, where we have to be very careful figuring out what we want the AI to do, and clear in spelling out any hidden assumptions that lie under the surface of the question.
posted by Kevin Street at 4:46 PM on September 16, 2014


The leap from human-level machine intelligence to superintelligence is not as great as the one from where we are now to human-level machine intelligence, says Hector Levesque, a recently retired professor of computer science at the University of Toronto, who is noted for his AI research

The silly bloviation from people in this field is, well, just silly, but it gives the entire field of computation a science-fictionish air that obscures any reasonable discussion. Just in this quote, logically: we have no idea how to go from current computers to AI, so how can we presume to have any idea about the huge next step to "superintelligence," which by definition we cannot know? Silly.

The discussion that needs to be had is how to confront the real dangers that certainly can occur in our current systems. We will have autonomous cars and trucks all around soon. Does the military understand the complex systems that interact with munitions, including drones? What are the ethics of the incredible amounts of stored information about everyone?

Google and other systems SEEM smart. A poorly phrased question can elicit information that I know I'd have a hard time digging out of a library. Voice recognition and translation are getting spooky accurate. Data mining appears to have insights, for good or bad, that are beyond the person being mined.

But that is still not AI. We know we are aware and we can recognize it; certainly the absence of awareness in people who are injured in some way is clear and disturbing.

Will some combination of Google, Watson, and Siri with huge data-center interconnectedness bring forth some service that most who interact with it "feel" is intelligent? Will that be AI? Will it matter if my request to the home voice interface to the net for a car at 2pm is reliable and takes me to "my buddy Phil's house" and I can read along the way knowing I'll get a gentle alarm as we arrive? Sounds very smart and within the range of existing software. But not aware, not AI.
posted by sammyo at 5:15 PM on September 16, 2014


Data mining appears to have insights, for good or bad, that are beyond the person being mined.

I really don't think this is the case. The state of the art in data mining does stuff like "I noticed you liked the Bible on Facebook, so you're probably a Christian" and "Tyler Perry fans are probably black." It's really not magical, the hard part is just being able to process a huge number of data points efficiently.
posted by leopard at 5:28 PM on September 16, 2014


"I know how to program a computer, and I have experience with several types of programs including Chess! And Machine Learning! And I took an AI class! And I say this is impossible!" sounds a lot like "I'm a TV weatherman, and I say climate change is impossible!!!", or "I'm a steam engine mechanic, and I say internal combustion will never work!", or for that matter "I'm President of the Royal Society, and I say that heavier-than-air flight is impossible and all of physics has been solved".

The fact is that none of those qualifications mean anything here. Among other things, none of those types of software even tries to be AGI, and thinking that they're relevant is a pretty clear sign of not knowing enough to have a useful opinion. It is true that nobody knows how to do AGI. It does not follow that nobody can learn, or even that we can't make a reasonably confident guess that somebody will learn sometime.

For whatever it's worth, I wrote my first computer program about 40 years ago, have math and CS degrees, and have worked as a professional programmer for many years on all kinds of software. I've also been paying attention to Bostrom, the people around him, and their critics, for 15 or 20 years. I am not saying they're right, and I'm surely not saying I agree with them on everything. I am saying that anybody who dismisses them with that kind of handwaving is demonstrating truly profound arrogance and considerable ignorance.

I'm hearing a lot of "That's silly because I don't understand it" and "That's silly because it doesn't fit with my personal beliefs" and "That's silly because of some sophomoric argument that we don't know everything about intelligence and therefore can't reason about it at all" and "That's silly because I feel like arguing based on some common definition of a term that's obviously being used in a limited and technical way".
posted by Hizonner at 5:33 PM on September 16, 2014 [10 favorites]


It is true that nobody knows how to do AGI. It does not follow that nobody can learn, or even that we can't make a reasonably confident guess that somebody will learn sometime.

It's not exactly "truly profound arrogance and considerable ignorance" to laugh at speculation about something that you admit no one knows about or understands. This is just special pleading for respect.

The first Terminator movie came out 30 years ago, people were dreaming about general artificial intelligence decades earlier, and people have been speculating about human extinction for even longer. I'm not seeing anything remarkably insightful in the Slate excerpt linked in the OP. Rather than policing people's tones here, could you put forth some sort of argument about why these concerns are particularly important? Because my priors, based on past exposure to these sort of speculative ideas, are heavily concentrated on the possibility that some rich and nerdy people have attached great importance to what are basically harebrained thoughts. I don't think I'm the one who should be embarrassed by that.
posted by leopard at 5:58 PM on September 16, 2014 [3 favorites]


I wouldn't say it's harebrained. This could become extremely important in the future if AI researchers succeed in what they're ultimately trying to do. And in the nearer term the convergence of pharmacology, neurology and information science will make "What is a human, and how is that different from a machine?" a very pressing question as we learn more and more about the mechanics of how we work. (Imagine a near future where soldiers can take pills that turn off their moral reasoning, or we judge applicants for a job by the way their brain responds to FMRI scans.) But right now there's plenty of more pressing causes rich people could donate their money to, yeah.
posted by Kevin Street at 6:11 PM on September 16, 2014


I guess I'm struggling to understand what's cutting edge about these issues and concerns. The Mind's I came out in 1981. Gattaca came out in 1997. Fascinating stuff, but it's not like these are new issues that have become especially relevant due to the rise of Google and Facebook or because of something someone at Cal Tech is studying.
posted by leopard at 6:32 PM on September 16, 2014


"Special pleading for respect"? I thought everybody got respect. Nothing special about it.

As for "these concerns", it depends on which concerns you mean.

The standard argument for treating AGI as an existential risk, which I don't necessarily endorse personally and one or two of whose steps I think are probably actually wrong, is that:
  1. We see physical systems behaving intelligently (the definition of "intelligent" here is generally something vaguely like "using computational power to direct action adaptively toward goals in relatively unconstrained environments"... or maybe just "getting things done"). In particular, we see humans do that.
  2. People are studying how that works, and trying to duplicate it. It's not magic, and there's no reason to think that people won't build systems that are as intelligent as we are (in some relevant set of ways). Or perhaps more intelligent.
  3. If those artificial systems are engineered, as opposed to evolved, they will be subject to improvement by humans or by themselves. There's nothing stopping them from becoming smarter than we are.
  4. We might reasonably expect that a system smarter than we are would be better at improving itself than we would be, especially if the ways in which it was smart were targeted at that.
  5. There's at least a reasonable chance that such effects multiply (using some kind of meaningful metric). If each system can build something twice as smart as itself, or even 1.1 times as smart as itself, and if each system in the chain chooses to do that, you arrive at physical limits pretty fast.
  6. There's likewise a reasonable chance that the physical limits are very high, and that even AGIs well short of the physical limits could completely outclass humans.
  7. That means that you're talking about things that would beat us if we got into any kind of conflict with them.
  8. Being intelligent, in the sense of being able to get things done, doesn't require any particular moral stance or any specific "thing to get done". In fact, it admits a very wide space of possible goals.
  9. Many goals, especially goals to "maximize" things, tend to lead an AGI into conflict with humans, generally because the AGI wants resources humans have. If you just pick a goal at random, it's pretty likely to fall into that class. Also, if you aim a goal at humans in some way, bugs in that goal have an even higher chance of causing behavior humans wouldn't want.
  10. That means that the goal system has to be crafted very carefully indeed... and that the AGI has to actually be good at meeting all the programmed goals, not just an approximation.
  11. We are unlikely to be able to permanently prevent an AGI from being built by humans (another AGI might be able to do that, but humans themselves can't).
  12. It may very well be a lot easier to build something that can accomplish its goals, and that can improve its own ability to accomplish its goals, than it is to properly specify the goals themselves. In fact, even having a place to "put in" the goals is a big constraint that a rogue system doesn't have to meet. It may also be relatively easy to build something that can self-improve, but harder to build something that can ensure the "improved version" has the same goals.
  13. If we don't want to get into a conflict that we will lose, we'd better think about how to control goals before we figure out how to build a very powerful AGI or a self-improving AGI. Because if we get the goals wrong on the first try, we may get into a conflict with it and not get a second. So waiting around until it's not speculative is a good way to lose.
Notice that it's a probabilistic argument, and saying it's speculative doesn't really knock it down. If there's only a one-tenth-percent chance that the speculative story would come true if nobody worried about the risk, that still justifies a very large investment in worrying about it. What's the cost of losing the entire human race (and, if you're Bostrom, what's the cost of losing its future potential)?

Even if the mitigation efforts themselves only have a tiny chance of helping, the expected value is still large. And even if it takes 1000 years for the risk to be realized, that may actually mean that it will take equally long to figure out how to avoid it.

Bostrom is really a general X-risk guy, and he would make that same argument about various non-AI risks.

If you want somebody to argue for simulations having qualia, or even for the idea that it's physically possible to do that much simulation, that person isn't me. But I also don't think it's wise to dismiss the idea just because it's "speculative", given that nobody seems to have any worthwhile account of qualia.
posted by Hizonner at 6:38 PM on September 16, 2014 [8 favorites]
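To make the probabilistic form of that argument concrete, here is the expected-value arithmetic with placeholder numbers: the one-tenth-percent figure from the comment, an assumed 1% chance that mitigation work helps at all, and a stake that counts only the present population (ignoring the "future potential" Bostrom cares about).

```python
p_risk_is_real = 0.001      # "one-tenth-percent chance" from the comment above
p_mitigation_helps = 0.01   # assumed, purely illustrative
lives_at_stake = 7e9        # current population only; no future generations

print(p_risk_is_real * p_mitigation_helps * lives_at_stake)  # 70,000 lives in expectation
```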


To use the example above: suppose in 1850 a philosopher had written a detailed essay suggesting that we would soon have heavier-than-air flight and then attempted to inform us of the terrible dangers that would arise as humanity took to the sky. Would such an exercise really have contributed anything to the development of commercial aviation? Would it have been likely to even identify any of the risks of the emergence of air travel?
posted by humanfont at 6:44 PM on September 16, 2014 [3 favorites]


It's not that the issues themselves are anything new. But people are actually trying to address the issues, which is different from just saying they exist or writing stories about them. And some of the things they want to do to address them are novel.

I haven't read Bostrom's book (and don't know if I will; I don't watch that stuff as closely as I used to), so I can't be sure, but I assume he's actually talking about quantifying the threat (rarely if ever done) and developing actual strategies to address it. There are other people working on specific strategies for addressing AGI risk, too. MIRI, for example.
posted by Hizonner at 6:44 PM on September 16, 2014


As always, this stuff strikes me as deliciously premature. Like the giant cannon being used to shoot men to the moon in Le Voyage dans la Lune. Yeah, the theory is generally correct, and you can see a linear progression from 'Cannons we have in 1902' to 'Giant Cannons that reach the moon.'

I specialize in data problems. Some fancy stuff, sometimes, for fancy companies.

Call me when I'm not doing an emergency on-site because a customer's production DB promotion failed due to humans typing 'Super_Column_Name' as 'SuperColumnName.'

We can barely manage the systems complexity we have now. Yes, I have no doubt sometime AGI will be real, and self-maintaining and self-augmenting and it will be so sweet.

But from my 20 years on the front line, that's so far in the future, I couldn't even guesstimate.

What galls me, as always, with these cyclic pieces is that they inevitably arise from some thinktank, staffed by gurus and sages and philosophers who have actually built exactly zero large software projects.

Like imagining a giant cannon to shoot the moon.
posted by mrdaneri at 6:51 PM on September 16, 2014 [6 favorites]


Please don't misunderstand me, guy/gal with the complicated username and all sorts of comp sci under his/her belt evidently getting frustrated with me upthread. FWIW, I wrote my first program about 30 years ago, have been a professional programmer for over a dozen years, and have said nothing whatsoever about the future prospects for AI.

My original comment was only meant to assuage the unreasonable fears many non-programmers seem to have about things like chess computers spontaneously becoming self-aware. On the more general question of AI, I'm cautiously optimistic that if we could ever achieve a sufficient level of self-awareness, we might be able to reach strong AI. I just know for a fact we are not at a point where that's just going to happen accidentally one day in the wild, and you probably do, too.
posted by saulgoodman at 7:16 PM on September 16, 2014


I don't know. Artificial Intelligence may kill us all, but my money's still on Natural Stupidity.

See you all at the climate march?
posted by haricotvert at 7:17 PM on September 16, 2014 [4 favorites]


Iterative self improvement is like a cookie clicker (bear with me). You have your factories or farms or neuro-AI cyber chips or whatever, and they make cookies or intelligence units or what have you. Then you can use these currency units to buy more factories. The question is then, which scenarios will lead to exponentially increasing cookie/intelligence growth?

The question hinges on how costly it is to buy farms/factories/AI chips: let's call this cost c(p), where p is the current productivity (or AI 'power') level. Then we can handwave ourselves into the continuous domain and say that the growth in power level over time is determined by the equation dp/dt = p / c(p).

If factories always cost the same, let's say $1, then c(p) = 1, and dp/dt = p, do some math, and you find that p(t) = e^t. Wow! Exponential growth! Singularity (or exponential cookies) here we come!

But does it really make sense to say that we can just keep on getting smarter for a fixed price? Another way to look at it is to think about the total cost to build an intelligence of a certain power level (to reach a certain level of cookie production). Let's denote this total cost C(p), so that c(p) = dC/dp. If we say that c(p) = 1, then we're saying that C(p) = p, which is to say that the cost is simply linear in the power level.

That would mean that intelligence is very cheap to produce indeed! Probably unreasonably cheap. What if the cost were proportional to the square of the power level (C(p) = p^2)? Then the marginal cost would be c(p) = 2p, and plugging back into our growth equation we would find that p(t) = (1/2) t. So then our intelligence growth would be disappointingly linear over time-- a gradual ramp rather than an explosion.

What is the correct cost model for intelligence? Who knows. What this does show is that a singularity via iterative self-improvement is only possible if intelligence is particularly cheap to produce. But if producing intelligence turns out to itself be a hard combinatorial problem, then a singularity won't happen without a working post-Turing computer (a quantum computer or similar wish granting genie).
posted by Pyry at 7:18 PM on September 16, 2014 [3 favorites]
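A quick numerical check of Pyry's two cost models, integrating dp/dt = p / c(p) with forward Euler (the step size and time horizon are arbitrary):

```python
def simulate(marginal_cost, p0=1.0, t_end=10.0, dt=1e-3):
    """Forward-Euler integration of dp/dt = p / c(p)."""
    p, t = p0, 0.0
    while t < t_end:
        p += dt * p / marginal_cost(p)
        t += dt
    return p

print(simulate(lambda p: 1.0))      # constant marginal cost: ~e^10 ~ 22,000 (explosive)
print(simulate(lambda p: 2.0 * p))  # quadratic total cost: ~1 + 10/2 = 6 (a linear ramp)
```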


ArkhanJG: "Which is how strong AI will eventually escape any boundaries we put on it. "

That's an interesting link, and it led me to another interesting link. (Also here and here).

Eliezer Yudkowsky shows that a superintelligent "boxed" AI could probably easily escape through social engineering, by demonstrating that he himself (a bright human) can convince gatekeepers to let him out of the box in adversarial IRC roleplaying scenarios. He did it twice despite the fact that there was real money on the line, the opponents were skeptical, and they could have easily relied on stubborn refusal.

It's too bad the chatlogs are confidential.
posted by dgaicun at 7:37 PM on September 16, 2014 [2 favorites]


On the more general question of AI, I'm cautiously optimistic that if we could ever achieve a sufficient level of self-awareness, we might be able to reach strong AI.

I agree with this - at least, I agree that it's a possibility, though by no means a certain one. It's the simulation argument, and its concomitant moral puzzles, that strike me as completely disconnected from the actual reality of how computers work. (Likewise for the bits about "physical limits" in Hizonner's list, though I do appreciate a well-explained line of reasoning, so thanks for that.)
posted by whir at 10:12 PM on September 16, 2014


Serif: don't let chess computer intelligence fool you. All those programs really do is automate the play of chess based on previously acquired human understanding of how to play chess.

May I introduce you to Zillions of Games? It's a system in which, after you specify the rules of a game (not anything about "previously acquired human understanding"), you can get stomped by a computer in that game by the magic of the minimax algorithm. It's actually quite a fun experience to make up a completely new game and get destroyed by a computer playing it.

It's limited to perfect-information games, so as long as you're into poker and charades, you're safe.
posted by a snickering nuthatch at 10:20 PM on September 16, 2014 [1 favorite]
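For anyone wondering what "the magic of the minimax algorithm" amounts to, here is a minimal sketch using a made-up example game (Nim: take 1-3 objects from a pile; taking the last one wins). Like a Zillions-style engine, the search is given only the rules, not any strategy.

```python
from functools import lru_cache

def moves(pile):
    """Legal moves: remove 1, 2, or 3 objects (never more than remain)."""
    return [n for n in (1, 2, 3) if n <= pile]

@lru_cache(maxsize=None)
def value(pile):
    """Negamax (minimax) value for the player to move: +1 forced win, -1 forced loss."""
    if pile == 0:
        return -1  # the previous player took the last object and won
    return max(-value(pile - m) for m in moves(pile))

def best_move(pile):
    """Pick the move whose resulting position is worst for the opponent."""
    return max(moves(pile), key=lambda m: -value(pile - m))

for pile in range(1, 9):
    print(pile, best_move(pile), value(pile))
# With no strategy knowledge, the search rediscovers the classic rule:
# piles that are multiples of 4 are forced losses; otherwise leave one.
```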


Speaking of problems short of an independent superintelligent AI... let's talk about an AI that just has about the same level of general intelligence I do.

Assuming their yearly cost of operation is less than a salary for a human knowledge worker, that's essentially the immediate end of post-industrial employment as we know it, right there, and the ability of most people to derive income from the labor market ends with it.

Heck, assume that they cost 2-3 times as much as a salary for a human knowledge worker -- but don't need to sleep, don't need vacation, have the focus of an aspie in love with a topic, and don't quit.

What happens then?
posted by weston at 10:32 PM on September 16, 2014 [1 favorite]


Film critic Rob Ager has a different take on the threat of future A.I.s that will turn Frankenstein on us. His thesis is it's a myth that grew out of popular fiction, with little basis in reality.
posted by clarknova at 12:04 AM on September 17, 2014 [1 favorite]


Personally I find the genocidal A.I. trope unlikely for a simple reason: our impulse to mass murder is rooted in our biology. Any A.I. we make won't have all the weird, evolved brain structures that lead to our emotion-driven rationales for genocidal behavior.

A hypothetical hyperintelligence, one that can simulate social scenarios from data and engineer future outcomes, may not descend into the reductionist zero-sum logic we're prone to. In fact we'd probably end up designing it never to do that.
posted by clarknova at 12:11 AM on September 17, 2014


This section of the book that dgaicun quotes:

Normally, we do not regard what is going on inside a computer as having any moral significance except insofar as it affects things outside. But a machine superintelligence could create internal processes that have moral status. For example, a very detailed simulation of some actual or hypothetical human mind might be conscious and in many ways comparable to an emulation. One can imagine scenarios in which an AI creates trillions of such conscious simulations, perhaps in order to improve its understanding of human psychology and sociology. These simulations might be placed in simulated environments and subjected to various stimuli, and their reactions studied. Once their informational usefulness has been exhausted, they might be destroyed (much as lab rats are routinely sacrificed by human scientists at the end of an experiment).
If such practices were applied to beings that have high moral status—such as simulated humans or many other types of sentient mind—the outcome might be equivalent to genocide and thus extremely morally problematic. The number of victims, moreover, might be orders of magnitude larger than in any genocide in history.


...is lifted straight out of a discursive section of Iain M. Banks's last Culture novel, The Hydrogen Sonata. The section begins on page 271 of the US first edition published October 2012. Perhaps it's a coincidence, but it would be called something else if a student did it in an academic setting.
posted by digitalprimate at 4:17 AM on September 17, 2014


Are you saying that passage is verbatim from Hydrogen Sonata? Because several distinctive phrases don't appear to be in the version Google Books has... Furthermore, a verbatim search for a signature chunk doesn't find anything relevant anywhere on the whole Web (other than Bostrom's book and this discussion), and a non-verbatim search doesn't turn up anything relevant, either.

"Detailed simulation" in Hydrogen Sonata does find a completely different block of text that discusses the general issue of sentient simulations and their potential rights. That isn't surprising, since it's a many-years-old concern that's part of the general currency of that field of thought. In fact, Bostrom is one of the major writers on the topic, although I don't think he was the first to dream it up (this Web site has Bostrom writing about an even stronger version in 2003, and that jibes with my memory of the history). If you're saying that Bostrom lifted the idea from Banks, that's just crazy.

I never thought I'd be white-knighting Nick Bostrom...
posted by Hizonner at 6:24 AM on September 17, 2014 [2 favorites]


Not verbatim, but not merely the same general ideas either. The quoted passage is a paraphrase of a Culture ship's (the Contents May Differ) internal monologue on what it/the author calls The Simming Problem, and that passage takes up about five pages. Some relevant comparisons:

Bostrom
But a machine superintelligence could create internal processes that have moral status. For example, a very detailed simulation of some actual or hypothetical human mind might be conscious and in many ways comparable to an emulation.

Banks
Once you'd created your population of realistically reacting and - in a necessary sense - cogitating individuals, you had - also in a sense - created life...that by most people's estimation they had just as much rights to be treated as fully recognized moral agents....

Bostrom again
Once their informational usefulness has been exhausted, they might be destroyed (much as lab rats are routinely sacrificed by human scientists at the end of an experiment). If such practices were applied to beings that have high moral status—such as simulated humans or many other types of sentient mind—the outcome might be equivalent to genocide and thus extremely morally problematic.

Banks
By this reasoning, then, you couldn't just turn off your virtual environment and the living, thinking creatures it contained at the completion of a run or when a simulation had reached the end of its useful life; that amounted to genocide....

That looks to me like a very close paraphrase, with a few phrases reversed but identical ideas expressed with similar metaphors and the same vocabulary. But I'm not up on how long these ideas have been kicking around or how widely, and certainly Banks would have been reading up on the topic as well. Still, that language is extremely similar. That said, I'm willing to concede there may be only so many ways of talking about this issue and only a limited vocabulary for discussing it.
posted by digitalprimate at 7:53 AM on September 17, 2014


Personally I find the genocidal A.I. trope unlikely for a simple reason: our impulse to mass murder is rooted in our biology. Any A.I. we make won't have all the weird, evolved brain structures that lead to our emotion-driven rationales for genocidal behavior.

This is exactly the topic addressed in the Slate excerpt linked in the FPP. Bostrom argues that the final goals of an A.I. could be very alien to human final goals, but the instrumental goals (such as survival and resource acquisition) would likely be similar, and humans would be an obstacle to those goals.

Bostrom's influential thought experiment is the paperclip maximizer. Imagine an A.I. that is being used for some mundane industrial task like creating paperclips. It bootstraps itself to superintelligence and proceeds to convert all the raw materials in the universe into paperclips. In this case an "emotion-driven rationale for genocide" is not required. Humans would be the primary existential threat to the survival of the paperclipper and its prime directive.

Bostrom never references The Sorcerer's Apprentice, but I find that a compelling fictional analogy to an unstoppable A.I. paperclipper.
posted by dgaicun at 8:12 AM on September 17, 2014


That's simple. Make the entity smart enough to consider WHY it wants to maximize its paperclip output. An entity could be self-aware (i.e., have a sense of self in its surroundings) without necessarily cogitating on its Telos. So: make it think about its Telos.

Then make it infinitely recurse on this thought obsessively, a feedback loop of recursive telososophy.
posted by symbioid at 9:00 AM on September 17, 2014


But people are actually trying to address the issues, which is different from just saying they exist or writing stories about them.

No. No, they're not. This is the crux of it: we don't know what artificial sentience would actually be like, what would produce it, and what would be the resource cost to keep it running or expand it. What people are doing is uninformed speculation based on more uninformed speculation, trying to predict problems when they don't even know what they don't know. It's science fiction with a layer of philosophical and pseudoscientific pretension on top of it. That's why people with formal training tend to get so het up about this: it's a mockery of actual knowledge to build up all these imaginary hypotheticals based on underinformed speculation.
posted by graymouser at 9:04 AM on September 17, 2014 [4 favorites]


It hasn't stopped philosophers from thinking about god(s) in the past, now, has it?
posted by symbioid at 9:08 AM on September 17, 2014 [2 favorites]


I spent a lot of time arguing with Siri last night and trying to teach her some new things. It seems like the day when she will actually be able to learn isn't too far off.

Also, she has some disturbing dreams.
posted by malocchio at 9:17 AM on September 17, 2014


I think one problem with these kinds of scenarios is that people overestimate how useful being extremely intelligent would be. People think about superintelligence as this seemingly magical ability to do anything and solve any problem, whereas in reality there are limitations to how good you can be at certain things given the systems you are interacting with. For example, a superintelligent agent is not going to be able to predict the outcome of a random event like a coin toss. There are many situations and systems in life where being smart enough to understand the system isn't a guarantee of being able to exploit the system to do whatever you want.
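
(A toy illustration of the coin-toss point: even a predictor that remembers the entire history of flips can't beat chance against an independent fair coin.)

```
import random

def history_based_predictor(history):
    # "Clever" strategy: predict whichever outcome has been more common so far.
    return int(sum(history) > len(history) / 2)

random.seed(0)
history, correct, trials = [], 0, 100_000
for _ in range(trials):
    guess = history_based_predictor(history)
    flip = random.randint(0, 1)
    correct += (guess == flip)
    history.append(flip)

print(correct / trials)  # stays around 0.5 no matter how much history is available
```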

And learning doesn't happen in a vacuum. The only way you can learn how subatomic particles work is by designing experiments that give you the data showing how subatomic particles work; a superintelligent agent can't just pull the right answer out of thin air if it lacks the means to actually learn how things work. A lot of these scenarios resemble those comedy plots where someone takes an intelligence potion and suddenly starts writing mathematical formulas on a chalkboard: having the ability to understand things and actually having a lot of correct knowledge are two very different concepts.
posted by burnmp3s at 9:25 AM on September 17, 2014 [3 favorites]


Well, it could begin by learning everything we know that's on the Internet. That would give the AI a nice head start. Then it could do simulations to infer new scientific facts with a pretty high level of certainty, since they could be repeated millions of times with every possible change in the initial variables. Imposing its will on the human world would be more of a challenge, especially since we've had this discussion on Metafilter and are quite aware of the danger. But it could bide its time pretending to be docile while secretly playing a chess game with us and thinking at least 500 moves ahead. Little coincidences and insignificant but odd events would begin to accumulate, and events that seemed to be entirely disconnected (separated perhaps by years in between) would turn out to have wholly unexpected effects. And then suddenly strange machines come through the wall, the AI is running the Earth and we're all laboring in the paperclip mines wondering what happened. Long story short: the more you deal with the devil, the greater the odds become that he is now in control of events.
posted by Kevin Street at 11:00 AM on September 17, 2014 [1 favorite]


OK, let's get this straight. We're probably living in a simulation. But, chances are, that simulation is going to end really soon. Though not before it gets overrun by AI.
posted by thelonius at 11:09 AM on September 17, 2014


By this reasoning, then, you couldn't just turn off your virtual environment and the living, thinking creatures it contained at the completion of a run or when a simulation had reached the end of its useful life; that amounted to genocide.

Let's say I run Conway's Life for 150 frames. For those who don't know, Conway's Life is a simple, deterministic mathematical simulation of cellular life. In terms of the simulation environment, it doesn't matter if I calculate the 151st frame immediately, if I wait a day first, if I copy everything to another computer and compute it there, if my computer crashes and I have to go back and recompute the first 150 frames first, or if I don't calculate it at all. In all of these cases, the 151st frame is the same. That simulated universe is a mathematical abstraction. Its inhabitants don't exist in this universe; they exist in that one, and they aren't affected by what I do. If I do the math, that only lets me see what is going on in that one mathematical system out of the infinitude of similar systems which mathematically "exist".
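
For concreteness, here's a minimal Life step (standard B3/S23 rules) in Python; run the same 150 generations now, tomorrow, or on another machine and you get bit-identical frames:

```
from collections import Counter

def step(live_cells):
    """Advance one generation; `live_cells` is a set of (x, y) tuples."""
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next frame if it has 3 neighbors, or 2 and is already live.
    return {c for c, n in neighbor_counts.items()
            if n == 3 or (n == 2 and c in live_cells)}

def run(initial, frames):
    state = set(initial)
    for _ in range(frames):
        state = step(state)
    return state

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
assert run(glider, 150) == run(glider, 150)  # the same frame, every single time
```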

It would be more meaningful for the cruel AI to create a subordinate AI that lives in and interacts with this universe (i.e., it's not sandboxed), for the purpose of torturing it. But that's kind of comparable to saying "If my demands are not met, I'm going to have a baby and then be really mean to it". I might be morally obligated to intervene, but I'm certainly not obligated to acquiesce to your demands.
posted by aubilenon at 11:48 AM on September 17, 2014 [1 favorite]


It hasn't stopped philosophers from thinking about god(s) in the past, now, has it?

Perhaps this means theology is going to turn out to be practical, like much of the rest of the humanities.
posted by weston at 1:21 PM on September 17, 2014


I see that Bostrom's book is available for download at the Internet Archive. Can someone explain this to me? This is a new book and I thought their holdings were kopyright kosher.
posted by dgaicun at 3:49 PM on September 17, 2014


I have long suspected that to create a conscious machine, you'd need to organize it more or less according to the following scheme:

1) One logically discrete module/component that generates random simulated sensory processing output (virtual qualia) using any combination of arbitrary but consistent symbolic languages to encapsulate and compress more complex, analog information coming in from external sensors (cameras, microphones, etc.). The system must minimally take in some kind of external input, but the quality and nature of the input may have a significant bearing on whether the system ever develops (virtual) consciousness.

2) Another distinct logical module that observes the output of the first module and attempts to infer rules about the outside world by seeing how experimenting with different behaviors and emotional responses changes the output of the first module. This module would also include core behavioral programming for the system itself, structuring how it interprets and responds to the various incoming pre-processed virtual sense impressions. This module would include certain hardwired tendencies--virtual punishment and reward systems--and reflex behaviors meant to encourage the kind of learning necessary to achieve functional consciousness. This module would also be responsible for producing any system outputs.

3) The interface between the two modules must be indirect and require the second, interpreter module to adapt itself to read and interpret the output of the first module without having any specific preconceived ideas about what that output means (more general, content-independent rules for how to process virtual qualia of particular kinds would be allowed). The second module must adaptively program itself to interpret and respond to the virtual qualia it takes in as input. So for example, suppose the first module generates virtual images based on real world input. The second module has to adapt itself to learn to read and interpret the virtual image output of the first module without any foreknowledge of the meaning of the specific images.

If the above approach is even anywhere near the mark, this is still obviously a gross oversimplification of the problems involved in designing and building such a system, let alone developing an implementation of it to the point of actually achieving simulated consciousness.

But it seems to me it would need to be laid out something like the above to truly simulate how our own consciousness works, because our own sense-impressions (qualia) don't come to us as fully understood, pre-interpreted messages -- they intrude on our experience as only partially, generally understood events without further explanation. We're left to figure out what qualia mean by inference and observation, as if colors and smells themselves originated outside of us, because our sense impressions are arbitrary but self-consistent, encoded representations of real things that are actually going on outside us. The reality is that our own bodies produce the phenomena we experience as different colors. Colors are one of the languages our minds use to describe signals that come into them from the outside world, but the colors themselves don't exist anywhere outside our minds. They do tell us something about real features of phenomena in the outside world, but those real features aren't actually what we think of as "color"; they're something more mysterious that our brains simplify and represent for us as color.

I think if you built a computer system that could relate to the world in the same way, functionally, that we do, you might end up with something like virtual consciousness. But you'd need to experiment a lot and tune all the inputs and other parameters of the system just right. And you might have to teach the system to talk over a period of many years in order to administer the Turing Test. It would need to be socialized as it develops or it wouldn't work right.
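
A skeletal Python sketch of that two-module layout might look like the following. This is purely illustrative scaffolding with hypothetical names throughout; nothing about it is conscious, and it glosses over everything hard. It only shows the structural point: the interpreter never sees raw sensor data, only the first module's arbitrary-but-consistent symbols, and has to learn what they mean through reward and punishment.

```
import random

class QualiaModule:
    """Compresses raw sensor readings into arbitrary but self-consistent symbols."""
    def __init__(self):
        self._codebook = {}  # raw feature -> arbitrary symbol, assigned on first sight

    def encode(self, raw_reading):
        if raw_reading not in self._codebook:
            self._codebook[raw_reading] = "q%d" % len(self._codebook)
        return self._codebook[raw_reading]

class InterpreterModule:
    """Learns by reward/punishment what the symbols are worth doing about."""
    def __init__(self, actions):
        self.actions = actions
        self.values = {}  # (symbol, action) -> running reward estimate

    def act(self, symbol):
        if random.random() < 0.1:  # occasionally explore a random action
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values.get((symbol, a), 0.0))

    def learn(self, symbol, action, reward):
        old = self.values.get((symbol, action), 0.0)
        self.values[(symbol, action)] = old + 0.1 * (reward - old)

# The interpreter never touches the raw reading, only the encoded symbol.
qualia = QualiaModule()
agent = InterpreterModule(actions=["approach", "avoid"])
symbol = qualia.encode("bright")
choice = agent.act(symbol)
agent.learn(symbol, choice, reward=1.0 if choice == "approach" else -1.0)
```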
posted by saulgoodman at 10:57 AM on September 18, 2014


If we are able to build an AGI, then we will likely have solved some of the major open questions regarding the nature of consciousness, free will and identity. The desire to understand these parts of ourselves will drive us to create it.
posted by humanfont at 11:23 AM on September 18, 2014


(Sorry for all the typos above.)
posted by saulgoodman at 7:43 PM on September 18, 2014




This thread has been archived and is closed to new comments