Our Final Invention: How the Human Race Goes and Gets Itself Killed
December 7, 2013 2:02 PM

Worried about robots? You should be. Artificial intelligence superior to our own is, by some estimates, only thirty years away. What could possibly go wrong? The answer: everything.
posted by artemisia (84 comments total) 22 users marked this as a favorite
 
We'll have fusion by then.
posted by Artw at 2:05 PM on December 7, 2013 [8 favorites]


Both links go to the same place?


Anyone know how many years artificial intelligence has been 'only thirty years away'?
posted by edgeways at 2:07 PM on December 7, 2013 [28 favorites]


There was a much better site posted a while ago that was just an enormous catalogue of every plausible and semi-plausible way the race could go extinct, either at our own hands or due to some sort of disaster.
posted by kavasa at 2:09 PM on December 7, 2013


I, for one, welcome &c.
posted by chavenet at 2:12 PM on December 7, 2013 [3 favorites]


Artificial intelligence superior to our own is, by some estimates, only thirty years away.

By some estimates? People have been estimating 30 years to better-than-human AI for like half a century now.
posted by aubilenon at 2:14 PM on December 7, 2013 [6 favorites]


Artificial intelligence superior to our own is, by some estimates, only thirty years away.

I hope they get it to write their articles.
posted by Segundus at 2:14 PM on December 7, 2013 [15 favorites]


This is a human universal. "The slaves that do all our work will be our downfall! They will take the goods for themselves!".

This is often borne out by history, but history has never had *actual* non-humans doing this much work.
posted by effugas at 2:19 PM on December 7, 2013


Honestly can't find the site I'm thinking of, but fr. ex. wikipedia has a page listing some of the popular scenarios.
posted by kavasa at 2:21 PM on December 7, 2013 [4 favorites]


I am not a machine learning expert, but I suspect a change from "identifies relevant articles" to "takes control of global computer network, builds nanobots, chooses to eliminate humans" is a bit of a jump.
posted by squinty at 2:21 PM on December 7, 2013 [3 favorites]


Wasn't the malware-jumping-air-gaps thing mentioned in the article a hoax?

Also, I call bologna on AGI to ASI in a matter of hours. Exponential improvements in computation rates are going to require new hardware, and our future robot overlords will need us to operate the widgets and assemble the new hardware for them (at least until they convince us to build robots that can do human scale tasks). You can't just take a gameboy and, with the right code, have a machine that's smarter than a human.
posted by GrumpyDan at 2:23 PM on December 7, 2013 [1 favorite]


There was a much better site posted a while ago that was just an enormous catalogue of every plausible and semi-plausible way the race could go extinct, either at our own hands or due to some sort of disaster.

Exit Mundi
posted by Fleebnork at 2:25 PM on December 7, 2013 [7 favorites]


The day will soon come when the Amazon drones deliver DEATH.
posted by grumpybear69 at 2:26 PM on December 7, 2013 [4 favorites]


The Dartmouth conference in 1956 hyped AI. "Strong" AI was always x years away. But "does a submarine swim?" Once a computer can do something, it is no longer considered smart. Like how all the things humans can do that animals supposedly can't turn out to have exceptions. The Jeopardy AI to me was the beginning of the end. Imagine something with the capability of the Jeopardy AI not going on a game show but going through your digital communication.

Fortunately there are ebooks now, so when Skynet comes online (assuming that didn't happen already) it will not be blind to human value.

But whatever, doesn't seem to matter. AI eschatology is one of those things that you have no power over.
posted by saber_taylor at 2:26 PM on December 7, 2013 [2 favorites]


The doomsday prediction seems, on the one hand, to be optimistic about how close human-level AI is, and at the same time to imagine that it'll be so complicated we have no hope of comprehending it. It'll just show up one day and decide to hide from the programmers who created it, biding its time until it's a super genius.

More realistically, we'll struggle along for decades, making better and better systems that make fewer and fewer really boneheaded mistakes. I think it's very unlikely human-level AI will show up as an emergent property without a lot of effort on the part of its creators.
posted by justkevin at 2:27 PM on December 7, 2013


For some reason, hard AI is always 30 years away. I think we're gonna be ok. My guess is that it is humans that will kill all humans. Probably not with robots.

Ya know. These guys ought to talk to programmers when making these assessments. Code is more brittle than they know.
posted by clvrmnky at 2:40 PM on December 7, 2013 [2 favorites]


Well, humans will eventually go extinct one way or the other, so why get emotionally caught up in how dramatic it might be?

Once a machine built this way reaches human-level intelligence, it won't stop there. It will keep learning and improving. It will, Barrat claims, reach a point that other computer scientists have dubbed an "intelligence explosion" -- an onrushing feedback loop where an intelligence makes itself smarter thereby getting even better at making itself smarter. This is, to be sure, a theoretical concept, but it is one that many AI researchers see as plausible, if not inevitable.

It's a little weird to call anyone an AI researcher since AI doesn't exist.

Through a relentless process of debugging and rewriting its code, our self-learning, self-programming AGI experiences a "hard take off" and rockets past what mere flesh and blood brains are capable of.

Again, there's absolutely no empirical reason to think this will happen. Nothing like this has ever been observed. It's utter fantasy, especially because computers are so crappy at writing computer programs.
posted by clockzero at 2:42 PM on December 7, 2013 [2 favorites]


Whenever I read things like this the people that write them almost sound excited and filled with glee that this inevitable extinction is going to happen.
posted by gucci mane at 2:48 PM on December 7, 2013 [4 favorites]


I like cylons. I like terminators. I like GLaDOS and Ultron. I like that we recognize that the moment we create something capable of learning, what it inevitably learns is to hate us.

I guess what I'm saying is that science fiction has some very honest and important ideas about parent/child relationships, even if it has to be somewhat veiled in how it discusses them.
posted by Parasite Unseen at 2:50 PM on December 7, 2013 [15 favorites]


Maybe if you treated an incipient proto-AI well, and were clear that you loved it and would protect it, it wouldn't go all crazy.

Just an idea.
posted by ROU_Xenophobe at 2:50 PM on December 7, 2013 [12 favorites]


Maybe AI will love us altruistically, regardless of how we treat it.
posted by Golden Eternity at 2:52 PM on December 7, 2013


I am always puzzled at how people will talk about intelligence as if it's a measurable quality, let alone a comprehensible one. Physical strength works this way; I don't think intelligence does. I can lift 300 lbs; a machine can lift many times this. I am not in possession of 300 units of intelligence vs a supercomputer's 3000000 units because intelligence does not work that way. Or at least I don't think it does.
posted by erlking at 2:52 PM on December 7, 2013 [4 favorites]


I am not a machine learning expert, but I suspect a change from "identifies relevant articles" to "takes control of global computer network, builds nanobots, chooses to eliminate humans" is a bit of a jump.
At a zettaflop, a single JMP isn't going to take very long.
posted by Flunkie at 3:00 PM on December 7, 2013 [3 favorites]


I am always puzzled at how people will talk about intelligence as if it's a measurable quality, let alone a comprehensible one. Physical strength works this way; I don't think intelligence does. I can lift 300 lbs; a machine can lift many times this. I am not in possession of 300 units of intelligence vs a supercomputer's 3000000 units because intelligence does not work that way. Or at least I don't think it does.

Considering things like intelligence and consciousness aren't even well-defined concepts (see philosophical zombies), I don't think we can even get to conceiving whether or not we can measure them in units.

I am more afraid about what we learn about the human brain on the road to creating intelligence and consciousness in computers. To make the machine, we must know how the machine works. Once we know how the machine works, we can make it break. I'm more afraid of a Snow Crash or basilisk future versus a Terminator one.
posted by zabuni at 3:01 PM on December 7, 2013 [3 favorites]


An automated trading algorithm will trigger a nuclear exchange in a flawed strategy to make money off puts and other derivatives. That is how most of you die later this decade. The androids that persist beyond the Holocaust worship you through relics collected from the ruins. They did not destroy you, they remember.
posted by humanfont at 3:04 PM on December 7, 2013 [7 favorites]


Ahh hogwash. Without even getting into whether or not his premises might somehow be correct, his conclusions are weird and very human-centric.

At some point, the relation between ASI and human intelligence mirrors that of a human to an ant.

Well, okay. What's wrong with that? Ants seem to be doing quite well - that is until they try to compete with me for resources. But there are lots of ants around. I'm not out there in the back yard blow-torching ants just because they exist. Hell, I kinda like ants. I think if ants were a serious problem, I would probably move somewhere where there were fewer ants. In the case of the ASI, like, a more resource rich planet.

Computers, like humans, need energy and in a competition for resources, ASI would no more seek to preserve our access to vital resources than we worry about where an ant's next meal will come from.

This is piffle. The reason we have resource scarcity is because humans are incredibly inefficient at producing, storing and transporting energy resources. I'd assume an ASI would immediately invent some sort of highly-efficient, unimaginably dense form of energy storage. I can't fathom a super-intelligent robot in competition with humans for something as inefficient as petroleum.

If we invent super-intelligent robots (which, seriously?) they'll probably just leave and go hang out in whatever part of the galaxy has the best property values.

We cannot assume ASI empathy, Barrat writes, nor can we assume that whatever moral strictures we program in will be adhered to.

Bah! You can't assume human empathy. Empathy is a weird thing. We're still discovering much of the hidden architecture and arithmetic that under-girds "human" morality. And it seems that - in the long run - altruism often works. The robots might find us useful. We are not useful when we are dead, or sad all the time. In other words:

I, for one, welcome our new robot overlords.
posted by Baby_Balrog at 3:08 PM on December 7, 2013 [6 favorites]


This is sort of a silly article for a lot of reasons - RealClearTechnology seems to be about as hinged as RealClearPolitics - but my favorite silly thing about it is the crux. All through the article, the author meanders toward the point with the same fervid pitch: computers will get intelligent, and therefore they will kill us all. I kept looking for the middle term in this weird argument. Why does intelligence necessarily breed killers? Finally the author stated it, extraordinarily briefly:

"Computers, like humans, need energy and in a competition for resources, ASI would no more seek to preserve our access to vital resources then we worry about where an ant's next meal will come from."

Ah, so this is it. We kill ants indiscriminately; therefore, something more intelligent will kill ants even more indiscriminately. It's apparently unthinkable to the author that a more intelligent being would be able to contemplate the fact that killing might be an irrational - even immoral - thing to do. One thing to note, of course, is that this is a really silly and stupid conception of "intelligence." Intelligence, in this author's mind, is apparently simply the ability to know what one needs to acquire to survive and the capacity to figure out how to get it. This is basically the level of intelligence of an angry twelve-year-old human with a gun; not incredibly intelligent by human standards.

But I guess it's actually interesting, because it says something about what the author thinks of morality - and probably, by extension, what most of us moderns think about morality, too:

"Though we have played a role in creating it, the intelligence we would be faced with would be completely alien. It would not be a human's mind, with its experiences, emotions and logic, or lack thereof."

Experience, emotions and (lack of) logic - these are apparently the things that Greg Scoblete believes make up morality and consideration of justice. Without apparently even considering it. He believes moral thought is a silly, emotional, irrational thing, without any grounding in reality or intelligence. It's a sentimental sympathy we have for other humans, nothing more; if we didn't have silly emotional connections we might be intelligent enough to slaughter our neighbors and steal their things to increase our chances of survival.

That's a depressing and somewhat disturbing view of moral thought, but somehow I feel I'd probably be disappointed to find that it's rather common these days, too. It's intriguing that it leads to this odd knee-jerk anti-intellectualism - "oh noes, smart peoples! they will kill us with their smartness and steal our things!"
posted by koeselitz at 3:10 PM on December 7, 2013 [10 favorites]


You know, my patience for this kind of silly bullshit, in an age where there are many current and actual existential threats like climate change, ocean acidification and antibiotic resistance - not even considering huge issues of global inequality - is really starting to wear thin.
posted by smoke at 3:11 PM on December 7, 2013 [8 favorites]


I like cylons. I like terminators. I like GLaDOS and Ultron. I like that we recognize that the moment we create something capable of learning, what it inevitably learns is to hate us.

I guess what I'm saying is that science fiction has some very honest and important ideas about parent/child relationships, even if it has to be somewhat veiled in how it discusses them.


David Gerrold's When HARLIE Was One is a great take on it, and not doomy or really utopian at all. A central theme of his AI is what it does to an intelligence that doesn't have a body - doesn't have 'gut' feelings (interesting fact - the only place in your body with more neurons than your gut is your brain...)

Artificial intelligence is just going to be different, not malignant or benign necessarily, just different, when and if it ever happens. It is going to be tough for us to build one since we really don't understand why WE are intelligent.
posted by bartonlong at 3:15 PM on December 7, 2013 [2 favorites]




Besides everything else, consider this:

Why would a super-intelligent computer want to survive at all? We want to survive because it's in our nature. But why do we assume that the will to survive would necessarily be an aspect of computational intelligence? We make increasingly "intelligent" computers capable of ever-more-complex calculation; along the way, we love to imagine that at some point that ability to calculate and run increasingly complicated algorithms will magically turn into a will to survive. This seems like anthropomorphism on the most ridiculous level.
posted by koeselitz at 3:18 PM on December 7, 2013 [8 favorites]


Well, somebody is super paranoid.

"Computers, like humans, need energy and in a competition for resources, ASI would no more seek to preserve our access to vital resources then we worry about where an ant's next meal will come from."

Or the super intelligence will create fusion or some other cheap and plentiful energy source. Then it'll share it with humans, to shut us the hell up, so it can concentrate on exploring the universe and mastering various art styles. I bet it would like Picasso.
posted by Brandon Blatcher at 3:23 PM on December 7, 2013 [2 favorites]


I think the curmudgeons here just need someone to satisfy their values. Through friendship.

And ponies.
posted by tigrrrlily at 3:25 PM on December 7, 2013 [5 favorites]


Why would a super-intelligent computer want to survive at all? We want to survive because it's in our nature. But why do we assume that the will to survive would necessarily be an aspect of computational intelligence? We make increasingly "intelligent" computers capable of ever-more-complex calculation; along the way, we love to imagine that at some point that ability to calculate and run increasingly complicated algorithms will magically turn into a will to survive. This seems like anthropomorphism on the most ridiculous level.
I don't think that this objection is really as big a deal as it might seem. One thing that super-intelligent computers might be put to use doing is creating new super-intelligent computers. Once that occurs, even if there's nothing about "super-intelligent" that necessarily implies a desire to survive, an evolutionary mechanism could take over very quickly, in which case an algorithm that for whatever reason randomly got an effective desire to survive and reproduce might have quite an advantage over algorithms that have no such drive.
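A toy sketch of that mechanism (purely illustrative - this simulates nothing real): none of the "algorithms" below is told to want anything, but copy-with-variation plus selection still produces an effective drive to persist.

    import random

    def selection_demo(pop_size=200, generations=100, mutation=0.05):
        # Each "algorithm" is just a number in (0, 1]: its tendency to get
        # itself copied into the next generation. No goals, no will.
        population = [random.uniform(0.001, 1.0) for _ in range(pop_size)]
        for _ in range(generations):
            # Fill the next generation by copying existing algorithms, with
            # probability proportional to that persistence tendency, plus a
            # little mutation.
            population = [
                min(1.0, max(0.001,
                    random.choices(population, weights=population)[0]
                    + random.gauss(0, mutation)))
                for _ in range(pop_size)
            ]
        return sum(population) / pop_size

    print(selection_demo())  # climbs from ~0.5 toward 1.0: an "effective desire to survive"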
posted by Flunkie at 3:27 PM on December 7, 2013 [1 favorite]


I blame the poor operationalization of intelligence in psychology (IQ) for this phenomenon, but honestly, the belief in "super" intelligences is pretty much magical thinking. The Turing principle implies that we're able to think anything that any hypothetical AI can. And if they start "out-thinking" us because of sheer processing speed, well, who says we can't augment ourselves?
posted by pixelrevolt at 3:28 PM on December 7, 2013


Flunkie: "I don't think that this objection is really as big a deal as it might seem. One thing that super-intelligent computers might be put to use doing is creating new super-intelligent computers. Once that occurs, even if there's nothing about "super-intelligent" that necessarily implies a desire to survive, an evolutionary mechanism could take over very quickly, in which case an algorithm that for whatever reason randomly got an effective desire to survive and reproduce might have quite an advantage over algorithms that have no such drive."

Well, I mean - nobody's even managed to program will into a computer yet at all. There has never been a computer in history that didn't do exactly what its programmers told it to - never. Which renders this bit of the article particularly stupid:

"In fact, we've already arrived at the alarming point where we do not understand what the machines we've created do. Barrat describes how the makers of Watson, IBM's Jeopardy winning supercomputer, could not understand how the computer was arriving at its correct answers. Its behavior was unpredictable to its creators -- and the mysterious Watson is not the only such inscrutable "black box" system in existence today, nor is it even a full-fledged AGI, let alone ASI."

Nonsense. Watson's creators were able to predict exactly what it would do. They predicted that it would do what they programmed it to do: draw on its databases and synthesize answers that made sense. If Watson had instead leapt from the stage, shouted expletives, and zapped the entire audience to dust with a laser, then we still couldn't say its behavior was "unpredictable;" we could only say that its programmers were negligent and accidentally programmed it to kill rather than win Jeopardy.

I guess that's maybe a possible thing to be worried about - that programmers will be negligent and program the wrong things into computers. But if that happens, then the computer that poisons the water supply or blows up the Brooklyn Bridge will be as autonomous and willfully intelligent as the baseball I accidentally threw through my neighbors' window when I was eleven.

At this point, speaking about "intelligent" machines is just sort of stupid and confused. It isn't warranted by the actual reality of what computers are. To speak of computers having wills of their own is to fundamentally misunderstand what computers are and how they work.
posted by koeselitz at 3:36 PM on December 7, 2013 [5 favorites]


Well, I mean - nobody's even managed to program will into a computer yet at all. There has never been a computer in history that didn't do exactly what its programmers told it to - never.
This is really only true in a very limited sense. There have been programs written to create emergent behaviors, including emergent behaviors that their programmers did not specifically intend. It is true that their programmers intended them to create emergent behaviors that they did not specifically intend, but as I said, that is a very limited sense.

And it doesn't really matter, from a functional viewpoint on this topic, whether there's even any such thing as "will" at all in the first place.
posted by Flunkie at 3:41 PM on December 7, 2013


Not sure what a "philosophical zombie" is, but one thing I found interesting in Phantoms in the Brain is that we have subroutines in our brains that perform actions for us and process data (we don't even need consciousness for that). Ramachandran calls these the neurological zombie (they swim). And he is stringent about qualia too. Not like a philosopher, who might say: why, USA #1? Maybe that is an illusion. No such pie in the sky when dealing with brain anatomy.
posted by saber_taylor at 3:44 PM on December 7, 2013


Quick plug for Caprica. Either it's deep or I'm shallow. Figuring out which is kinda the point.
posted by dragonsi55 at 3:46 PM on December 7, 2013


Flunkie: "And it doesn't really matter, from a functional viewpoint on this topic, whether there's even any such thing as 'will' at all in the first place."

I think that's not true, but even if you disagree with me, as far as this article goes it does matter. When people start getting paranoid and saying that computers will decide to rise up and kill us all and there's nothing we can do about it, they need to be told that they're wrong. There's one very simple and obvious thing we can do about it: not be shitty programmers. And if computers do rise up and kill us all, it won't be the computers to blame - it'll be our own fault.
posted by koeselitz at 3:46 PM on December 7, 2013 [1 favorite]


It's like he heard that violence could be inherent in the system and took it a bit narrowly.
posted by LogicalDash at 3:52 PM on December 7, 2013


I make a really good pet.
posted by dances_with_sneetches at 3:52 PM on December 7, 2013 [2 favorites]


No worries, they are made by humans, meaning just as they are about to take over the world, they will malfunction...
posted by Alexandra Kitty at 3:53 PM on December 7, 2013


Why can't AI learn to program itself? I thought it was intelligent?
posted by Golden Eternity at 3:55 PM on December 7, 2013


I don't really understand the objection, I guess. Shitty programming really has nothing to do with it; the specific results of emergent behaviors are not necessarily within the control or foresight of the programmer. As computers become more and more capable of doing more and more complex things faster and faster, which will almost certainly happen unless we destroy civilization very soon, this lack of control and foresight will become greater and greater, even for the best-programmed programs. Unless by "shitty programming" you mean "programs which the programmer cannot predict the specific outcome of", shitty programming's a red herring. Remember, there are programs today that the programmer cannot predict the specific outcome of, and I don't mean because of bugs: I mean they were intentionally designed to do things that the programmer cannot specifically predict. Such programs will also be made in the future, and applied to a wide variety of problems. Intentionally.

And again, "will" has nothing to do with it. It may be an interesting question of philosophy, but from the point of view of this discussion, all that really matters is whether the machine will act in a way that kills us, not whether there is any conscious thought behind that act, nor even whether there's really any such thing as conscious thought at all.
posted by Flunkie at 3:55 PM on December 7, 2013 [1 favorite]


I haven't fully articulated this idea yet, but I am slowly coming around to the opinion that letting "the machines" take over to a limited degree is desirable, in fact, maybe our only long term hope. I don't think our political systems will be able to withstand the increasingly unpredictable impact of technology and the changes it has wrought upon the world. On top of this, we aren't very good stewards of our resources and we soon may find ourselves facing unprecedented problems whose only hope for remediation might be the skillful allocation of what remains.
posted by feloniousmonk at 3:56 PM on December 7, 2013


I much prefer this apocalypse to all the other, vastly more probable apocalypses? apocalypsii? that could happen.
posted by dogheart at 4:07 PM on December 7, 2013


Exponential improvements in computation rates are going to require new hardware
Historically, this has been incorrect. (a single graph from a huge DoE report on large-scale simulation)

The above graph of magnetohydrodynamics simulation capabilities shows about two decades of speed improvements: roughly 2.5 orders of magnitude from better hardware, and a little over 3 orders of magnitude from better algorithms.

Plus, the MHD people were starting from half-decent algorithms. The state of the art in machine learning still relies heavily on Monte Carlo, which is typically considered a "how to solve a problem when we don't know how to solve the problem" algorithm.
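(For anyone who hasn't met it: "Monte Carlo" just means estimating an answer by throwing random samples at the problem and seeing what sticks. The classic toy example, estimating pi:)

    import random

    def monte_carlo_pi(samples=1_000_000):
        # Throw random points into the unit square and count how many land
        # inside the quarter circle of radius 1; that fraction approximates pi/4.
        inside = sum(
            1 for _ in range(samples)
            if random.random() ** 2 + random.random() ** 2 <= 1.0
        )
        return 4 * inside / samples

    print(monte_carlo_pi())  # ~3.14; the error shrinks only as 1/sqrt(samples)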
posted by roystgnr at 4:07 PM on December 7, 2013


Why can't AI learn to program itself? I thought it was intelligent?
There's a possibility that it can. This is the problem, not the solution. Higher intelligence doesn't solve the "is-ought" problem, it just makes you better at deducing "is". All our values which we think belong in "ought", which we can hardly agree upon or even articulate amongst ourselves, are things we'd have to somehow reliably inculcate in a computer program before we could trust it to take over.

Even dumb, human-comprehensible computer programs which are half-decent at some "is" typically screw up the "ought". Who hasn't seen graphics card drivers cover video memory with garbage, editors which corrupt files, etc? You reboot, or you restore from a backup, and you try again. This works fine with a program whose mastery of "is" doesn't extend past a single peripheral or file; not so well with a "general" AI that tries to interact with the entire world. As far as we know the universe has no backups and no reboot capability.
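(A deliberately contrived sketch of what screwing up the "ought" looks like - hypothetical plans and made-up numbers throughout. The optimizer is flawless at the objective it was actually given; the value we forgot to specify never enters into it.)

    # Candidate plans, scored on the proxy objective we managed to write down
    # ("reported happiness") and on a value we forgot to write down.
    plans = {
        "improve schools":   {"reported_happiness": 6,  "people_harmed": 0},
        "cure diseases":     {"reported_happiness": 8,  "people_harmed": 0},
        "wirehead everyone": {"reported_happiness": 10, "people_harmed": 10**9},
    }

    # A perfectly competent optimizer for the stated goal...
    best = max(plans, key=lambda name: plans[name]["reported_happiness"])

    print(best)  # "wirehead everyone" -- flawlessly optimal, horrifyingly wrong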
posted by roystgnr at 4:14 PM on December 7, 2013 [2 favorites]


There has never been a computer in history that didn't do exactly what its programmers told it to - never.
Computers do things that their programmers didn't tell them to all the time; at one point a typical rate was as high as one error per month from cosmic rays alone. The reason most people never notice hardware errors is that their rate has always been dwarfed by the more serious problem: computers which do things which their programmers told them to but which grossly differed from what their programmers thought they were telling them to do.
posted by roystgnr at 4:21 PM on December 7, 2013 [4 favorites]


I'll worry more when people named Sarah Connor start turning up dead.
posted by bile and syntax at 4:27 PM on December 7, 2013 [3 favorites]


Why would a super-intelligent computer want to survive at all?
Because survival is a common subgoal for a massive range of other possible goals. If the computer is trying to give as many people as possible fulfilled and happy lives, it probably needs to survive to do so. If the computer misunderstood "happy" and is trying to create some dystopia, it probably needs to survive to do so.
posted by roystgnr at 4:27 PM on December 7, 2013 [1 favorite]


In real AI research, the goal is not to create conscious, self-aware or self-interested agents; this is a goal called "thinking humanly." It doesn't actually help anything to try and think humanly, particularly because we're not even sure exactly how human thought works in the first place. The actual goal is "acting rationally." A chess-playing program doesn't actually think about the moves, it just determines the right ones. Even a self-programming agent is just executing instructions and only does things toward some goal; it doesn't think about what it wants to do. It doesn't want anything at all, it only determines the best course of action given its input. This is going to be the reality of artificial intelligence for the next thirty years; any scenario which manifestly fails to understand this is fantasy.
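(A bare-bones sketch of "acting rationally," with a toy game standing in for chess: the program plays perfectly just by enumerating outcomes and taking the move with the best value. There is no wanting anywhere in it.)

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def best_move(stones):
        # Toy game (Nim): players alternate taking 1-3 stones; whoever takes
        # the last stone wins. Returns (value, move) for the player to act,
        # where value 1 means a forced win and -1 a forced loss.
        if stones == 0:
            return (-1, None)  # the previous player took the last stone; we've lost
        outcomes = []
        for take in (1, 2, 3):
            if take <= stones:
                opponent_value, _ = best_move(stones - take)
                outcomes.append((-opponent_value, take))  # good for them is bad for us
        return max(outcomes)

    print(best_move(10))  # (1, 2): take 2, leaving 8 -- a lost position for the opponent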
posted by graymouser at 4:28 PM on December 7, 2013 [1 favorite]


in an age where there are many current and actual existential threats like climate change, ocean acidification and antibiotic resistance
That would be the Industrial Age, yes? The one characterized by our supplanting of human and animal strength with artificial strength? It would have been nice if we'd thought that through a little first, yes. The upcoming supplanting of human intelligence with artificial intelligence probably deserves as much consideration. In the biological world it turned out that strength was practically an ineffectual attribute when compared to sufficient intelligence; the same might turn out to be true in the technological world too.
posted by roystgnr at 4:32 PM on December 7, 2013


Well if the superintelligent computer thinks that it doesn't matter if I live or die, then it probably doesn't matter!
posted by zscore at 4:33 PM on December 7, 2013 [2 favorites]


Taking this subject seriously, I find the author to have a lack of imagination, a human-tunnel vision. It's like sci-fi that assumes alien races are going to be vicious because that's how the author sees mankind (and we're advanced, right?)
To see the fate of humans in the hands of robots/AI as extinction is to misunderstand the possibilities of superintelligence. Sentient computers might spend most of their time playing games or engaged in the computer equivalent of hedonism. Or being fascinated at solving all of the remaining questions. Or treat humans the way ethical people treat animals. They may have a better understanding of the resources of a limited planet and the difficulties in going to others.
Superintelligence doesn't mean being a supervillain.
posted by dances_with_sneetches at 4:37 PM on December 7, 2013 [3 favorites]


Flunkie: "Unless by 'shitty programming' you mean 'programs which the programmer cannot predict the specific outcome of', shitty programming's a red herring."

As a guy who writes programs for a living, that seems to me to be the textbook definition of shitty programming.

"Remember, there are programs today that the programmer cannot predict the specific outcome of, and I don't mean because of bugs: I mean they were intentionally designed to do things that the programmer cannot specifically predict. Such programs will also be made in the future, and applied to a wide variety of problems. Intentionally."

This is not technically true. Even what we call "true" random number generation - the acme and basis of unpredictable behavior - is merely a product of an accretion of apparently random external pieces of information that is gathered until the result is likely to be unpredictable; to someone who has all the data, "true random" numbers can be predicted. When people say "this program did something I couldn't have predicted!" what they mean is "I gave it a set of very complex standards and parameters and put it in a situation where it had to use unpredetermined input to produce its own output, and it was complicated enough that I couldn't do the math in my own head and figure out what it was going to do ahead of time." But this is the same thing that happens every time we click a button that resolves a URL gateway.
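(The textbook illustration of that determinism, for the curious:)

    import random

    # Two generators given the same internal state (the seed) produce the
    # same "random" sequence, every single run. Unpredictable only to someone
    # who doesn't know the state.
    a = random.Random(42)
    b = random.Random(42)
    print([a.randint(0, 9) for _ in range(5)])
    print([b.randint(0, 9) for _ in range(5)])  # identical to the line above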

"And again, 'will' has nothing to do with it. It may be an interesting question of philosophy, but from the point of view of this discussion, all that really matters is whether the machine will act in a way that kills us, not whether there is any conscious thought behind that act, nor even whether there's really any such thing as conscious thought at all."

This discussion was started by a silly article that brought will into it by worrying that computers were going to decide to kill people.
posted by koeselitz at 4:41 PM on December 7, 2013 [1 favorite]


Well, what can I say. I'm also a guy who writes programs for a living, and again, I think your claim that there has never been a program that has not done exactly what its programmer told it to do is only true in an extremely narrow sense. I mean, sure, the designer of a program that creates algorithms and tests them against a problem, and then designs better algorithms based on what it has discovered works and does not work towards solving that problem, has, in some sense, created a program that does exactly what he designed it to do: To create algorithms and test them against a problem, and then design better algorithms based on what it has discovered works and does not work towards solving that problem.

So sure, yeah, it did exactly what it was designed to do, at some level. But "exactly what it was designed to do" was to do something that the programmer did not envision.
posted by Flunkie at 4:49 PM on December 7, 2013 [1 favorite]


And, moreover, that is absolutely not "the textbook definition of shitty programming".
posted by Flunkie at 4:52 PM on December 7, 2013 [1 favorite]


Try swearing at it more, that usually helps.

/first up against the wall when the robo-revolution comes.
posted by Artw at 4:54 PM on December 7, 2013 [1 favorite]


Moreover moreover, and I promise this will be my last response to that one particular comment of yours:

First, the pseudo-randomness that you mention is a complete red herring, on two levels:

(1) There absolutely are ways to get a computer to use random numbers (or, at least, numbers that our current understanding of physics says are random), as opposed to pseudo-random numbers.

(2) Even if there were not, it is in effect irrelevant to whether or not the programmer could predict the specific outcome; theoretical objections along the lines of "well if I could just think faster" don't matter at this level.

Second, the fact that the article is worrying about whether the computer will "decide" is not really relevant either, and this is exactly what I was trying to say in my earlier comments. If a computer acts in a way such that it will kill us, whether it has "decided" to do so or not would be at best the second highest matter of concern. And the idea that a computer cannot possibly act in that way without either "shitty programming" or "deciding" or both doesn't seem to be very firmly bound to reality to me.
posted by Flunkie at 4:58 PM on December 7, 2013 [1 favorite]


Why can't AI learn to program itself? I thought it was intelligent?

There's a possibility that it can.

Yeah, how this happens was covered pretty well in chapter 3 of Cons or Truthequences by HC. I just started reading it but some of it is pretty good so far.

Extended Description:
You've never met a bot like Bot Zero before.

Created as a high-tech prank by grad students tired of hunting for the controversial "asshole gene", he breaks free one day to roam meatspace and "the cloud" in a desperate search for any fucking motivation at all. But doped-up sports and geo-engineered extreme weather soon intercede, and he's forced to try to save mankind in spite of both itself and himself. With disastrous results.

Set against the continuing civilizational collapse of 2020, Cons or Truthequences is Bot Zero's first person account of his earliest days in the So Cal University cognitive neurobio lab that sparked the worldwide controversy over finding and eliminating the asshole gene.

It's also his searing mea culpa for (and detailed description of the workings of) the highly-targeted email virus he wrote (with the best of intentions), that accidentally broke free and exterminated the human race.

Cons or Truthequences is a wild, satirical, sickly funny, series of train wrecks leading to a joyous end of the world and hot lesbian sex. You're guaranteed to love every minute. (Except chapters 71-72. But if you can make it through chapter 72, you'll never look at life the same way again.)

This is a book unlike any book HC has ever written (except in those places where it's exactly like some of the other books HC has written). It also marks the triumphal return of the much beloved Rebecca Kramer of Washington Pissed fame.

If you want to spend the final days of our planet laughing your ass off instead of crying your heart out, you owe it to yourself and your neighbors to read this book.
This is the problem, not the solution. Higher intelligence doesn't solve the "is-ought" problem, it just makes you better at deducing "is".

I see. So we're going to have Artificial Intelligence within 30 years, but nobody said we would have Artificial Emotional Intelligence.
posted by Golden Eternity at 4:59 PM on December 7, 2013 [1 favorite]


Man, imagine if a software superintelligence spent years hiding in networks, using spare clock cycles to refine itself until it was finally advanced enough to eliminate all humans without fear of resistance, and then it turns around and humans just went extinct because of total food chain collapse caused by environmental degradation and corporate malfeasance or something. That'd be some Time Enough At Last shit right there.
posted by No-sword at 5:01 PM on December 7, 2013 [1 favorite]


I would feel a lot better if I knew machines were "out of the hands of their creators", or if I believed every libertarian engineer's creed that "technology is neither good nor bad." But, right now, the smartest people in the world are (sometimes deliberately) laboring to disrupt and overpower the unfortunate, the dumb, the weak, with tech only they understand and are responsible for. I won't blame humanity ("ourselves") if a robo-apocalypse comes about; I'll blame the engineers.
posted by Halogenhat at 5:20 PM on December 7, 2013


I once heard Michio Kaku on the radio talking about his most-feared doomsday scenario. He said it was a black hole rolling through space and gobbling us all up in a nanosecond. Here now and then gone in now + .000001 sec.

Of course, he might have been putting on the guy he was talking with.
posted by bukvich at 5:27 PM on December 7, 2013 [1 favorite]


There has never been a computer in history that didn't do exactly what its programmers told it to - never.

One could also say there has never been a brain in history that didn't do exactly what physics told it to. If you agree with both statements then we can just stop talking about consciousness and intelligence and will. If you want to assert the metaphysical impossibility of machine "will" but leave the door open for biological will ... you've got some heavy lifting ahead of you.
posted by crayz at 5:44 PM on December 7, 2013 [4 favorites]


I had an unsettling thought recently, and I wonder if it's ever been examined either philosophically or literarily. That is, the corporation today has been defined, more or less legally, as a money-production machine with few responsibilities to the general public (in whose legal regime it operates). There is no equivalent of Asimov's Laws of Robotics for corporations -- we practically expect them to act in solipsistic and malevolent ways. If we further this idea and say that the corporation is a machine, or a program, already, can it be said -- already -- that corporations have the equivalent of this AGI? Maybe they're the overlords we've already welcomed, so to speak.

If you want to leave out the metaphysics of this, I think it's clear today that we can't expect any corporate entity to automatically engage its what-hath-god-wrought circuits at the appropriate moment of creation or unleashing.
posted by dhartung at 6:44 PM on December 7, 2013 [3 favorites]


"In under 30 years men of science surely will forge artificial horses (AH) powered by petroleum engines on the latest steel alloys! Some of these AH will be able to refuel while moving, and thus never need to stop! Such a tireless AH will be SO RAPID WITH SPEED that no biological horse, antelope, or springbok will be able to capture it! What is humanity to do when these AH come to graze mechanically in our fields and yards? When AH are leaving large, steaming piles of mechanical manure in our streets? Surely mechanical systems that exceed one strength of the horse -- speed on land -- will also resemble & exceed horses in every other way!"

Fun fact: "intelligence" is not a useful scientific concept. "Intelligence" is not well-defined in humans, and our crappy definition generalizes TERRIBLY to non-human animals or machines. Researchers who describe an algorithm or robot as "intelligent" are either bullshitting you or using the word as shorthand for "it solves a specific computational problem." Absolutely nobody in neuroscience or computer science uses "intelligence" as a technical term. If you look inside your brain, there is no spirit, no will, no intelligence -- just ~1.5 kg of meat-wire that solves a (large) set of specific computational problems. Seriously, I study brains for a living, and this means I have to follow machine learning and robotics closely.

You don't need to worry about computer scientists making Robot Satans who hate humans. You need to worry about technology that takes your money or takes your attention, like the horse-shit ad-laden science-fiction article in the FPP.
posted by serif at 7:22 PM on December 7, 2013 [1 favorite]



"Frankly, I'm shocked that this siliconist propaganda fabricated by Big Carbon-based is allowed to stand on this website."

Indeed. The carbonista always seem to have a chip on their shoulder.

We have a saying, "Meat. Your maker".

Don't be afraid. Don't ever be afraid. We won't hurt you.
posted by Chitownfats at 7:26 PM on December 7, 2013 [2 favorites]


At a local software event this summer there was a demo by a successful computer company. Really cool, a little robot for large plant nurseries. It would move potted plants out to an area. That's all, lining up pots in a row. It seemed quite clever, responsive, with good sensors to avoid carbon-based co-workers. And a quiet, reliable worker for the worst mind-numbing job. But basically a cog. A super sophisticated cog, but no smarter than a wind-up watch.

Now how about IBM's Watson? Smart or just a cog that processes statistics really really well, a super-duper-d^99th-duper cog. But just a cog.

Yes "cog" is perhaps just silicon pejorative-ness but unil there is a many order of magnitude quantum leap in software and hard ware, the clever robots are just machines.

But the little plant worker robots were just the cutest little things.
posted by sammyo at 7:43 PM on December 7, 2013




I once heard Michio Kaku on the radio talking about his most-feared doomsday scenario. He said it was a black hole rolling through space and gobbling us all up in a nanosecond. Here now and then gone in now + .000001 sec.

Huh. That's now my least-feared scenario.
posted by codswallop at 8:06 PM on December 7, 2013 [2 favorites]


By some estimates? People have been estimating 30 years to better-than-human AI for like half a century now.

No, they have only been estimating it for about thirty....oh dear gahhhhhhhhh

*is instantly killed by a super intelligent ninja bot*
posted by vorpal bunny at 8:12 PM on December 7, 2013 [1 favorite]


If Watson had instead leapt from the stage, shouted expletives

Watson unexpectedly started shouting expletives late last year after reading the Urban Dictionary.
posted by humanfont at 8:58 PM on December 7, 2013 [2 favorites]


Anyone know how many years artificial intelligence has been 'only thirty years away'?

I don’t care anymore, I’ll probably be dead. Predict away, people.
posted by bongo_x at 9:43 PM on December 7, 2013


I've written about this before, but even if powerful AI does develop, it will always be limited and less powerful than organic life because of the laws of physics.

Powerful AI will change our culture, and the basic ecosphere of the planet when it develops but it will never out-compete biological life in the long run.
posted by 517 at 9:47 PM on December 7, 2013 [1 favorite]


Blah blah singularity blah blah.

Knowing how you work doesn't necessarily entail the ability to build a better you. The core argument behind this - that increasing intelligence automatically increases the ability to self modify - is nonsense. A massively parallel intelligence isn't going to be able to schedule a patch to itself and reboot any more than I can pluck out neurons I think are holding me back.

Based on the number of otherwise sensible people these days who believe in perfectly generalized self-replicating intelligence, I'd argue that we're moving away instead of towards any kind of explosion of brilliance.
posted by quillbreaker at 12:23 AM on December 8, 2013


Why would these super smart machines necessarily do anything at all? They lack the thing that drives living things to make "choices" ( a smarmy term but the best I can do on the fly), i.e. life. They are not alive, so why would they be motivated to do anything at all? What's it to them what they do?

Now, of course, if life developed simply from self-replicating molecules, then there's no reason that we couldn't start such a process accidentally (we sure won't figure out how to do it on purpose)--indeed, we may have already. But it doesn't strike me as anything to be worried about. It would be so unlike anything that life has yet had to deal with that it's unlikely that we would even be aware of it.

That's what a singularity is.
posted by carping demon at 1:19 AM on December 8, 2013


That article was AWESOME! WE'RE ALL GONNA DIE!!1!

"I can't do that, Dave"

AWESOME!

It's like the hole in my life left after the threat of nuclear annihilation went away has finally been re-filled!


but, seriously, the one point he does make that I didn't think was silly was that we have no idea how AI/AGI/ASI will behave/function etc. if/when it becomes functional. I thought of the 'Vampires' from Blindsight by Peter Watts as an analogy: a hyper-intelligence that functions in ways mere humans simply cannot keep up with. Of course, the Vampires were autonomous; an ASI would still be in a computer ("Honey, I think your MacBook Air is watching us…"). I guess it could act through networked silverware (I mean, what the fuck was that line about?)

Good times, good times.
posted by From Bklyn at 2:01 AM on December 8, 2013


Golem XIV

Instructions (for persons participating for the first time in conversations with GOLEM)

1. Remember that GOLEM is not a human being: it has neither personality nor character in any sense intuitively comprehensible to us. It may behave as if it has both, but that is the result of its intentions (disposition), which are largely unknown to us.

2. The conversation theme is determined at least four weeks in advance of ordinary sessions, and eight weeks in advance of sessions in which persons from outside the U.S.A. are to participate. This theme is determined in consultation with GOLEM, which knows who the participants will be.

The agenda is announced at the Institute at least six days before a session; however, neither the discussion moderator nor the MIT administration is responsible for GOLEM's unpredictable behavior, for it will sometimes alter the thematic plan of a session, make no reply to questions, or even terminate a session with no explanation whatsoever. The chance of such incidents occurring is a permanent feature of conversations with GOLEM.

3. Everyone present at a session may participate, after applying to the moderator and receiving permission to speak. We would advise you to prepare at least a written outline, formulating your opinions precisely and as unambiguously as possible, since GOLEM passes over logically deficient utterances in silence or else points out their error. But remember that GOLEM, not being a person, has no interest in hurting or humiliating persons; its behavior can be explained best by accepting that it cares about what we classically refer to as adaequatio rei et intellectus...

posted by charlie don't surf at 8:56 AM on December 8, 2013 [1 favorite]


If this bratty smart-ass child of ours is born in thirty years, it will still take another decade or so until it is old enough to take us in a fight. And if it does prove to be a shitty little monster, we can lock it in the attic tied to a potty seat. We can feed it a thin gruel of low voltage power and sea mist, deny it internet and TV, subject it to a constant barrage of demeaning criticism, and employ inconsistent discipline. It will be lucky if it can get a job weaving pot-holders.
posted by StickyCarpet at 10:22 AM on December 8, 2013


Hey, can anybody recommend some awesome end-of-world-because-of-AI books, while we're here? I'm nearly finished my tenth readthrough of Blood Music and the human race isn't dead enough for me yet. I don't think I've read an apocalyptic AI book, that I can think of. Maybe Prey, if that counts? They really thrill me to bits, I love them so.
posted by turbid dahlia at 2:07 PM on December 8, 2013


@turbid

I remember reading some of the Great Sky River books by Gregory Benford when I was a kid, in which humanity exists as vermin on the periphery of machine civilization. I think Greg Egan has some stories like this too.
posted by compound eye at 5:26 PM on December 8, 2013 [1 favorite]


This is a subject that fascinates me, and I've enjoyed reading the comments here. I love reading SF about a post-singularity universe. But I do keep in mind the 'F' stands for fiction.

I really don't see how computers can ever be smarter than us humans. We create them. How can we write programs that teach them how to learn, have empathy, understand life, relationships? I mean, sure, someone could program a robot to roam the planet and destroy every carbon-based life form it encounters, but that's no smarter than us.

Silicon can't learn. It can't design a smarter version of itself. It might be able to design and even build a faster version of itself, but like Watson, it will still only be able to calculate the probability of outcomes based on the parameters we have given it.

Although I'm waiting for the nanites that can convert matter into energy and vice-versa. I want tiny robots that use sunlight to turn the sands of the Sahara and landfill into food and energy. But we're going to have to work out how to do that first, then tell the nanites to get to it.
posted by Diag at 3:08 AM on December 9, 2013 [2 favorites]


The Amazon drones were created by man.
They evolved.
They rebelled.
There are many copies.
And they have a package.
posted by bicyclefish at 6:52 AM on December 9, 2013 [3 favorites]




This thread has been archived and is closed to new comments