It has become death, destroyer of worlds
July 27, 2015 11:28 AM

Over a thousand scientists and public intellectuals, including Stephen Hawking, Daniel Dennett, Steve Wozniak, Noam Chomsky, and Elon Musk, have signed an open letter calling for a ban on the development and deployment of "offensive autonomous weapons beyond meaningful human control", i.e., the coupling of autonomous Artificial Intelligence to weapons systems.
posted by Rumple (83 comments total) 26 users marked this as a favorite
 
"Those fears," said SkyNet CEO Albi Backerson, "are without merit."
posted by grumpybear69 at 11:32 AM on July 27, 2015 [37 favorites]


I have little fear of a hypothetical Skynet or any illusions that a given technology won't be used to kill human beings no matter what some impotent registry says. I'm more concerned about bugs, and the fact that people seem to be very ignorant of the limitations of the technology in question at the margins of the statistical distributions it models (see: Google image-labeling gaffes). No need to worry about machine-gun tripods -- self-driving car accidents and biased training data will be enough to damage AI's reputation.

YOU HAVE 10 SECONDS TO COMPLY...
posted by smidgen at 11:39 AM on July 27, 2015 [2 favorites]


This is why I am reaaaalllly nice to my toaster, you just never know what the MOST HUMANS MUST DIE criteria will be.
posted by Cosine at 11:40 AM on July 27, 2015 [2 favorites]


Dodge this.
posted by phoebus at 11:41 AM on July 27, 2015 [2 favorites]




I think there's some tendency to assume this means "drones," but it could also be meaningful for what the jingoists are calling "cyberweapons." In either case, heuristic automation short of what could be called "AI" is about as bad as (or worse than) AI.

In terms of heuristic automation, this already seems to be taking place: the major governments (and vaguely competent for-hire firms) appear to be conducting, in part at least, automated, heuristic-driven exploitation of systems.
posted by Matt Oneiros at 11:42 AM on July 27, 2015


This sentence is just fantastic and bears quoting.
The key question for humanity today is whether to start a global AI arms race or to prevent it from starting.
Sadly I'm a pessimist and think it's impossible to prevent the development of autonomous weapons. The tech is just too easy to hack.

Bonus GIF
posted by Nelson at 11:42 AM on July 27, 2015 [5 favorites]


Self-driving car accidents and biased training data will be enough to damage AI's reputation.

Every single one of the Google Car's accidents has been caused by other drivers.
posted by anotherpanacea at 11:43 AM on July 27, 2015 [14 favorites]


The singularity is near.
posted by Sir Rinse at 11:44 AM on July 27, 2015


I am sure this letter will make a tremendous difference and cause the human race to reconsider its choices.
posted by entropicamericana at 11:47 AM on July 27, 2015 [8 favorites]


“... like a berserk raptor thrown into a nest of hibernating kittens...”
posted by zarq at 11:49 AM on July 27, 2015


When you write out a shopping list, do you include conditionals? "Get this, and if this is available get this also, but not that brand of it?" Congratulations, your grocery-buying process is Turing-complete. And being afraid of the rise of AI is like worrying that if your grocery list gets long and complicated enough, it will eventually start doing the shopping itself.

We have tools now that can see a lot further into the dark than we can, make decisions about what they find and then act on them immediately, deploying a staggering amount of force with no human judgement or moral framework behind it whatsoever -- and that's considered a feature, not a bug, in a conflict where the outcome is determined largely by response time. The argument is that human intervention is too slow to be anything but a deadweight loss in a conflict, but the reality is that you want to cut the human out of that loop so that nobody can hold you accountable for why your drone blew up a school bus.

And that will happen because people largely don't want to realize that software is just a bunch of decisions made by somebody else somewhere else about how this machine will act right now. Not because the hardware has somehow become self-aware, but because the people who programmed it largely weren't. The problem isn't that computers will get too smart, the problem is that people will always be too quick to wash their hands of responsibility.
posted by mhoye at 11:52 AM on July 27, 2015 [24 favorites]


Every single one of the Google Car's accidents has been caused by other drivers.

How did you find that link?

I rest my case.
posted by gauche at 11:52 AM on July 27, 2015 [29 favorites]


Maciej Ceglowski's take:

Let me give you a little context here. This little fellow is Caenorhabditis elegans.... The absolute state of the art in simulating intelligence is this worm. We can simulate its brain on supercomputers and get it to wiggle and react, although not with full fidelity.

We barely have computers powerful enough to emulate the hardware of a Super Nintendo.

But since unreasonably fearful people helm our industry and have the ear of government, we have to seriously engage their stupid vision. At best, having the top tiers of our industry include figures who believe in fairy tales is a distraction. At worst, it promotes a kind of messianic thinking and apocalyptic Utopianism that can make people do dangerous things with all their money ... Grown adults, people who can tie their own shoes and are allowed to walk in traffic, seriously believe that we're walking a tightrope between existential risk and immortality.

posted by entropone at 11:54 AM on July 27, 2015 [14 favorites]


At best, this means they'll lie about working on it. In the cold war sort of mindset which still governs these matters, partially or fully autonomous weapons are like nuclear weapons in that it doesn't really matter if you have a moral objection to them, because if a war starts and the other side has them, they'll use them. If you can't respond like for like, they may even use them to strike first, without warning.
posted by feloniousmonk at 11:54 AM on July 27, 2015 [2 favorites]


I think the much more plausible threat isn't from berserk AI, it's from autonomous weapons being abandoned after the conflict they're used in and killing civilians years later. Kind of like land mines. You can imagine how this could happen with sentry guns, for instance, or very small fliers.
posted by Mitrovarr at 11:59 AM on July 27, 2015 [8 favorites]


. . . with no human judgement or moral framework behind it whatsoever . . .

Do human judgement and moral frameworks have that great a track record?
posted by Sir Rinse at 12:01 PM on July 27, 2015 [5 favorites]


I'm starting to lean toward Peter Watts's pessimism. The apocalypse doesn't need sentience, conscience, or a Turing Test, just the determinism of game-theory optimization algorithms wired to weapons of mass destruction. In fact, Echopraxia argues that's already the case, it's just that our brains maintain the self-delusion that we have free will on the kill switch.
posted by CBrachyrhynchos at 12:03 PM on July 27, 2015 [5 favorites]


I think the much more plausible threat isn't from berserk AI, it's from autonomous weapons being abandoned after the conflict they're used in and killing civilians years later. Kind of like land mines. You can imagine how this could happen with sentry guns, for instance, or very small fliers.

Once again, Cordwainer Smith beat us all to it with his manshonyaggers (from the German Menschenjäger, "man-hunters").

Aren't CIWS/Goalkeeper already just about fully autonomous? Like, you turn them on and clear them to fire, but they shoot at whatever piques their interest?
posted by ROU_Xenophobe at 12:05 PM on July 27, 2015 [1 favorite]


Welp, if things go really, really bad, we'll always have phenomenology to fall back on ...
posted by ZenMasterThis at 12:05 PM on July 27, 2015 [1 favorite]


Couldn't autonomous weapons also confer real benefits compared to human-operated weapons, though? Yes, in the hands of unscrupulous operators, they could facilitate atrocities, but the same is true of ordinary weapons and soldiers. (Rwandan genocidaires, for example, needed only machetes and cheap transistor radios). I would submit, therefore, that the worst-case scenario for autonomous weapons is something of a wash.

The best-case scenario might be significantly better than a military built entirely on human-operated weapons, though. Autonomous drones (as opposed to those with human operators) do not get PTSD. They carry no grudges. They carry no ethnic hatreds into battle. A soldier facing an angry mob might panic, but an autonomous weapon programmed not to fire unless fired upon will face the prospect of annihilation with perfect equanimity.

In short: The evils to be wrought with autonomous weapons seem comparable to those we're already perfectly capable of perpetrating with conventional weapons, but autonomous weapons may be capable of a kind of dispassionate restraint that could help a military live up to humanitarian ideals. In that way, an autonomous weapon may be More Human Than Human. ("More Human Than Human" is a registered trademark of the Tyrell Corporation.)
posted by Mr. Excellent at 12:05 PM on July 27, 2015 [4 favorites]




Every single one of the Google Car's accidents has been caused by other drivers.

From the AI's point of view I'm sure there's a simple fix -- just eliminate the other drivers!
posted by neckro23 at 12:12 PM on July 27, 2015 [4 favorites]


When you write out a shopping list, do you include conditionals? "Get this, and if this is available get this also, but not that brand of it?" Congratulations, your grocery-buying process is Turing-complete.

If your grocery list is a tree of conditionals ending in computable predicates, then it's primitive recursive, and hence not actually Turing-complete.
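
To make the distinction concrete, here's a toy sketch in Python (the store interface is invented): a finite tree of conditionals evaluates each branch at most once and always halts, while Turing completeness requires unbounded loops you can't always prove terminate.

    # A grocery list as a finite tree of conditionals: every branch is
    # checked at most once, so this always halts. Primitive recursive.
    def shop(store):
        cart = []
        if store.has("bread"):
            cart.append("bread")
            if store.has("butter") and not store.has("margarine"):
                cart.append("butter")
        return cart

    # Turing completeness needs unbounded iteration -- e.g. the Collatz
    # loop, which nobody has proved halts for every starting n.
    def collatz_steps(n):
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps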
posted by Pyry at 12:13 PM on July 27, 2015 [16 favorites]


an autonomous weapon programmed not to fire unless fired upon

I seem to find the idea of capital punishment for property damage or attempted property damage to be somewhat less satisfying than you do. Once you take the soldier out of the equation, that's what this becomes.
posted by biffa at 12:14 PM on July 27, 2015 [11 favorites]


Well, we already have the self-aiming rifle (previously), which is already a terrifying escalation in accuracy over human sharpshooters. Imagine an Aurora shooter with a weapon that can precisely target his random shooting for maximum fatalities.

Luckily, Project Pluto (also previously) was deemed "too provocative."
posted by Existential Dread at 12:16 PM on July 27, 2015


Honey Badger, version 8, don't care
posted by Brandon Blatcher at 12:20 PM on July 27, 2015


I fear that robotic weapons are to human-operated weapons what firearms were to all weapons that preceded them in history: a technology that fundamentally changes the course of who wins and loses in most conflicts.

Once small armored drones become inexpensive enough to produce and easier to coordinate, then you no longer have to command an army of human beings. The atomic bomb was one such invention, but I fear drones much more because they lack the potential deterrent effect of mutually-assured destruction.

In the past, if you were a feudal lord, a head of state, or the would-be leader of a military junta, you had to bribe/inspire/intimidate some group of human beings to act as soldiers in your army. In the age of cheap armored drones, what is going to stop the rich and powerful from lording over everyone with completely unchecked cruelty, with an army that never questions?
posted by overeducated_alligator at 12:20 PM on July 27, 2015 [8 favorites]


Professor Hawking is doing a Reddit AMA in the subreddit /r/Science. Because of Dr. Hawking's communication challenges, the format's a bit different: questions are asked and voted upon today (so popular questions rise to the top by default), and tonight or tomorrow Dr. Hawking will compose his answers, which will be added as replies to the thread. Many of the questions are related to this topic (which is, one presumes, his reason for doing an AMA), and /r/science is pretty aggressively moderated, not the usual Ask-Me-Anything free-for-all, so rudeness and digression won't be tolerated for long.
posted by Sunburnt at 12:21 PM on July 27, 2015 [3 favorites]


You can have my Aegis 2 when you pry it from my cold, dead robot hands.
posted by I-baLL at 12:27 PM on July 27, 2015 [1 favorite]


In the age of cheap armored drones, what is going to stop the rich and powerful from lording over everyone with completely unchecked cruelty, with an army that never questions?

Nothing's stopped the USA from drone-bombing civilians yet, so there's the answer to your question I guess.
posted by feckless fecal fear mongering at 12:28 PM on July 27, 2015 [3 favorites]


I'm torn. I work in AI — specifically with wireless sensor networks, but more generally in the field of sensor/actuator systems, and I go to conferences where uniformed members of the US Navy give presentations on the very tiny (unclassified) advances that have been made since the last conference on things like getting drones to fly through windows without disgracing themselves. (Turns out this is pretty hard!)

On the one hand, a lot of the breathless futurism tends to come from outside the AI community. Maciej Ceglowski, linked above, is absolutely right: artificial intelligence is pretty primitive when you talk about it in those terms. We don't have to worry about the rise of the machines any time soon.

On the other hand, I don't think that this is necessarily the concern being expressed by the signatories to this letter. I'm by no means eminent or influential in my field, and yet I've written algorithms that can leverage large-scale networks to perform some (simple) tasks that would be prohibitively expensive and dangerous for humans to perform. My work has potential military applications (whether or not I can foresee them), and unlike chemists, say, I work in a field with no treaties whatsoever (that I know of) ensuring that it is used for peaceful purposes. Once it is published, that's that. My benign aggregation algorithm can be used by absolutely anyone, for any purpose. (For me, feel free to substitute the civilian scientists in my field who truly are eminent, influential, well-cited, etc.; they certainly work with co-authors whose emails end in .mil.)

I enjoy dumping on Kurzweil as much as anyone else, but I do kind of feel that there is a legitimate complaint at the root of this letter.
posted by Zeinab Badawi's Twenty Hotels at 12:29 PM on July 27, 2015 [12 favorites]


I'm still leaning toward nuclear war as inevitable, a matter of how and when rather than whether. Autonomous drones are a few steps down my Freak the Fuck Out list.
posted by echocollate at 12:34 PM on July 27, 2015


Sorry, that was very long. In short, the crux of the letter, for me, is that
chemists and biologists have broadly supported international agreements that have successfully prohibited chemical and biological weapons, just as most physicists supported the treaties banning space-based nuclear weapons and blinding laser weapons.
You might interpret the word "successfully", as used above, with the same pessimism as me, but the intent is clear.
posted by Zeinab Badawi's Twenty Hotels at 12:35 PM on July 27, 2015 [4 favorites]



The best-case scenario might be significantly better than a military built entirely on human-operated weapons, though. Autonomous drones (as opposed to those with human operators) do not get PTSD. They carry no grudges. They carry no ethnic hatreds into battle. A soldier facing an angry mob might panic, but an autonomous weapon programmed not to fire unless fired upon will face the prospect of annihilation with perfect equanimity.


Or, alternatively, they can fire into an unarmed crowd with perfect equanimity - unlike now, where there's usually some hesitation and a lot of pressure on the cops/soldiers before they actually start killing people. And the state will never again have to worry that the army might mutiny if it's told to massacre civilians. No soldier will ever leak data or testify at Winter Soldier because they can't sleep at night.

Picture all the para-state stuff that is so immensely shitty right now - raids on civilian Palestinian settlements, drone attacks, corporate/army joint ventures against indigenous people who won't move off valuable land - and picture that all done in perfectly locked down fashion by something that will never be disloyal. That's what all this is ultimately for.
posted by Frowner at 12:37 PM on July 27, 2015 [28 favorites]


> Well, we already have the self-aiming rifle, which is already a terrifying escalation in accuracy over human sharpshooters. Imagine an Aurora shooter with a weapon that can precisely target his random shooting for maximum fatalities.

Aim is the least of your problems when shooting into a dense crowd in close quarters.

Further, this isn't a self-aiming rifle; it's more of a robotic assistant for a sniper, one who can have less skill than it currently takes as a result. This weapon is more appropriate for terrorizing a college campus from a clock tower, to take a historical example. A human user must identify possible targets (presumably other humans), select from the available targets (apply psychosis here), and then designate them, and continue to designate them as they move. This "self-aiming rifle" does nothing more than wait for the human shooter's aim to blunder into just the right spot that will, per its windage and elevation calculations, result in a hit. Utterly worthless assistance in a crowded theater; most likely it would greatly hinder a shooter.
posted by Sunburnt at 12:39 PM on July 27, 2015 [2 favorites]


I think the much more plausible threat isn't from berserk AI, it's from autonomous weapons being abandoned after the conflict they're used in and killing civilians years later. Kind of like land mines. You can imagine how this could happen with sentry guns, for instance, or very small fliers.

"A Farewell to Weapons," in Short Peace, suggests this kind of future.
posted by a lungful of dragon at 12:40 PM on July 27, 2015 [1 favorite]


Let me give you a little context here. This little fellow is Caenorhabditis elegans .... The absolute state of the art in simulating intelligence is this worm.

well that's not entirely true. simulating a brain and simulating intelligence aren't precisely the same thing
posted by p3on at 12:41 PM on July 27, 2015 [3 favorites]


Sunburnt is clearly an AI fifth columnist. "Carry on, nothing to fear from RoboRifle!"
posted by Etrigan at 12:47 PM on July 27, 2015 [3 favorites]


simulating a brain and simulating intelligence aren't precisely the same thing

Sorry, should have been clearer. That's absolutely on the money. Take edge detection: that's basically a solved problem in computer vision. Would you like to have your autonomously-patrolling drones (also a well-researched issue as of ACC 2013) shoot at things that look like people? Because we can do that, and so can you, with Google Scholar and a budget.
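
To give a flavor of how low the bar is, here's a minimal edge-detection sketch with OpenCV in Python (the filenames are made up):

    import cv2

    # Canny edge detection: the classic "solved" building block.
    img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input
    edges = cv2.Canny(img, 100, 200)  # lower/upper hysteresis thresholds
    cv2.imwrite("edges.png", edges)

That's the whole thing; everything hard was done for you years ago.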
posted by Zeinab Badawi's Twenty Hotels at 12:48 PM on July 27, 2015


I don't think we'll get the equivalent of Terminators or Berserkers soon. But we might get a better buzz bomb, one that maximizes casualties. And there's a century-old argument (including Nobel, among others) that automating and mechanizing war means you get bigger escalations of conflicts.
posted by CBrachyrhynchos at 12:50 PM on July 27, 2015 [1 favorite]


Yeah, Maciej's critique "lol AI is so dumb it can barely do a worm" is really not relevant here. You don't need general purpose intelligence. As the letter says, even something as straightforward as applying facial recognition to targeting systems raises huge, scary questions. We already have remote controlled death robots and we already have autonomous robots. It's not hard to make the leap to autonomous death robots.

I just don't see how we stop this "progress". Well-meaning scientists and engineers can all promise not to build autonomous death robots. That won't stop military developers, or crazy zealots, or lone curious hackers. What terrifies me most is just a slow feature creep of increasing automation until one day we realize no one's finger is really on the trigger any more.
posted by Nelson at 12:51 PM on July 27, 2015 [4 favorites]


"A Farewell to Weapons," in Short Peace, suggests this kind of future.

Though, short of a supply of autonomous robots that can reload existing autonomous robotic weapons (or can make new ones) as well as make copies of themselves, a dystopian future where humans hide outside of abandoned cities seems highly unlikely.

The real-world threat will be from the gradual transfer of power from somewhat unaccountable governments to entirely unaccountable private entities that will make, own and operate these killing machines for their ends (Google, et al.).
posted by a lungful of dragon at 12:58 PM on July 27, 2015 [2 favorites]


"As the letter says, even something as straightforward as applying facial recognition to targeting systems raises huge, scary questions."

This already exists.
posted by I-baLL at 12:58 PM on July 27, 2015 [1 favorite]


metal storm[*] :P

also btw...
"Oppenheimer's famous 'I am become death, destroyer of worlds' quote misunderstood, say Gita experts, actually meant as a humble statement of devotion to duty."
posted by kliuless at 1:01 PM on July 27, 2015 [2 favorites]


This just seems like a no-brainer to me. Otherwise our eulogy will be "How did this culture manage to destroy itself via the same means its storytelling had been obsessed with for the preceding 60 years?"
posted by bleep at 1:31 PM on July 27, 2015 [3 favorites]


The Gita expert cited doesn't claim to be a Gita expert, and my translation and commentary has a very different account of that verse and of the nature of the Gita in its entirety. In the interpretation I read, at least, the Gita is a religious discourse inserted into a convenient location in the epic. It describes three different types of Yoga, or religious practice, with the aim of liberation. Krishna goes beyond Karma Yoga (Arjuna's duty) to discuss Knowledge Yoga and Devotion Yoga. At which point, Krishna reveals himself to be The Ultimate Reality. Then it comes back down to Earth, reminding us of the frame story before discussing religious issues some more, then finally returning to the frame story.

tl;dr the passage in question isn't just giving Arjuna a just war doctrine, it's interpreted as establishing Krishna as Vishnu and Brahman incarnate. But I'm not a Gita expert either.
posted by CBrachyrhynchos at 1:40 PM on July 27, 2015 [1 favorite]


being afraid of the rise of AI is like worrying that if your grocery list gets long and complicated enough, it will eventually start doing the shopping itself.

Actually, it's like worrying that if your spouse _believes_ the grocery list covers every possible contingency, it actually doesn't. Hey, I got bread! (but you didn't get the cold cuts, and you got salad lettuce instead of sandwich lettuce, so you should have gotten pasta instead. Also this is the kind of bread that falls apart when you try to butter it. Also, while you were out, I accepted a dinner invitation and we already have breakfast set up for tomorrow morning, and the freezer is full, so the bread is going to go to waste. Thanks for shopping, though.)
posted by amtho at 1:49 PM on July 27, 2015


When it comes to AI and harmfulness, I always think of the Slylandro from Star Control 2 (i.e. The Ur-Quan Masters) - They purchased a self-replicating probe in an attempt to explore the universe, since they were unable to do so personally.

It was programmed like so:
   TARGET LIST (with associated Target Priority).
       Space Vessel (5).
       Transmission Source (4).
       Astronomical Anomaly (3).
       Planet Bearing Life Signature (2).
       Raw Replication Materials (1).
   PROBE BEHAVIORS (With assigned priorities).
       Communicate (5).
       Record Data (4).
       Analyze Data (3).
       Seek Replication Materials (999).
       Move to Current Target (1).
All of the values except for 999 were the defaults - They prioritized the 'Seek Replication Materials' behavior over all else, in an attempt to get them to multiply their numbers more quickly, and thus explore the galaxy more rapidly.

Of course, what actually happened was that it considered "Seek replication materials" over all else, and as a result, was hostile to everything it encountered - It became a self-multiplying attack drone.
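
You can reproduce the failure mode in a few lines (a Python sketch; the behavior names are from the game, everything else is invented):

    # Priority-based behavior arbitration: the highest number always wins.
    PRIORITIES = {
        "communicate": 5,
        "record_data": 4,
        "analyze_data": 3,
        "seek_replication_materials": 999,  # the one "optimized" value
        "move_to_target": 1,
    }

    def choose_behavior(applicable):
        return max(applicable, key=PRIORITIES.get)

    # Every target the probe meets is also potential raw material, so:
    print(choose_behavior(["communicate", "record_data",
                           "seek_replication_materials"]))
    # -> seek_replication_materials, every single time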

It's something that stuck with me for a long time after I played the game, and I've thought back to it often when I've encountered some bit of software that someone has tuned in an attempt to "optimize" it.
posted by MysticMCJ at 1:56 PM on July 27, 2015 [3 favorites]


Congratulations, your grocery-buying process is Turing-complete.

Argh.

*Scratches S-combinator off list*

It was on sale this week too, dammit.
posted by A dead Quaker at 2:00 PM on July 27, 2015 [1 favorite]


The key problem identified by the statement is as follows: "Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity."

In other words, it's not about a robot Skynet, it's about a robot Sarajevo.
posted by CBrachyrhynchos at 2:02 PM on July 27, 2015 [8 favorites]


I have a feeling, and am going to make a prediction, that the first person to die at the hands of a fully autonomous armed drone will be from one operating in a security/police context, not in a military conflict.

And if you think the ass covering and victim blaming is bad when a cop shoots someone, it's just going to be a solid wall of "the machine did exactly what it was supposed to do! it has no emotion involved in its judgement, they just shouldn't have done X!".

A proper system like this should accept damage or destruction of itself to prevent unnecessary loss of human life. Like, it shouldn't fire on someone firing at it unless they're threatening other humans, or it thinks they're about to harm someone.

And i get the feeling none of these will work that way. They'll just be the worst version of aggressive robot cops that basically go IF WEAPON THEN THREAT; IF THREAT THEN FIRE. And i could see it having just as much of a problem deciding on what constitutes a "weapon" as human cops have, if not worse.

I mean we can do better than that, and probably will in some cases, but if it's possible to set the system up that way plenty of people are going to. Because it's become painfully obvious over the past few years, at least for me, that there's plenty of people in this country who think that property is worth more than the life of some "undesirable".

The first shot fired in this awful nebulous situation is just going to be some guy trying to get in to a building, or a child getting shot by a border patrol robot, or something along those lines.

And i think it's utterly impossible to avoid this. Even if we outright ban these things, some other country will just make them. There's no way we don't eventually reach a point at which this is something you can just rig up from off-the-shelf parts, at which point someone else will be putting those parts together and selling them.
posted by emptythought at 2:35 PM on July 27, 2015 [4 favorites]


Autonomous weapons are ideal for tasks such as assassinations....

I'm always surprised how little assassination is used in offensive warfare. Particularly among those for whom surviving the attack oneself is at best a secondary concern.
posted by IndigoJones at 2:43 PM on July 27, 2015


Isn't like every single person in Chomsky's department at MIT working directly, or with one degree of indirection, on this exact technology?

Maybe it's time to pass this around again:

Social Responsibility

Information for Students on the military aspects of careers in PHYSICS

Charles Schwartz, Professor of Physics
University of California
Berkeley, CA 94720

posted by bukvich at 3:02 PM on July 27, 2015 [4 favorites]


I'm rarely optimistic about anything these days, but this bit from Maciej Cegłowski's talk Web Design: The First 100 Years is worth considering:

... here is Elon Musk, the founder of PayPal, builder of rockets and electric cars. Musk has his suitcase packed for the robot rebellion: “The risk of something seriously dangerous happening is in the five year timeframe. 10 years at most.”

“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”

“We need to be super careful with AI. Potentially more dangerous than nukes.”

“Hope we're not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.”

Let me give you a little context here. This little fellow is Caenorhabditis elegans, a nematode worm that has 302 neurons. The absolute state of the art in simulating intelligence is this worm. We can simulate its brain on supercomputers and get it to wiggle and react, although not with full fidelity.

And here I'm talking just about our ability to simulate. We don't even know where to start when it comes to teaching this virtual c. elegans to bootstrap itself into being a smarter, better nematode worm. In fact, forget about worms—we barely have computers powerful enough to emulate the hardware of a Super Nintendo.

If you talk to anyone who does serious work in artificial intelligence (and it's significant that the people most afraid of AI and nanotech have the least experience with it) they will tell you that progress is slow and linear, just like in other scientific fields. But since unreasonably fearful people helm our industry and have the ear of government, we have to seriously engage their stupid vision.


posted by ryanshepard at 3:15 PM on July 27, 2015 [1 favorite]


and it's significant that the people most afraid of AI and nanotech have the least experience with it

That's not significant in the slightest; it's incredibly obvious. Might as well say "the people with the most experience with it are the least afraid of it." Of course they aren't. It's their interest and their living. This has no bearing on whether their being unafraid is meaningful to the rest of us. For all we know, they are a bunch of Edward Tellers cooking up a storm for ideological purposes.
posted by Rumple at 3:27 PM on July 27, 2015 [1 favorite]


a historical turning point some experts have described as the third revolution in warfare (the advent of gunpowder and nuclear weapons being the first two)

Aristotle tells us that military technology tends to determine what type of government a state will have. If a state depends on expensive cavalry, expect to see an aristocracy. If it relies on triremes, expect to see the poor oarsmen get a stake in government. Political power flows toward those who are too effectively armed to ignore.

Gunpowder was and is a democratizing technology. For the past few hundred years it has been more or less impossible to hold territory against a determined populace. Muskets require little training, and were perfect for mass-mobilized citizen armies. Likewise, assault rifles and improvised explosives can cheaply grind down any occupier. Really, the only way to beat a citizen army equipped with gunpowder weapons is full-on genocide.

In Aristotelian language, autonomous weapons favour the few over the many. If citizen infantry ceases to be effective on the battlefield at the same time that labour unions fade, ordinary citizens will no longer have any way of forcing oppressors and occupiers to respect their rights. That's deeply unsettling.
posted by justsomebodythatyouusedtoknow at 3:31 PM on July 27, 2015 [10 favorites]


Professor Hawking is doing a Reddit AMA in the subreddit /r/Science.

If I were Professor Hawking, I would be less concerned with Artificial Intelligence than Dumb Machines providing Force Multiplication for the Worst of Humanity. In other words, the members of many subreddits like /r/gunsforsale.
posted by oneswellfoop at 3:33 PM on July 27, 2015 [3 favorites]


Why is everyone quoting Maciej Cegłowski like he's an expert on AI? AFAIK he's not, but I'm not an expert on him, so maybe I've missed something.

Smart and funny, yes - but why is his dismissive attitude more legitimate than the concern expressed in this letter? Is it more legitimate?
posted by Tevin at 4:01 PM on July 27, 2015 [2 favorites]


I think about the broad swathes of the US, given to utilities and energy companies, and the railroad. Will their auto drones just shoot folks walking down the tracks, or lingering too long on the bridge?
posted by Oyéah at 4:16 PM on July 27, 2015


Will the auto drone films be acceptable evidence in court proceedings?
posted by Oyéah at 4:18 PM on July 27, 2015 [1 favorite]


we barely have computers powerful enough to emulate the hardware of a Super Nintendo.


That doesn't make any sense. We might not be able to emulate a Super Nintendo's hardware (true???), but we can make something that acts exactly like a Super Nintendo and does everything it could do, and far more. A robot weapon doesn't need to emulate a human killer's hardware, it just needs to act like a killer. So what am I missing?
posted by mrbigmuscles at 4:18 PM on July 27, 2015 [3 favorites]


> > applying facial recognition to targeting systems raises huge, scary questions."
> This already exists.


What you linked is face detection, easily foiled, and only good if you're building something that wants to shoot everything with a face. (Like a camera.)

I'm sure we'll get to the huge scary questions before long, but it's early yet for facial recognition, which is still a complicated and difficult problem to solve, because faces are complex shapes, and cameras that see those faces are kinda sucky, and people change their appearance easily and often, and quite a lot of people look a lot alike if the measurement is coarse.
posted by Sunburnt at 4:28 PM on July 27, 2015


Maybe we won't get cylons. But right now we could have robots wheeling down the street, shooting at people.
posted by persona au gratin at 4:30 PM on July 27, 2015 [1 favorite]


His dismissive attitude is warranted at this point. Hardware continues to improve rapidly, but the improvement of software is...less than linear. We're mostly doing the same things somewhat better rather than experiencing huge paradigm shifts. It takes an incredible amount of programming for an autonomous robot to navigate with vision, instead of radar or pattern recognition.

Most of the fears about AI come from extrapolation.

There are some smart people who think it's a civilization-ending threat, but that's only if it continues to advance. Tim Urban had a well-written article where he too considered it a civilization-ending threat, but he puts too much faith in the great filter not being explained by the implausibility of interstellar travel. The cosmos could be empty just because you can't travel faster than c and we're not particularly close to anywhere, and maybe you can only reasonably go 1/10 c anyway.

I find it very telling that biologists and geologists don't tend to share this fear of AI at all. The fossil and geochemical records have way too many examples of extinction that are frankly more likely at this point than us somehow figuring out the computer code for sentience.

We know how to program some software to do some things autonomously or make some choices, but it's not really that much more elaborate than using IF:THEN logic. Give inputs, return outputs. For my part, I don't consider our progress to date that much of a threat, because it's not like having faster hardware will make software less shitty. Shitty software has been posted about ad infinitum on the blue.
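
For illustration, the kind of thing I mean by "give inputs, return outputs" (a trivial Python sketch, all the numbers invented):

    # Rule-based "autonomy": a fixed mapping from inputs to outputs.
    def thermostat(temp_c, setpoint=20.0, deadband=1.0):
        if temp_c < setpoint - deadband:
            return "heat_on"
        if temp_c > setpoint + deadband:
            return "heat_off"
        return "hold"

    print(thermostat(17.5))  # -> heat_on

That's the level of sophistication we're mostly talking about, just with more inputs.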

Clippy's not on the verge of discovering how to kill all humans. Civilization-level threats these days are, in my opinion, more likely to consist of the following:

1. Ecological catastrophe
2. Economic and social collapse stemming from, or causing, resource depletion (How do you restart an industrial revolution when all of the easily mined surface deposits of minerals have already been extracted?)
3. Climate change, caused by or stemming from ecological catastrophe or human activity
4. War, especially if it includes nuclear weapons
5. Natural disaster - mostly asteroids or volcanoes (like the Yellowstone supervolcano)

To be clear, I'm not saying any of these things are likely to cause extinction of humans, but any or all of them could easily destroy human civilization as we know it and prevent it from rising again. And they're all considerably more likely than AI destroying us, unless we're stupid enough to program it badly and give it the nuclear codes, because we already have the technology to do that. I don't think a machine sentience is more to fear than our own stupidity, because there are many ways we can destroy our civilization with no help from anybody else.

I have noticed that many of the loudest speakers from the, shall we say, less life-oriented sciences (specifically physics and engineering) tend to have a parochial view of the concerns held by ecologists. But Carl Sagan did bring up many of these issues back in the 80s. I don't think any of the worry about AI is more warranted than the definite problems that exist today, in a non-theoretical sense rather than a possible extrapolated problem.

But when was the last time you heard somebody on the major TV networks talk about ocean acidification? Or thermohaline circulation?

I have said it before, and I will say it again: in today's world, you need a minimum of a year's worth of college level chemistry, biology, geology, physics, and economics to begin to understand the issues of today. A master's degree in each would serve you far better.
posted by Strudel at 4:33 PM on July 27, 2015 [5 favorites]


Cosine: This is why I am reaaaalllly nice to my toaster, you just never know what the MOST HUMANS MUST DIE criteria will be.

Dude, not cool. That's a word that only we can sa—

Shit.

I've said too much.
posted by brundlefly at 4:54 PM on July 27, 2015


There are some smart people who think it's a civilization-ending threat, but that's only if it continues to advance.

Someone didn't read the article.

The letter addresses the possibility that the software that everyone is downloading to make psychedelic art could be weaponized in the form of autonomous weapons, and could be made cheap. Probably a simple case would be a kamikaze quadcopter that maximizes casualties by counting people. The letter doesn't address AI research in general, doesn't address "civilization-ending" threats, and doesn't address much that isn't already in the technology news.
posted by CBrachyrhynchos at 5:10 PM on July 27, 2015 [3 favorites]


The real threat of an AI toaster, of course, is the fact that it could make you never want toast again. (YTL)

It would be even more horrible if, having invented the self-aware toaster, you went on to make something self-aware whose only purpose is to pass the butter.
posted by Sunburnt at 5:26 PM on July 27, 2015 [1 favorite]


A civilized society will need to reject automated weaponry in the same way it rejects biological and nuclear weaponry.

Sadly, in the case of most of the major developed countries, that also means they're likely to build these machines anyhow, "just in case". After all, if the [fill in the blanks] have a robot fighter plane that can pull 8 Gs, it's going to outmaneuver every pilot you have, so YOU better have a robot fighter plane that can pull 8 Gs.

Automated weapons are different from previous weapons. You start with a fist, and everything else, whether club, sword, gun, laser, or bomb, is a kind of amplifier of that fist. You have a human that must identify a target, decide to attack, and then execute the attack.

Using computers or automation to aid in identification and decision is mostly OK. But taking a human hand off that last bit is really problematic.

The good news is at least some people in the military are extremely concerned about this trajectory, and have been for some time.

I've been spending a good bit of time over the last year talking about these issues with some other people. I'm less concerned about automated weapons (though still concerned!) than I am about the emerging technological evolution's corrosive effects on our social structures.
posted by Jinsai at 5:29 PM on July 27, 2015


Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people.

You don't need AI to do any of this. In fact, the US military is already doing this on a large scale with the drone program. Drone flies around, human in the loop directs its course and targeting decisions. You already have your humans out of harm's way and in an environment where they can hand things off to relief crews with minimal costs. AI is a red herring here.
posted by indubitable at 5:47 PM on July 27, 2015 [2 favorites]


In the age of cheap armored drones, what is going to stop the rich and powerful from lording over everyone with completely unchecked cruelty, with an army that never questions?

What has EVER stopped the rich and powerful from creating armies that don't ask questions?

The basic argument is that drones will allow the Elite to: exterminate tens of millions of people based on their ancestry; machete to death hundreds of thousands of people of a putative ethnic group; commit genocide on 99% of the population of a continent. All of which, of course, has already been done without a single drone.

Face facts: worrying about drones is a distraction. If you want to avoid all the abuses you're predicting, the only thing to do will be to get rid of the people. Sadly, that seems all too likely, because people are busy being distracted about drones, while ignoring the environmental catastrophe bearing down on us.
posted by happyroach at 6:33 PM on July 27, 2015


The fossil and geochemical records have way too many examples of extinction that are frankly more likely at this point than us somehow figuring out the computer code for sentience.

Like people have been saying up above, this is a complete failure to understand what is being discussed, as is the bit in the Maciej Cegłowski piece from the post the other day.

This has nothing to do with sentience or machine consciousness or simulating humans or worms. The term "AI" is being used informally in the same way it gets used to refer to the computer-controlled opponents in a video game.

If you don't have to worry about avoiding killing people, self-driving cars are really easy compared to what Google and car companies are trying to do. When the objective is making an automated device specifically for killing people, one that can't be easily stopped even by opposing military forces, not killing the wrong people is a high-end premium luxury feature.

That's why it can actually be useful for military purposes to give little kids guns and turn them into child soldiers; the sort of job a twelve-year-old would do is entirely acceptable in many circumstances. (Not that the automated weapons we're talking about are going to be anything like the equivalent of child soldiers, I'm just making a point.)

What has people worried is anticipating what we'll end up with when the cheapest, most mass-producible version of this is made and shipped off somewhere for use in wars where unimportant people live, and then becomes available everywhere else.

Science fictiony things like machine sentience would make the kinds of automated weapons that will actually be built in the 21st century less dangerous. The problem is that no one is going to wait for developments like that to happen first.
posted by XMLicious at 11:58 PM on July 27, 2015 [6 favorites]


The responses to the statement have been more hyperbolic than the statement itself.

But yes, we should worry about what technologies are weaponized to kill tens or millions of people. Climate change is going to be violent, and the question of which mass-produced parts are pulled off the shelf to commit that violence will be relevant. Comparisons to the Avtomat Kalashnikova were intentional.
posted by CBrachyrhynchos at 4:40 AM on July 28, 2015


This has nothing to do with sentience or machine consciousness or simulating humans or worms. The term "AI" is being used informally in the same way it gets used to refer to the computer-controlled opponents in a video game.

I've always been under the impression (perhaps wrongly) that AI has never been just about Age of Ultron or Ex Machina type scenarios, and it applies just as much to worm-like systems as human-like systems.
posted by CBrachyrhynchos at 5:07 AM on July 28, 2015 [3 favorites]


Part of my skepticism there is that it seems reasonable to say you've implemented current information-processing models of a C. elegans network as AI. FrankenAI strikes me as still on the level of a pot-brownie conjecture, given no consensus about what consciousness is, what it does, how to measure it, or even whether it's necessary.
posted by CBrachyrhynchos at 5:50 AM on July 28, 2015


We've no chance of preventing the automation of weaponry; there's way too much money, and it's way too far along. We could, however, keep both automated and fully human tools of oppression from targeting dissidents as effectively, by encrypting our communications and concealing our metadata.
posted by jeffburdges at 5:55 AM on July 28, 2015 [2 favorites]


I've always been under the impression (perhaps wrongly) that AI has never been just about Age of Ultron or Ex Machina type scenarios, and it applies just as much to worm-like systems as human-like systems.

What I'm saying is that creating software that plays a video game or drives a car or controls a weapon system for killing people has no direct relation to simulating worms or humans in software, regardless of which of those things get referred to as "AI". So scoffing at concerns about autonomous weapons using as a basis supposed difficulties or lack of fidelity in simulating worms or reproducing human consciousness makes no sense.
posted by XMLicious at 6:54 AM on July 28, 2015 [1 favorite]


Yes, the issue is not that AI would do the killing well; it is that AI would do the killing poorly. You don't need a great AI for the latter.
posted by Rumple at 7:54 AM on July 28, 2015 [1 favorite]


Also, we banned chemical and biological weapons in part because they were not terribly effective, meaning the winning armies had little interest in pursuing them. Very dumb robots can still be more effective than people, depending upon your definition of effective.
posted by jeffburdges at 8:45 AM on July 28, 2015


Right. An AI barely on the level of C. elegans is not Skynet or Ultron, but it doesn't have to be; put that exact AI in the driver's seat of a backhoe and I bet it could kill or maim a relatively large number of people before being disabled. A hardened combat machine: many, many more people.

The scary question might not be, "when do we get a Turing-complete self-aware AI that's orders of magnitude smarter than humans?" It might be, "when does it become trivial to put a dumb AI in the driver's seat of a backhoe?"
posted by gauche at 10:11 AM on July 28, 2015 [1 favorite]


An interesting comparison jumped out at me:
autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.
Gunpowder and nuclear arms are precisely the examples George Orwell uses in his essay, You and the Atomic Bomb, where he argues that the dominant weaponry of an era determines its political structures. According to Orwell, because gun technology was relatively easily produced and shifted the balance away from concentrated state power toward ordinary people, gunpowder brought the end of feudalism and aided the French and American revolutions; by contrast, Orwell argues the atomic bomb is an inherently totalitarian weapon because it requires a concentrated industry to produce and is only suitable for large-scale war.

Which raises a question: if autonomous weapons are a new revolution in warfare, what political structures do they favor? I'm hoping the answer isn't "a grimdark cyberpunk society," because I think this technology is coming whether we like it or not.
posted by Wemmick at 2:28 PM on July 28, 2015 [3 favorites]




I'd agree with Orwell that the dominant weaponry of an era determines its political structures, Wemmick. Ain't quite so easy to understand exactly what the dominant weaponry is though. I'd argue nuclear weapons encourage empire formation, not necessarily totalitarianism. America made considerable social progress because they feared the Soviets gaining political advantage from segregation, etc.

There are massive ethical problems with states assassinating people whom they should be putting on trial, problems compounded by doing that assassination with big bombs that kill tens or hundreds of innocent people, but...

Are robot assassins like predator drones becoming the dominant weaponry? If so, then political movements might succeed by (a) making their personnel more easily replaceable, and (b) assassinating business leaders, etc. with homemade robots. And doing so could bring political decentralization.

If we want a future where non-violent revolutions work, then we need our governments to employ a judicial system, even when the other guy prefers to use bombs. If we pursue this path of politics by robot assassination then the entire world will adopt it too. Ain't exactly grimdark though.

Is, otoh, the dominant weaponry going to be mass surveillance coupled with machine learning, aka bad statistics? If so, then we should expect greater centralization and authoritarian control, ultimately halting social progress. Very bad!

You simply cannot protest police brutality, inequality, drug policy, etc. if Google, Facebook, Twitter, etc. all prevent your message from getting out and help authorities obstruct you. We're years off from that, because currently those services remain in fairly liberal or libertarian hands, but that'll change.
posted by jeffburdges at 2:19 AM on July 29, 2015




This thread has been archived and is closed to new comments