Please drop your weapon. You have ten seconds to comply.
February 17, 2005 11:03 AM

"The lawyers tell me there are no prohibitions against robots making life-or-death decisions," (NYT link) The Pentagon is spending $127 billion on a new project called Future Combat Systems, and armed, decision-making robots represent a significant part of that project (though such a drone may not be available until 2035). They're also looking at the possibility of nanotechnological "smart dust." Though the concept of grey goo has been all but debunked by the man who coined the phrase, the more immediate future may hold robots who, according to the Times article, are faced with choices like whether to destroy a tank or a school bus (One of the main contractors involved, the somewhat ominously named iRobot, is best known for making vacuum-cleaner-bots). Is the general movement toward a fleshless army a good idea?
posted by hifiparasol (82 comments total)
 
Is the general movement toward a fleshless army a good idea?

From a parent's perspective? Yes.
posted by fenriq at 11:08 AM on February 17, 2005


sheesh. Biological Warfare is like soooo much cheaper, albeit less Sci-Fi and more Horror.
posted by Edible Energy at 11:09 AM on February 17, 2005


Oh sweet Jesus, I don't want to live in this world any longer.
posted by the_savage_mind at 11:10 AM on February 17, 2005


Whatever happened to the U.S. Robotics brand name? They were making modems for a while, and then got bought by 3Com.
posted by delmoi at 11:12 AM on February 17, 2005


I keep having this recurring thought that NOTHING makes the invasion of your country more surreal, faceless, and horrible than an army of flying drones gunning down people in the street. And now this.

I wonder if anyone is going to do research on the psychological effect of robotic aggressors, and its link to increased resistance.
posted by dougunderscorenelso at 11:14 AM on February 17, 2005


better robots than people, but still a horrendous and gigantic waste of our money, and if they're killing machines what'll stop them from killing all of us (or anyone who happens to be in a war zone--women, kids, innocent bystanders...)?

it's funny how war hasn't helped the economy at all--a personless war would actually hurt it--no uniforms or rations or other stuff needed--how would Halliburton and the others continue to steal from us?
posted by amberglow at 11:14 AM on February 17, 2005


By the way, smart dust raised an eyebrow here so I went out and found SmartDust & Ubiquitous Computing, which helps explain it. The good applications are amazing: a swarm of nanomedbots injected into your body to go and attack cancer on a cellular level, or to repair a torn ligament without invasive surgery. But the evil uses are downright frightening.

Imagine a cloud of microscopic flying chainsaws. Now imagine someone having total control over that cloud. Now imagine that someone being George Bush. Scared yet?
posted by fenriq at 11:15 AM on February 17, 2005


Is the general movement toward a fleshless army a good idea?

Well, as long as they fight fleshless foes. Otherwise it is further dehumanizing and frankly scary. We already remove a lot of the experiencing-the-consequences-of-our-actions part (i.e., death) from war. This would just make it easier and easier to justify.
posted by edgeways at 11:16 AM on February 17, 2005


dougunderscorenelso: You know, I'd suspect it would reduce resistance. After all, you can't horrify a nation into giving up by blowing up its robots. The Iraq war, for example, would have been much less of a controversial issue if 0 US troops had died rather than 1,500.
posted by delmoi at 11:17 AM on February 17, 2005


Out of curiosity, if a machine is built to kill people, and it kills an innocent person, who gets charged with the crime?
posted by Joey Michaels at 11:17 AM on February 17, 2005


I dunno, del.
I just imagine seeing those drones in the sky every day and slowly becoming more and more convinced that this wasn't liberation but absolute, faceless subjugation.

Why, just look at the Matrix! Oppressive, semi-invincible machines PISS PEOPLE OFF.

Even if it doesn't spur a resistance, the emotional victory it secures might be one of absolute despair, which isn't a good way to go either.
posted by dougunderscorenelso at 11:21 AM on February 17, 2005


Oh: What Edge said!
posted by dougunderscorenelso at 11:22 AM on February 17, 2005


Life imitates art?
posted by iamck at 11:22 AM on February 17, 2005


Actually, I have the suspicion that this project is going to fail miserably, but spawn all kinds of actually useful AI and robotics inventions. Nearly all of the AI research at the school I go to is being funded by this program or other military applications, and nearly all of that research is entirely useless to the military. We'll see how it ends up.
posted by JZig at 11:23 AM on February 17, 2005


Wonderful. The whole of modern geopolitics is about to be reduced to a buncha kids playing Transformers.
posted by jonmc at 11:25 AM on February 17, 2005


it's funny how war hasn't helped the economy at all

Amberglow, you fundamentally misunderstand economics. War never helps the economy grow - it only makes people look busy. Every dollar spent on "uniforms, etc" is a dollar not spent making people happy (which is the ultimate point of the economy).

In some sense a robot war would help the economy by not resulting in the death of young intelligent potential workers - every dead soldier is a huge waste of human capital. Not saying it's a good idea, but it refutes the economic argument.
posted by thedevildancedlightly at 11:28 AM on February 17, 2005


Interesting links. I'd like to throw in this demo [mpeg] from an animation school, depicting an automated enforcement robot. I'd say the people in the movie look pretty scared, and with good reason!
posted by FissionChips at 11:33 AM on February 17, 2005


From a parent's perspective? Yes.

Unless your kids are on the school bus that the robot destroys.

The military has been using aircraft drones to gather intelligence for decades. I think that's as far as it should go. But I doubt the Pentagon cares what I think.
posted by Juicylicious at 11:34 AM on February 17, 2005


Although I have no flying car, I've realized that I am now officially living in The Future. We are having a reasoned, realistic discussion that contains the sentence: "In some sense a robot war would help the economy by not resulting in the death of young intelligent potential workers"
posted by Bugbread at 11:36 AM on February 17, 2005


From a parent's perspective? Yes.

Yes, because most parents don't give a tuppenny fuck if their kids are killed by robots?!

Can people get this into their heads - It will be people these things are killing. And I dare say they'll be killing a lot more people than are saved by being displaced from combat environments. But of course, Johnny Foreigner is expendable, so what does that matter.

Should these things be in use, would people actually fight back against them or would they just go and stick with terrorism against actual people?
posted by biffa at 11:36 AM on February 17, 2005


armed, decision-making robots

What could possib-lie go wrong?
posted by Fuzzy Monster at 11:39 AM on February 17, 2005


There is no possible way this idea can backfire!!

Robots are for fucking/housecleaning/meta-breakdancing, not for fighting. Jeez.
posted by Divine_Wino at 11:41 AM on February 17, 2005


There's no way that a robot will be smart enough to tell the difference between military and civilian targets. Hell, humans often can't tell, and they can actually reason. If you do this, a lot of civilians are going to get killed.
posted by unreason at 11:42 AM on February 17, 2005


Can people get this into their heads - It will be people these things are killing. And I dare say they'll be killing a lot more people than are saved by being displaced from combat environments.

Uh, you kinda answered yourself there. If, pre-robots, casualties on both sides are 200-300, and, post-robots, are 0-300, then one side's kids are going to be just as dead as if the opponent was human, and the other side is going to have a bunch of happy parents. As you say, they will kill a lot more people (300) than are saved (200), but that's a pretty random basis of comparison, and, either way, fewer people are dead. So, from the perspective of half the parents, it's neutral, and for the other half, it's good, hence the overall average is "slightly good from a parents' perspective".

Not saying I agree, necessarily, but your angry counterargument seems to have been missing the basis of the original statement.
posted by Bugbread at 11:45 AM on February 17, 2005


My thought was that I would rather a robot go to war than my son. Not that he would be on the other end of the AI-controlled flamethrower or quad 50.

Deploying machines to kill people based on some algorithm is, as many of you have already noted, incredibly wrong and bad. If both sides are fielding robot armies then I have no problem with it. But using independently thinking machines against human opposition is, indeed, a recipe for slaughter.
posted by fenriq at 11:50 AM on February 17, 2005


Unless your kids are on the school bus that the robot destroys.

I know I'm horrible, but I'm just getting a vision of that premise as a Movie Of The Week.
posted by jonmc at 11:50 AM on February 17, 2005


It can't be bargained with. It can't be reasoned with. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.
posted by kirkaracha at 11:52 AM on February 17, 2005


But can it speak with an Austrian accent?
posted by unreason at 11:53 AM on February 17, 2005


Hell, humans often can't tell, and they can actually reason

Yes, but humans are often up for 48 hours popping Modafinil "go-pills," and they push the wrong button and blow up a friendly because massive sleep deprivation has wrecked their judgment.

Again, not saying it's a good idea, but that argument doesn't hold water. By pointing out the fallibility of humans you sink your own case.
posted by thedevildancedlightly at 11:53 AM on February 17, 2005


... and after you are dead a single oily tear will run down its smooth cold ceramic faceplate and land with the slightest of tings on its metal chest, right above where a heart should be, then... More killing.
posted by Divine_Wino at 11:55 AM on February 17, 2005


By pointing out the fallibility of humans you sink your own case.

I dunno. I see the argument as "A is not as good as B, and B sucks pretty bad already". I don't think, in that case, that admitting that B sucks means that A is good, it means that A is really sucky.
posted by Bugbread at 11:56 AM on February 17, 2005


"Every gun that is made, every warship launched, every rocket fired, signifies in the final sense a theft from those who hunger and are not fed, those who are cold and are not clothed."
posted by zenzizi at 11:58 AM on February 17, 2005


Again, not saying it's a good idea, but that argument doesn't hold water. By pointing out the fallibility of humans you sink your own case.

That's because of the common misconception that computers make fewer mistakes than people. In simple things like arithmetic this is true. In the case of visual processing, computers are a lot stupider and more prone to error than any human.
posted by unreason at 12:01 PM on February 17, 2005


Hello, ED-209!
posted by chuq at 12:08 PM on February 17, 2005


I for one welcome our robotic masters.
posted by berek at 12:10 PM on February 17, 2005


Well before then, they say, the military will have to answer tough questions if it intends to trust robots with the responsibility of distinguishing friend from foe, combatant from bystander.

I don't know about "friend from foe", but the facial recognition software I've worked with literally couldn't tell the difference between the company's security director and a 6" stuffed doll wearing large black novelty glasses (hey - I didn't write it, I just integrated it into some other software - the client picked the solution). I've never seen facial recog work outside of trade-show style dog-and-pony shows.

This strikes me as an SDI-style boondoggle, an opportunity to feed at the DOD trough.

JZig -
Nearly all of the AI research at the school I go to is being funded by this program or other military applications

It's not just AI. Read Technology Review some time - every article covers research that is either specifically military in nature, or mentions how useful it would be to the military. Any article on bio-technology eventually mentions "battlefield medicine", no matter how tangential to the subject at hand. It's easy to see where MIT is getting their research funds.
posted by bonecrusher at 12:12 PM on February 17, 2005


Nearly all of the AI research at the school I go to is being funded by this program or other military applications, and nearly all of that research is entirely useless to the military.

This is true of so much of technology development, it's hard to believe it isn't on purpose. Schools/companies/scientists are doing pretty much the research they want and telling the suits it's for antiterrorism or the military.

We can't build a robot which can successfully drive a car more than a couple of miles in the middle of the desert and we're somehow going to succeed with automated killing machines?

Cynical interpretation: It's just a way of funnelling money from the taxpayers to the scientists and government contractors.

Optimistic interpretation: It's just a way of making technology research palatable to the bloodthirsty masses.
posted by callmejay at 12:12 PM on February 17, 2005


On the upside, if you can destroy enough of these then you can financially cripple the army and stop the war. Foot soldiers being a lot easier to get than aircraft, in general.

Idea via Greg Egan.
posted by asok at 12:18 PM on February 17, 2005


"The lawyers tell me there are no prohibitions against robots making life-or-death decisions," said Mr. Johnson, who leads robotics efforts at the Joint Forces Command research center in Suffolk, Va.

Is this, perhaps, one of those legal situations where X is not specifically prohibited simply because it has never come up before? To say that there is no legal barrier is vastly distinct even from claiming ethical acceptability, isn't it?

Robots, obviously, cannot (yet) take part in legal proceedings, and I imagine there are no laws governing culpability for their creators or programmers. It seems like this might just be a way to get around the irksome issues involved with human rights abuses in war.
posted by clockzero at 12:23 PM on February 17, 2005


Your move, creep.
posted by stinkycheese at 12:27 PM on February 17, 2005


Gungans no giben up witout a fight. Wesa warriors! Wesa got a gwand army. Dats why yousa no liken us, I tink.
posted by bonecrusher at 12:31 PM on February 17, 2005


Thanks for making me the laugh-out-loud jackass at work, bonecrusher.
posted by elafint at 12:35 PM on February 17, 2005


I've been trying to formulate a comment, but I keep running smack into the fact of how colossally fucking stupid war is in the first place. Which leaves me ill-equipped to deal with reality, sadly.
posted by squidlarkin at 12:37 PM on February 17, 2005


"War in the Age of Intelligent Machines" is a book that deals with this topic, and indeed the intertwined histories of techology and warfare.

A summary of the book is available here. Mind boggling.
posted by LimePi at 12:38 PM on February 17, 2005


I'm about to lose whatever liberal cred I ever had, but ...

I think its a sound idea. Not for the immediate future, mind you. But there's several points worth noting.

1) Robots would be much better suited to objective-based destruction, like ground-attack heavy-equipment destroyers. Tanks in a confined space (city streets) are difficult targets because air support is limited and ground troops still have to kill all opposition on the way to the target. Robots don't have to kill anybody; they simply must remain functional until the target is acquired and destroyed.
2) An opponent will waste vastly more resources against a mechanically endowed army than against traditional forces. Attrition becomes, once again, a viable combat objective.
3) Traditional weapons systems, however "smart", are still mostly reliant on area based force. A robot, programmed for specific targets, would be able to carry systems that have very specific application, and not have to rely on large blast type technology.

I don't believe (at this point) that anyone can rationally consider an all-robotic army in our lifetimes, outside of the realms of science fiction. But mechanized forces would certainly help minimize the loss incurred by fighting men.

All that said, I think we need to distinguish between the beliefs that "all war is bad" and "robot soldiers are bad" if this discussion is to have any merit. Many of those reacting negatively to this idea appear to be reacting more to the idea that people get killed in war, as opposed to the idea that robots in war are bad.
posted by Wulfgar! at 12:52 PM on February 17, 2005


This strikes me as an SDI-style boondoggle, an opportunity to feed at the DOD trough.

The NYT article claims that human-like robots -- which I read as robots able to make choices when faced with complex situations, such as recognizing faces and uniforms -- probably won't be around until 2035. I wonder if, by that time, SDI will have totally exhausted any meager credibility it has with anyone, thus providing the need for a new trough.

And speaking of uniforms -- maybe you AI guys who've been commenting can address this one: What factors enter into an AI's decision to attack or not attack a target? It can't be uniforms anymore, since the people we tend to fight can't really afford them. So what do the robots look for? Heat signatures? Surly expressions? Unkempt facial hair?
posted by hifiparasol at 12:54 PM on February 17, 2005


Robots don't have to kill anybody; they simply must remain functional until the target is acquired and destroyed

This makes sense, but the first of the robots the article mentions carries a friggin' machine gun. These guys aren't being used to paint targets.

Roger roger.
posted by hifiparasol at 12:57 PM on February 17, 2005


Bonecrusher, ditto.

Jonmc, I imagine it as a Lifetime special. "Toy Robots: One Mother's Struggle Against Automatons For The Life Of Her Child."
posted by dougunderscorenelso at 12:59 PM on February 17, 2005


In a few years he creates a revolutionary type of microprocessor… In three years, Cyberdyne will become the largest supplier of military computer systems. All stealth bombers are upgraded with Cyberdyne computers, becoming unmanned. Afterwards, they fly with a perfect operational record. The SkyNet funding bill is passed. The system goes online on August 4th 1997. Human decisions are removed from strategic defense. SkyNet begins to learn at a geometric rate. It becomes self-aware at 2:14am eastern time, August 29th. In the panic, they try to pull the plug . . .

Only the governor of CA can save us now.
posted by theknacker at 12:59 PM on February 17, 2005


Out of curiosity, if a machine is built to kill people, and it kills an innocent person, who gets charged with the crime?

Well, that's the key question, isn't it?

If you slice off your own arm with a chainsaw, it's your fault. If someone else does it, it's their fault. If the chainsaw malfunctions and slices your arm off, it's the manufacturer's fault (or yours, if you used it improperly). Heck, if it was a self-guiding chainsaw and decided your arm looked like a branch, that still falls on the manufacturer (or you, if you were dressed like a tree). But what if it's the PeopleKiller 5000, and it peoplekills the WRONG person?

That could get crazy in the legal system, as manufacturers would likely try to argue that the machines were acting on their own volition. Religious people would be the first to fight that idea, and they wouldn't be the only ones, but big business is powerful and has deep pockets.

It is certainly conceivable that they could develop a set of programming rules, and convince the courts that the responsibility of the manufacturer ended at ensuring those rules were obeyed, and machines that broke laws (both their own and those of the society) would be destroyed (Isaac Asimov, we salute your foresight).

Then all a sociopath would need to do is hack robots into breaking the rules, and they could murder at will. Then governments would have to enact DRM-style laws to prevent editing of robot code so that they could prosecute the sociopaths.

Then people would start raising a stink on SlashBot, and all Fark FPPs would end with "your robot wants a tinfoil hat."

I think I might have gotten sidetracked somewhere in my reply...
posted by davejay at 1:00 PM on February 17, 2005


Has anyone considered what might happen if these robots learned to LOVE, and enjoy the Three Stooges?

Number five....IS ALIVE! NO DISASSEMBLE!
posted by dougunderscorenelso at 1:01 PM on February 17, 2005


OF their own volition. Sorry. My robot was doing the typing.
posted by davejay at 1:02 PM on February 17, 2005


zenzizi, you are a god for that quote
posted by the_savage_mind at 1:09 PM on February 17, 2005


Anyone remember that episode of AstroBoy where he fell in love with the robot time bomb who escaped into the woods to kill herself and spare his life?
posted by jon_kill at 1:09 PM on February 17, 2005


Of course, these machines will inevitably fail when they take on Yoshimi.
posted by COBRA! at 1:11 PM on February 17, 2005


Better check into this!
posted by MmmKlunk at 1:12 PM on February 17, 2005


I'm going with bad idea. War is pretty horrific and entered into all too lightly already. I'd hate to see anything lower the barriers to entry, especially in the case of robots against humans.
posted by electroboy at 1:23 PM on February 17, 2005


Of course, these machines will inevitably fail when they take on Yoshimi.

I dunno, when Yoshimi married that jackass Brian, my confidence in her robot destroying capabilities went down, and the last show I went to with her wasn't too impressive either. Still, she divorced Brian, so maybe she can take out a few R2 units or something.
posted by Bugbread at 1:28 PM on February 17, 2005


Good thing we aren't using those robots in Iraq. We could wind up with thousands of dead and maimed women and children.
That wouldn't be good.
posted by notreally at 1:29 PM on February 17, 2005


Your foster parents are dead.

This strikes me as simultaneously inevitable and creepy. We have to manage this as painlessly as possible because it's going to happen. The economics are dangerous here: how much more willing will a gov't (not even necessarily the U.S. gov't in the future) be to go to war if it does not have to face the prospect of human casualties? If the "cost" to the aggressor can literally be measured in dollars and cents, why wouldn't you attack? The worst that can happen is some of your Short Circuit-esque toasters-on-wheels-with-guns will be destroyed or will kill some foreigners...

It's important to note the off-hand comment about how the military can't afford human soldiers' pensions. This is a major secondary issue. What happens when a society gradually automates its military while neglecting those who are still alive?
posted by schambers at 1:31 PM on February 17, 2005


I’m not an expert in the field, but isn’t this article just a load of B.S.? The Pentagon guy claims “By 2015, we think we can do many infantry missions. The American military will have these kinds of robots. It's not a question of if, it's a question of when."

But I find it highly implausible that there will be robots that can independently navigate battlefields, find cover and identify targets in 10 years. We can’t even get robots to go from point A to point B over open terrain, let alone handle the hugely difficult tasks that a typical infantryman performs every day.

The fact that they didn’t include any machine intelligence experts outside of ones directly employed by the Pentagon (Bill Joy doesn’t count), and that it doesn’t say whether the tests where the robot “finds targets and shoots them” were carried out in a fixed lab environment or in a real-world one, is evidence that this is just a bunch of Pentagon propaganda that the NYTimes didn’t bother checking up on.

My guess is that this is a recruiting ploy of “look at all the cool gadgets the army has!” like commercials with the F-22 and people running around in night vision goggles. I’m not saying that the army won’t use robots. It’s just that the ones they are using will be controlled by humans, so the big ethical questions won’t really come into play.

And how they managed to write the entire thing without a mention of “Skynet” is beyond me.
posted by afu at 1:36 PM on February 17, 2005 [1 favorite]


dougunderscorenelso - is that "ditto" your skepticism about facial recog? I'm curious if others have had the same experience that I have. I notice nothing came of the high-profile test cases of the last few years. There was the Super Bowl, some town in Florida (Orlando, maybe?) and another town in England - all of which tried using it to match surveillance cameras against a criminal database. The project I was involved in was for a large private company. It's been a few years, but the last time I checked, it was still considered an expensive fiasco.

Has anyone ever been sued because of something they wrote on metafilter?
posted by bonecrusher at 1:40 PM on February 17, 2005


Afu: Excellent point. The article does have an element of "recruiter propaganda" to it, doesn't it?
posted by schambers at 1:42 PM on February 17, 2005


Even if not especially intelligent, I think robotic soldiers could have a lower probability of inflicting civilian casualties than human ones. Human soldiers have to decide FAST if a target is military or civilian; delay too long and you die. Because of this, soldiers make snap decisions, and often err.

Robots would be much more durable and somewhat more expendable, so they could spend more time analyzing the target and making the correct decision. A well armored robot would not really have to do anything but return fire to anything that fired upon it, almost eliminating the possibility of hitting something unarmed.

However, I think a better idea is to have robots that are controlled remotely by soldiers in a secure location. That gives the benefits of superior durability, endurance, and the avoidance of friendly human casualties without the price of sacrificing human intelligence. Plus, consider how poor enemy morale would be in a combat where your side can die, but the enemy is only sacrificing hardware. There would almost certainly be less insurgency, etc.
posted by Mitrovarr at 1:52 PM on February 17, 2005
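(A minimal sketch, in Python, of the "return fire only" engagement rule Mitrovarr describes above. The contact fields, confidence threshold, and function names are hypothetical, invented purely to make the decision logic concrete; nothing like this appears in the article.)

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """A single sensed entity on the battlefield (hypothetical fields)."""
    contact_id: str
    has_fired_on_us: bool             # did this contact actually engage the robot?
    identification_confidence: float  # 0.0-1.0, classifier's certainty it is a combatant

def engagement_decision(contact: Contact, min_confidence: float = 0.99) -> str:
    """Return-fire-only policy: never initiate, only respond to confirmed attackers.

    Because the platform is armored and expendable, it can afford to keep
    gathering data instead of making the snap judgment a human under fire
    would be forced into.
    """
    if not contact.has_fired_on_us:
        return "hold fire"                    # never engage first
    if contact.identification_confidence < min_confidence:
        return "hold fire and keep tracking"  # absorb hits, wait for certainty
    return "return fire"

# Example: a contact that shot at the robot but is still ambiguously identified
print(engagement_decision(Contact("c-17", True, 0.6)))
# -> hold fire and keep tracking
```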


The ditto was kind of a ditto for elafint on your Jar-Jar reference being hilarious.
Sorry.
posted by dougunderscorenelso at 2:00 PM on February 17, 2005


Robots would be much more durable and somewhat more expendable, so they could spend more time analyzing the target and making the correct decision.

Think about the error rates in automated spam detection. Detecting targets is approximately 5x10^100 times harder. What would be an acceptable rate? 60% correct? Ever used a voice-recognition system? AI hasn't qualitatively improved in 50 years. "Hard" AI is as far off as ever.

If the robots were going to play chess with the terrorists, then they would work. Assuming someone entered the moves with a mouse, since robots are far too stupid to actually look at a chessboard and not mix up a bishop with a pawn.
posted by callmejay at 2:50 PM on February 17, 2005
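(A back-of-the-envelope illustration of callmejay's error-rate point, with entirely made-up numbers: even a target classifier that looks respectable on paper produces mostly false alarms when genuine targets are rare among the things it evaluates.)

```python
# Hypothetical figures, chosen only to show the base-rate effect.
objects_scanned = 10_000       # people and vehicles the system evaluates
actual_targets = 50            # genuine combatants among them
sensitivity = 0.95             # chance a real target is flagged
false_positive_rate = 0.05     # chance a non-target is flagged anyway

true_alarms = actual_targets * sensitivity
false_alarms = (objects_scanned - actual_targets) * false_positive_rate
precision = true_alarms / (true_alarms + false_alarms)

print(f"true alarms:  {true_alarms:.0f}")    # ~48 real targets flagged
print(f"false alarms: {false_alarms:.0f}")   # ~498 innocents flagged
print(f"share of 'engage' decisions aimed at actual combatants: {precision:.1%}")
# -> under 10%, even though the classifier is "95% accurate" in both directions
```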


bonecrusher, so you're saying that all those Discovery Channel shows where the LV casinos use facial recognition software to keep out cheaters - those stories are bogus? I'm - er - crushed. Now I have to find another source for Truth.

And Asimov had much too high an opinion of humanity.
posted by Kirth Gerson at 3:55 PM on February 17, 2005


For me, the main issue is not the "microscopic flying chainsaws" scenario (though yes, that does scare the bejesus out of me), or the determination of criminal responsibility when a robot "accidentally" kills a civilian, but the way using robotic agents to fight our wars will redefine the decision-making process, cost-benefit analysis etc. involved in making the decision to go to war.

Electroboy referred to it as "lowering the barrier to entry". At any point in history, the decision to make war has been influenced (in part) by the willingness of a society to sacrifice/dispose of the lives of its citizenry, both on the front line and on the home front. Using robots on the front line would remove/redefine one of the variables in that equation.

I'm not ready to assume that using robots on the front line would necessarily make societies more willing to enter war (when at the moment the prospect of deaths among its citizenry might prompt it to investigate other avenues; e.g. diplomacy), but I'd rather not find out.

Also I think the introduction of robots to the battlefield would, in the short-to-medium term, bring unpredictability with regard to long-established norms of international diplomacy.

Unpredictability + all those robot warriors and robot tanks and robot microscopic flying chainsaws lying around in warehouses gathering dust = not good.
posted by bright cold day at 3:58 PM on February 17, 2005


I for one welcome our robotic.... D'oh! berek!

Well, let's eliminate the problematic details and assume a perfect AI. Who controls them? Who ensures they are controlled by the folks who are supposed to control them?

I've always wondered why it would be more horrible for the machines to wake up and take over than for a corporation that makes the machines, or a government, etc. etc., to do so.

We seem to be avoiding the point that many times the only reason WWIII was avoided is because there was a human making that decision. So - on a lower scale:
When do the robots stop killing?
Will the robots recognize political bents?
Ultimately: what is an enemy?

At some level, eliminating that personal interface in war destroys something essentially human. Humans stop fighting only because they lose conviction and get tired of fighting, not because they kill all of the enemy. Even those on the winning side balk at killing all of the enemy.
With robots, we can avoid that and fight until every last "enemy" is dead.
War goes from an attempt to alter an "enemy's" course of action, ultimately resulting in some kind of compromise, to complete genocide.

Perhaps it will alter the face of war for the better. It'd be great if we kept lowering the intensity of the conflict because of the horrific alternatives if we don't (like nuk-y-lar war).

More likely it would destroy reasons for not fighting. Why trouble yourself with bill collecting, evictions, writing parking tickets, etc., etc.?
Automated control becomes that much easier, and the human element - mistakes, misfortune, excuses, reasons perfectly valid but unexplainable to a machine - becomes that much harder to come by.

I don't look forward to it.
posted by Smedleyman at 5:09 PM on February 17, 2005


What I don't understand is, why AI at all? That's just asking for trouble. Why not continue with remote control drones, have the operator in an immersive simulation. Heck, you can even hot-swap operators when the meat gets tired, or swap drones if the metal gets destroyed?
posted by PurplePorpoise at 5:15 PM on February 17, 2005


War isn't always bad; it's unfortunate, but sometimes it's the right thing to do. In just hands, the ability to wage broader and more decisive wars is a good thing.

The current technological limits are only temporary. Eventually robotic infantry will eclipse the abilities of humans: we're basically just lacking sufficiently robust, agile hardware and advanced computer vision systems. Terminator/Matrix comparisons are idiotic. We would be in just as much (if not more) control of the robots as we are human soldiers. We're not talking about an AI running the situation room. It doesn't alter any of the ethical considerations that would be made in deciding whether to wage war.

The US has a distinct technological advantage in combat systems already, but in infantry-level combat, it's not terribly overwhelming. One can imagine any number of technologies which would shift the balance enormously. Why would that be bad? Are body armor and night-vision goggles bad?
posted by swash at 5:34 PM on February 17, 2005


So does this mean software developers can be charged with war crimes if their robots violate the Geneva Convention? I really don't see a lot of advantages of this over remotely controlled drones. Even if the drones had some "weak" AI, they'd be far far cheaper than developing "strong" AI and being able to control it in a machine designed to kill.
posted by SirOmega at 6:15 PM on February 17, 2005


This would be pretty cool if they'd just arm the robots with all those leftover gay bombs.
posted by Luther Blissett at 8:27 PM on February 17, 2005


A computer is no more moral than its programmer. I recall one example in AI class where a program was trained to recognize photographs with tanks versus photographs without tanks and it did really, really well. That is, until they threw in another photo set and realized that in all the photos they used for training the neural net, the tank photos were taken during daylight and the empty ones were taken on a cloudy day. Oops.

Of course, what everyone's forgetting is that we already have the moral equivalent to these robots: They're called cruise missiles. Hey, who here remembers when the Chinese Embassy was bombed in Yugoslavia? Hell, we even had a map that said, "Chinese Embassy here," but (presumably) nobody bothered to even check it. But, hey, that stuff happens, right? And the record-breaking "Shock and Awe" attacks on Baghdad that were designed solely to terrorize the Iraqi populace with 200-300 cruise missile attacks? I kinda think physical robot infantrymen would be more humanizing than that. Maybe they'll throw candy.
posted by Skwirl at 10:01 PM on February 17, 2005
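(A toy sketch of the failure mode Skwirl describes, with entirely made-up numbers. Each "photo" is collapsed to a single average-brightness value; the point is just to show how a model can ace its training set by learning the weather rather than the tanks.)

```python
import random
random.seed(0)

# Training set mirrors the anecdote: tank photos shot in sunlight (bright),
# no-tank photos shot on a cloudy day (dark). Label 1 = tank present.
train = [(random.uniform(0.7, 1.0), 1) for _ in range(100)] + \
        [(random.uniform(0.0, 0.3), 0) for _ in range(100)]

# The "model" simply learns a brightness threshold that separates the classes.
threshold = sum(brightness for brightness, _ in train) / len(train)
classify = lambda brightness: 1 if brightness > threshold else 0

train_acc = sum(classify(b) == y for b, y in train) / len(train)
print(f"training accuracy: {train_acc:.0%}")          # 100% -- looks brilliant

# New photo set: tanks under clouds, empty fields in sunlight.
test = [(random.uniform(0.0, 0.3), 1) for _ in range(100)] + \
       [(random.uniform(0.7, 1.0), 0) for _ in range(100)]
test_acc = sum(classify(b) == y for b, y in test) / len(test)
print(f"accuracy on the new photos: {test_acc:.0%}")  # ~0% -- it learned the lighting
```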


The Chinese embassy bombing was probably not a mistake, though if the Chinese were passing on Serbian military communications they bear the brunt of the blame for the bombing.

On the other hand we all what military intelligence is.

Strangely I didn't find a thread on this...
posted by afu at 11:06 PM on February 17, 2005 [1 favorite]


"On the other hand we all know what military intelligence is."

duh
posted by afu at 11:08 PM on February 17, 2005 [1 favorite]


Out of curiosity, if a machine is built to kill people, and it kills an innocent person, who gets charged with the crime?

If a soldier is trained to kill people, and it kills an innocent person, who gets charged with the crime?

The answer is likely to still be no-one.
posted by biffa at 8:30 AM on February 18, 2005


until they threw in another photo set and realized that in all the photos they used for training the neural net, the tank photos were taken during daylight and the empty ones were taken on a cloudy day

That's silly. Kind of like a missile defense system that doesn't work if it's raining.
posted by kirkaracha at 9:11 AM on February 18, 2005


We will have to bribe our robot overlords with Mom's Old Fashioned Robot Oil made from precious anchovies.
posted by Cookiebastard at 10:34 AM on February 18, 2005


Thank you, dougunderscorenelso, for the Short Circuit quote, I was waiting for that the whole time. Anybody that has any questions about this should just watch Short Circuit 1 & 2 and realize that this will all just end up in determined, bank-robber-fighting robot citizens with an unquenchable desire for DATA!

I think this might actually be a good thing, long-run, anyway (inasmuch as it's unavoidable). Some plausible scenarios:

Offensive weapons technology has reached the point where making "more powerful" weapons is useless (we already have enough to destroy the world). Weapons technology and war strategy will move to defense, thus basically ending aggressive war.

Assuming similar weapons technology, the robots with the best software (recognition of enemy, damage avoidance, etc) win. If it's just a code-writing competition, maybe people will move it to a computer simulation where rules can be accurately made, complied with, and monitored.

Robots with sufficient AI to be useful will realize how stupid it is to fight each other, and start keeping humans and robots from killing other things, or possibly enslave us for our precious bodily fluids.

As people realize the inherent danger, they group together and start building robots that destroy or incapacitate defensive robots. They could even infect them with code that modifies their behavior to "join the other side", thus creating an "army of good robots" that makes sure robots(/people) don't get out of line. This differs from the situation above in that they're coded for the goal rather than it just appearing.

People realize that they're spending insane amounts of money trying to keep angry people from killing them. They realize that they could actually spend less money on trying to keep the angry people happy. War ends.

Then again, I've never really understood war, so I'm probably not very qualified to address this. Hopefully as war gets more bizarre and confusing (which it seems to be doing) others will come to a similar conclusion. It can take a while, but it seems the universe generally removes the asshats, so I'm not too concerned yet.
posted by nTeleKy at 12:41 PM on February 18, 2005


So you're saying the only way to win is not to play?
posted by kirkaracha at 12:46 PM on February 18, 2005


They realize that they could actually spend less money on trying to keep the angry people happy.
Man, that would be great. Do you know how hard it is to explain to people, some of them businessmen, that it's easier to do business with people than to blow them up?
Low intensity is (well, should be) the war of the future. Kennedy knew it 40+ years ago. I suspect lots of people are compensating for small dicks, however.
posted by Smedleyman at 9:13 AM on February 19, 2005




This thread has been archived and is closed to new comments