“Human decisions are removed from strategic defense. Skynet begins to learn at a geometric rate. It becomes self-aware at 2:14 a.m. Eastern time, August 29th.”
February 8, 2011 8:11 PM

The X-47B made its maiden flight a few days ago at Edwards Air Force Base. What's the X-47B? Just an autonomous, artificially intelligent, fully armed jet fighter. At the same time, automated, cannon-mounted human-seeking robots are patrolling the Korean DMZ. Previously
posted by Bora Horza Gobuchul (113 comments total) 16 users marked this as a favorite
 
"you’re watching a high quality jet fighter fly without any human assistance either in or outside the craft"
What could possibly go wrong?
posted by unliteral at 8:16 PM on February 8, 2011 [8 favorites]


Guys, back to the drawing board. Come back when they look like Summer Glau.
posted by Ad hominem at 8:19 PM on February 8, 2011 [15 favorites]


posted by Bora Horza Gobuchul

Eponysterical. One of the things that's interesting about this technology (putting aside the sad fact that this incredible machine is being built in the service of death and destruction) is that with modern jet fighters, the monkey at the controls is pretty much already one of the weakest links. We can and do build planes which are capable of maneuvering at g-forces which are pretty hazardous to the thinking goo inside the pilot's cranium.

CPUs are much less fragile (not to mention they can potentially be designed to have much quicker reaction times than even the most skilled aviator) and would allow for fighter jets which can maneuver with unprecedented agility, physically beyond the limits of what humans are capable of.
posted by Scientist at 8:20 PM on February 8, 2011 [5 favorites]


Plus no signals to intercept.
posted by Ad hominem at 8:22 PM on February 8, 2011


A friend is an aviation magazine publisher. He said one night in zero visibility he saw aircraft after aircraft make perfect touchdowns in Atlanta despite the weather. When his sister asked how the pilots could land so perfectly in terrible weather he laughed and explained that the pilots weren't the ones landing the planes. It was all automated, one after the other.
posted by Ironmouth at 8:23 PM on February 8, 2011 [1 favorite]


No human element, no empathy, no hesitation. All war. Follow the chain of command inexorably, and destroy.

Kubrick made a movie about this, as I recall. Anyone seen it?
posted by Capt. Renault at 8:24 PM on February 8, 2011 [2 favorites]


Yeah, that's not evil looking or otherwise malicious in appearance at all.

You don't even need weapons for it, just send hundreds of them to the enemy of your choice and fly over major population centers in complex formations doing impossible aerobatics and the ensuing UFO panic will neutralize any foe from the inside out.
posted by loquacious at 8:25 PM on February 8, 2011 [3 favorites]


You stay down by day, but at night you can move around. You still have to be careful because the HKs use infra red. They're not too bright. John taught us ways to dust them.
posted by nathancaswell at 8:35 PM on February 8, 2011 [15 favorites]


Does anyone else have anything like this? Are we headed towards a future where the US "polices" the globe with legions of automated killing machines?
posted by Ad hominem at 8:37 PM on February 8, 2011 [1 favorite]


This will end well.
posted by jourman2 at 8:38 PM on February 8, 2011


WE WERE WARNED
posted by Sticherbeast at 8:45 PM on February 8, 2011 [3 favorites]


I told you. Robot planes. Soon we will have no war, reduced to shuffling through vast corridors while the elite have their 15 sq. ft.
posted by clavdivs at 8:46 PM on February 8, 2011 [2 favorites]


If these had rubber skin we could spot them more easy.
posted by KokuRyu at 8:47 PM on February 8, 2011 [1 favorite]


Does anyone else have anything like this?

I expect so. The basic technology, as Ironmouth says, has been in commercial aircraft for decades now, and at its simplest is well within the reach of the hobbyist community. Militarizing it is a lot of work but I have a hard time imagining that most other major military powers don't also have things like this.
posted by hattifattener at 8:48 PM on February 8, 2011


No human element, no empathy, no hesitation. All war. Follow the chain of command inexorably, and destroy.

...Because human pilots are so likely to disregard orders due to "empathy". Jesus. That ship sailed somewhere between Dresden and Hiroshima.
posted by auto-correct at 8:48 PM on February 8, 2011 [28 favorites]


The next step is the connection between these flight controls and the day care infotainment systems. (Orson Scott Card, to be obvious.)
posted by yesster at 8:49 PM on February 8, 2011 [3 favorites]


Putting on my wargamer hat, assuming it all works as it is envisioned, the main problem I see from a strategic & tactical standpoint about things like this can be summed up: "Great, we have our human lives out of the flying thing, but we still have to scramble ground human lives to recover the flying thing when it gets shot down so our stuff doesn't fall into enemy hands."
posted by BeerFilter at 8:50 PM on February 8, 2011 [1 favorite]


Just wait till one of these gets hacked, then it will get really interesting.
posted by doctor_negative at 8:52 PM on February 8, 2011 [4 favorites]


The robots can hear you breathing.
posted by cashman at 8:52 PM on February 8, 2011 [3 favorites]


Where's my 30 hour work week with all these robots? Fuck this, let's give the humans some time off...
posted by Meatbomb at 9:01 PM on February 8, 2011 [9 favorites]


Terminator quotes are fun, because as these things show, the Terminator and assorted Skynet bad guys are almost cartoonishly primitive compared to the automated weapons that are on tap today. By the time we actually have a strong AI? Well we'd better hope that Lem was right and if we make a military AI smart enough, the first thing it'll do is decide being a military AI is pointless and go study physics. Otherwise it'll probably look more like RUR or I Have No Mouth than the story of heroic human resistance.

BeerFilter: Or just have another drone bomb whatever drones get shot down.

My little wargamer hat says it's probably going to be a race between building progressively harder to disrupt, distributed, networked SAM systems and progressively stealthier networked UCAV fleets.

Ad hominem: Short answer? Yes. Long answer? Hell yes. Pretty much everyone with a serious military, or an aerospace industry they want to protect, is getting in on the game.
posted by Grimgrin at 9:05 PM on February 8, 2011 [4 favorites]


I'm not sure if it makes me feel better or worse if every country has these. Maybe these guys can just battle it out over the Atlantic while we kick back, if it comes down to it.
posted by Ad hominem at 9:08 PM on February 8, 2011 [1 favorite]


Plus no signals to intercept.
According to the Journal, militants have exploited a weakness: The data links between the drone and the ground control station were never encrypted.
what

And it's a digital signal, DVB-S or DVB-S2 according to the software's web site, not a journalistic mistake referring to a scrambled analog signal. Which means that literally ROT13 encoding could have made a difference if they're really using SkyGrabber unmodified to pick it up. I'm not sure this can accurately be called a "weakness"; it was effectively never intended to be anything but public in the first place.

Yeah, uh, let's hope that they put a little more thought into the control channels for the death fighter. Otherwise, expect to see these on eBay shortly.

It would be funny if we made Skynet but it was brought down by twelve year olds due to stupid engineering decisions.
posted by XMLicious at 9:09 PM on February 8, 2011 [1 favorite]
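
As an aside, the ROT13 point is easy to make concrete: even the most trivial letter substitution would have defeated an unmodified off-the-shelf receiver, while slowing a determined attacker by roughly one line of code. A toy sketch in Python:

```python
import codecs

# ROT13 shifts each letter 13 places; applying it twice returns the original.
# It defeats only software expecting the exact bytes it was written for;
# anyone who suspects the scheme reverses it in one line.
frame_label = "predator feed, frame 1042"
scrambled = codecs.encode(frame_label, "rot13")

print(scrambled)                          # "cerqngbe srrq, senzr 1042"
print(codecs.decode(scrambled, "rot13"))  # round-trips back to the original
```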


IIRC, the existing drones' totally unencrypted video feeds are what are being intercepted, but their command and telemetry channels are actually protected in some way.
posted by hattifattener at 9:14 PM on February 8, 2011


Oh, yeah, they must be or they'd be useless. I was kidding about the X-47B's control channels being unencrypted.
posted by XMLicious at 9:16 PM on February 8, 2011


The control channels on the one UAV I've played with were very much encrypted (actually all the comm was, from memory).
posted by markr at 9:18 PM on February 8, 2011


Yeah, one of the other things I was reading said that it's just the Predators.
posted by XMLicious at 9:20 PM on February 8, 2011


...Because human pilots are so likely to disregard orders due to "empathy". Jesus. That ship sailed somewhere between Dresden and Hiroshima.

You're off a million years on that.
posted by Ironmouth at 9:25 PM on February 8, 2011 [3 favorites]


Also, a pilot's visual space is the arc of sky that he or she can see through the canopy, and his or her view of the radar is two-dimensional, needing to be consciously interpreted to create a visualization of the true space. A pilot can focus his or her attention on only one set of inputs at a time – altimeter, radar, fuel gauge, the view out the window, etc. Any information other than that coming from the spot that the pilot's eyes are focused on is out-of-date, potentially by many seconds.

A computer can see wherever it has a camera, and its view of the radar is simply a sphere of three-dimensional vectors. It can monitor all of its instruments simultaneously in realtime, and base its decisions on a synthesis of data which is accurate up to small fractions of a second. It executes its decisions unhesitatingly and with perfect precision.
posted by Scientist at 9:49 PM on February 8, 2011 [5 favorites]
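
(A minimal sketch of the point about simultaneous, timestamped instruments; purely illustrative, nothing like a real avionics bus:)

```python
import time

# Hypothetical sketch: a flight computer keeps a timestamped reading for
# every instrument and can judge the freshness of all of them on every
# cycle; a human scanning gauges sequentially refreshes one at a time.
readings = {}  # instrument name -> (value, timestamp)

def update(name, value):
    readings[name] = (value, time.monotonic())

def snapshot(max_age=0.05):
    """Return every reading fresher than max_age seconds, flagging the rest."""
    now = time.monotonic()
    return {name: (value if now - ts <= max_age else "STALE")
            for name, (value, ts) in readings.items()}

update("altimeter_m", 8500.0)
update("fuel_kg", 2100.0)
update("radar_contacts", 3)
print(snapshot())  # everything sampled and judged in the same instant
```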


Soon as the autofactories are set up to manufacture them without human intervention we can all sleep safe at night.
posted by Artw at 9:53 PM on February 8, 2011


You're off a million years on that.

Yep, those cavemen pilots on their pterodactyls were fucking merciless. The saber-tooth tigers didn't have a chance.

but yes, point taken
posted by auto-correct at 10:02 PM on February 8, 2011


Ronald Arkin interview
TRN: And what is the role of lethality in the deployment of autonomous systems by the military?

Arkin: [This is a question] I hope to personally pursue more deeply from an ethical perspective, especially as a large portion of my research has been funded by the Department of Defense.

The real issue is whether the robot is simply a tool of the warfighter, in which case it would seem to answer to the morality of conventional weapons, or whether instead it is an active autonomous agent tasked with making life or death decisions in the battlefield without human intervention. To what standards should a system of this sort be held to, and where does accountability lie?

In addition is it possible to endow these systems with a “conscience” that would reflect the rules of engagement, battlefield protocols such as the Geneva Convention, and other doctrinal aspects that would perhaps make them more “humane” soldiers than humans? I find this prospect intriguing.
(I have it handy because I was recently reading a story on semi-autonomous flying military drones)
posted by bleary at 10:06 PM on February 8, 2011


We’re going to keep building deadlier and deadlier unmanned aircraft systems, hopefully it will work out for the best.

Oh, of course, I'm sure it will.
posted by tylerkaraszewski at 10:22 PM on February 8, 2011


One of the things that's interesting about this technology (putting aside the sad fact that this incredible machine is being built in the service of death and destruction) is that with modern jet fighters, the monkey at the controls is pretty much already one of the weakest links. We can and do build planes which are capable of maneuvering at g-forces which are pretty hazardous to the thinking goo inside the pilot's cranium.

Okay, so I have to admit that I don't know how air combat works outside of Top Gun. Does what you say above mean that right now in manned aircraft, the computer decides what to do and the pilot pretty much okays it? Or does the pilot have to at least do things like pick between real targets and decoys? It's picking between targets and decoys that I would have expected humans to be better at and where I'd have expected computers to be more easily fooled and possibly slower, but I could be wrong.

I guess I'm also asking, is the X-47B autonomous in ways that a UAV or manned fighter's automatic systems are not, and if so how? Or is it just the combination of a UAV and a manned fighter's automatic systems?
posted by XMLicious at 10:26 PM on February 8, 2011


Autonomous?

Bet it still needs stupid humans to refuel it.
posted by three blind mice at 10:33 PM on February 8, 2011 [2 favorites]


Yeah, they thought so much bigger in the 60s, when they dreamt up stuff like this but nuclear-powered, operating for months at a time, and too radioactive for anyone to go near. They knew how to dream big then!
posted by Artw at 10:44 PM on February 8, 2011 [5 favorites]


(Project Pluto, in case people don't know what Artw's referring to)
posted by hattifattener at 10:49 PM on February 8, 2011 [1 favorite]


This will end well.
posted by jourman2 at 8:38 PM on February 8


So say we all.
posted by You Can't Tip a Buick at 10:51 PM on February 8, 2011 [13 favorites]


1950s nuclear science was the best science. A nuclear cruise missile that's probably able to kill people just by flying around, even without tossing hydrogen bombs across half of Russia.

Second only to Project Orion for sheer lunacy: a giant Coke machine that dispenses nuclear shaped charges to blast a million-ton rocket to the moon. It'll only give one person cancer per launch.
posted by Grimgrin at 10:57 PM on February 8, 2011


Bet it still needs stupid humans to refuel it.

When I was working in the UAV industry in 2006, autonomous refueling wasn't seen as much of a hurdle. There was much talk of how to keep large urban areas under constant surveillance with "swarms" of drones taking shifts, "leveraging" (all quotes sic) such technologies as LIDAR to avoid collisions with buildings when going for near-ground-level perspectives.

It seemed farfetched at the time, but UAVs are now operating domestically without much fanfare.
posted by phrontist at 10:57 PM on February 8, 2011 [1 favorite]


(last link should point here)
posted by phrontist at 10:58 PM on February 8, 2011


Oh shut the fuck up.

That's not helpful.

This technology is as applicable to rescue craft as weapons platforms.

Well, yes, but I don't see them building and testing rescue drones. What I do see them building and testing are weapons of war. Maybe before this is all over we can have our own Butlerian Jihad. Then bring on the spice, baby!!!
posted by AElfwine Evenstar at 11:00 PM on February 8, 2011


Okay, so I have to admit that I don't know how air combat works outside of Top Gun. Does what you say above mean that right now in manned aircraft, the computer decides what to do and the pilot pretty much okays it?

No, it means that certain maneuvers in a jet fighter can cause such a violent reaction on the pilot, due to G forces, that the pilot won't remain conscious or survive, so the pilot becomes the limiting factor. The plane can handle it, the body can't. Remove the pilot and the plane becomes a much more maneuverable jet fighter.

I'm dimly recalling that one of the obvious maneuvers that knocks out a pilot is to go into a really sudden dive, as sudden as flying over a cliff, which sends the blood rushing into the brain, producing instant unconsciousness. Jet fighter pilots learn to do a half roll before diving just to avoid this effect, but the simple over-a-cliff dive is the better maneuver.

I remember this from reading about the origins of the AI game Creatures, which featured creatures in the game with neural nets. The guy who developed the game used to write neural nets for military flight simulators, and observed the AI pilots learning this maneuver that was unavailable to human pilots. That gave him the idea that AI required a simulated body to impose simulated constraints, leading to the video game.
posted by fatbird at 11:01 PM on February 8, 2011 [10 favorites]
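
(A crude way to picture the pilot as limiting factor: flight software already clamps commanded maneuvers to a structural limit, and with a human aboard the binding constraint is the lower physiological one. The numbers below are illustrative, not real airframe figures:)

```python
# Hypothetical limits, for illustration only (not real airframe figures).
AIRFRAME_LIMIT_G = 15.0   # what the structure can take
PILOT_LIMIT_G = 9.0       # sustained positive G a trained human tolerates
PILOT_NEG_LIMIT_G = -3.0  # negative G (the "over-a-cliff" dive) binds sooner

def clamp_command(requested_g, manned=True):
    """Clamp a commanded load factor to whichever constraint binds."""
    if manned:
        return max(PILOT_NEG_LIMIT_G, min(requested_g, PILOT_LIMIT_G))
    return max(-AIRFRAME_LIMIT_G, min(requested_g, AIRFRAME_LIMIT_G))

print(clamp_command(12.0, manned=True))   # 9.0  -- the pilot binds
print(clamp_command(12.0, manned=False))  # 12.0 -- the airframe doesn't
print(clamp_command(-6.0, manned=True))   # -3.0 -- negative G binds first
```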


Sounds like landmines just got sexier. And loosened their dependence on "land".
posted by -harlequin- at 11:03 PM on February 8, 2011 [2 favorites]


"It will have more range (though less firepower) than the F-18 Hornets commonly used today. Imagine wings of these fighters being launched from aircraft carriers positioned out of the reach of anti-carrier missiles and striking targets without the need for active human judgment."

China makes a 900-mile anti-carrier missile, so the US makes a 2,000-mile fighter/bomber...
Now for the inevitable announcement of the 3,000-mile Chinese anti-carrier missile!
posted by Dillonlikescookies at 11:25 PM on February 8, 2011


BeerFilter: Putting on my wargamer hat, assuming it all works as it is envisioned, the main problem I see from a strategic & tactical standpoint about things like this can be summed up: "Great, we have our human lives out of the flying thing, but we still have to scramble ground human lives to recover the flying thing when it gets shot down so our stuff doesn't fall into enemy hands."

Since it's unmanned, it can have a self-destruct instead of an ejection seat.
posted by Mitrovarr at 11:29 PM on February 8, 2011


fatbird - Yeah, I understood the bit about G-forces on the human body and all that, I just thought that there was more to the "weakest link monkey" idea in this case since we're talking about a supposedly-autonomous craft here. If it's a fighter that will lose every single fight without a human pilot somewhere then "weakest link" seems like a bit of a misnomer and the monkey is a pretty important part of the system.

I don't know if it's true that it would lose; I guess I was really reacting to the bits about the computer having access to multiple cameras and radar and reacting super-fast lower down, which doesn't seem like much of an advantage to me unless the computer actually has as good or better an idea than a human of what it's seeing. It would in the case of avoiding hitting a mountain or the other sorts of stuff that a UAV or commercial aircraft would do; I'm just not sure it would know better in the case of all the things a fighter would normally do.
posted by XMLicious at 11:36 PM on February 8, 2011


I'm guessing that nobody here has read anything by David Drake. Flying craft like these will probably be short-lived in the grand timeline of warfare. As soon as linear accelerators and the computers that will control them get up and running, anything in the sky is dead. It's indeed an arms race. Eventually high-powered ground-based lasers and more powerful particle beams will render any flying device obsolete. At least until you can pound them first.

There are only two reasons to build aircraft like these.

1. You are the only country that can afford them.
2. You know that your enemy can't target them and hit them.

When either of those parameters fails, the device is history. Consider the U-2 spy plane as an example. I like warporn as much as any grognard, but this is just posturing right now. Oh, my, we have a flying weapon that can do 15G turns and kill you.

And the people on the ground ask: How do you occupy the country now that you have air superiority?

Oops, did we forget the Terminator robot research? Yep, okay boys drop your cocks and grab your socks, we're going in.

Human grunts are like Jello, there will always be room for them. Actually more now, since we don't have to pay for pilot training.
posted by Splunge at 11:58 PM on February 8, 2011 [2 favorites]


No one should trust any government with these things.
posted by Henry C. Mabuse at 12:15 AM on February 9, 2011 [1 favorite]


"... Now for the inevitable announcement of the 3000mile Chinese Anti-Carrier missile!"
posted by Dillonlikescookies at 2:25 AM on February 9

Which will probably still be completely ineffective against the 4 already-deployed SSGN boats, such as the USS Florida, which, with their Tomahawk cruise missile loads, submerged stealth, and littoral mission capabilities, are already Version 2.0 of the supercarrier/unmanned fighter combo the X-47 is being proposed to augment.

But some points I'd like to raise, which have heretofore been unmentioned in this thread, are as follows. What will be the psychological effect of robotic combat devices on jihadists and terrorists? Will such people be willing to destroy themselves, if the only witness to their end is one or more robotic device(s), which neither know(s) nor care(s) about their motivations? Will facing a robot, instead of a uniformed American soldier, defuse or mitigate difficult near-combat situations with indigenous populations in current and future low intensity, long duration conflicts, where America feels the need to deploy some force component? And finally, how far removed from a battlefield/combat scenario can human operators of robotic military devices be, before they are viewed as merely designated agents of a larger society expressly sanctioning cold evil, as the least evil response to non-diplomatically resolved situations?
posted by paulsc at 12:23 AM on February 9, 2011 [1 favorite]


It's almost a shame we have all this amazing tech and no similarly-armed opponents to fight.
Almost
And I could see drones used for firefighting in Aus...
posted by Lovecraft In Brooklyn at 12:24 AM on February 9, 2011


Rail gun technology at this point is about capable of being shipboard-mounted as a modern form of artillery bombardment. The power requirements are absolutely immense (they draw from the ship's nuclear reactors), and being able to track, target and hit a fast-moving small target is pure science fiction at this point; we can barely hit a satellite in a known orbit with self-guided missile tech, and anti-ballistic missile tech is basically a failure. Using rail gun or gauss guns to hit a fighter jet better than we can with SAM sites, especially a fighter that can pull 15G+ maneuvers? A looooong way off.

Most air-to-air combat is already carried out using long-range missiles - which themselves are largely if not completely autonomous when fired. The targets are picked by computer, the tracking is done by computer, as are the flight adjustments to the missile. The spam-in-a-can role is to get the weapon platform to where it should be, and pick the targets - and even then, those are usually designated by ground control. Even in a free-fire situation, the pilot picks his targets based on training and the information given by his instruments - it's pretty rare he'll actually get to see his target with the mark 1 eyeball first. Also, the computers can stay alert for the entire flight. When you consider that stealth bombers can fly halfway round the world before they even get into the combat zone, alertness for long periods is an issue for pilots.

Even commercial flights are largely computer flown now. Where the pilot comes in most useful is when things go wrong - he can react quicker and come up with better solutions to unexpected events. Very important when you have a plane full of passengers, but in a combat jet where the pilot is almost as expensive as the hardware, it makes it a lot simpler if you can have him sat in a chair a few hundred miles away. Yes, you'll probably lose more planes to unforeseen problems - such as a bird in the engine, a SAM strike disabling some onboard systems, or damage to the flight surfaces - but you'll lose a lot fewer pilots. The last thing you want is pilots getting stranded behind enemy lines and captured.

Assuming they get the computers to handle take-off and landing on a carrier in rough seas autonomously - which is arguably the hardest thing a navy pilot has to do regularly - there's no reason this sort of robot plane won't be more effective, longer lasting, and able to pull off missions that pilot-flown planes just can't do; including ones that carry a high risk of the plane getting shot down. It's a lot easier to send autonomous planes on near-suicide missions.

Project Pluto:

Pluto's namesake was Roman mythology's ruler of the underworld -- seemingly an apt inspiration for a locomotive-size missile that would travel at near-treetop level at three times the speed of sound, tossing out hydrogen bombs as it roared overhead. Pluto's designers calculated that its shock wave alone might kill people on the ground. Then there was the problem of fallout. In addition to gamma and neutron radiation from the unshielded reactor, Pluto's nuclear ramjet would spew fission fragments out in its exhaust as it flew by. (One enterprising weaponeer had a plan to turn an obvious peace-time liability into a wartime asset: he suggested flying the radioactive rocket back and forth over the Soviet Union after it had dropped its bombs.)

Blimey, modern autonomous weapon platforms seem relatively benign after reading about that.
posted by ArkhanJG at 12:54 AM on February 9, 2011 [2 favorites]


*Barf*

Yeah, this will all be totally cool. Nothing to worry about here
posted by From Bklyn at 12:56 AM on February 9, 2011


Bet it still needs stupid humans to refuel it.

Ahem. They prefer the appellation "foolish humans."

I mean, we! Not they, but we! Ha ha ha, a "foolish" mistake! ERROR: $BIG_LEBOWSKI_QUOTE not available. Is it not so? Ha ha ha!
posted by No-sword at 1:13 AM on February 9, 2011 [6 favorites]


I wonder if the Joint Strike Fighter will enter service before it's rendered obsolete by more maneuverable, cheaper drones. Oh well, at least that particular ditch of burning money kept plenty of engineers and mechanics employed for a while.
posted by heathkit at 1:29 AM on February 9, 2011



No human element, no empathy, no hesitation. All war. Follow the chain of command inexorably, and destroy.

Kubrick made a movie about this, as I recall. Anyone seen it?


Guys, I have a new interpretation for "Eyes Wide Shut"

Seriously though, HAL wasn't a warrior. He was an explorer. He did follow orders to a fault, but to me it sounded like he empathized even as he did it.
posted by furiousxgeorge at 1:30 AM on February 9, 2011


Was HAL the one with the peace sign on his helmet?
posted by GeckoDundee at 1:33 AM on February 9, 2011 [1 favorite]


Does anyone else have anything like this? Are we headed towards a future where the US "polices" the globe with legions of automated killing machines?

Absolutely. The only real limitation on the US ability to kill people is domestic unrest, a la Vietnam, and that's proven to have a sufficiently strong self-interest component that if there aren't soldiers coming home dead, it's unlikely more than a handful of the electorate will give a shit who gets pounded into the stone age in order to, as Condi put it, "preserve the American standard of living".
posted by rodgerd at 1:40 AM on February 9, 2011 [3 favorites]


Taranis is a British demonstrator programme for unmanned combat air vehicle (UCAV) technology. It is part of the UK's Strategic Unmanned Air Vehicle (Experimental) programme (SUAV[E])

I wonder how much was spent on the witty one-liner function.
posted by fullerine at 2:04 AM on February 9, 2011


I can't help thinking that we would be better off if all this research and technology were channeled to almost anything else but war. What a colossal waste.
posted by jonesor at 2:11 AM on February 9, 2011 [5 favorites]


No one should trust any government with these things.

So what's your solution? Because the shit is out there.

Answers on a postcard, please.
posted by Wolof at 3:11 AM on February 9, 2011


Using rail gun or gauss guns to hit a fighter jet better than we can with SAM sites, especially a fighter that can pull 15G+ maneuvers? A looooong way off.
Hmmm...quite probably this is correct, but we have had traditional ballistics capable of hitting missiles pulling 15G for decades:
Phalanx gun
posted by bystander at 3:26 AM on February 9, 2011 [1 favorite]


This project is a waste of money. Drones are a fundamental gamechanger and they're building this as if it were the old rules.

The X-47B is an expensive, very advanced system where an army has a FEW planes to do targeted attacks. This approach is flawed. A much better approach is to use technology to minimize the costs of drones (e.g. if Predator drones were several orders of magnitude cheaper). That way you can have over 9000 cheap drones vs. one expensive X-47B. You only need to hit the target once.

Building drones is dead cheap and incredibly accessible, which is where the real danger lies.
posted by amuseDetachment at 3:31 AM on February 9, 2011 [7 favorites]
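
(The arithmetic behind "you only need to hit the target once" is the usual one-minus-miss-probability calculation; a quick sketch with made-up per-drone odds:)

```python
# Made-up numbers for illustration: each cheap drone independently has a
# small chance of getting through, and only one needs to succeed.
def p_at_least_one_hit(n_drones, p_hit_each):
    return 1 - (1 - p_hit_each) ** n_drones

for n in (1, 10, 100, 9000):
    print(n, round(p_at_least_one_hit(n, 0.02), 4))
# 1 -> 0.02, 10 -> ~0.18, 100 -> ~0.87, 9000 -> ~1.0
# The swarm wins on volume, not sophistication.
```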


As usual, the Simpsons have the wisdom of the future:
The wars of the future will not be fought on the battlefield or at sea. They will be fought in space, or possibly on top of a very tall mountain. In either case, most of the actual fighting will be done by small robots. And as you go forth today remember always your duty is clear: To build and maintain those robots.
posted by Redhush at 3:36 AM on February 9, 2011 [3 favorites]


Regarding AI intelligence, I'm reminded of the software Massive, used for crowd simulations of various types. It was used for the big battle scenes in Lord of the Rings, but an interesting thing occurred. After giving each simulated character a brain which defined its movements, place, and ability to react to other characters and its environment, and then grouping the characters into two opposing sides and setting them against each other, several of them left the fight and headed for the hills.

Brains, even pseudo low level ones, are tricky things.
posted by Brandon Blatcher at 3:38 AM on February 9, 2011 [2 favorites]


As the Singularity Hub article so eloquently puts it: "hopefully it will work out for the best."
posted by flapjax at midnite at 3:42 AM on February 9, 2011


I don't know how credible this is, but the positive humanitarian argument for robot warriors hasn't been mentioned yet and should be part of the debate.

1) They can have ethical programming (cf. prime directives) to follow the Geneva Conventions/Rules of Engagement literally, unlike regular soldiers, who combine all the flaws of regular people with the extreme cognitive and emotional stress of combat.

2) Those rules of engagement for robots could be much more conservative than for human soldiers, because machines are expendable in a way that humans (especially our humans) are not. For example, they could be programmed with a higher threat threshold for action, so that they would take more risks to exactly identify what kind of object is being pointed at them before reacting than we could expect of human soldiers, i.e. they wouldn't feel the need to shoot journalists pointing cameras at their helicopter.
posted by Philosopher's Beard at 3:50 AM on February 9, 2011 [3 favorites]
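
(One way to picture point 2 as code, purely hypothetically: the rules-of-engagement threshold is just a parameter, and an expendable machine can afford a far more conservative setting than a human under fire:)

```python
# Purely hypothetical sketch: the rules-of-engagement threshold is a
# tunable parameter, and an expendable platform can be set far more
# conservatively than we could ever ask of a human under fire.
HUMAN_SOLDIER_THRESHOLD = 0.6      # act on moderate confidence of a threat
EXPENDABLE_ROBOT_THRESHOLD = 0.99  # must be nearly certain before acting

def may_engage(threat_confidence, threshold):
    """Engage only when threat identification clears the ROE threshold."""
    return threat_confidence >= threshold

# Ambiguous case from the comment: is that a camera or a weapon?
pointed_object_confidence = 0.7
print(may_engage(pointed_object_confidence, HUMAN_SOLDIER_THRESHOLD))     # True
print(may_engage(pointed_object_confidence, EXPENDABLE_ROBOT_THRESHOLD))  # False
```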


Also, their battles and fights should be saved and downloaded into newer versions, so that they learn from each mission and grow more powerful and intelligent. This should work out fine, right?
posted by Brandon Blatcher at 4:03 AM on February 9, 2011 [1 favorite]


Robots don't lie.
posted by Sailormom at 4:07 AM on February 9, 2011


Robots don't lie.

That didn't work out so well for HAL.
posted by Brandon Blatcher at 4:10 AM on February 9, 2011


When the robots come for me, I intend to say BEEP BEEP BEEP ASSERTION THAT I AM HUMAN DOES NOT COMPUTE BEEP BEEP BEEP
posted by Flunkie at 4:14 AM on February 9, 2011


Grimgrin: "Terminator quotes are fun, because as these things show, the Terminator and assorted Skynet bad guys are almost cartoonishly primitive compared to the automated weapons that are on tap today.

Cleaning Man at Flophouse: [Damaged skin on the Terminator is rotting from gangrene] Hey, buddy. You got a dead cat in there, or what?

[the Terminator visualizes: 'POSSIBLE RESPONSE: YES/NO; OR WHAT?; GO AWAY; PLEASE COME BACK LATER; FUCK YOU, ASSHOLE; FUCK YOU']

The Terminator: Fuck you, asshole.
posted by bwg at 4:31 AM on February 9, 2011


Ethical programming is ... dicey, at best. You can't even get humans to agree on ethical programming.

Knowing when not to do something is one of the harder programming tasks; "go up to that human and explode" is comparatively easy.

Oh, wait, that's a Red Cross worker (looking for a shape consisting of two bars at right angles [what if their uniform was wrinkled? what if it was torn? what if there is a blood smear on it?] ... that's all over the environment ... vertically-oriented [no, wait, they could have fallen ... would normally be vertically oriented] ... still all over the environment ... proportional thickness of bars to length ... color in a range that might be called red [what if it is dirty?]). Oh, wait, there's a child nearby (is that a child or someone kneeling or someone with a growth disorder ... scan for epiphyseal plate closure ... [progeria exception]. Oh, wait, they're surrendering (waste a zillion CPU cycles matching every gesture, utterance, and symbol against an international database of surrender-semiotics). Oh, wait, there's a non-combatant. Oh, wait ...

Programming is all exceptions, assumption checking, and error catching. That's what not-killing is like.
posted by adipocere at 5:12 AM on February 9, 2011 [1 favorite]
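
(adipocere's riff maps almost directly onto code: the action is one line, and the not-acting is an open-ended pile of guard clauses, each hiding a hard perception problem behind an innocent-looking name. A deliberately toy sketch:)

```python
# Deliberately toy sketch: the action is one line; the not-acting is an
# open-ended pile of guard clauses, each hiding an unsolved perception
# problem behind an innocent-looking function name.
def engage(target):
    if wears_protected_emblem(target):  # red cross: wrinkled? torn? bloodied?
        return "hold"
    if is_child(target):                # or kneeling adult? growth disorder?
        return "hold"
    if is_surrendering(target):         # whose surrender-semiotics?
        return "hold"
    if is_noncombatant(target):         # the hardest call of all
        return "hold"
    # ... an unbounded list of further exceptions belongs here ...
    return "fire"                       # the "easy" part

# Stub predicates standing in for hard general-AI perception problems.
def wears_protected_emblem(t): return t.get("emblem", False)
def is_child(t): return t.get("age", 99) < 18
def is_surrendering(t): return t.get("hands_up", False)
def is_noncombatant(t): return not t.get("armed", False)

print(engage({"armed": True, "age": 30}))         # fire
print(engage({"armed": True, "hands_up": True}))  # hold
```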


Oh hey, I worked on the -A variant of this a while back. It was some fiddly bits related to the engine - they want these to be stealthy, so you have to essentially hide the (hot, loud) jet engine in the middle of the airplane, which means designing some interesting serpentine inlets and outlets to avoid radar bouncing off the sharp, angular rotating machinery.

These things scared the pants off me when I learned about the concept of operations. They were supposed to operate as a "wolf pack" - send a swarm of them out, which could network together in an ad-hoc fashion and autonomously coordinate an attack between, say, half a dozen aircraft.
posted by backseatpilot at 5:15 AM on February 9, 2011


Oh, wait, there's a child nearby (is that a child or someone kneeling or someone with a growth disorder

What's a child? Why is it important not to kill? What if the child has a remote detonator in its hand? Wait, what's a remote detonator?
posted by Brandon Blatcher at 5:16 AM on February 9, 2011


The landing gear doesn't appear to retract. I HAVE FOUND THEIR WEAKNESS.
posted by Civil_Disobedient at 5:26 AM on February 9, 2011 [1 favorite]


The unnerving thing about robotic planes (or robotic soldiers) is that your ability to wage war at a distance is no longer limited by the public's ability to stomach the human losses. Instead, it's entirely possible that these would increase the willingness of a country to get involved in or start conflicts that aren't near them.

The only thing that would really keep it in check would be the amount of money the country is willing to spend on the military. In the US, this number always seems unusually high.
posted by drezdn at 5:45 AM on February 9, 2011


As far as the planes being better followers of the rules of war... Who gets in trouble if they don't follow them?
posted by drezdn at 5:46 AM on February 9, 2011 [1 favorite]


It would be funny if we made Skynet but it was brought down by twelve year olds due to stupid engineering decisions.

Not to worry--we have several autonomous devices targeting those 12 year olds.
posted by Obscure Reference at 6:03 AM on February 9, 2011 [1 favorite]


Oh, wait, there's a child nearby (is that a child or someone kneeling or someone with a growth disorder

Of course those cognitive and perceptual problems are no different from the ones human soldiers have to face - and we know they often fail. So it's not an argument against robot ethics per se.

Ethical programming is ... dicey, at best. You can't even get humans to agree on ethical programming.

Well we do make all those efforts to programme our soldiers with ethical rules we've agreed on, through training in the internationally agreed laws of war, as codified in detail in field manuals and specific rules of engagement.
posted by Philosopher's Beard at 6:15 AM on February 9, 2011 [1 favorite]


It's picking between targets and decoys that I would have expected humans to be better at and where I'd have expected computers to be more easily fooled and possibly slower, but I could be wrong.

I wouldn't bet on that. Humans have their own cognitive / pattern-seeking goofs, and we can expect pilots to pretty aggressively want to find a target for their ordnance. There was a show about camouflage a while back where they found that shielding airfields from pilots was pretty easy. All you had to do was put some minimal cover over the bits of the airfield that you don't want bombed, like basic netting to break up the outlines of what's underneath, and then go paint big pitch-black silhouettes of aircraft somewhere you don't mind being bombed. The attack pilots went for the big, obvious targets ("shadows" of planes parked on the ground) almost every time.

An autonomous attack drone can have the advantage that it's (for now) not going to feel like a lame-o loser if it returns to base with its ordnance unexpended, and it can just not have the psychological factors that make human pilots favor the big, bright, wrong signal.

I guess I'm also asking, is the X-47B autonomous in ways that a UAV or manned fighter's automatic systems are not, and if so how? Or is it just the combination of a UAV and a manned fighter's automatic systems?

There are autonomous and non-autonomous UAVs. Non-autonomous ones, like (IIRC) the Predator and smaller, simpler drones, require someone, somewhere to be flying the thing like a pilot in a normal aircraft. With autonomous ones, like IIRC the Global Hawk, you just tell it where you want to go and it figures out the rest without a human in the direct flying loop. From some googling, the Global Hawk is allowed to file its own flight plans.
posted by ROU_Xenophobe at 6:22 AM on February 9, 2011 [3 favorites]
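
(The distinction here is roughly teleoperation versus waypoint autonomy. A minimal sketch of the latter, nothing like real avionics, just the shape of the control loop:)

```python
import math

# Minimal sketch of waypoint autonomy: the operator supplies destinations
# and the aircraft closes the loop on position itself, in contrast to a
# teleoperated drone where a human supplies stick inputs continuously.
def fly_route(position, waypoints, speed=0.2, tolerance=0.1):
    for wx, wy in waypoints:
        while math.dist(position, (wx, wy)) > tolerance:
            x, y = position
            heading = math.atan2(wy - y, wx - x)  # picks its own heading
            position = (x + speed * math.cos(heading),
                        y + speed * math.sin(heading))
        print(f"reached waypoint ({wx}, {wy})")
    return position

fly_route((0.0, 0.0), [(1.0, 0.0), (1.0, 1.0)])
```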


Not to be confused with the XB-47

Or with the XP-47

(Currently featured in ads for Dodge trucks since - my curiosity led me to discover - this experimental version of the Thunderbolt was the first application of Chrysler's HEMI engine.)
posted by Flashman at 6:28 AM on February 9, 2011


doctor_negative writes "Just wait till one of these gets hacked, then it will get really interesting."

This is the interesting part to me. In theory they should be able to create an unbreakable and unspoofable communication protocol, but the history of military codebreaking makes that heavy on the theory.

anigbrowl writes "Oh shut the fuck up. This technology is as applicable to rescue craft as weapons platforms. Luddites, I ask you."

But they aren't demonstrating a rescue craft are they?

Mitrovarr writes "Since it's unmanned, it can have a self-destruct instead of an ejection seat."

A self-destruct seems like both a chancy and dangerous thing to have on a combat aircraft. There is the chance that damage to the plane could cause it to go off prematurely or not at all, and of course there is the danger to ground crews and service personnel. Plus it is unlikely to completely destroy the most interesting parts of the plane.
posted by Mitheral at 6:31 AM on February 9, 2011


I hope that eventually we are just settling disputes via WoW clan battles...
posted by Theta States at 6:35 AM on February 9, 2011 [1 favorite]


The interesting part for me was how they're working on a module that'll enable it to feel love.
posted by Flashman at 6:36 AM on February 9, 2011 [6 favorites]


Ethical programming is ... dicey, at best. You can't even get humans to agree on ethical programming.

You're confusing humans, the individuals, with humans, the culture. The people who program machines can damn well create ethical programs. The people being paid to create the programs are being paid specifically not to do that, however.

That is the thing here: the culture of control. The people doing bad things often have to do it because it's their job. It wouldn't have to be their job if they weren't forced to keep it by economic considerations. In short, if you don't have a job you starve, or have no health care, or are looked down upon by the rest of society, so you do the bad thing and you don't question it. That's the whip, and is the foundation of many of our problems.
posted by JHarris at 6:44 AM on February 9, 2011 [1 favorite]


Well we'd better hope that Lem was right and if we make a military AI smart enough, the first thing it'll do is decide being a military AI is pointless and go study physics.

I haven't read Golem XIV, but I think it's a very, very slim hope to build on. General intelligence can tell you (or presumably a sufficiently advanced AI) what actions are most likely to achieve a set of goals, but the goals are a priori - by definition a goal-directed intelligence will correctly conclude that changing its goals is a bad thing.

If you give an incredibly intelligent military AI incredibly abstract goals like "reduce the war fighting capability of any enemy nation" and "increase our nation's likelihood of independent survival", maybe it will conclude that weapons are just a necessary backstop to make its fabulous diplomatic skills work... but that's not going to happen. The goals are going to read more like "drop these bombs on General Turgidson's designated targets as soon as reliably possible", any intelligence is going to be applied solely to better evading counterattack, and it's going to be up to the General to decide that this awesome capability is better off unused.
posted by roystgnr at 6:53 AM on February 9, 2011


In other news,
European scientists have embarked on a project to let robots share and store what they discover about the world.

Called RoboEarth it will be a place that robots can upload data to when they master a task, and ask for help in carrying out new ones.

Researchers behind it hope it will allow robots to come into service more quickly, armed with a growing library of knowledge about their human masters.
posted by Anything at 6:58 AM on February 9, 2011


This is not a fighter aircraft. It's an attack aircraft. It's also not the first UAV to be able to autonomously go from take-off to landing. The Global Hawk can do this as well.
posted by garlic at 7:02 AM on February 9, 2011


Also, most DOD programs have anti-tamper circuitry. This can be triggered locally by tamper detection or by software command, depending on the design.
posted by garlic at 7:09 AM on February 9, 2011


I am going to have to respectfully disagree, JHarris. It is always easier to destroy than to create. How long between the invention of the gun and the invention of the bulletproof vest? And yet, not long after that, armor-piercing bullets.

Programming soldiers with ethical rules, well ... how well has that worked out for us? Not well. Not well from Lynndie England all the way up to the folks who defined torture as consisting solely of organ failure. And humans come with enormous capability for pattern recognition and tolerances for ambiguity that would stagger programmers.

Computers aren't people. They aren't even a little bit like people.

The concept of ethical programming for computers takes for granted the amazing cognitive abilities even an indifferently nourished and instructed human comes with out of the box. Human cognition might as well be magic to us right now. How many years have elapsed, and we are still unable to pass a Turing test, a test applied under some incredibly artificial conditions. Ethical programming's "all you have to do is ..." sweeps a general AI problem under the rug by equating computers to people, and we simply are not that alike. A succinct summation of the problem ("do not shoot children in the head") is nothing like solving that problem.

I don't think it is impossible, but it does require technologies that will not be available to us for decades after we have the ability to kill by remote, and it is not for a lack of trying. The AI required for this is of a highly general nature. So far, our AI successes have been in very narrow fields and under constrained circumstances.

A friend's brother works in some anti-terrorist remote kill capabilities and the stories he tells me are hair-raising. But the big decisions (to kill or not to) always go back to the people. The problem just gets abstracted through wireless networks and long-range cameras.
posted by adipocere at 7:29 AM on February 9, 2011


Can we talk about this video in the second link? Should I be laughing hysterically? Or terrified?
posted by The Wig at 7:41 AM on February 9, 2011


The unnerving thing about robotic planes (or robotic soldiers) is that your ability to wage war at a distance is no longer limited by the public's ability to stomach the human losses.

While you are absolutely right about this, it is a problem that seems to have been pretty neatly solved in the last couple of wars by mostly just not showing them on the news. Rather than the Vietnam-era experience of people seeing footage every night on TV, we get some cold, mostly ignorable, anonymous number; "four killed in IED attack", "three killed by artillery".

The longest war in US history that most Americans don't seem to give a shit about is a pretty damning sign that we can stomach a lot if we don't have to see it.
posted by quin at 8:00 AM on February 9, 2011


Where's my 30 hour work week with all these robots? Fuck this, let's give the humans some time off...

umm...seen the unemployment rates lately?
posted by sexyrobot at 8:48 AM on February 9, 2011




One of the problems with computers is that they suck when presented with novel situations that their creators did not anticipate.

Did you program it to deal with suicide sparrows? What about bats with bombs then? Turtles with rocket packs and lasers?

Wars will be won by the most creative in the future.
posted by The Violet Cypher at 9:28 AM on February 9, 2011 [1 favorite]


GERTY lied, but then he had to.
posted by clavdivs at 9:46 AM on February 9, 2011 [1 favorite]


Using rail gun or gauss guns to hit a fighter jet better than we can with SAM sites, especially a fighter that can pull 15G+ maneuvers? A looooong way off

One advantage of a KE projectile weapon -- they are vastly harder to see. 20G turns don't help you if you never saw the inbound rounds and thus never bothered to try to dodge them.

Very, very high-speed projectiles will be noticeable by IR -- friction with the atmosphere will make them observable. Radar might be able to see them, depending on the size, but if you're searching with radar, everyone knows exactly where you are, and that's bad. These things depend on stealth; area search radars are pretty much the exact opposite.

It doesn't need to be rail/gauss -- though there are certain advantages there, but there's a reason that most close in weapons systems are guns -- they're fast and they're effective from the moment they leave the barrel.

So, KE weapons will either mean there is no active defense (because there's no detection) or you need an active sensor telling everyone where you are.

Give me that, and knocking this thing down becomes vastly easier. The best is when I get you in a triangle of launchers -- yes, you can pull a 20G break against missile one, which then puts you in a really bad place for missile #2, being launched from a different aspect.

Indeed, that's what I'm thinking will eventually end the air war. Smart, cheap SAMs. The key word here is "cheap" -- if I can buy 1000 of them for the cost of one of these, and I get a kill every 50 shots, then it's just a race until you run out of money.

And I think it'll get worse -- because all the tech you need to build this is also all the tech you need to build a better, faster and smaller missile.

Finally -- 20G turns aren't going to make you immune to missiles. Missiles already pull that and more. It's being able to pull that turn at slow speed that will make you able to break inside and force the missile to overshoot.
posted by eriko at 9:57 AM on February 9, 2011 [1 favorite]
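
(The "race until you run out of money" is a straightforward cost-exchange calculation; a sketch using the numbers from the comment, with a hypothetical aircraft price:)

```python
# Using the comment's numbers: 1000 cheap SAMs for the price of one
# aircraft, and one kill per 50 shots on average. The aircraft price
# here is a hypothetical placeholder.
AIRCRAFT_COST = 100e6
SAM_COST = AIRCRAFT_COST / 1000   # 1000 SAMs per aircraft-equivalent
SHOTS_PER_KILL = 50

defender_cost_per_kill = SAM_COST * SHOTS_PER_KILL
print(f"defender spends ${defender_cost_per_kill:,.0f} per aircraft downed")
print(f"attacker loses  ${AIRCRAFT_COST:,.0f} per aircraft downed")
# The defender pays 5 cents on the attacker's dollar per exchange:
# a race the attacker loses 20:1 on economics alone.
```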


I vaguely remember a cold-war sci-fi short story about fully automated AI warhead missiles in constant high orbit policing the world with the threat of doom. Seems like the logical next step here.
posted by No Shmoobles at 10:09 AM on February 9, 2011


adipocere: This is a minor quibble, but the bulletproof vest (well, the steel cuirass) predated the invention of handguns, and there's been something of a continuing arms race between thicker and better vests and more and more powerful guns. Wiki, as ever, has a good summary

Your point about ethical programming being a very hard problem is well taken though.
posted by Grimgrin at 10:45 AM on February 9, 2011


Finally -- 20G turns aren't going to make you immune to missiles. Missiles already pull that and more. It's being able to pull that turn at slow speed that will make you able to break inside and force the missile to overshoot.

Not if I destroy your SAMs first with tiny robot flying bombs or robotic vehicles or a surface-to-surface missile. Your 1000 rockets are useless.
posted by clavdivs at 10:47 AM on February 9, 2011


Will facing a robot, instead of a uniformed American soldier, defuse or mitigate difficult near-combat situations with indigenous populations in current and future low intensity, long duration conflicts, where America feels the need to deploy some force component?

They have twenty seconds to comply.
posted by flabdablet at 10:54 AM on February 9, 2011 [2 favorites]


The robots can hear you breathing.

This is exactly the sort of thing I thought about pointing out in the last Terminator-related thread about AIs, but decided not to, lest I put the idea in some programmer's head. It seems they've already gotten there, though. Everyone who ever hid in a wall, in the floor, behind something, etc. in the Terminator films and TV series would be so easily found by robots that could hear or smell extremely well.
posted by limeonaire at 11:24 AM on February 9, 2011


Not to mention those equipped with fine-tuned infrared vision...
posted by limeonaire at 11:30 AM on February 9, 2011


Cheap SAMs are not that far away. Think model rockets, GPS, Roomba -- there's enough technology there for a simple missile, only a proximity sensor is missing. And the auto industry is working on that, for things like blind-spot assist, parking assist, etc.
posted by phliar at 11:42 AM on February 9, 2011


Maybe Monsieur Mangetout could be deployed as a counter measure. You launch him at the robo fighters and he'll simply eat them all.
posted by Hairy Lobster at 11:45 AM on February 9, 2011


ArkhanJG: Most air-to-air combat is already carried out using long-range missiles - which themselves are largely if not completely autonomous when fired. The targets are picked by computer, the tracking is done by computer, as are the flight adjustments to the missile. The spam-in-a-can role is to get the weapon platform to where it should be, and pick the targets - and even then, those are usually designated by ground control. Even in a free-fire situation, the pilot picks his targets based on training and the information given by his instruments...

This is a bit of a late response on my part but thank you, that is exactly what I was asking.

phliar: Cheap SAMs are not that far away. Think model rockets, GPS, Roomba -- there's enough technology there for a simple missile, only a proximity sensor is missing. And the auto industry is working on that, for things like blind-spot assist, parking assist, etc.

Tangentially, has anyone else seen the stuff on the academic efforts to build control systems that would allow a completely blind person to drive in normal road traffic? (example news story) I listened to a thorough radio story on NPR last year about it but I couldn't find a link to it after a cursory search.

Given the sorts of control and feedback mechanisms they describe for that stuff, which sound effective enough that I expect them to succeed, I wonder whether a completely blind person might actually be better than a sighted one at controlling a weapons system that operates primarily off radar, and whether blind drivers might outperform sighted ones once these systems are mature. One of you MeFites who is a science fiction author should write a story.
posted by XMLicious at 11:58 AM on February 9, 2011


The exponential increases in speed at which battle is carried out in the air make me think of space battles in Gordon R. Dickson's book Dorsai. Both fleets show up at the same time, the computers target and unload everything all at once, and then humans look around and see what's left. There's zero human control of the battle itself.
posted by fatbird at 12:07 PM on February 9, 2011 [2 favorites]


Isn't it rich,
Aren't we a pair?
Me on the ground
You in the air
Send in the Drones
There ought to be Drones
posted by Sebmojo at 12:19 PM on February 9, 2011 [4 favorites]


Hmmm...quite probably this is correct, but we have had traditional ballistics capable of hitting missiles pulling 15G for decades

Indeed true - I seem to recall a high fire-rate shotgun approach to defend armoured targets against missiles too. The main difference here is that you're using it at point-blank range, using a radar-controlled high rate-of-fire weapon to chuck a lot of lead into the flight path - which is basically coming right at you - and hope it hits. A point-blank, last-ditch defence. The same applies to anti-missile missiles; you already know what it's aimed at - you - and you just need to hit it first. Still a massively tricky problem, but one that's been worked on for many years. A different problem entirely, though, to hit something many miles off on a non-converging vector.

The basic problem with KE weapons at range on a small mobile target is simply targeting - when you have a number of seconds to hit the target, your KE rounds can only be aimed at where you think the target is going to be, and you cross your fingers. With anti-air missiles, they have both self-guidance and usually additional radar tracking information from the ground, along with the ability to change vector at high G to change the targeting solution on the fly, something you just can't do with KE rounds (yet). I think SAM sites - especially cheap, accurate ones - are going to continue to be the main defence against air attack craft, manned or otherwise, for plenty of time yet.

Now, directed energy weapons with near-instantaneous flight time to target? Raytheon have been working on the LaWS system, for example, and successfully tested it on a shipboard system tied into the guidance system of the CIWS (i.e. the aforementioned minigun on steroids) to shoot down 4 UAVs. With lasers. The biggest problem again is power - not a problem on a nuclear ship, but more of an issue in anything land-based. There's the Boeing Airborne Laser, which is basically a giant chemical laser strapped into a 747, designed to hunt ballistic missiles on their way 'up' after launch, but the design has been abandoned as not practical - they just couldn't get enough power into the laser to be effective at any realistic range.

Still, a SAM site with a giant laser and its own nuclear battery? Effective, maybe, but a bit of a target for a stand-off strike by cruise missile or stealth suicide UAV.

I have to admit, I do find the top-trumps approach to military building fascinating from a technical point of view, but I do wish we could turn the same inventiveness and massive investment toward technical problems designed to save and improve lives, instead of trying to win a future war against a highly capable enemy that doesn't exist.
posted by ArkhanJG at 3:43 PM on February 9, 2011
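
(The KE targeting problem ArkhanJG describes above is the classic lead calculation: aim where the target will be after the round's time of flight, which is exactly the assumption a hard-maneuvering target violates. A simplified constant-velocity sketch:)

```python
# Simplified lead calculation, assuming the target holds course and speed
# for the round's whole time of flight: exactly the assumption a
# hard-maneuvering target exists to violate.
def lead_point(target_pos, target_vel, time_of_flight):
    return tuple(p + v * time_of_flight for p, v in zip(target_pos, target_vel))

target_pos = (10_000.0, 8_000.0)  # metres
target_vel = (300.0, 0.0)         # m/s, straight and level
tof = 6.0                         # seconds for the round to arrive

print(lead_point(target_pos, target_vel, tof))  # aim here: (11800.0, 8000.0)
# A 15G turn (~150 m/s^2) displaces the target roughly
# 0.5 * 150 * 6**2 = 2700 m from that prediction, so the round misses.
```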


eriko -- your magic anti-air weapon will need sensors to detect the aircraft as well, presumably active radar. And aircraft currently have infrared missile warning sensors that can probably be adapted to this sort of missile. One zig and the incoming ballistic missile misses. The other drawback is that your railgun facility is probably not going to be mobile, which means SEAD will probably hit it with cruise missiles early on.
posted by garlic at 6:07 AM on February 10, 2011


Cheap SAMs are not that far away. Think model rockets, GPS, Roomba -- there's enough technology there for a simple missile

That's it, I am officially never buying one. I'd be paranoid about what it does while I sleep...
posted by Theta States at 6:56 AM on February 10, 2011


ethical programming brainstorming

the programming should tend towards more false negatives than false positives

if false negatives are expensive, then instead of having the expensive drone with guns figure things out, have cheapo weaponless drones gather data and analyse for the kill decisions, making it easier for humans to be involved

speaking of which, would robot error be worse than human error?

would it reduce war crimes to have ethical programming in assistive technology for human soldiers as well? they prob have huge budgets and could spend a lot on designing systems that work against human behaviours that lead towards Stanford Prison-esque scenarios (why haven't they already? the psych research has been out there forever and I thought the military was some of the first to adopt psychological testing for aptitude way back, what, did they break up with psychology?)
posted by bleary at 7:15 AM on February 10, 2011
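
(bleary's first point, biasing toward false negatives, is the same threshold idea stated as an asymmetric loss; a quick sketch of where the threshold lands:)

```python
# Sketch of the asymmetry: weight a false positive (engaging a
# non-combatant) far more heavily than a false negative (holding fire
# on a real threat), and the optimal threshold moves toward "hold".
COST_FALSE_POSITIVE = 1000.0  # hypothetical relative costs
COST_FALSE_NEGATIVE = 1.0

# Engage only when expected cost of holding exceeds that of acting:
# p * C_fn > (1 - p) * C_fp  =>  p > C_fp / (C_fp + C_fn)
threshold = COST_FALSE_POSITIVE / (COST_FALSE_POSITIVE + COST_FALSE_NEGATIVE)
print(f"engage only above p = {threshold:.4f}")  # ~0.999
```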




This thread has been archived and is closed to new comments