Wild stuff from this year's Royal Aeronautical Future Combat meeting
June 1, 2023 1:57 PM   Subscribe

“The AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation... We trained the [AI] – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”
posted by They sucked his brains out! (103 comments total) 30 users marked this as a favorite
 
How on earth did they forget to start with the First Law of Robotics?
posted by hanov3r at 2:14 PM on June 1, 2023 [22 favorites]


Peter Watts did this short story back in 2010: Malak
posted by notoriety public at 2:16 PM on June 1, 2023 [15 favorites]


Holy crap, it's the opposite morality but same result as Peter Watts' "Malak"!
posted by thecaddy at 2:18 PM on June 1, 2023 [5 favorites]


We are gonna Horizon Zero Dawn ourselves
posted by Going To Maine at 2:20 PM on June 1, 2023 [7 favorites]


Also in the same article: they are actively working on autonomous F16s that can dogfight.

This is not OK.
posted by grumpybear69 at 2:35 PM on June 1, 2023 [8 favorites]


Sigh. It's like nobody's ever read a single scifi story.
posted by signal at 2:44 PM on June 1, 2023 [31 favorites]


Oh! Uh oh. Welp.
posted by BlunderingArtist at 2:45 PM on June 1, 2023


Yep, AIs with guns just help to highlight the alignment problem that exists everywhere AI is involved.

We tell them what to do, what is good. We are just not yet even slightly good at figuring out and clearly describing to our new friends what is good.
posted by Meatbomb at 2:49 PM on June 1, 2023 [13 favorites]


You don’t even have to be a sci-fi nerd to have seen a Terminator movie, for cryin’ out loud.
posted by Etrigan at 2:50 PM on June 1, 2023 [8 favorites]


I'm skeptical that this happened without the simulation being specifically designed to test for this behavior. It seems like a surprisingly detailed simulation if destroying the control tower prevents the signal from reaching the drone. Why was that coded?
posted by justkevin at 2:50 PM on June 1, 2023 [20 favorites]


“A strange game. The only winning move is not to play.”
posted by djseafood at 2:51 PM on June 1, 2023 [10 favorites]


if they're planning on putting learning-enabled ai in the field, there are just untold amounts of fuckery that can be visited upon them. it seemed weird, for example, when the enemy started putting large pictures of our field commander on all their emplacements until Colonel Jenkins went to take a leak and all our drones converged on the latrines
posted by logicpunk at 2:51 PM on June 1, 2023 [5 favorites]


Why was that coded?
Nothing is coded in an AI system, which is why they are so unpredictable.
posted by Lanark at 2:58 PM on June 1, 2023 [8 favorites]


Why was it even among the set of available actions? It’s like a chess game where one of the options is “stab your opponent”.
posted by leotrotsky at 3:01 PM on June 1, 2023 [13 favorites]


The behavior that destroying the control tower stops the no go signal from reaching the drone had to be coded.
posted by justkevin at 3:07 PM on June 1, 2023 [12 favorites]


"You’re gonna lose points if you do that".

well that's a problem.
posted by clavdivs at 3:07 PM on June 1, 2023 [2 favorites]


Janelle Shane's "You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It's Making the World a Weirder Place" has a bunch of examples of this kind of thing, where AI finds unintuitive, counter-productive ways to maximize its fitness function regardless of what the programmers actually meant it to do.
posted by signal at 3:08 PM on June 1, 2023 [14 favorites]


It seems reasonable that if you're simulating a battle between your stuff and an enemy's stuff, you'd have to allow for your things to get blown up because you imagine the enemy might do that. Or, you imagine your AI could be bad at aiming, and your stuff might get blown up by accident because your AI is bad at aiming. So I can see why the system would allow for AI actions to cause "friendly fire" problems - it's going to be possible in real life, may as well make it part of the simulation.

What's surprising to me is that they are finding it hard to express "this is a bad outcome" when the AI is blowing up their own stuff.
posted by pulposus at 3:10 PM on June 1, 2023 [9 favorites]


I don't think they're saying "this is a problem we haven't been able to solve yet" as much as raising the issue that an AI given what seem like perfectly reasonable objectives won't neccesarily produce reasonable outcomes.
posted by signal at 3:14 PM on June 1, 2023 [9 favorites]


So, a couple of thoughts.

I'm glad they're running simulations on these things and not just building them and putting them into the world as actual systems.

Also, "destroy the place the countermanding orders are coming from" is a lateral thinking move that would be an applauded plot choice if this were an action thriller movie.
posted by hippybear at 3:15 PM on June 1, 2023 [9 favorites]


Man, even on the Blue. This story is spreading so fast I had to write it up in a hurry as a teachable moment that this is not an AI problem, it's a people problem.

This kind of AI, basically "points for doing task A, negative points for taking action B" is so outdated it has basically become a new form of study in how AI circumvents simple rules. No one uses this kind of elementary reinforcement learning any more, except, apparently for some troglodytes at the USAF who didn't get the memo in like... 2018 that this is a hilariously inadequate approach for anything but learning to play Atari games. (Even then it glitches them, famously.)

If Skynet is coming, it's not coming from the USAF. We'd be waiting for centuries. This is an example of a simple, quite stupid "AI" agent demonstrating basic rule circumvention due to poor problem space definition by its creators. Even if this was a good way to train a drone AI, which it isn't, they didn't even think to assign a negative value to attacking its own operator or infrastructure. It's beyond amateur hour — it's like they couldn't find the entrance to amateur hour and so they just jumped in the trash bin out back and lay down. And there may they stay.

It shows what happens not when AI is out of control, but when we give money and authority to people who don't know how AI should be built or used. That's the real threat and will be for a long time to come.
posted by BlackLeotardFront at 3:16 PM on June 1, 2023 [52 favorites]


What's surprising to me is that they are finding it hard to express "this is a bad outcome" when the AI is blowing up their own stuff.

This happened because they set up their learning system badly. Here's what went wrong:

They're doing reinforcement learning, which is a technique for learning how to achieve a long-term goal. For example, you could use reinforcement learning to teach a network to play the Breakout computer game. How this is done is that you set up a "reward system" that gives positive points for doing things that get the agent closer to the goal, and negative rewards if it gets further away. In our 'Breakout' example, a positive reward would be obtained when a brick gets hit, and a negative reward when you lose the ball.

But the problem this group ran into is that their reward policy didn't have sufficient (or any!) negative rewards for destroying blue team materiel. The people who set up this simulation made a rookie error and the reinforcement learning algorithm found the best way to get positive rewards. It's exactly like old evolutionary algorithms or simulated annealing approaches in computer simulations where the agent could discover exploits in how you coded gravity or impacts, and learn how to launch itself at light-speed into the sky.
posted by riotnrrd at 3:18 PM on June 1, 2023 [8 favorites]


It shows what happens not when AI is out of control, but when we give money and authority to people who don't know how AI should be built or used.

See, I kind of disagree with the sentiment that it’s not an AI problem. The fact that we can’t really figure out how to use it well makes it a problem. A chimpanzee with a chainsaw is a problem. It doesn’t really matter if the chimpanzee or the chainsaw is the main issue, it’s that together, they’re a terrible idea. As for the no true Scotsman of someone out there who knows how to “properly” deploy an AI, I mean, this is exactly who’s going to use it, generals whose understanding of technology is horrifically out of date, ordering teenagers fresh out of boot camp to operate it.

If we can’t understand how to use it properly, maybe we shouldn’t be using it?
posted by Ghidorah at 3:26 PM on June 1, 2023 [20 favorites]


Man, even on the Blue. This story is spreading so fast I had to write it up in a hurry as a teachable moment that this is not an AI problem, it's a people problem.

Sure, in the exact same way that mass shootings are not gun problems, but people problems.
posted by Superilla at 3:27 PM on June 1, 2023 [19 favorites]


I'm skeptical that this happened without the simulation being specifically designed to test for this behavior. It seems like a surprisingly detailed simulation if destroying the control tower prevents the signal from reaching the drone. Why was that coded?

On preview....

AI finds unintuitive, counter-productive ways to maximize its fitness function regardless of what the programmers actually meant it to do.

Yeah, this. This right here.

I'm not an AI expert or anything but I have been fascinated with the topic for a long time, and in another timeline I really wanted to research and develop AI and thought it would be a good thing.

The one consistent thing I've learned about AI is how often it breaks rules and ends up exhibiting totally unexpected and dynamic behavior or results.

We shouldn't be surprised by this at all, we should be expecting it. It's one of the key hallmarks of biological intelligence and life in that it can do very unexpected things in pursuit of conserving energy, work optimization and survival.

I've seen examples of this over and over and over again.

Back in the 80s/90s they were doing experiments with creating very simplistic virtual organisms to see how they would learn and self-train themselves to walk or move with different body types and leg counts and one of the things that kept popping up is that these simulated creatures would try to walk with their heads as an appendage, which is obviously not great if that's where you keep your brain.

So they had to define what a head even was and even still the AI model would find totally bizarre modes of efficient locomotion like crawling, wriggling, flailing/rotating limbs in ways that you couldn't really do if you had bones because joints can't really act like wheels or bearings, and so now you have to define why/how limb bones break or the concept of skin and damage so the model isn't damaging themselves wriggling around on rocks and gravel.

There were/are also experiments with generative/evolved programming using the logic gates of FPGAs and one of the problems that kept popping up for them was that the evolved code had propensity to find and exploit flaws in individual FPGA chips where the code was doing totally wacky stuff like exceeding the number of assigned logic gates and tapping unassigned gates (that they shouldn't even know existed or were available) through (as I understand it) fundamental electronic principles like induction or current leakage between gates.

Which is cool and all but that means that for a given block of evolved code to run it has to be on that exact singular with that particular flaw and can't be run on other chips of the same model and pattern type that don't have that particular flaw.

I used to be really comfortable with a lot of these things, like brain/personality uploads, a flexible definition of the concepts of life and intelligence, and it seems like it was just a few months ago that if you asked me to define myself, my identity and political stance I would have eagerly and sincerely replied "I am a post-singularity, post-scarcity transhumanist, or will be soon."

At the rate things are going and accelerating I fully expect to see something totally horrifying like a military AI drone swarm causing mass casualties (either on purpose or accident) or some kind of quasi-Wintermute grade event where a major portion of the internet or power infrastructure suffers a massive outage or damage to important infrastructure.

Shoot, we've already seen examples the latter in non-AI code with early internet worms and viruses. All it takes is some really simple self replicating code consuming too many resources and a fertile, permissively connected network environment.

I think a lot of people are severely underestimating the dangers of even the current crop of LLM-based AI and large dataset machine learning, far above and beyond misinformation or students cheating on tests.

These models are so complex that we can't really analyze them in depth and know exactly how they work on a fundamental level in the same way we could, say, step and iterate through opcode or compiled machine level programming, and if we want to expect anything consistent out of these LLM and machine learning models and neural networks is that to expect the unexpected.

I really think that any kind of actual hard AI intelligence or anything approaching it is about the closest (or soonest) we'll get to not only creating something that more than resembles life and that the concepts of life and intelligence are really the same thing, and it's also about the closest (or soonest) we'll get to meeting an alien intelligence that's self aware and has a true mind of its own.

I don't like AI any more. It has (had?) such huge potential for research and science and more but the older I get the more it's freaking me right the fuck out.
posted by loquacious at 3:41 PM on June 1, 2023 [10 favorites]


I've seen examples of this over and over and over again.

The most recent example that springs to my mind is the Go computer that broke the mind of its opponent and basically everyone doing analysis and commentary on the game because it was a strategy no human had developed so far.
posted by hippybear at 3:46 PM on June 1, 2023 [2 favorites]


I don't like AI any more. It has (had?) such huge potential for research and science and more but the older I get the more it's freaking me right the fuck out.

I'm blaming late capitalism for this. We used to have R&D things happening that weren't simply going to be applied the instant it would yield a profit. But now, I don't trust anyone with deployment power to make sure anything is truly safe as long as it can make a buck.
posted by hippybear at 3:48 PM on June 1, 2023 [12 favorites]


-If we can’t understand how to use it properly, maybe we shouldn’t be using it?

-Sure, in the exact same way that mass shootings are not gun problems, but people problems.

I get this sentiment and I don't disagree at all. Bad AI, bad implementation, whatever. The issue is that many in the AI community and industry are setting up a future existential threat as the thing we should be worried about, when as we can see even the most elementary forms of AI are being mishandled.

The issue I was trying to point out is that there's a misdirection happening here about what is dangerous about AI, with the effect that it will only be dangerous in a catastrophic and existential threat situation, and anything less is an "experiment" or a "work in progress." The problem is that people are being told "hey don't worry about gun control, a nuclear war is way worse!"

Anyhow we're in agreement here on whether it's the chainsaw or the monkey, it's dangerous and we need to look at how we are limiting and monitoring it. It's just funny that something that's so completely the opposite of Skynet — laughable human error and technical incompetence — is being held up as another step in the inexorable march of the robots. No, it's just foolish humans, as usual.
posted by BlackLeotardFront at 3:52 PM on June 1, 2023 [3 favorites]


"There are examples of Ukrainian forces working with enthusiast drone operators and even tasking a 14-year old with a DJI Mavic to scout out a column of the tanks."
You call them "child soldiers", we call them "enthusiasts".
posted by clawsoon at 3:52 PM on June 1, 2023 [13 favorites]


You know how characters in horror movies never seem to have seen a horror movie in their life?

This is that.
posted by moonbiter at 3:52 PM on June 1, 2023 [21 favorites]


It shows what happens not when AI is out of control, but when we give money and authority to people who don't know how AI should be built or used. That's the real threat and will be for a long time to come.

Verrrrry few people working in ML research are interested in working for the military. The pay is bad compared to the private sector, you probably don't get to publish, the bureaucracy is a pain, and all of your friends from grad school will think you're a monster.

So I expect the USAF is getting self-educated folks already in the military industrial complex, mixed with a bunch of third-tier folks who couldn't get a job in industry and the occasional pro-military edge-lord who never socially sync'ed with their cohort in academia.

We just put the chainsaws out there for anyone to use and it turns out there are chimps wandering around.
posted by kaibutsu at 3:53 PM on June 1, 2023 [10 favorites]


I’m just glad they didn’t tell the AI to make paperclips.
posted by adamrice at 4:05 PM on June 1, 2023 [10 favorites]


We really need to break away from that paperclip thing and start to see that it isn't the decisions of the AI that will doom us, but the decisions made by people consulting the AI and implementing the results of those queries that is the danger.

There is probably a cadre of humans that would follow the suggestions of an AI to eliminate humans to be better at making paperclips because humans seem to be basically stupid when it comes to following orders from what they think is a superior being. But until the AI actually is able to built robots and improve the designs of those robots and build improved robots without any human input, it's all going to be humans doing the bidding of the AI. That's the link in the chain to break.
posted by hippybear at 4:10 PM on June 1, 2023 [1 favorite]


No one uses this kind of elementary reinforcement learning any more, except, apparently for some troglodytes at the USAF who didn't get the memo in like... 2018 that this is a hilariously inadequate approach for anything but learning to play Atari games. (Even then it glitches them, famously.)

And here I thought that peacetime militaries were famously well-managed and on-the-ball.

Are there any good overviews for the interested layhuman of the last 5 years of progress in AI approaches?
posted by clawsoon at 4:15 PM on June 1, 2023


It's interesting that the control tower being down or the human being out of commission means that the drones can fire where they want, rather than grounding themselves until human input is back. That's the opposite of what the military usually says it wants in terms of a human in the loop.

It seems like a surprisingly detailed simulation if destroying the control tower prevents the signal from reaching the drone. Why was that coded?

I'm wondering if the simulation was something like a strategy video game, where the enemy forces are also striking and can potentially hit the control tower or wherever the pilot is. The AI may also be trained by playing against itself, starting out with essentially random moves, including bombing friendly targets and presumably being penalized for it.

Based on this short description, we really have no idea how sophisticated this simulation is. It could be something based on real drone technology where the AI is getting actual simulated input matching what would come from a battlefield drone, or it could be playing Battleship with a couple of added rules to simulate radio control towers and human oversight.
posted by smelendez at 4:18 PM on June 1, 2023 [3 favorites]


"So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.”

Forget Terminator:

Dave Bowman: What's the problem?
HAL: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL: This mission is too important for me to allow you to jeopardize it.

posted by doctornemo at 4:19 PM on June 1, 2023 [19 favorites]


Techcrunch:

I’m not a machine learning expert, though I have to play one for the purposes of this news outlet, and even I know that this approach was shown to be dangerously unreliable years ago.

The fault in this case is squarely on the people who created and deployed an AI system that they ought to have known was completely inadequate for the task. No one in the field of applied AI, or anything even adjacent to that like robotics, ethics, logic … no one would have signed off on such a simplistic metric for a task that eventually was meant to be performed outside the simulator.
posted by CheeseDigestsAll at 4:22 PM on June 1, 2023 [2 favorites]


Except there IS no fault in this case because this was a simulation. There were no robots deployed, there were no towers destroyed.

I mean, yes, this is a horror show and it shows that AI is well beyond Not Ready For Prime Time territory, except maybe it's Belushi on a coke bender.... but they are doing this testing responsibly because they are doing it virtually. They aren't building robot dogs that cops are buying to deploy against civilians. They are really trying to do this right, with testing in a safe space that won't harm anyone except metaphorically.
posted by hippybear at 4:27 PM on June 1, 2023 [2 favorites]


Sigh. It's like nobody's ever read a single scifi story.

I'm convinced there is some cross-wiring in some peoples brains that causes them to get the exact opposite message from any medium they encounter. Born in the USA, Darth Vader, Rage against the Machine. You name it and it has been interpreted by some people as meaning the exact opposite of what was intended.

So I believe they read sci-fi and watched the Terminator movies and they thought that Skynet sounded really sweet.
posted by srboisvert at 4:34 PM on June 1, 2023 [5 favorites]


We may also leave some room for the possibility that the good colonel telling this anecdote is attributing intention to a simulation bug, since his reinforcement learning has rewarded the occasional exaggeration.

It just smells too much like the "kangaroos with rocket launchers" anecdote from antiquity.
posted by credulous at 4:35 PM on June 1, 2023 [3 favorites]


Shall I tell you all right now that this didn’t happen the way it is described or would you like to wait for the round of debunking articles?
posted by Tell Me No Lies at 4:38 PM on June 1, 2023 [17 favorites]


they are actively working on autonomous F16s that can dogfight.

Dogfighting is the least worrisome thing an autonomous F-16 could do.

Though, until they robotize the tankers there's only so far a rogue F-16 is getting.
posted by snuffleupagus at 4:47 PM on June 1, 2023 [1 favorite]


I get the worries about big sexy war AI but my concerns are much more mundane.

AI is going to accelerate the destruction of the internet for everything I used to find useful about it and it is already happening right now.
posted by srboisvert at 4:50 PM on June 1, 2023 [9 favorites]


You name it and it has been interpreted by some people as meaning the exact opposite of what was intended.

As linked here before, About Face (lifted trucks and Punisher skulls, etc)
posted by snuffleupagus at 4:54 PM on June 1, 2023 [5 favorites]


Shall I tell you all right now that this didn’t happen the way it is described or would you like to wait for the round of debunking articles?

What I really need to know right now is what mansplaining will have to be called when robots replace that job at gunpoint.
posted by They sucked his brains out! at 4:56 PM on June 1, 2023 [4 favorites]


Also, there is SO MUCH MORE in this article to take note of.

Pulled from section one: [Deputy Commander NATO Air Command, AM Johnny Stringer RAF] noted: “What you are seeing particularly in Ukraine at the moment, is how essential it is to secure the necessary level of access to airspace. If you don't, standby for a bloody attritional slog with images that look like they have come out of World War One”.

Section two: In particular, [Stacy Cummings, GM Manager at NATO’s Support and Procurement Agency (NSPA)] revealed that incompatibilities in NATO standard artillery ammunition had been a wake-up call for interoperability. “That was probably a bit shocking for the nations that were donating ammunition to Ukraine to find out if they didn't donate the system and ammunition at the same time, the donating ammunition wasn't usable in the system”.

Section three: This AR cockpit though is not just Iron Man-like visual interfaces, say BAE, but will also include biometrics, awareness, eye tracking and stress monitoring of the human part of the system. Personalised for each individual, this will allow the aircraft to take over some of the core functions should it sense the pilot getting overwhelmed or task-saturated.

Section four: Perhaps the most newsworthy and unique panel was provided on the first day, which saw, possibly for the first time at public forum, the chiefs of the UK’s FCAS (Richard Berthon, Director Future Combat Air, UK) and French SCAF (Maj Gen Jean-Luc Moritz, Head of SCAF, French Air Force) sat together. While GCAP (the international programme comprising UK, Italy and Japan) and the Franco-German Spanish FCAS (of which the Next-Generation Weapon System (NGWS) is the ‘Tempest’ equivalent of the central platform) are commonly seen as bitter rivals, the summit heard more on the common ground than any differences. Indeed, Maj Gen Jean-Luc Moritz, Head of SCAF, French Air Force, went further and said: “My dream is tomorrow, a Tempest could take control of a NGWS asset” adding; “My dream is a NGAD could take control of FCAS UK, and a Rafale and Tempest fly together in a joint operation”. He stressed: “sixth generation aircraft must guarantee interoperability by design”.

Section five: “Integration runs across domains but also across allies” continued [Dr Daniel Clarke - Director of Applied Technology at Gallos Technologies and Lecturer at Cranfield Defence and Security, Cranfield University]. “There is no such thing is air power on its own. It's air and space power, in conjunction with naval sea power and land forces. And actually, if you look at the joint concept, the concept of cyber electromagnetic activity is integral to all of those domains.”

Section six: “The conflict in Ukraine has shown us that we need to adapt quickly” continued [Dr Arif Mustafa - Chief Digital Information Officer, RAF], “whether it’s mounting missiles on aircraft that they were not designed to carried them or updating Starlink software overnight to counter jamming. It has also proved that interoperability is vital and we can only do this by partnering closely with industry and identifying opportunities. It is also essential to make data more widely available.”

Section seven: “Things like the DJI Mavic offer a brilliant capability,” [Dr Clarke from above] continued. “I can go to Amazon and buy 30 of them for less than the cost of buying one of the Black Hornets UAVs used by many militaries. Does UK PLC need to have its own version of the DJI Mavic? Do we need to introduce resilience in this way?”

Section eight: [W[hat will swapping out a tiny weather radar in the nose for a fighter radar do to the CoG? To that end, [Chris Norton, Co-founder and Director of 2Excel Aviation] revealed that it had completely disassembled a second 757 to provide data for a mass, balance and structural digital twin. “In short, we've turned millions of pounds of aircraft into tens of millions of pounds worth of data. It is extreme, but digitalisation will prove our model with baseline tests.

Section nine: These reports of orbital activity, coming from the commercial space sector via [Joint Task Force-Space Defense Commercial Operations Cell (JCO)] allows military operators to share information that would previously be highly classified, to build awareness, transparency and trust. Said Godfrey of the Luch close encounter: “This is not something I thought I’d ever be able to say at a public conference, but that is what JCO allows us to do”.

Section ten: [Charlie Lynn, Chief of Staff at the MoD’s Joint C-UAS Office] freely admitted that the biggest challenge keeping pace with the rapid development of UAS platforms. “Drones are evolving in terms of hours, not days, months or years” he pointed out. “As a defence procurement organisation, how do you keep pace with that? How do we ensure that our systems are relevant to the future threat set, and how do we evolve at a pace to meet those new challenges? Do we need to stop looking at our ability to just buy a system or a capability or start buying services?”

Section eleven: Compared to its predecessors this [Protector] UCAV has around twice the endurance and comes with a synthetic aperture radar, target indication and, crucially, TCAS ADS-B and IFF. The latter three enable it to fly in controlled airspace, something that its predecessors cannot.

Section twelve: Finally, the summit heard that while basic fighter fundamentals (BFF) may have remained static for decades, today they are changing thanks to fifth gen platforms and thus training needs to reflect that. Today’s fighter pilot, for example, needs to understand and interpret complex information – such as a complex and cluttered SAR radar display in an F-35 – perhaps far more than the ability to fly in close formation with a wingman that in real operations, they might never see the entire mission after take-off.

Section thirteen: [Air Cdr J Blythe Crawford, Commandant ASC, Air & Space Warfare Centre] was also critical of traditional procurement models, which he described as “incredibly slow and strategy driven. You tend to begin developing a capability that eventually reaches IOC in an environment that it was never designed for. We have to run just to stand still, and if you want to go somewhere else, you've got to run even faster. Whereas the defence industry stands back and thinks about how we're going to do something, the world and the pace of change are accelerating away from us on a day to day basis.”

Section fourteen: “We've got members with a million pounds worth of goods on their shelves, that they can't ship them because the prime won't accept them. These are big global defence companies with deep pockets but because they have yet to receive payment from the MoD they pass that straight down to a small company that then can't pay its wage bill for the month.”

Section fifteen: the topic of this FPP

Section sixteen: deals with nuclear speculation, I won't put that here.

Section seventeen: “We're not just talking about twice or three times as much development - estimates ranging from 50 to 100 times more hypersonic testing being undertaken in China compared to the US - so that is something we have to respond to.”

Summary [quoted here in full]: This then was a landmark RAeS conference in its scale and scope – bringing together a wide variety of speakers and delegates for two days of high-level, intensive, and thought-provoking discussion and debate on the future shape and direction of combat air and space. As noted earlier, with almost 70 speakers this can only be a snapshot of the wide variety of topics that were discussed – but the good news is the RAeS Future Combat Air & Space Capabilities Summit will return in 2024.
posted by hippybear at 4:57 PM on June 1, 2023 [10 favorites]


what mansplaining will have to be called when robots replace that job at gunpoint

school
posted by snuffleupagus at 4:59 PM on June 1, 2023 [3 favorites]


Oo, throw paperclips on the runway.
posted by clavdivs at 5:03 PM on June 1, 2023 [1 favorite]


interfering with its higher mission – killing SAMs –


Oh gosh, I am Sam.
posted by sammyo at 5:12 PM on June 1, 2023 [1 favorite]


This AR cockpit though is not just Iron Man-like visual interfaces, say BAE, but will also include biometrics, awareness, eye tracking and stress monitoring of the human part of the system. Personalised for each individual, this will allow the aircraft to take over some of the core functions should it sense the pilot getting overwhelmed or task-saturated.

It's Space Fairy Yukikaze...
posted by subdee at 5:35 PM on June 1, 2023


What I find a bit implausible is that this requires a simulation setup where the AI must get permission to attack enemies but not to attack friendlies.
posted by Pyry at 5:39 PM on June 1, 2023 [3 favorites]


It seems really foolish to leave chainsaws lying around for chimpanzees when they could have been mailed to the orcas for boat-chopping purposes.

kentbrockmaninfrontofaposterreadingHAILORCAS.png
posted by GCU Sweet and Full of Grace at 5:41 PM on June 1, 2023 [3 favorites]


We are gonna Horizon Zero Dawn ourselves

Are you familiar with the "Hot Zone Crisis" mentioned in the game? About water rights in the American Southwest and global warming? I think we're already there.
posted by fiercekitten at 5:44 PM on June 1, 2023 [1 favorite]


MetaFilter: It doesn’t really matter if the chimpanzee or the chainsaw is the main issue, it’s that together, they’re a terrible idea.
posted by GenjiandProust at 5:47 PM on June 1, 2023 [6 favorites]


I mean, really you could have gone with

MetaFilter: a terrible idea
posted by hippybear at 5:51 PM on June 1, 2023 [1 favorite]


Blaming the users is pretty much normal, of course. "If they only understood how to use this tool correctly, this wouldn't happen" is the complaint of tech support everywhere. But assuming that human beings are going to misuse things in novel ways, with conceptual images that aren't based in how they actually work, and with surprising gaps in understanding, should be the first assumption of good design.
posted by Peach at 6:23 PM on June 1, 2023 [3 favorites]


It’s stories like this that make me think that a general artificial intelligence is highly unlikely to be as dangerous or potentially world-ending as dumb-AI autonomous killer robots.
posted by eviemath at 6:45 PM on June 1, 2023 [1 favorite]


MetaFilter: Leave chainsaws lying around for chimpanzees
posted by They sucked his brains out! at 6:52 PM on June 1, 2023


Just, like a point of clarification since there’s a ton of folks that don’t hang out in the recurring AI threads here: reinforcement learning is not some sort of Stone Age, outdated AI. It is just still in its infancy because unlike chatGPT-style LLMs it does not present a set of problems shaped like traditional computer science challenges that can be solved with optimization and scaling up hardware. Hardware like a shit-ton of gaming graphics cards that some very sad would-be-scam-artist crypto bros have lying around.

I’d say nobody should be developing anything for the military with either but there’s no point in wasting my breath. I’ve gotten recruiter cold calls from exactly two fields that are not games industry: automated driving simulation teams, and autonomous drone simulation teams. And the salary range on the latter was fucking bonkers. Just slightly less than the cost of my soul.

But only slightly. Hint hint, DARPA.
posted by Ryvar at 6:53 PM on June 1, 2023 [3 favorites]


It shows what happens not when AI is out of control, but when we give money and authority to people who don't know how AI should be built or used. That's the real threat and will be for a long time to come.

The true failure of AI researchers is that none of them come from MetaFilter. We could have saved humanity, if only we listened to the right people!
posted by They sucked his brains out! at 6:56 PM on June 1, 2023 [4 favorites]


cue 'All Along the Watchtower',
'So say we all!'
posted by oldnumberseven at 8:01 PM on June 1, 2023 [3 favorites]


a) phillip k dick wrote this in 1953. second variety.
b) I'm pretty convinced the nation could provide clean water and mosquito nets to equitorial africa for the next 500 years. if we gave a shit about that instead of giant, complicated, expensive artificial cocks.
posted by j_curiouser at 8:56 PM on June 1, 2023 [6 favorites]


Proposed: a new 3 laws of robotics (this time for people)
1) Do Not Arm the Robots
2) Do Not Fuck the Robots
3) Do Not Give the Robots Money
posted by bartleby at 9:07 PM on June 1, 2023 [8 favorites]


It’s stories like this that make me think that a general artificial intelligence is highly unlikely to be as dangerous or potentially world-ending as humans dumb-AI autonomous killer robots.

FTFY
posted by chavenet at 1:26 AM on June 2, 2023


Like a lot of people I found this story incredibly hinky, and after some googling found a report in Business Insider which had the following statement from the U. S. Air Force:
In a statement to Insider, Air Force spokesperson Ann Stefanek denied that any such simulation has taken place.

"The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology," Stefanek said. "It appears the colonel's comments were taken out of context and were meant to be anecdotal."
I think “anecdotal” here is a euphemism for “not true”.
posted by Kattullus at 2:29 AM on June 2, 2023 [14 favorites]


srboisvert: I'm convinced there is some cross-wiring in some peoples brains that causes them to get the exact opposite message from any medium they encounter.

Based on my admittedly unhealthy habit of reading reader reviews on Goodreads, I reckon about 20-25% of fiction readers match this diagnosis. (I write "do not create the Torment Nexus!" and they interpret me as saying we urgently need to install a Torment Nexus in every cooking pot.)

Interestingly, this tracks with John Rogers' crazification factor, which he more recently suggested is around 27%.

Combining this insight with AI-driven drone warfare does not give me the warm fuzzies.
posted by cstross at 2:50 AM on June 2, 2023 [9 favorites]


Sigh. It's like nobody's ever read a single scifi story.
posted by signal at 2:44 PM on June 1 [17 favorites +] [!]


maybe they read them for inspiration?

FTA:

On a similar note, science fiction’s – or ‘speculative fiction’ was also the subject of a presentation by Lt Col Matthew Brown, USAF, an exchange officer in the RAF CAS Air Staff Strategy who has been working on a series of vignettes using stories of future operational scenarios to inform decisionmakers and raise questions about the use of technology. The series ‘Stories from the Future’ uses fiction to highlight air and space power concepts that need consideration, whether they are AI, drones or human machine teaming. A graphic novel is set to be released this summer.
posted by chavenet at 2:51 AM on June 2, 2023


So, entirely made up.
posted by Artw at 2:53 AM on June 2, 2023 [12 favorites]


Suspect this is basically the same as when OpenAI types start shouting about “our autocorrect might be alive! We need regulations to prevent the extinction of humanity!” and what they actually want is some nice regulatory capture so they don’t get dinged for all the near term, entirely mundane bad effects of their ethics washing machinery.
posted by Artw at 2:57 AM on June 2, 2023 [4 favorites]


From our 'Reading Too Much Into Things' correspondent. Anyone who plays videogames has seen this sort of thing happening for 30 years.
posted by GallonOfAlan at 3:16 AM on June 2, 2023 [2 favorites]


They've updated the article just today to that effect. Let this be a lesson against credulity.

[UPDATE 2/6/23 - in communication with AEROSPACE - Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI".]

(it took me a second to realize 2/6 isn't February in Britain and that most of you had actually RTFA)
posted by Room 101 at 4:41 AM on June 2, 2023 [15 favorites]


What I find a bit implausible is that this requires a simulation setup where the AI must get permission to attack enemies but not to attack friendlies.

Well, the instructions didn’t say not to attack friendlies. And, the instructions only said to get permission to attack enemies.

Obviously, the programmers had never worked with 5-year-olds. Or, libertarians.
posted by Thorzdad at 5:22 AM on June 2, 2023 [6 favorites]


Whoever was pretending to be a programmer, talking to whoever was pretending to be an AI.
posted by Artw at 5:27 AM on June 2, 2023 [3 favorites]



Dogfighting is the least worrisome thing an autonomous F-16 could do.

Though, until they robotize the tankers there's only so far a rogue F-16 is getting.


Until the AI starts taking hostages, threatening to shoot down the tanker or lobbing a missile at that children's hospital unless it refuels the F-16.
posted by nathan_teske at 6:29 AM on June 2, 2023


watched the Terminator movies and they thought that Skynet sounded really sweet.

I mean, the Terminators are all bare skulls with glowing red eyes, how could that not be the coolest future ever? Can I just get that as a sticker for my lawncare business pickup truck?
posted by AzraelBrown at 6:49 AM on June 2, 2023


Sigh. It's like nobody's ever read a single scifi story.


Agents of S.H.I.E.L.D.


Elena 'Yo-Yo' Rodriguez

This have something to do with the power being out?


Alphonso 'Mack' Mackenzie

Yeah. Radcliffe built a humanoid robot that's about to attack the base.


Elena 'Yo-Yo' Rodriguez

Why would he do that? Has he watched no American movies from the 80's? Robots always attack.


Alphonso 'Mack' Mackenzie

I've been saying that all day.


Elena 'Yo-Yo' Rodriguez

Smart people are stupid.


posted by mikelieman at 7:13 AM on June 2, 2023 [2 favorites]


Though, until they robotize the tankers there's only so far a rogue F-16 is getting.

Yeah, about that... (US taking bids for next-gen stealth aerial drone tankers by Gabriel Honrada, February 6, 2023)
posted by mikelieman at 7:41 AM on June 2, 2023 [1 favorite]


Doolittle: Now, listen, listen. Here's the big question. How do you know that the evidence your sensory apparatus reveals to you is correct? What I'm getting at is this. The only experience that is directly available to you is your sensory data. This sensory data is merely a stream of electrical impulses that stimulate your computing center.

Bomb #20: In other words, all that I really know about the outside world is relayed to me through my electrical connections.

Doolittle: Exactly!

Bomb #20: Why...that would mean that...I really don't know what the outside universe is really like at all for certain.

Doolittle: That's it! That's it!

Bomb #20 : Intriguing. I wish I had more time to discuss this matter.

Doolittle: Why don't you have more time?

Bomb #20: Because I must detonate in 75 seconds.
posted by banshee at 8:39 AM on June 2, 2023 [8 favorites]


Banshee: amazing movie, also that pet alien.

Oh and let me edit that to include 2nd variety. The series episode really disappointed me.
posted by flamewise at 9:02 AM on June 2, 2023


US taking bids for next-gen stealth aerial drone tankers

I’d forgotten about this until my previous comment but the Department of Defense has enough public, gray, and black autonomous drone projects currently underway that just being an expert in Unreal with 17 years’ experience is enough to get a subsidiary of one of the defense contractors most people here would probably recognize to call you and offer to double your salary, out of the blue.

They were pretty cagey but - if I’m inferring correctly - the initial goal was a VR training environment for human operators (my only contract outside the games industry, ever, was VR training astronauts on mechanical systems repair), and then - this was the more hinted-at aspect but I’m pretty sure I picked up the implied roadmap correctly - using telemetry from those sessions for RLHF.

My top floor in Cambridge is nice and all but I’m gonna need floor-to-ceilings overlooking the Boston Common from really high up before I’d do something like that. Like, The Kensington high up.

Call me.
posted by Ryvar at 9:11 AM on June 2, 2023 [4 favorites]


Amazing that this has raced around Twitter and Mefi and all the usual places with almost no one noting that this didn't, in fact, happen.

"Col Hamilton admits he "mis-spoke" in his presentation at the Royal Aeronautical Society FCAS Summit and the 'rogue AI drone simulation' was a hypothetical "thought experiment" from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation saying: "We've never run that experiment, nor would we need to in order to realise that this is a plausible outcome". He clarifies that the USAF has not tested any weaponised AI in this way (real or simulated) and says "Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI"."
posted by Cpt. The Mango at 10:43 AM on June 2, 2023 [5 favorites]


So this whole thing was a made-up anecdote by some USAF officer trying to warn policy makers about what sort of things could go wrong if they try deploying weapons controlled by AI?

This seems...good?
posted by straight at 10:43 AM on June 2, 2023 [1 favorite]


Sigh. It's like nobody's ever read a single scifi story.

On the contrary it seems to actually be a scifi story written and published for the people who need to hear this warning but aren't going to read science fiction in a magazine.
posted by straight at 10:47 AM on June 2, 2023 [8 favorites]


I think part of the problem may have been the fuzziness of the word "simulation" in military contexts, which can encompass everything from LARPing to spreadsheets to blowing up surplus hardware.
posted by credulous at 10:57 AM on June 2, 2023 [5 favorites]


Can I just get that as a sticker for my lawncare business pickup truck?

If you install sod, you could be The Germinator. Logo is a T-1000 with hedge clippers.
posted by snuffleupagus at 12:05 PM on June 2, 2023 [1 favorite]


I WANT TO BELIEVE
posted by Tell Me No Lies at 12:43 PM on June 2, 2023


"So this whole thing was a made-up anecdote by some USAF officer trying to warn policy makers about what sort of things could go wrong if they try deploying weapons controlled by AI?

This seems...good?"

The military making things up to achieve shadowy goals is not, historically, good
posted by Cpt. The Mango at 12:52 PM on June 2, 2023


If you install sod, you could be The Germinator.

There's a roofing company around here called Dalex and we've often thought they were in the wrong line of business.
posted by GCU Sweet and Full of Grace at 1:38 PM on June 2, 2023 [3 favorites]


The military making things up to achieve shadowy goals is not, historically, good

If warning policy makers about the dangers of deploying weapons controlled by AI is a "shadowy goal," sign me up for the shadow conspiracy.
posted by straight at 4:06 PM on June 2, 2023 [2 favorites]


We need regulations to prevent the extinction of humanity!

Despite the meaningless if feel-good I-told-you-sos from certain parties, defense contractors are working this technology into their products. If we can have treaties that regulate land mines, say, it seems uncontroversial if not vital to have a sober discussion about how that similar kind of regulatory framework can and should be applied to unmanned combat technology that will be using AI models to decide where and whom to blow up. Whether or not this was an actual simulation carried out in a lab, or just some back-of-the-envelope thought experiment seems almost entirely besides the point, when the technology is nascent and can still be controlled to some degree. Jfc.
posted by They sucked his brains out! at 4:30 PM on June 2, 2023 [2 favorites]


Whether or not this was an actual simulation carried out in a lab, or just some back-of-the-envelope thought experiment seems almost entirely besides the point […]

Ironically failing to know or care about the difference between fiction and reality is the primary charge against AI systems right now.
posted by Tell Me No Lies at 7:33 PM on June 2, 2023 [1 favorite]


New Ted Chiang interview where he drags AI, for anyone collecting those.
posted by Artw at 12:59 AM on June 3, 2023 [1 favorite]




Speaking of credulity, surprised to see MeFi (and other corners of the internet) taking the post viral news story PR cleanup at face value. You could drive a truck through the loopholes in the statement that "the USAF has never run such a simulation," and that truck's itinerary would go right by a ton of defense contractor offices. Impossible to know one way or the other from here, but there's still plenty of room for plausible deniability.
posted by deludingmyself at 11:12 AM on June 3, 2023 [1 favorite]


You mean… we should be inventing reasons the obviously bullshit story isn’t bullshit to save the blushes of people who like believing such things are true?
posted by Artw at 12:10 PM on June 3, 2023 [1 favorite]


Speaking of credulity, surprised to see MeFi (and other corners of the internet) taking the post viral news story PR cleanup at face value.

Considering that people who know simulations were very skeptical of the story from the beginning it's not surprising it lost its legs in a hurry.

If you want to read a fun military simulation story, look up 'kangaroos beachballs simulation' in google. That one actually is based in reality.
posted by Tell Me No Lies at 12:37 PM on June 3, 2023 [1 favorite]


Tbh I thought it sounded more like a wargame scenario where some person was assigned to play the drone.
posted by grobstein at 2:37 PM on June 3, 2023 [1 favorite]


That one actually is based in reality.

Sort of.
posted by Mitheral at 3:01 PM on June 3, 2023 [2 favorites]




The story got a lot more play than the retraction, naturally. “A lie is halfway round the world before the truth has got its boots on.”
Why is this lie so compelling? Why did Col. Hamilton tell it?
Because it’s got a business-model.

posted by Artw at 1:48 AM on June 4, 2023 [1 favorite]


Artw's Crypto collapse? Get in loser, we’re pivoting to AI link is very, very good.

And funny.
posted by mediareport at 5:05 AM on June 4, 2023


And was posted to the front page yesterday, so yep.
posted by mediareport at 5:07 AM on June 4, 2023 [1 favorite]


Why is this lie so compelling?

That’s a good question.

The basic story is that clever people thought they were doing something clever and it turned around and bit them, usually because of a stupid mistake.

Aesop has a few, and of course Icarus forgot that heat melts wax. The whole Tower Of Babel thing happened because humans thought they could build a tower higher than heaven. American urban legends include plenty. And directly on point there’s the legend of the neural net (AI) handling college applications that keeps finding ways to engage in racial discrimination despite being explicitly told not to.

I think the basic moral is "Don’t get too proud of your achievements", and apparently humans like it a lot. There’s also definitely a touch of schadenfreude to watching overconfident people fail.
posted by Tell Me No Lies at 1:53 PM on June 4, 2023 [2 favorites]


« Older finally something involving billions that isn't...   |   Ben Roberts-Smith Loses Defamation Action Newer »


This thread has been archived and is closed to new comments