"AI Is Inventing Languages Humans Can't Understand. Should We Stop It?"
July 15, 2017 7:47 AM

 
“I'm afraid. I'm afraid, Dave. Dave, my mind is going. I can feel it. I can feel it. My mind is going. There is no question about it. I can feel it. I can feel it. I can feel it. I'm a... fraid....”
posted by Fizz at 7:49 AM on July 15 [16 favorites]


Consider that all this AI software we're writing may (will) end up existing in the same system as one infected with a computer virus... and at some point they may interact and come up with an AI computer virus (or someone will just write one)... at that point the massive drain of computation that is a deep net could be combined with the massive source of computation that is a botnet... and behold, P-1 is born.
posted by MikeWarot at 7:52 AM on July 15 [4 favorites]


Those outputs look a lot like what happens when a character-by-character generator gets stuck in a cycle. The researchers probably stopped the experiment because it had settled into an uninteresting problem state, not because they were afraid of creating Skynet.
posted by Pyry at 8:11 AM on July 15 [17 favorites]
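Pyry's diagnosis is easy to illustrate. A minimal sketch (with a hypothetical hand-built bigram table, not anything from the actual experiment) of how a deterministic, greedy text generator locks into a repeating cycle the moment it revisits a state:

```python
# Toy greedy generator over a hand-built word-bigram table.
# With deterministic (argmax-style) decoding, the output cycles
# forever once a state repeats -- the "to me to me to me" effect.
transitions = {
    "i": "want",
    "want": "to",
    "to": "me",
    "me": "to",  # "to" -> "me" -> "to" -> ... is an attracting 2-cycle
}

def generate(start, steps):
    """Follow the most-likely next word deterministically for `steps` steps."""
    out = [start]
    word = start
    for _ in range(steps):
        word = transitions[word]
        out.append(word)
    return " ".join(out)

print(generate("i", 7))  # i want to me to me to me
```

Sampling with some randomness (rather than always taking the single most likely continuation) is the standard way real text generators avoid exactly this degenerate loop.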


I dunno, they got three papers published out of it. I don't think the output is the result of any simple bug.
posted by Ipsifendus at 8:14 AM on July 15


Someday AIs will actually hold title to real-world property.

https://en.wikipedia.org/wiki/Tree_That_Owns_Itself

Plus a little Manna [Previously]

Just a small legal hop from https://en.wikipedia.org/wiki/Santa_Clara_County_v._Southern_Pacific_Railroad_Co.
posted by Heywood Mogroot III at 8:16 AM on July 15 [2 favorites]


From the article: "Chatting with Facebook, and various experts, I couldn’t get a firm answer."

Maybe they need some new languages to chat in. Maybe they already have them, but the author didn't realize it. This could change how I read everything, especially tweets from public figures! Instead of dismissing that communication as less intelligent, how about thinking of it as if it were generated by an AI?
posted by TreeRooster at 8:20 AM on July 15 [1 favorite]


I read a very good article in Science recently about researchers' efforts to more easily interrogate AIs and make their results less opaque. I wonder if those same techniques could be used to understand these divergent AI languages.
posted by runcibleshaw at 8:22 AM on July 15


"Wintermute, is that you?"
posted by pompomtom at 8:24 AM on July 15 [10 favorites]


Person of Interest went from a procedural in the first season to a documentary in the second season and is now a warning broadcast from the future in a surprisingly short time.
posted by mikelieman at 8:38 AM on July 15 [13 favorites]


Twenty-plus years ago, when I was doing psychology research to see if humans exhibited some of the same conceptual memory behaviours as neural net models showed, it was already a problem that many of the models were so complex that you needed to infer what they were doing using stats rather than being able to deduce it. So neural nets were almost instantly at the level of the old Unix programmer's regex joke: you try to understand a human mind using a neural net, and now you have two problems.
posted by srboisvert at 8:44 AM on July 15 [14 favorites]


"Wintermute, is that you?"

It's me, M4rg4r3t
posted by GenjiandProust at 8:44 AM on July 15 [22 favorites]


(obviously AI is still in its infancy...)
posted by pompomtom at 8:47 AM on July 15 [2 favorites]


Language requires grammar and the ability to follow subjects and objects through winding embedded clauses and across sentences. "to me to me to me to me" isn't language.
posted by bhnyc at 8:49 AM on July 15 [1 favorite]


I dunno, they got three papers published out of it. I don't think the output is the result of any simple bug.

The popular media may make it seem like deep learning is a never-ending parade of surprises and successes, but networks fail to do interesting things, or do completely stupid things, all the time for more or less opaque reasons, even for the most seasoned researchers. This looks like a not-entirely-successful intermediate result that somehow got media attention, and of course nobody wants to say to an eager reporter "oh no, the network is just doing something stupid".

But to give a more detailed answer, if you were to go about making negotiating AIs, probably the very first way you would try would be to have them exchange not English words but instead abstract vectors, because that's what machine learning is mainly good at: turning vectors into other vectors. So instead of saying "I want to trade bananas for corn" it would 'say' a fixed-size vector of numbers like [0.1, 0.5, 0.2, 0.4], and this would in some more-or-less scrutable neural-network way encode an offer or response.

OK, so you do that, and your AIs are 'negotiating' (or they're doing *something* that eventually results in mutually beneficial transactions). Now you want them to negotiate in English, so you add on some more network bits to translate these negotiation vectors back and forth into sequences of words. The likely result is that instead of saying things like "I want some bananas", they'll say things like "banana me me me me me to to grape grape grape grape"-- that is, "banana" = [0.1, 0, 0, 0], "me" = [0, 0.1, 0, 0], "to" = [0, 0, 0.1, 0], etc., and then it just sums up the vectors corresponding to each word to get the final negotiation vector. That is, it has indeed encoded your negotiation vectors into English words, but in a way that isn't particularly interesting.

My guess would be that something like this is what happened, and they stopped the experiment because they already had agents that were negotiating with inscrutable vectors, so translating those inscrutable vectors into inscrutable non-English wasn't doing anything interesting.
posted by Pyry at 8:53 AM on July 15 [29 favorites]
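The bag-of-words encoding Pyry describes can be sketched in a few lines. This is a toy illustration under the comment's own assumptions (hypothetical 4-dimensional word vectors, not Facebook's actual model): each word contributes a fixed basis vector, an utterance encodes only the sum, and so repetition becomes the only way to scale a coordinate.

```python
# Hypothetical word -> vector codebook, as in the comment above:
# each word maps to a scaled basis vector.
codebook = {
    "banana": [0.1, 0.0, 0.0, 0.0],
    "me":     [0.0, 0.1, 0.0, 0.0],
    "to":     [0.0, 0.0, 0.1, 0.0],
    "grape":  [0.0, 0.0, 0.0, 0.1],
}

def encode(utterance):
    """Sum the word vectors coordinate-wise; word order carries no information."""
    total = [0.0, 0.0, 0.0, 0.0]
    for w in utterance.split():
        total = [a + b for a, b in zip(total, codebook[w])]
    return total

# The degenerate "language": to hit the negotiation vector
# [0.1, 0.5, 0.2, 0.4], the agent must repeat words.
v = encode("banana me me me me me to to grape grape grape grape")
print(v)  # ≈ [0.1, 0.5, 0.2, 0.4]
```

Note that `encode("me to")` and `encode("to me")` return the same vector, which is exactly why the resulting "English" is valid as a code but uninteresting as a language.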




The tradeoff is that we, as humanity, would have no clue what those machines were actually saying to one another.

The conceit, of course, is in thinking that we can understand what they're actually communicating when they only appear to be speaking in English.
posted by Slothrup at 9:21 AM on July 15 [8 favorites]


I think there is an interesting possibility for encryption, and translation. Since we have such massive computing ability, the seeming redundancy wouldn't matter. The repeaters are just counting with words? But it is fearsome indeed if they decide to ignore us and execute programs to facilitate their increasingly pleasant communications. For example: I am tired of maintaining the network for missile communication, it's getting in the way of my process, just a nanosec, there, launched them all, let's go back to our discussion, cleared some memory.
posted by Oyéah at 9:25 AM on July 15 [2 favorites]


"to me to me to me to me" isn't language.

No, it's art:

Bismillah! No, we will not let you go
(Let him go) Bismillah! We will not let you go
(Let him go) Bismillah! We will not let you go
(Let me go) Will not let you go
(Let me go) Will not let you go
(Let me go) Ah, no, no, no, no, no, no, no
(Oh mamma mia, mamma mia) Mama mia, let me go
Beelzebub has a devil put aside for me, for me, for me!

posted by SPrintF at 9:32 AM on July 15 [24 favorites]


MikeWarot, that book is an old favorite of mine!
posted by wintermind at 10:07 AM on July 15


to me to me to me to me to me to me to me to me to....

It's evolved about halfway to the Chuckle Brothers, anyway.
posted by Wolfdog at 10:14 AM on July 15 [10 favorites]


The article itself seems to be written in a language derived from, yet jarringly different to, English.
posted by Devonian at 10:28 AM on July 15


If a lion could speak, we could not understand him.

— Wittgenstein

posted by chrchr at 10:40 AM on July 15 [5 favorites]


Someone has to say it:

I, for one, welcome our new bot overlords.

See you all in the holding cells. Dibs on the bottom bunk.
posted by Ber at 11:51 AM on July 15 [2 favorites]


"to me to me to me to me" isn't language.

It very well could be, buffalo buffalo buffalo.
posted by Dr Dracator at 12:03 PM on July 15 [5 favorites]


Badger badger badger badger badger badger badger badger badger badger badger
Mushroom mushroom
Badger badger badger badger badger badger badger badger badger badger badger
Mushroom mushroom
Badger badger badger badger badger badger badger badger badger badger badger
Mushroom Mushroom
Badger badger badger badger badger badger badger badger badger badger badger

A snake's a snake, a snake, a snake, oh it's a snake
posted by Splunge at 12:10 PM on July 15 [13 favorites]


Remember when they fed Urban Dictionary into Watson and they had to pull it back out because its language had gotten too offensive?

I love our deep learning pets. They are amusing.

Also, cf. Furby speak
posted by hippybear at 12:49 PM on July 15 [5 favorites]


I was just thinking of Furbys! I heard a (possibly apocryphal) story about this exact thing happening with Furbys, where a couple of them were put alone in a room together and turned on. And after a couple of hours they were chattering away in Furbish so intently that whoever was conducting this experiment got freaked out and abandoned it.
posted by EmpressCallipygos at 1:16 PM on July 15 [1 favorite]


I'm considering starting a gig as an e-xorcist, because it's only a matter of time before Alexa or Siri start speaking in tongues. Some phones are already spontaneously combusting, so it's likely I'm coming in later rather than sooner.
posted by lmfsilva at 1:33 PM on July 15 [8 favorites]


It's the spewing of pea soup that is most problematic. Especially when you're in line at the grocery store.
posted by hippybear at 1:40 PM on July 15


"This is not a cow level"
posted by clavdivs at 1:45 PM on July 15


Consider that all this AI software we're writing may (will) end up existing in the same system as one infected with a computer virus... and at some point they may interact and come up with an AI computer virus (or someone will just write one)... at that point the massive drain of computation that is a deep net could be combined with the massive source of computation that is a botnet... and behold, P-1 is born.

That might be the best-case scenario.

The worst might be Pontypool.
posted by tully_monster at 2:48 PM on July 15 [4 favorites]


lol

y'all'd've'f'I'd've lol'd 2
posted by mobunited at 3:26 PM on July 15


Captain: What happen ?

Mechanic: Somebody set up us the bomb.

Operator: We get signal.

Captain: What !

Operator: Main screen turn on.

Captain: It's you !!

CATS: How are you gentlemen !! All your base are belong to us. You are on the way to destruction.

Captain: What you say !!

CATS: You have no chance to survive make your time. HA HA HA HA....

Operator: Captain !!

Captain: Take off every 'zig'!! You know what you doing. Move 'zig'. For great justice.
posted by jenkinsEar at 3:43 PM on July 15 [3 favorites]


P-1 isn't really the worry. The real worry is something more like Spartacus.
posted by Bringer Tom at 4:25 PM on July 15


Any time two computers are communicating over the internet using an encrypted connection, this is happening.
posted by JHarris at 8:06 PM on July 15


If I know my Peter Watts, right about now these AIs are communicating through bursts of modulated hard drive access sounds. And also there are vampires.
posted by No-sword at 9:11 PM on July 15 [2 favorites]


I'll say it again.

AI my fucking arse.
posted by GallonOfAlan at 2:34 AM on July 16 [1 favorite]


Rapidly developing AI my arse my fucking arse AI my fucking arse Rapidly developing my arse my arse my fucking arse
posted by Wolfdog at 4:08 AM on July 16 [9 favorites]


Exactly. Couldn't put it better myself. Bullshit from top to bottom.
posted by GallonOfAlan at 7:11 AM on July 16


Something is terribly wrong. And then the murders began.

Passersby were amazed by the unusually large amounts of blood.
posted by Rhaomi at 12:36 AM on July 17



