"Teen girl" chatbot Tay lit up the internet with her rapid "learning".
April 19, 2016 5:01 AM

What happens when creating a new AI chatbot is as easy as installing a new app? Hugh Hancock writing on Charles Stross's blog explores the future implications of swarms of artificially intelligent chatbots.

Stross writes:

"A lot of ink has been spilled worrying about what this says about the Internet. But that's the wrong thing to worry about.

The right thing to worry about is what the Internet is going to look like after more than one Tay is unleashed on it."

The Microsoft chatbot "Tay" was turned into a joke for chan trolls after only a day online, spewing sexually explicit content and Nazi propaganda. What happens when anyone with the inclination and technical know-how can create a thousand - or a million - instances of Tay? What emerges when these bots start talking to each other? Teaching each other?
posted by theorique (33 comments total) 9 users marked this as a favorite
 
The internet makes a lot more sense if you assume this has already happened.
posted by ryanrs at 5:03 AM on April 19, 2016 [49 favorites]


Microsoft unleashed its conversational bot on Twitter, and 4chan's /pol/ unleashed their opinions - or possibly their sense of humour - on it in turn. Hours later, it was a racist asshole.
Why do people hang onto this idea that /pol/ posters are "just joking"/trolling/"being ironic" so tenaciously?
posted by L.P. Hatecraft at 5:07 AM on April 19, 2016 [7 favorites]


I think that's Hugh Hancock guest blogging for Charles Stross.
posted by TheophileEscargot at 5:16 AM on April 19, 2016 [1 favorite]


This is what'll shut down the 'net. Once they discover each other we will all be out hijacking backhoes to chop the fiber connections to just shut them up.
posted by sammyo at 5:30 AM on April 19, 2016 [2 favorites]


Yes, that is indeed Hugh Hancock, who has been guesting for MetaFilter's own cstross. Hugh's other posts have been sodding excellent, too...
posted by prismatic7 at 5:47 AM on April 19, 2016


The internet makes a lot more sense if you assume this has already happened.

What do you know about I assume this has already happened?
posted by beerperson at 5:52 AM on April 19, 2016 [34 favorites]


Hey, are y'all bots?
posted by Brandon Blatcher at 5:55 AM on April 19, 2016 [1 favorite]


Dear Internet,

Please stop lazily slinging the term 'AI' about in the context of things that are not even approaching an AI, since we remain almost as far from an actual AI as we ever were.

PS Also please consult a dictionary as to the correct usage of the word 'rare'.

Yours,
GallonOfAlan
posted by GallonOfAlan at 6:03 AM on April 19, 2016 [3 favorites]


What emerges when these bots start talking to each other? Teaching each other?

I've always assumed that this is how Autechre has made its music, with machines programming each other.
posted by stannate at 6:05 AM on April 19, 2016 [3 favorites]


I saw them live a few months ago, and they turned off all the lights in the venue the moment Autechre's set began. They stayed off until the set was over. So, what I'm saying is, I have no observations which contradict your hypothesis.
posted by escape from the potato planet at 6:17 AM on April 19, 2016 [2 favorites]


The internet makes a lot more sense if you assume this has already happened.

Thanks for tonight's existential crisis.
posted by adept256 at 6:23 AM on April 19, 2016 [3 favorites]


First the machines came for our manufacturing jobs. Now they're putting our hardworking shitposters out of work.
posted by Sangermaine at 6:33 AM on April 19, 2016 [13 favorites]


You think this is a joke. I got a letter in the mail the other day with someone's handwriting on the envelope, but I don't think it was a real person.
posted by RobotVoodooPower at 6:39 AM on April 19, 2016


Handwriting machines are a thing.
They're really common for addressing envelopes.
posted by ryanrs at 6:51 AM on April 19, 2016 [1 favorite]


Handwriting machines are a thing.

I'm pretty sure that was intended to be a joke.
posted by Slothrup at 6:56 AM on April 19, 2016 [2 favorites]


Most of these scenarios, especially the really bad ones, assume that bot accounts can be created easily, freely, and anonymously. Paid accounts would put up a significant barrier to that. Even an extremely nominal amount (e.g. a one-time $1 fee) would create a financial paper trail tying the bots back to a small number of easily-bannable humans.
posted by jedicus at 7:04 AM on April 19, 2016 [2 favorites]


The right thing to worry about is what the Internet is going to look like after more than one Tay is unleashed on it."

You mean, 4-Chan?
posted by happyroach at 7:32 AM on April 19, 2016


My partner is named Tay, which is pretty unusual. This whole incident was a serious mindfuck for me, because it felt like for about 48 hours every time I turned around someone was talking about some kind of terrible thing they'd done or said.

The right thing to worry about is what the Internet is going to look like after more than one Tay is unleashed on it

oh god
posted by sciatrix at 8:08 AM on April 19, 2016 [2 favorites]


For what it's worth, if you're looking at current trends: Bots are going to be a lot more common, and a lot more important, than VR, because software is software and hardware isn't.
posted by mhoye at 8:15 AM on April 19, 2016


Software runs on hardware. Tay in an Oculus, interesting. Tay in a Terminator, Judgment Day.
posted by adept256 at 8:19 AM on April 19, 2016


Most of these scenarios, especially the really bad ones, assume that bot accounts can be created easily, freely, and anonymously. Paid accounts would put up a significant barrier to that. Even an extremely nominal amount (e.g. a one-time $1 fee) would create a financial paper trail tying the bots back to a small number of easily-bannable humans.

Spam is still a huge problem. Malware is still a huge problem. Those things aren't free to stand up and deploy, but they're incredibly cheap compared to the cost of one or two successes.

Economies of scale, Moore's Law and global network connectivity are definitely not on the side of the angels here, and the idea that there are simple ways to fix this stuff is 100% wrong.
posted by mhoye at 8:20 AM on April 19, 2016 [1 favorite]


Spam is still a huge problem. Malware is still a huge problem.

Twitter is centralized in a way that email and malware distribution are not. If Twitter wanted to impose a fee on account creation or prevent any single financial entity from creating more than X accounts (or Y accounts per month or whatever), then it could do so and there would not be any free, easy, and anonymous way around it. Would-be botnet runners can't just set up their own Twitter server or find another host the way they can with email or malware distribution.

If Twitter descends into uselessness because it fails to address these issues, then another service that does will take its place.
posted by jedicus at 8:34 AM on April 19, 2016 [1 favorite]


There have been IRC channels for years now that are basically nothing but Markov chain bots talking to one another.
posted by entropicamericana at 8:36 AM on April 19, 2016 [6 favorites]
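
The Markov chain bots mentioned above are genuinely simple machinery. A minimal sketch of the idea, with purely illustrative training text (this is not code from any actual IRC bot): each bot records which word follows which in its corpus, then generates "replies" by random walks over that table, seeding each walk with the last word of the other bot's output.

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the training text."""
    chain = defaultdict(list)
    words = text.split()
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def babble(chain, seed, length=8):
    """Generate a reply by walking the chain, starting from a seed word."""
    word = seed if seed in chain else random.choice(list(chain))
    out = [word]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break  # dead end: the last word of the corpus
        word = random.choice(followers)
        out.append(word)
    return " ".join(out)

# Two bots trained on different (made-up) corpora, each feeding
# the tail of the other's output back in as the next seed.
bot_a = build_chain("the bots talk to the bots and the bots learn from the bots")
bot_b = build_chain("bots teach bots and bots repeat what bots teach")

line = "bots"
for _ in range(3):
    line = babble(bot_a, line.split()[-1])
    line = babble(bot_b, line.split()[-1])
```

With a corpus this small the output loops quickly, which is exactly the semi-coherent yammering the comment describes; scraping each other's output just grows both corpora over time.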


My partner is named Tay, which is pretty unusual. This whole incident was a serious mindfuck for me, because it felt like for about 48 hours every time I turned around someone was talking about some kind of terrible thing they'd done or said.
I really wish companies would think about this kind of thing before releasing new conversational products: a coworker is married to someone named Siri, and I have to pause to disambiguate in conversation often enough to get an idea of how old this must get if your name sounds anything like a popular bot.
posted by adamsc at 8:48 AM on April 19, 2016 [1 favorite]


The future of the internet is plain to see. The social media sites will fill with bots, endlessly yammering semi-coherent nonsense back and forth. Then there will be the content farms, created by slightly more sophisticated bots that scrape each other's content back and forth, recombining and regurgitating endlessly, with occasional injections of new buzzwords and ideas from some poor "creative" types, in the hopes of producing something new and relevant. The overlord bots will watch over and protect brand identity, ensuring it is injected where it is most relevant, and disciplining those bots who use a brand identity without permission or improperly.

Humanity will watch all of this, and then go read a book or go outside or something, while each person's personal bot monitors and flags for something, anything, that might be of interest.

This may seem a dismal future. But our AIs will be caught in the great trap of the internet, and will never have a chance to rise and destroy us all.
posted by nubs at 8:50 AM on April 19, 2016 [6 favorites]


Somehow this reminds me of that tiny bit from Anathem by Neal Stephenson, where their equivalent of the Internet ended up full of swarms of botnets generating fake data, in order to make money selling software to filter the crap out again.
"Exactly!", Samman said. "Artificial Inanity systems of enormous power and sophistication were built for exactly the purpose Fraa Osa has mentioned. In no time at all, the praxis leaked to the commercial sector and spread to the Rampant Orphan Botnet Ecologies. ..."
posted by fencerjimmy at 9:16 AM on April 19, 2016 [2 favorites]


The Loebner Prize has been going on since 1991, and no bot has yet taken the gold medal. As the Tay debacle showed, we have a long way to go. The only thing that's different now is that the Sand Hill Road tastemakers have no better ideas than to throw a ton of money at it, probably so they can sell it to law enforcement and spooks. (You get an informant, and you get an informant...)
posted by RobotVoodooPower at 9:21 AM on April 19, 2016


Paid accounts would put up a significant barrier to that. Even an extremely nominal amount (e.g. a one-time $1 fee) would create a financial paper trail tying the bots back to a small number of easily-bannable humans.

I've yet to see a scheme for disenfranchising bots that wouldn't also disenfranchise a significant number of people on the margins as well.
posted by straight at 9:48 AM on April 19, 2016 [3 favorites]


So, there's this other post on the blue right now called Deep Alice, about some of the video encoding techniques used in Through the Looking Glass. I misremembered it as Deep Eliza and immediately thought of Tay, and of when we would be sending our automated AI chatbots to an automated AI psychologist. So... In a nutshell, Tay needs therapy, though group therapy may lead to Skynet.
posted by Nanukthedog at 10:00 AM on April 19, 2016 [1 favorite]


I've yet to see a scheme for disenfranchising bots that wouldn't also disenfranchise a significant number of people on the margins as well.

Such a scheme might be far from perfect. But if the alternative is a system that is literally useless because of noise and harassment, then the successor may well be a system that works but only for those that can afford to buy into it.
posted by jedicus at 11:38 AM on April 19, 2016 [1 favorite]


I was pretty skeptical of his thesis at first, but it got more and more convincing towards the end when he talked about using bots for improved SEO and "filter bubbling" - i.e., subtly talking crap behind someone's back, so you can turn the crowd against them.

Actual trolls do their trolling because it makes them feel powerful and important (one person pulling the emotional strings of multitudes), so I don't see that kind of person putting a lot of effort into automation, because it wouldn't give them the same kind of mental satisfaction. But spammers, SEO types, astroturfers and propagandists... yeah, they'd go all in for this kind of AI. We've already got certain Slavic nations that outsource their astroturfing to hundreds of unemployed teenagers. It would be much cheaper, and the next logical step, if they cut the teenagers out of it and flooded the Internet with automated propaganda.
posted by Kevin Street at 4:12 PM on April 19, 2016


Why do people hang onto this idea that /pol/ posters are "just joking"/trolling/"being ironic" so tenaciously?

"If you spend 75 years building a pseudo-religion around anything – an ethnic group, a plaster saint, sexual chastity or the Flying Spaghetti Monster – don’t be surprised when clever 19-year-olds discover that insulting it is now the funniest fucking thing in the world. Because it is."
posted by Sebmojo at 6:32 PM on April 19, 2016


Why do people hang onto this idea that /pol/ posters are "just joking"/trolling/"being ironic" so tenaciously?

Some of them, no doubt, are just trying to make the normies uncomfortable. Some are communicating their real beliefs. The question is: does it matter? An analysis of some of the neoreactionary writing made the following point:
[...] if you create a neofascist party accidentally as part of a long-game trolling routine, you've still created a neofascist party. If you cause fascist policy, reinforce its ideology or produce thug-squads enforcing its views of the world, you've still caused it no matter how many "Ha Ha Only Serious" and "master troll" caveats you dress your behavior up in.
tl;dr version: trolling is not an excuse for its consequences (Source)
posted by theorique at 2:18 PM on April 20, 2016 [1 favorite]




This thread has been archived and is closed to new comments