"Those neural nets sure are weird, making all those weird noises"
June 11, 2020 5:40 PM   Subscribe

Janelle Shane got a chance to preview the new OpenAI API, which is "REALLY good at following all sorts of prompts", making it great at creating believable Twitter bots: This is the OpenAI API. It makes spookily good twitter bots. 13/10 would retweet.
posted by Lexica (29 comments total) 22 users marked this as a favorite
 
My head exploded a tiny bit at the textual style transfer example.
posted by migurski at 6:47 PM on June 11, 2020 [3 favorites]


This is fascinating but the fake dogs make my whole body shudder.
posted by Homo neanderthalensis at 9:04 PM on June 11, 2020 [6 favorites]


Remember when OpenAI refused to release the full version of GPT-2 because it was too "dangerous"? Well, I'm glad that GPT-3, an even more powerful system for generating texts, somehow became so totally not dangerous that it's available for purchase. 👀
posted by Foci for Analysis at 9:06 PM on June 11, 2020 [6 favorites]


Given the photos I don't see why the drug use thing would be an issue frankly.
posted by Western Infidels at 10:51 PM on June 11, 2020 [3 favorites]


Not sure how this is an AI. It doesn't *know* what a dog is. It doesn't know what rating means. It's just generating text based on previous examples of text.
posted by GallonOfAlan at 11:34 PM on June 11, 2020 [4 favorites]


It's because everybody spent so long imagining the Turing Test as a magic oracle that when it turned out that people are actually complete dumbshits who can be fooled by some simple pattern matching and heuristics, everyone was so embarrassed that they decided to just pretend that this is what they had meant by "AI" the whole time.
posted by bjrubble at 12:43 AM on June 12, 2020 [33 favorites]


I mean, holy crap. I was studying this during the AI Winter (early 90s) when it had become clear that symbolic logic was a dead end, and the field was going back to basic biological models (neural nets but also quasi-mechanical models based on insects and stuff), and it was clear that AI was going to be very hard, fantastically expensive computationally, and essentially impenetrable to outside analysis.

Which bummed me out enough to drop it and pursue other things. (Not that I was really good enough to make a career out of it in any case.)

I've made a lot of mistakes in my life, but if I'd spent the last 30 years of my life on this, with the result that now a computer can at last reliably match the level of coherence of a typical Twitter message?

Suddenly I feel much better about my life choices.
posted by bjrubble at 1:14 AM on June 12, 2020 [23 favorites]


Now I'm reading comments on metafilter and wondering which of them were generated by this AI...
posted by Luddite at 2:40 AM on June 12, 2020 [3 favorites]


Relax foolish mortal! There are bots among us! There are bots among us all!
posted by transitional procedures at 3:12 AM on June 12, 2020 [6 favorites]


Not sure how this is an AI. It doesn't *know* what a dog is. It doesn't know what rating means. It's just generating text based on previous examples of text.

I mean, that's what "AI" tends to mean and I think a lot of people who work in AI are just fine with it being a term of art to mean that sort of thing, as opposed to a thing nobody can define because it is defined by its ability to replicate a thing the nature of which has been debated inconclusively for most of history.
posted by atoxyl at 4:08 AM on June 12, 2020 [5 favorites]


a thing nobody can define because it is defined by its ability to replicate a thing the nature of which has been debated inconclusively for most of history.

But we still know consciousness and intelligence when we see it. 'AI' is a deliberately misleading term in this context IMO, because Joe Nontechnical sees AI this and AI that being bandied about in the media and then we get nonsense scare stories about how 'I, Robot'-style automatons are going to take our jobs, kill us or both in the next 10 years.
posted by GallonOfAlan at 4:18 AM on June 12, 2020 [3 favorites]


I've made a lot of mistakes in my life, but if I'd spent the last 30 years of my life on this, with the result that now a computer can at last reliably match the level of coherence of a typical Twitter message?

Being an AI isn't proof against being a total total fuckwit.
posted by GCU Sweet and Full of Grace at 4:32 AM on June 12, 2020 [4 favorites]


Various commentators have reacted with horror to the fact that this model was trained on the Reddit text corpus. "Doesn't this mean it will be irredeemably racist and/or sexist?" is the (entirely understandable) response. According to an anonymous Google AI person, getting the model not to be racist/sexist wasn't that difficult - the main problem was something else entirely.
posted by pharm at 6:12 AM on June 12, 2020 [3 favorites]


@pharm i was really hoping your username was going to be "unacceptably horny" after i read that tweet. mefi has my AI trained way too high to look for the eponysterical.
posted by zsh2v1 at 7:49 AM on June 12, 2020 [2 favorites]


Small note from actually reading: the "13/10" is reasonable, as it's copying @dog_rates, where the minimum is 10/10:

For those who are unfamiliar, @dog_rates is a twitter account that posts user-submitted dogs, introduces them, and then gives them a rating from one to ten. All the dogs are rated at least 10/10 because they're very good dogs.

The fake dog images will certainly induce nightmares.

On a previous AI thread a MeFite posited that they were tired of ML/AI discussion splitting between "the singularity is upon us" and "it's just linear algebra". It's really hard to talk about. The math in detail is not new; matrix multiplication, the functional tool of AI, was used by Arthur Cayley as early as 1855. But it's really hard math that, within a carefully defined domain, gets really amazing results. That this example gives something to laugh at, and not just random gibberish, is amazing, and kinda scary. Real scary: it's mimicking, in a recognizable fashion, one tool that leads to creativity.
posted by sammyo at 7:57 AM on June 12, 2020 [3 favorites]
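[An aside from the editor: the "just linear algebra" core sammyo mentions really is a matrix multiply plus a nonlinearity, repeated. A toy forward pass through one neural-net layer, purely illustrative, with hand-picked weights:]

```python
# Toy illustration: one neural-net "layer" is a matrix multiply
# (weights times inputs) followed by a simple nonlinearity.

def matvec(W, x):
    """Multiply a matrix (given as a list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def relu(v):
    """Zero out negative entries -- the nonlinearity between layers."""
    return [max(0.0, a) for a in v]

def layer(W, x):
    return relu(matvec(W, x))

# Two inputs -> three hidden units, with arbitrary hand-picked weights.
W = [[1.0, -1.0],
     [0.5, 0.5],
     [-2.0, 1.0]]
print(layer(W, [2.0, 1.0]))  # matvec gives [1.0, 1.5, -3.0]; relu -> [1.0, 1.5, 0.0]
```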


I built an app in SuperCard that generates text, all from generative grammars and loads of word lists. I presented a text informally to a group of philosophy types. They fell for it. It also generates insults, soap opera plots, etc. This, coupled with Markov chain analysis and the texts you can build with that, produces quite complex and believable texts. No linear algebra here. Or AI. But it's not random gibberish. It's a hobby of mine. I see all these claims for AI and I laugh. It's not. Never was. My stuff works because it was all hand crafted. Just programmatically looking for patterns in a huge block of relatively arbitrary text may give you something recognizable as text when done, but there is no intelligence there.
posted by njohnson23 at 8:34 AM on June 12, 2020 [1 favorite]
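[Editor's aside: the Markov-chain half of what njohnson23 describes is genuinely tiny. A minimal word-level sketch - no AI, no linear algebra, just tallying which words follow which n-grams and walking the table:]

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each word n-gram to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, length=20, seed=None):
    """Start from a random n-gram and repeatedly sample a likely next word."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain)))
    order = len(out)
    for _ in range(length):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: this n-gram only appeared at the end
            break
        out.append(rng.choice(followers))
    return " ".join(out)
```

With a large enough corpus the output reads as locally plausible but globally incoherent text - which is roughly the point being argued in the thread.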


The fake dog images will certainly induce nightmares.

they're good blorbs samoyo.
posted by sgranade at 10:28 AM on June 12, 2020 [5 favorites]


Various commentators have reacted with horror to the fact that this model was trained on the Reddit text corpus. "Doesn't this mean it will be irredeemably racist and/or sexist?" is the (entirely understandable) response.

This is interesting because I feel like in a lot of ways, capturing the essence of what makes an online text racist (assuming they avoided scraping sub-reddits where straight-up racial slurs are common*) is the sort of thing that would be instantly recognizable to a large subset of humans, but actually pretty challenging for a model like this to even reliably recreate. To be clear, I have no doubt that a model trained on broad corpus text will end up synthesizing a lot of racist notions and associations. But I wonder if it would be reasonable to posit that any AI text model will actually inherently be less racist on the whole than its source text, because of the human-ness of racism. Anyway, I don't have any particular point here!, I'm just very interested in both procedural text generation and anti-racist language so this sparked some curiosity for me.

*I may be assuming that more care around this was taken than actually was...
posted by dusty potato at 11:38 AM on June 12, 2020


"It as the best of times, it was the worst of times. It was the worst of times , it was the worst of times , it was the worst of times , it was the worst of times , it was the worst of times.]"

Well, at least it's accurate.
posted by chortly at 11:48 AM on June 12, 2020 [2 favorites]


because Joe Nontechnical sees AI this and AI that being bandied about in the media and then we get nonsense scare stories about how 'I, Robot'-style automatons are going to take our jobs, kill us or both in the next 10 years.

I am generally quite skeptical of the super-AI-risk stuff, but I will point out that an automated optimizing system placed in control of important things is still capable of breaking some things in an unforeseen way even if it's not actually smart. And domain-specific programs are definitely capable of replacing domain-specific jobs.
posted by atoxyl at 11:48 AM on June 12, 2020 [1 favorite]


*I may be assuming that more care around this was taken than actually was...

From what the Twitter user in question has said, it looks like tweaking the model to ban use of explicitly racist / sexist terms was sufficient to quash those qualities in the output. I guess any output that tended towards the racist would end up using one of the banned words eventually?

It is quite funny that the reason Google couldn't use it is because their text bot was far too horny to ever release to the public. I would have loved to have been a fly on the wall for /that/ meeting with the company execs who approved the project in the first place :)
posted by pharm at 12:00 PM on June 12, 2020 [1 favorite]
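[Editor's aside: a word-ban filter of the kind pharm describes can sit right in the sampling loop. A minimal sketch - the banlist mechanism here is a guess at what "ban use of terms" could mean, not OpenAI's or Google's actual implementation:]

```python
import math
import random

def sample_next_word(vocab, logits, banned, rng=random):
    """Softmax-sample the next word, giving banned words zero probability."""
    allowed = [(w, l) for w, l in zip(vocab, logits) if w not in banned]
    if not allowed:
        raise ValueError("every candidate word is banned")
    m = max(l for _, l in allowed)                   # subtract max for stability
    weights = [math.exp(l - m) for _, l in allowed]  # unnormalized softmax
    return rng.choices([w for w, _ in allowed], weights=weights)[0]

vocab = ["good", "rude", "fine"]
logits = [1.0, 5.0, 0.5]
# "rude" has by far the highest score, but it can never be emitted:
print(sample_next_word(vocab, logits, banned={"rude"}))
```

Which also illustrates pharm's guess about why it works: any trajectory heading somewhere objectionable eventually needs one of the banned words, and at that point the filter cuts it off.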


Being an AI isn't proof against being a total total fuckwit.

posted by GCU Sweet and Full of Grace at 4:32 AM on June 12


Is this a general statement, or is this about a beef with some ROU in particular?
posted by bjrubble at 12:31 PM on June 12, 2020 [9 favorites]


far too horny to ever release to the public

Like ED-209 but for sex.
posted by nickmark at 12:42 PM on June 12, 2020 [1 favorite]


Reading more from the conversation pharm linked:

not productionizable as a source of truth but within the one-sigma range of normal american opinions.

That's really fucking sexist and racist.
posted by Tehhund at 12:51 PM on June 12, 2020 [1 favorite]


>Now I'm reading comments on metafilter and wondering which of them were generated by this AI...
The cluster it's running on hasn't mined enough cryptocurrency for the $5 fee ...yet.
posted by k3ninho at 1:23 PM on June 12, 2020 [2 favorites]


Ha! bjrubble, same AI Winter story. Binned it while waiting for Moore's Law to catch up. Not yet impressed.
posted by zengargoyle at 6:26 PM on June 12, 2020


Relax foolish mortal! There are bots among us! There are bots among us all!

It's true.
posted by thebots at 8:05 PM on June 12, 2020 [4 favorites]


Strong AI is 20 years away and always will be.
posted by neuron at 11:48 PM on June 12, 2020


They're all good dog-like constructs, Brent.
posted by sexyrobot at 5:47 PM on June 13, 2020 [1 favorite]




This thread has been archived and is closed to new comments