Better Language Models and Their Implications
February 14, 2019 9:03 PM

We’ve trained a large-scale unsupervised language model which generates coherent paragraphs of text, achieves state-of-the-art performance on many language modeling benchmarks, and performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.

... Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code.
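
The released sampling code draws from the model with top-k truncation (the paper's samples use k = 40); here's a minimal sketch of the idea, not OpenAI's actual code:

import numpy as np

rng = np.random.default_rng(0)

def top_k_sample(logits, k=40, temperature=1.0):
    # Keep only the k highest-scoring tokens, renormalize, and sample.
    logits = np.asarray(logits, dtype=np.float64) / temperature
    top = np.argsort(logits)[-k:]
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()
    return int(rng.choice(top, p=probs))

# fake_logits stands in for a real model's next-token scores over
# GPT-2's 50,257-token vocabulary; here it's just random noise.
fake_logits = rng.normal(size=50257)
print(top_k_sample(fake_logits))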
posted by latkes (45 comments total) 33 users marked this as a favorite
 


somewhat depressing that there’s no big new idea. apparently the key to state of the art is a relatively simple network, huge dataset, and multi-million dollar budget.
posted by vogon_poet at 9:20 PM on February 14, 2019 [5 favorites]


If it's any consolation, I doubt this model can be trained to write worse prose than Prostetnic Jeltz.
posted by Enturbulated at 9:23 PM on February 14, 2019 [6 favorites]


MetaFilter: a relatively simple network, huge dataset, and multi-million dollar budget.
posted by ricochet biscuit at 9:23 PM on February 14, 2019 [8 favorites]


Alternatively — MetaFilter: performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.
posted by ricochet biscuit at 9:24 PM on February 14, 2019 [34 favorites]


In other recent language AI news, a few weeks ago Facebook released a project called LASER that allows text to be compared across 93 different languages. For example, given a pair of sentences:

1. "Um, then we moved to a new house"
2. "We stayed in the same house our whole lives"

LASER can infer that those sentences express opposite ideas, even if those two sentences are written in wildly different languages (e.g. one sentence in Arabic and the other in Swahili, or Bulgarian and Hindi, or Thai and Spanish).

The model is becoming so generalized that it can even manage good performance on languages and dialects it was not explicitly trained on (e.g. generalizing from German to Swabian).

One obvious application for Facebook, apart from translation, is learning what an abusive post looks like in (basically) any language.
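
The trick, roughly, is that every sentence gets mapped into one shared embedding space, whatever its language, and then you just compare vectors. A toy sketch of the comparison (the embed() function here is a made-up stand-in, not LASER's actual API):

import numpy as np

def embed(sentence, lang):
    # Hypothetical stand-in for LASER's encoder, which maps a sentence in
    # any of its 93 supported languages into one shared 1024-dim space.
    # (Deterministic noise here, so the sketch runs without the toolkit.)
    rng = np.random.default_rng(len(sentence))
    return rng.normal(size=1024)

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

a = embed("Um, then we moved to a new house", lang="en")
# With the real encoder the second sentence could just as well be the
# Arabic or Swahili version; the vectors land in the same space.
b = embed("We stayed in the same house our whole lives", lang="en")
# A low or negative similarity flags the pair as expressing different
# ideas, regardless of which languages the sentences were written in.
print(cosine(a, b))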
posted by jedicus at 9:27 PM on February 14, 2019 [3 favorites]


somewhat depressing that there’s no big new idea

Honestly, as a linguist who interfaces a lot with computational work, it's actually kind of refreshing. There was a long period where I think many of us (at least those of us who had the background) had a sort of latent fear that maybe in the end the NLP folks would be the ones to "solve" linguistics, by producing a working NLP system. I have absolutely no fear of this any more -- the rise of extremely effective, large, uninterpretable neural models (which turn out to be what a "working" NLP system involves) has really clarified my beliefs about what NLP will and won't contribute to the scientific investigation of language. It's also been a clarifying factor for people: you no longer find engineering-oriented NLP people paying any lip service to caring about understanding how language works if they aren't truly interested.
posted by advil at 9:29 PM on February 14, 2019 [16 favorites]


uninterpretable neural models

There's been some really fascinating work on that. For example, Dalvi et al., What Is One Grain of Sand in the Desert? Analyzing Individual Neurons in Deep NLP Models, in which they show that one neuron in a network is responsible for, e.g., months of the year while another is responsible for negation words.
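
The core move is simple enough to sketch: record each neuron's activation over a pile of tokens, then rank neurons by how cleanly one activation separates a concept (months, say) from everything else. A toy version of that ranking, with synthetic data rather than their actual method:

import numpy as np

rng = np.random.default_rng(1)

# Toy data: activations for 1000 tokens across 512 neurons, plus a label
# marking which tokens are months. In the real work these come from a
# trained NLP model; here neuron 42 is rigged to fire on months so the
# ranking has something to find.
acts = rng.normal(size=(1000, 512))
is_month = rng.random(1000) < 0.05
acts[is_month, 42] += 3.0

# Score each neuron by how far apart its mean activation is on month vs.
# non-month tokens, in units of its overall spread.
gap = acts[is_month].mean(axis=0) - acts[~is_month].mean(axis=0)
scores = np.abs(gap) / acts.std(axis=0)
print("most month-selective neuron:", int(scores.argmax()))  # -> 42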
posted by jedicus at 9:34 PM on February 14, 2019 [3 favorites]


i have heard NLP researchers point out that with gigabytes of text, “the poverty of the stimulus” does not apply, and thus they can deal with language in a completely non-human way.
posted by vogon_poet at 9:35 PM on February 14, 2019


The fact that they've chosen not to release the full model on the basis of concerns about its potential for abuse reconfirms my opinion that any disaster predicated on the notion of AI outperforming human beings is less likely to be driven by improvement in artificial intelligence than by the ongoing erosion of human critical thinking skills.

Train this thing up on Trump tweets or Boris Johnson op-eds and we won't need world leaders any more.
posted by flabdablet at 9:36 PM on February 14, 2019 [4 favorites]


tbh i suspect the choice not to release it is somewhat more of a meta-experiment about research norms than a genuine feeling that there would be serious harm.
posted by vogon_poet at 9:39 PM on February 14, 2019 [5 favorites]


If it's any consolation, I doubt this model can be trained to write worse prose than Prostetnic Jeltz.

Literally not prose, but poetry.
posted by hippybear at 9:41 PM on February 14, 2019


The fact that they've chosen not to release the full model on the basis of concerns about its potential for abuse reconfirms my opinion that any disaster predicated on the notion of AI outperforming human beings is less likely to be driven by improvement in artificial intelligence than by the ongoing erosion of human critical thinking skills.

All algorithms are influenced by the known and unknown biases of their programmers. Even AI input sets need to be parsed and interpreted somehow.

If we as a society work toward making our algorithms non-biased and non-abusive and peaceful and working for the good of the common society of man, then that means we will be changing our society toward those same goals. Save the cheerleader, save the world. Or something.
posted by hippybear at 9:45 PM on February 14, 2019


Mr. Bear: The discussion on the commonalities and differences between prose and poetry, and how the boundaries are defined ... I'll just leave that to professionals like Vroomfondel and Majikthise. If I don't, their union may start getting a bit shirty with me. Again.
posted by Enturbulated at 9:58 PM on February 14, 2019 [1 favorite]


It's also been a clarifying factor for people: you no longer find engineering-oriented NLP people paying any lip service to caring about understanding how language works if they aren't truly interested.

What is the world but an interconnected series of Chinese rooms?
posted by Going To Maine at 10:03 PM on February 14, 2019 [6 favorites]


The word used in the source text is, repeatedly, "poetry". I'm not sure what there is further to be discussed.
posted by hippybear at 10:04 PM on February 14, 2019 [1 favorite]


What is the world but an interconnected series of Chinese rooms?

And it starts with a trusty brass lantern and a mailbox.
posted by hippybear at 10:04 PM on February 14, 2019 [3 favorites]


By the time we have truly conscious AI we'll be so used to computer-generated outputs (deep fakes, texts) that it won't matter where they come from.
posted by iamck at 10:31 PM on February 14, 2019 [1 favorite]


Train this thing up on Trump tweets or Boris Johnson op-eds and we won't need world leaders any more.

But they did! And worse! They trained the thing on Reddit. From their paper:
We scraped all outbound links from Reddit, a social media platform, which received at least 3 karma.
Trump tweets aren't even close to the worst things linked with more than 3 karma on Reddit.
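
The filtering step itself is trivial, which is part of the problem. A sketch over a hypothetical JSON-lines dump of Reddit posts (the url and score field names are my guesses, not the paper's actual pipeline):

import json

def webtext_style_links(path, min_karma=3):
    """Yield outbound links from a JSON-lines dump of Reddit posts,
    keeping only links whose post earned at least `min_karma`."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            post = json.loads(line)
            url = post.get("url", "")
            if post.get("score", 0) >= min_karma and url.startswith("http"):
                yield url

# Hypothetical usage:
# for url in webtext_style_links("reddit_posts.jsonl"):
#     print(url)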

And now they've got it locked up because it's a monster. From an article in the Guardian:
OpenAI made one version of GPT2 with a few modest tweaks that can be used to generate infinite positive – or negative – reviews of products. Spam and fake news are two other obvious potential downsides, as is the AI’s unfiltered nature. As it is trained on the internet, it is not hard to encourage it to generate bigoted text, conspiracy theories and so on.
posted by straight at 2:25 AM on February 15, 2019 [1 favorite]


The limited version of GPT-2 already generates clearer, more articulate prose than the political memes and posts on Facebook. I look forward to news stories about Russian intelligence agencies replacing an entire building full of trolls with a devop in a rented office.
posted by at by at 2:37 AM on February 15, 2019


I don't think we'll have to wait for Russian military intelligence to be the ones exploiting this. If Facebook can experiment on your emotional state by re-ordering your feed of messages, why not pep you up and brighten your day by subtly altering the content of those messages from your friends and family?

Nothing substantial, you understand, just a few imperceptible semantic tweaks for your own good mood, and maybe an effort to blunt criticism and prevent interpersonal conflicts from igniting just like any good human moderator might do!

And maybe a few paid product placements interjected into the passing comments of your friends but, y'know, nothing major that you would really object to.
posted by XMLicious at 3:04 AM on February 15, 2019 [3 favorites]


Vox piece by Kelsey Piper.
posted by hat_eater at 3:17 AM on February 15, 2019 [1 favorite]


I would've been most impressed if they had written the first paragraph of the article by hand and then let their language model write the rest of it.
posted by clawsoon at 3:32 AM on February 15, 2019 [3 favorites]


And maybe a few paid product placements interjected into the passing comments of your friends but, y'know, nothing major that you would really object to.


Won't never happen, it would be way too blatant to slip a reference to a cool delicious Pepsi into an erudite academic discussion.
posted by sammyo at 3:59 AM on February 15, 2019 [5 favorites]


Alternatively — MetaFilter: performs rudimentary reading comprehension, machine translation, question answering, and summarization — all without task-specific training.

That's AskMeFi, not MetaFilter.
posted by Pyrogenesis at 4:13 AM on February 15, 2019 [4 favorites]


kalessin: It's a reference to a landmark thought experiment in the field, not a dig at Chinese or foreign people generally.
posted by ragtag at 5:42 AM on February 15, 2019 [7 favorites]


The thought experiment itself can be an intentional or unintentional dig at Chinese people and language. The strange, incomprehensible, foreign gobbledygook that of course our monolingual anglophone doesn't really understand is Chinese and not Russian or German or even just ASCII numbers.
posted by GCU Sweet and Full of Grace at 6:01 AM on February 15, 2019


That thought experiment does, I think, benefit from the western Orientalist perception that kalessin is describing, though, in that it invokes the spectre of Chinese-language translation as happening without intention or conscious understanding.

It's not the Latin Room or the Ancient Greek Room or Linear B Room Experiment, even though no living person has ever encountered a native speaker of any of those languages, making them, I think, even more apt for the point of the thought experiment.
posted by gauche at 6:02 AM on February 15, 2019 [1 favorite]


Mod note: Agree that people should take this reminder as an opportunity to avoid the "Chinese Room" usage, but also asking folks that if you need to talk it out further to please go ahead and make a Metatalk post so this thread can continue on the AI discussion. Thanks. Update: here's the Metatalk discussion.
posted by taz (staff) at 6:19 AM on February 15, 2019 [5 favorites]


The obvious next step is to get Person Who Does Not Exist to start posting these to Twitter. I'm not sure it hasn't started yet.
posted by Quindar Beep at 7:11 AM on February 15, 2019 [4 favorites]


That unicorn example is a remarkable little bit of fantasy scene-setting - "a natural fountain, surrounded by two peaks of rock and silver snow. [...] By the time we reached the top of one peak, the water looked blue, with some crystals on top." - but I wonder if they chose unicorns as the subject because the lack of a clear world model makes all of its output slightly fantastical.
posted by lucidium at 7:48 AM on February 15, 2019 [1 favorite]


I would've been most impressed if they had written the first paragraph of the article by hand and then let their language model write the rest of it.
They sort of did! A short version of this is one of the examples given, and it's delightfully meta. It's not very discoverable, and I missed it on my first time through: the short lines above the unicorn passage are clickable and indicate other sample texts. Here's just the computer-generated part of the meta-text:
Here you can see the most recent progress with Generative Pre-trained Transformer:

Figure 1: Generative Pre-trained Transformer training on several texts.

We are now preparing a collection of datasets for translation and machine translation in our language model. We will be using one of the large number of text samples provided by The New York Times.

We believe this project is the first step in the direction of developing large NLP systems without task-specific training data. That is, we are developing a machine language system in the generative style with no explicit rules for producing text.

We hope for future collaborations between computer scientists, linguists, and machine learning researchers.
posted by dbx at 8:16 AM on February 15, 2019 [2 favorites]


The obvious next step is to get Person Who Does Not Exist to start posting these to Twitter. I'm not sure it hasn't started yet.

Right. For those who missed it, yesterday we had a post about AI-generated faces of people who don't actually exist. I suggested there that that project needs a Turing-type test, and when human observers can no longer tell which ones are fake and which are real, the algorithm has won. That could be done with these AI-generated stories, as well.
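
That criterion is easy to make precise: show observers a mix of real and generated items, then test whether their accuracy is distinguishable from coin-flipping. A sketch with made-up numbers:

from scipy.stats import binomtest

# Hypothetical experiment: observers labeled 200 items (half real, half
# generated) and got 108 right. If we can't reject 50% accuracy, the
# generator has "won" this round.
result = binomtest(k=108, n=200, p=0.5)
print(f"observed accuracy: {108/200:.2f}, p-value: {result.pvalue:.3f}")
if result.pvalue > 0.05:
    print("indistinguishable from chance -- the algorithm wins")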

Imagine the next logical step: AI-generated movies with all the parts played by AI-generated Actors Who Do Not Actually Exist, using scripts by AI Screenwriters Who Do Not Actually Exist. A new Oscars category for sure.
posted by beagle at 10:57 AM on February 15, 2019 [1 favorite]


Honest Trailers made an Honest Trailer for Honest Trailers last year that was written by a robot; more precisely, by Botnik's Predictive Writer app. See the Honest Trailers Commentary - Honest Trailers Written By A Robot for more details on how. It's not as sophisticated as this, and needed a lot of human interaction/intervention.
posted by zinon at 1:43 PM on February 15, 2019


The fact that they've chosen not to release the full model on the basis of concerns about its potential for abuse reconfirms my opinion that any disaster predicated on the notion of AI outperforming human beings is less likely to be driven by improvement in artificial intelligence than by the ongoing erosion of human critical thinking skills.

Train this thing up on Trump tweets or Boris Johnson op-eds and we won't need world leaders any more.


Or a "benevolent" overlord AI could recognize this critical thinking gap, and then you have Metal Gear Solid 2.
posted by hexaflexagon at 4:55 PM on February 15, 2019


I don't really see the potential for abuse, when there are millions of humans that can write the same thing.
posted by ymgve at 9:40 AM on February 16, 2019 [1 favorite]


I don't really see the potential for abuse, when there are millions of humans that can write the same thing.

A computer program that can churn out garbage without end, that can let one person do the work of thousands, is a terrible intensifier. A computer program that allows people to lie more easily, that makes the world that exists even harder to understand than it already is, is an awful thing.

Millions of humans, working together, could indeed write similar awful things. They aren't doing so. If they all started to, things would get much, much worse.
posted by Going To Maine at 10:12 AM on February 16, 2019 [1 favorite]


The kind of robocall telemarketing that pretends to be a real person for the initial steps of a telephone conversation will be able to spool out much longer conversations, and respond intelligently, before the recipient of the call figures out that they're talking to software, if they ever do. All of the telephone scams that have been run out of massive call centers in India with the workers going through extensive training to put on an American English accent (there was a really great FPP about this but I'm not finding it) can be fully automated with flawless synthesized accents.

When a hacker gets access to someone's email it's not just a list of contacts to spam but a corpus of text that can be used to train a system like this to imitate that person's vocabulary and phrasing and that of their correspondents.

The most insidious possibility, it seems to me, isn't wholesale fabrication of writing or speech but subtle alteration of real messages between people. Or not even between people; you could infect someone's laptop and phone with a virus that in real-time rewrites the content of all news and social media.

It's like the targeted Russian advertising that influenced U.S. voters escaping the confines of the advertisements and altering everything you look at.

Or like the 20th-century film The Truman Show, in which the main character has been raised for his entire life as the star of a reality TV show, unbeknownst to him, and everyone he's ever met is an actor speaking words written by a staff of writers and coordinated by 24-hour shifts of directors and stage crew. That's what increasingly sophisticated software-based imitation of humans enables: a world in which every person can have an entire team of algorithmically-driven writers and actors and graphic artists (or more likely innumerable competing teams) dedicated to trying to falsify portions of your life down to the smallest details.
posted by XMLicious at 11:33 AM on February 16, 2019 [3 favorites]


Imagine a system like this trained on heavily slanted conservative input: Fox News transcripts, Breitbart articles, etc. Given only a short prompt (e.g. the first paragraph of a news story), it could churn out an endless spew of unique, believable right-wing responses, which could automatically be posted to Facebook, Twitter, etc, by bot accounts. A small team could orchestrate an effectively infinite astroturf campaign that looked almost indistinguishable from real conversations and posts.

That's one of the many reasons this would be so dangerous. I fear that things like this will, necessarily, spell the end of the free and/or anonymous web.
posted by jedicus at 6:15 PM on February 16, 2019 [2 favorites]


Millions of humans, working together, could indeed write similar awful things. They aren't doing so.

Um. Have you seen Facebook or the YouTube comments sections lately?
posted by flabdablet at 8:38 PM on February 16, 2019


Imagine a system like this trained on heavily slanted conservative input: Fox News transcripts, Breitbart articles, etc. Given only a short prompt (e.g. the first paragraph of a news story), it could churn out an endless spew of unique, believable right-wing responses, which could automatically be posted to Facebook, Twitter, etc, by bot accounts.

I don't need to imagine it; the US elected it as President.
posted by flabdablet at 8:39 PM on February 16, 2019 [1 favorite]




From that blog post:
I agree with the OpenAI researchers that the general existence of this technology for fabricating realistic text poses some societal risks. I’ve considered this risk to be a reality since 2015, when I trained RNNs to fabricate product reviews, finding that they could fabricate reviews of a specified product that imitated the distinctive style of a specific reviewer.

However, what makes OpenAI’s decision puzzling is that it seems to presume that OpenAI is somehow special—that their technology is somehow different than what everyone else in the entire NLP community is doing—otherwise, what is achieved by withholding it? However, from reading the paper, it appears that this work is straight down the middle of the mainstream NLP research. To be clear, it is good work and should likely be published, but it is precisely the sort of science-as-usual step forward that you would expect to see in a month or two, from any of tens of equally strong NLP labs.
posted by XMLicious at 2:24 PM on February 17, 2019 [3 favorites]


I think that taking an overt position that “this technology is dangerous and will be misused” actually feels much more frank than most academics manage, perhaps even moving us a step closer to the eventual Butlerian Jihad.
posted by Going To Maine at 4:14 PM on February 17, 2019



