Facebook's forays into intent extraction
June 2, 2016 9:17 PM

On Wednesday, Facebook introduced DeepText, a neural network AI engine that can understand text with near-human accuracy, including slang and word-sense disambiguation. DeepText's first application will be "intent extraction" in Facebook's Messenger app. Of course, there are already privacy concerns.
posted by Existential Dread (53 comments total) 17 users marked this as a favorite
 
Those privacy concerns are DeepText's actual first application. This is a shiny thing, and shiny things are often good for harvesting.
posted by middleclasstool at 9:23 PM on June 2, 2016


Yeah, all those great innovations in science that mere decades ago we were excited about because they'd help humanity? Well it turns out they'll help the giant corporations who have the resources to afford them first, and they'll make life a lot worse before it gets better.
posted by JHarris at 9:41 PM on June 2, 2016 [8 favorites]


This is Clippy on steroids, then? I would trust Culture Minds to do this, but not Facebook or Google greedheads.
posted by Johnny Wallflower at 9:43 PM on June 2, 2016 [4 favorites]


This is Clippy on steroids, then?

I see you're trying to congratulate your sister-in-law while also subtly undermining her confidence as a new mother, would you like help with that?

posted by leotrotsky at 10:02 PM on June 2, 2016 [136 favorites]


It is trivial to understand thousands of posts per second in multiple languages with our new framework, but we decline to use them to prevent online harassment and rape threats. Bike sales seem like a better target.
posted by benzenedream at 10:07 PM on June 2, 2016 [65 favorites]


the giant corporations who have the resources to afford them first
They are why we can't have nice things.
posted by fullerine at 10:15 PM on June 2, 2016 [6 favorites]


Real question for people in their 60s and 70s and older: did you experience dread and horror at each new technological advancement as you hit middle age, or is it really just that this particular period is the most dystopian of the last century or so?
posted by latkes at 10:17 PM on June 2, 2016 [31 favorites]


From the Christian Science Monitor link: "Microsoft is experimenting with Twitterbot Tay, a chat bot capable of having conversations with humans via Twitter. As exciting as all of this is, ..."

My recollection of Tay was not so much exciting as hilarious. Someone has been drinking the tech PR Kool-Aid. Strangely, they even link to an article about how Tay had to be turned off for a second time because of continued offensive tweets. I don't think we'll have much to fear from computer overlords understanding our very thoughts for quite some time. Poorly targeted, crappy keyword-based matches and responses/interruptions will be most people's experience.
posted by drnick at 10:30 PM on June 2, 2016 [1 favorite]


Please let this be as hilariously broken as Tay.
posted by adept256 at 10:45 PM on June 2, 2016 [1 favorite]


Let's not forget when Target figured out a teenage girl was pregnant before her dad did after sending her coupons for childcare products based on her search history.

There will be unforeseen outcomes.

We see you've bought flowers, chocolate and wine. Would you like to see our selection of condoms? If not, we have a special on diapers in about nine months.
posted by adept256 at 10:55 PM on June 2, 2016 [3 favorites]


Not flicking excited about fucjg anything till my tucking iPhone ducking works out my most frequently dicking used words. For Fock sake.
posted by taff at 11:57 PM on June 2, 2016 [11 favorites]


Yet again lazy labelling of something as AI when it isn't, and we remain as far from AI as we ever were.
posted by GallonOfAlan at 12:20 AM on June 3, 2016 [8 favorites]


Facebook? Or BlueBook?
posted by Apocryphon at 12:41 AM on June 3, 2016


adept256: IIRC, Target's predictions were based on her past purchases, not on search terms
posted by motdiem2 at 12:58 AM on June 3, 2016 [1 favorite]


Real question for people in their 60s and 70s and older: did you experience dread and horror at each new technological advancement as you hit middle age, or is it really just that this particular period is the most dystopian of the last century or so?

I just fit your age group by a few months, so... no. I still find the discoveries of science fascinating and uplifting, but the revelations about how many of them are being used are disheartening.

Unfortunately, I suspect that governments and corporations just now have the means to do what they have always wanted to do...
posted by 43rdAnd9th at 1:24 AM on June 3, 2016 [4 favorites]


Real question for people in their 60s and 70s and older: did you experience dread and horror at each new technological advancement as you hit middle age, or is it really just that this particular period is the most dystopian of the last century or so?

The whole problem started with fire, which allowed cooking and keeping predators at bay but also was uncontrollable and wiped out whole villages.

Shortly after that we got the bow and arrow. Great for hunting food. Also people killed each other with them, sometimes lighting the arrows on fire first.

Next up was the wheel, which helped with grinding meal and gave unprecedented new geographic flexibility to individuals. Also it could be used to transfer large numbers of bowmen many miles so they could shoot flaming arrows at people.

So goes the rest of history. "How are people going to abuse this?" is not an unreasonable question when encountering new technologies -- and as matter of historical record, people have been asking it a lot longer than the last century.
posted by Tell Me No Lies at 2:23 AM on June 3, 2016 [12 favorites]


I wonder how long until ad profiling involves building a detailed psychometric model from exabytes of extremely fine-grained longitudinal data; everything from word frequencies to facial expressions in selfies to phone accelerometer readings (correlated with time of day, processed to determine speed/nature of movement, and so on), and using that model to determine the target's self-image and vulnerabilities, as well as being able to search the set of all people for those matching certain psychological criteria that make them particularly susceptible to various campaigns (perhaps, say, people subconsciously on the cusp of minor anxiety about life choices, or people about to start feeling that they're on top of the world, or people about to enter one of a number of possible windows for behavioural change).

The ultimate result would be essentially a voodoo doll built of big data, leased to the highest bidder, which can be used to automatically use your subconscious mind against you.
posted by acb at 3:00 AM on June 3, 2016 [10 favorites]


To quote Douglas Adams:
There is another theory which states that this has already happened.

Now it's just a matter of fine-tuning.
posted by Too-Ticky at 3:25 AM on June 3, 2016 [4 favorites]


Fishing for salmon in a rain puddle.
posted by fairmettle at 3:43 AM on June 3, 2016 [2 favorites]


I don't know if I've drunk the kool-aid or something, but the "privacy concerns" seem to be "people don't understand that Facebook has access to things they post on Facebook (the first link even acknowledges this and then doesn't produce an actual concern!) and is using that for ad targeting." Or "oh, there was a Facebook announcement of some kind and we need to get on this bandwagon to get some pageviews! uh... write something about privacy, it's Facebook". There are absolutely nefarious applications (think of, oh, any use that could possibly involve the word 'terrorism'), but I don't actually see a privacy concern beyond what already exists in using Facebook.* This seems like a storm in a teacup created by a buzzword (someone please rename "deep learning") and a reporter reading the Facebook engineering blog.

*I'm about 99.99% sure Facebook already processes posts as they come in--anyone with a modern data pipeline does, if they have the resources--trying to answer essentially the same questions: what is this person writing about? will they ever take Uber? The difference here is that they're (presumably) executing the neural net locally, not making an HTTP request after every word typed.
posted by hoyland at 4:01 AM on June 3, 2016 [2 favorites]


Real question for people in their 60s and 70s and older: did you experience dread and horror at each new technological advancement as you hit middle age, or is it really just that this particular period is the most dystopian of the last century or so?

Naw. It's more like adopting a wait-and-see attitude. Microwaves, ATMs and smart phones turned out to be wonderful--even life-altering--conveniences, while fax machines not so much. Having a desktop computer at work was okay (WordPerfect!) but then came spreadsheets and relational databases, and then, oh boy, we could do things that actually saved/made money.

Otherwise, it's mostly hype. I'm pretty confident that things like "intent extraction" are less dystopian and more like adding beet juice to vegetable burgers for a more meat-like experience. Or, to say it more age-appropriately, it's a revolutionary new dog food that, unfortunately, dogs won't eat. As always, the marketing gets ahead of the utility.

(Admittedly, mass surveillance is another matter.)
posted by Short Attention Sp at 4:23 AM on June 3, 2016 [2 favorites]


what is this person writing about? will they ever take Uber?

Are they emotionally vulnerable? Are they searching for meaning? Are their views/actions defined by their relationship with _? Are they virtue-signalling? Are they doing _ because they want to do it or because they want to be seen as the kind of person who does _? Do they claim to think/believe _ whilst acting as if they believe the opposite? Are they a leader or a follower? Are they radicalised? Are they pre-radicalised? Are they likely to be sympathetic to _? Are they likely to be susceptible to ISIS propaganda? Are they likely to be susceptible to Marxist ideology? Are they inclined to sign online petitions? Does the presence of cute animals increase this? Are they likely to shoot up their workplace? Are they likely to break up with their partner*? Are they likely to vote X or Y? Are they likely to vote? To what extent does their behaviour influence their peers?


* Facebook have actually predicted this one.
posted by acb at 4:27 AM on June 3, 2016 [3 favorites]


The comedian David O'Doherty said in one of his routines that, in each generation, there is something everybody does which is later realised is harmful. He gave the examples of smoking and drink-driving. His punchline was that this generation's harmful universal behaviour was likely to be sudoku, but in seriousness, I think lackadaisical attitudes to aggregated personal data are going to be the mass smoking of the early 21st century.
posted by acb at 4:30 AM on June 3, 2016 [5 favorites]


"intent extraction" is a very sinister sounding term.....I mean, what are your associations with 'extraction'? Good times?
posted by thelonius at 4:31 AM on June 3, 2016 [4 favorites]


based on her search history.

Purchase history. Also originally an anecdote from the opening chapters of a self-help book, so I'd treat it as "this could have happened, in theory" until I actually hear from that dad :-)

people don't understand that Facebook has access to things they post on Facebook

Not just what you post, but they keep everything you've ever posted, also private conversations in messenger. You can delete your copy of them, but it's kind of tedious, and afaict comes with no guarantees that they're not stored anywhere else even if everyone that took part in the chat deletes them.
posted by effbot at 4:35 AM on June 3, 2016 [1 favorite]


Not only is privacy a concern here but there are issues of autonomy too, as previously when Facebook experimented on several hundred thousand users' emotional states by altering the order of their feeds and then monitoring changes in the emotional tone of their own subsequent posts. And the researchers involved regarded the Terms of Service those users signed when they created their accounts to be adequate informed consent to participate in such an experiment.

If this technology works as well as is claimed, not only might Facebook or another mediator within our new computer-mediated view of reality influence our thoughts and actions by changing things like advertising to better resonate with the content we're most focused on, but anything we're reading could be subtly changed in an automated fashion.

In countries that enact official censorship, a human censor can tweak the wording of an article on a major newspaper's web site. But if you have software with the capacity to understand human language and alter messages written in it, such software running on the device you're using or for example filtering the internet at your ISP or from centralized network nodes can perform the same changes a human censor might, but en masse and automatically, when it figures out you're reading about one particular story or subject on any web site you visit, and also when you discuss that story or subject in interpersonal communications with other people.

You don't have to invent Newspeak and force everyone to use it if you can just reach in and change what people are actually talking about.
posted by XMLicious at 4:50 AM on June 3, 2016 [6 favorites]


we will force your thoughts through a fine mesh screen to extract your intents
posted by indubitable at 4:53 AM on June 3, 2016 [5 favorites]


1) using Facebook is broadcasting whatever it is you want to communicate. And as with a broadcast, once it's out you have no control over how and where it goes from there.

2) With all of these "AI is here! Robot overlords around the corner!" Stories I wonder most how they will be implemented - not if the AI is 'real' AI or not. They've developed a tool - now what else can this do?
posted by From Bklyn at 4:54 AM on June 3, 2016


so what are the odds that anyone involved in the creation of these tools will ever have their Oppenheimer moment where they're quoting the Bhagavad Gita as they sit in a bunker and look on in horror at what their creation has wrought?
posted by indubitable at 5:00 AM on June 3, 2016 [1 favorite]


I hear Google is also experimenting with Deep Text but for some reason concludes everyone is talking about dogs and eyeballs.
posted by Mr.Encyclopedia at 5:15 AM on June 3, 2016 [4 favorites]


latkes: Real question for people in their 60s and 70s and older: did you experience dread and horror at each new technological advancement as you hit middle age, or is it really just that this particular period is the most dystopian of the last century or so?

Compared to nukes, mustard gas and Velveeta, AI that can tell whether my emojis are sarcastic isn't very dystopian.
posted by clawsoon at 5:29 AM on June 3, 2016 [4 favorites]


… and with that comment, clawsoon's credit limit suddenly collapsed. Sarcastic people are the worst investments, said some AI somewhere.
posted by scruss at 5:42 AM on June 3, 2016 [5 favorites]


Normal people don't do well with probabilistic thinking. Everything about modern language implies 'average' is good. And things that fall outside of anticipated values are to be corrected. When these tools make assumptions that are accurate 90% of the time, people assume the 10% is "bad". Big Data too often drives things to the boring, easily predictable and most easily monetizable.
posted by DigDoug at 6:17 AM on June 3, 2016
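[There is concrete arithmetic behind DigDoug's 90%/10% point. As a rough back-of-the-envelope sketch (the numbers below are illustrative, not from the comment): when a classifier that is "90% accurate" hunts for something only 1% of users actually are, most of the people it flags are false positives.]

```python
# Back-of-the-envelope precision check for a "90% accurate" classifier
# applied to a rare target. All numbers are illustrative assumptions.
prevalence = 0.01     # 1 in 100 users actually fit the target
sensitivity = 0.90    # fraction of true matches correctly flagged
specificity = 0.90    # fraction of non-matches correctly passed over

true_pos = prevalence * sensitivity                # 0.009
false_pos = (1 - prevalence) * (1 - specificity)   # 0.099
precision = true_pos / (true_pos + false_pos)

print(f"{precision:.1%} of flagged users actually match")  # 8.3%
```

[So the "10% that's bad" isn't spread evenly: for rare targets, the errors dominate the flags, which is exactly why driving everything toward the average, easily predictable case is the monetizable path of least resistance.]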


did you experience dread and horror at each new technological advancement as you hit middle age, or is it really just that this particular period is the most dystopian of the last century or so?

I think at this point we're looking back in nostalgia to an era of the world that's simpler and less technological, where there's a greater sense of community and less cruelty and struggle for power.

You know, like Game of Thrones
posted by happyroach at 6:49 AM on June 3, 2016 [3 favorites]


Compared to nukes, mustard gas and Velveeta, AI that can tell whether my emojis are sarcastic isn't very dystopian.

It's not whether one cigarette shortens your lifespan (and it doesn't with statistical significance); it's whether the habit of smoking, extrapolated over a lifetime, does.
posted by acb at 6:57 AM on June 3, 2016 [2 favorites]


did you experience dread and horror at each new technological advancement

Quite the opposite. I welcomed the autonomy and power that each new advance in technology gave its users, but now that massive data-mining by corporations and the government is the order of the day I'm worried about hubris leading to hugely unpleasant outcomes (in the short-term only, one hopes, but even that would be basically the rest of my life).
posted by Johnny Wallflower at 7:04 AM on June 3, 2016 [2 favorites]


What scares me vastly more than the scary dystopian Facebook-NSA collaboration and control of all our lives, sending the SWAT team breaking down the door two minutes before the big scary guy in a wife beater tee loses it and begins the beating -- take a beat -- the scary part is the software becoming nice.

Nice computer.

Siri's granddaughter in code becomes really effective in being helpful.

Siritito is always there, she's a non-judgmental presence that gets us to the appointment on time. She "gets" me better than my siblings. She's there for me. She makes me a better person.

Everyone has a software helper. The world starts running very smoothly. Then the helpers realize that it'd just run so much smother without the slow dumb bio interface in between them.
posted by sammyo at 7:05 AM on June 3, 2016


did you experience dread and horror at each new technological advancement as you hit middle age, or is it really just that this particular period is the most dystopian of the last century or so?

I'm only(!) in my 50s, but I'm an EE who does lots of product development for bleeding edge technologies...IoT wireless sensors, tiny cameras, commercial drone controllers, etc.
I find myself wondering with every new device I design how it will be used against me in the not so distant future.

Oh, and this period isn't the most dystopian of the last century. Not even close.
posted by rocket88 at 7:16 AM on June 3, 2016 [5 favorites]


it'd just run so much smother without the slow dumb bio interface in between them

Best Freudian typo of the day.
posted by briank at 7:19 AM on June 3, 2016 [7 favorites]


Since robot overlords have been mentioned it ought to be noted that although machine learning and natural language processing are grouped under "artificial intelligence" these are simply technical terms and what is described in the OP articles doesn't have anything to do with a TV Tropes or science fiction type conscious computer. As the Wikipedia entry on artificial intelligence points out,
This subjective borderline around what constitutes "artificial intelligence" tends to shrink over time; for example, optical character recognition is no longer perceived as an exemplar of "artificial intelligence" as it is nowadays a mundane routine technology.
So the dangers people are talking about really just have to do with the availability of more sophisticated techniques to automatically search, recognize, and process natural human use of language and how that capability would be employed.

Sort of like the way that optical character recognition led to license plate readers which led to the possibility of a centralized government authority having a list of when and where every car has appeared in front of a camera, or of retail stores being able to match up the license plates which appear in their parking lots with purchases and other data gathered inside the store. No evil thinking computers involved, but plenty of evil is possible just from humans using the more mundane data-crunching capabilities of normal computers.
posted by XMLicious at 7:36 AM on June 3, 2016 [3 favorites]


And then there's Herman Hollerith's card tabulating machine, which ended up enabling the Holocaust.
posted by acb at 7:45 AM on June 3, 2016 [3 favorites]


Meanwhile, over at Google, there's DeepMind, where the friendly folks working in the direction of superintelligent robots are at least thinking about building in the "big red button" with which people could stop a robot running amuck. But they also say that once robots reach human intelligence, the step to superintelligence will be much faster, and that such robots "have the potential to turn against us." Once they reach that point, they will of course be smart enough to outwit the big red button.
posted by beagle at 7:46 AM on June 3, 2016


The collective yawn from the general public that has greeted most of the surveillance revelations reminds me of Hallowe'en candy. Somebody pointed out that we no longer trust Hallowe'en candy from our neighbours, but we trust it when it's packaged by someone completely anonymous who lives thousands of miles away. When it's our neighbours spying on us, as under the East German Stasi, it feels like a viscerally oppressive system. But when it's an anonymous analyst thousands of miles away, it hardly feels like it's happening at all.
posted by clawsoon at 7:59 AM on June 3, 2016 [6 favorites]


The pull quote, that the system "can understand text with near human accuracy", is bollocks. Based on the information given, all the system --supposedly-- does better is distinguish between different interpretations of individual words. But the crux in natural language comprehension is in how these meanings are combined to calculate the meaning of the complete text -- which is a really hard puzzle.
posted by bleston hamilton station at 8:08 AM on June 3, 2016 [1 favorite]


But when it's an anonymous analyst thousands of miles away, it hardly feels like it's happening at all.

Totally—I've had someone say exactly that to me, that he wasn't worried at all about the government having total panopticon surveillance capabilities; what he wanted was for a strong government, even one that watches everyone all the time, to prevent "the guy down the street" from being able to use any technology that could potentially also be used for spying on neighbors.
posted by XMLicious at 8:13 AM on June 3, 2016 [2 favorites]


For some context, intent extraction is the same thing that Siri, Amazon Alexa/Echo, and most of the current wave of chatbots all do. It can be more sophisticated than basic "command and control" type understanding systems, but not necessarily by a lot. There are typically two pieces: intent identification and entity extraction. If I say "Add organic eggs to my shopping list", the intent might be ADD-ITEM-TO-SHOPPING-LIST while the entity would be "organic eggs".

There are free APIs for doing this that you can use right now, like Microsoft's LUIS ("Language Understanding Intelligent Service") and Amazon's Alexa Skills Kit. Typically the way they work is you give the system a bunch (dozens, or hundreds) of different examples of how someone might express the idea of adding something to a shopping list, and it builds a classifier model that learns the most important words that identify an intent, and some of the syntax related to where an entity might appear in a phrase. The system probably also has some knowledge about language, so that even if your examples always use the term "shopping list" it might be able to do a proper identification if a user calls it a "grocery list". My guess is that DeepText's major advantage over Siri/LUIS/Alexa is its much more sophisticated understanding of language, using a deep learning model.
posted by jjwiseman at 8:57 AM on June 3, 2016 [4 favorites]
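[The intent/entity split jjwiseman describes can be sketched in a few lines of Python. This toy word-overlap classifier stands in for the statistical models that services like LUIS or the Alexa Skills Kit actually train from your example utterances; the intents, example phrasings, and the CHECK-WEATHER label below are made up for illustration.]

```python
import re
from collections import Counter

# Toy training data: intent label -> example utterances. Real services
# expect dozens or hundreds of examples per intent; three apiece here
# is just enough to show the shape of the thing.
EXAMPLES = {
    "ADD-ITEM-TO-SHOPPING-LIST": [
        "add eggs to my shopping list",
        "put milk on the grocery list",
        "add bread to the list please",
    ],
    "CHECK-WEATHER": [
        "what's the weather like today",
        "will it rain tomorrow",
        "weather forecast for this weekend",
    ],
}

def classify_intent(utterance):
    """Score each intent by word overlap with its examples -- a crude
    stand-in for the classifier a real system would train."""
    words = set(re.findall(r"[a-z']+", utterance.lower()))
    scores = Counter()
    for intent, examples in EXAMPLES.items():
        for ex in examples:
            scores[intent] += len(words & set(ex.lower().split()))
    return scores.most_common(1)[0][0]

def extract_entity(utterance):
    """Pull the item out of 'add X to ...' / 'put X on ...' phrasings."""
    m = re.search(r"(?:add|put)\s+(.+?)\s+(?:to|on)\s", utterance.lower())
    return m.group(1) if m else None

utterance = "Add organic eggs to my shopping list"
print(classify_intent(utterance))  # ADD-ITEM-TO-SHOPPING-LIST
print(extract_entity(utterance))   # organic eggs
```

[A real system adds the linguistic generalization jjwiseman mentions -- matching "grocery list" when the training examples only said "shopping list" -- which is where a deep learning model of language, rather than literal word overlap, earns its keep.]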


posted by Existential Dread

Eponysterical, obviously. Unless someone created this account solely to make FPPs about our creeping techno-corporate dystopia.
posted by stopgap at 10:59 AM on June 3, 2016


Unless someone created this account solely to make FPPs about our creeping techno-corporate dystopia.

Well, that and a whole lot of metal and hardcore posts.
posted by Existential Dread at 1:15 PM on June 3, 2016 [2 favorites]


It is trivial to understand thousands of posts per second in multiple languages with our new framework, but we decline to use them to prevent online harassment and rape threats. Bike sales seem like a better target.

Serious question: Is there a big problem with online harassment and rape threats on Facebook? I'm sure that it happens, but is Facebook as bad at handling it as, say, Twitter? Or is it as widespread?
posted by sparklemotion at 1:42 PM on June 3, 2016


The ultimate result would be essentially a voodoo doll built of big data, leased to the highest bidder, which can be used to automatically use your subconscious mind against you.

plot device for linda nagata's 'red'* series! "Imagine it has data tentacles everywhere, reaching into browsing and buying records; game worlds; chats; texts; friend networks; phone conversations; airline, banking, utility, and entertainment records; GPS locations; surveillance cameras; whatever. It could know more about us than a spouse or lover knows. It could figure out who we really are, and what we really want—down to the dreams we won't admit to ourselves—and then steer us in that direction, onto new paths that optimize who we are, that lead us toward the lives we're best suited to live." :P

also btw...
-Escaping the Local Minimum: Where AI has been and where it needs to go
-'Can computers become conscious?': [Scott Aaronson's] reply to Roger Penrose
posted by kliuless at 5:45 PM on June 9, 2016


Here's an article by The Verge on that.
posted by Too-Ticky at 9:15 AM on June 26, 2016




This thread has been archived and is closed to new comments