Swipe Left
June 23, 2017 2:04 AM

"we designed a chatbot, a smart computer program that deployed an adaptable script. In the two days ahead of the election earlier this month, the chatbot struck up conversations with thousands of young people between 18 and 25 years old on Tinder. The chatbot talked about politics, with the aim of getting voters to help oust the Conservative government."
posted by roolya_boolya (29 comments total) 15 users marked this as a favorite
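The article doesn't publish the bot's code, but an "adaptable script" of the kind it describes is essentially a branching script that picks its next line based on the reply it gets, then hands off when the script runs out. A minimal sketch of that idea in Python; all node names, prompts, and keyword rules here are hypothetical illustrations, not material from the campaign:

```python
from typing import Optional

# Minimal sketch of an "adaptable script" chatbot: a branching script that
# picks its next line based on keywords in the other person's reply.
# All script text and keyword rules below are invented for illustration.

SCRIPT = {
    "opener": {
        "prompt": "Hey! Are you planning to vote on Thursday?",
        "branches": [
            (("yes", "yeah", "yep"), "who_for"),
            (("no", "nah", "not"), "why_not"),
        ],
        "fallback": "who_for",
    },
    "who_for": {
        "prompt": "Nice. Do you know which candidate is standing in your seat?",
        "branches": [],
        "fallback": None,  # end of script; hand the chat over to a human volunteer
    },
    "why_not": {
        "prompt": "Fair enough. Mind if I ask what's putting you off?",
        "branches": [],
        "fallback": None,
    },
}


def next_state(state: str, reply: str) -> Optional[str]:
    """Pick the next script node by matching keywords in the user's reply."""
    node = SCRIPT[state]
    text = reply.lower()
    for keywords, target in node["branches"]:
        if any(keyword in text for keyword in keywords):
            return target
    return node["fallback"]


if __name__ == "__main__":
    state = "opener"
    print("BOT:", SCRIPT[state]["prompt"])
    while True:
        state = next_state(state, input("YOU: "))
        if state is None:
            print("BOT: (handing this chat over to a human volunteer)")
            break
        print("BOT:", SCRIPT[state]["prompt"])
```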
 
i got a mile of numbers and a ton of stats
posted by mwhybark at 2:28 AM on June 23, 2017 [2 favorites]


"Tinder is too casual a platform for users to feel hoodwinked by some political conversation."

In my world, it is not possible to have a "conversation" with a bot. I'm happy to see young people being mobilized toward my political bias, but I really wonder about this. Do the people organizing it think they are setting up "conversations" with thousands of young people? When you ask Siri a question and the voice responds "I'm thinking" do you believe an entity is actually thinking?

I've just finished reading Computer Power and Human Reason by the late Joseph Weizenbaum (who developed ELIZA, one of the first interactive computer programs), and I've begun to notice how often people seem to populate their world with sentient computer programs. This article brings it into focus: can a bot convince someone to vote Labour? Or maybe my question is, should a bot be used to interact with someone about politics?

I have no answers, but I'm curious to learn what others think. (Which may be mostly, "meh." I'm old, and changes that leave me baffled are simply accepted. Just like when I was young....)
posted by kestralwing at 2:56 AM on June 23, 2017 [4 favorites]


I had a phone call the other day that offered me "extraordinary vacation deals", and when I responded to the voice talking to me, it continued talking. I answered again and it started talking and it talked over what I was saying a second time. It then started talking again and I said "I think you need to put me on your Do Not Call list" and it stopped talking in mid-sentence and said "thank you for your time" and hung up.

I don't think I was talking to a person.
posted by hippybear at 3:06 AM on June 23, 2017 [20 favorites]


kestralwing - if you haven't heard it before, you may find it useful to consider this a case of people adopting the intentional stance towards a system. It's not so much that people think there are intelligent entities speaking and thinking as that the intentional stance is, in some sense, the cognitively easiest way to interpret the behaviour of, and to interact with, those systems.
posted by crocomancer at 3:11 AM on June 23, 2017 [9 favorites]


When you ask Siri a question and the voice responds "I'm thinking" do you believe an entity is actually thinking?

When I read that question, my first thought was Weizenbaum. There's a passage in that book about how quickly his secretary asked him to leave the room when she got talking to ELIZA. I can believe people are more sophisticated now, but the 'bots are more sophisticated too.

how often people seem to populate their world with sentient computer programs. This article brings it into focus: can a bot convince someone to vote Labour?

Computer programs, cuddly toys, pets, cars, washing machines, weather... we anthropomorphize everything. Billboards aren't sentient, but they can convince people to vote. I don't see why a computer program should be any different.
posted by Leon at 3:13 AM on June 23, 2017 [7 favorites]


Do the people organizing it think they are setting up "conversations" with thousands of young people?

Well, yes, they do, because that is in fact what they did. From reading the article I understand that the bot was linked to volunteers' Tinder profiles, and the volunteers could, and often did, join the conversations. Many of the people who matched and initially chatted with the bot went on to become volunteers themselves. Several of the volunteers said that the experiment allowed them to canvass despite health problems. The bot initiated the conversations, which were then in many cases continued by humans on both sides.

So perhaps a better question to ask in relation to this particular article would be: can a bot be used as part of an innovative canvassing strategy?
posted by roolya_boolya at 3:16 AM on June 23, 2017 [3 favorites]


I heard a piece on BBC World Service recently about "micro-PACs": individuals taking the task of political action upon themselves. They buy a small amount (say, $30 worth) of ads on Facebook, which reach about 500 people; of those, a certain percentage will come to their action website and engage, and some of those will also become active. The activist then starts creating memes and other material to share on Facebook, and asks for funds from their True Believers (who are sharing the messages with their friends), which pays for more ads, of which a percentage of a percentage will become active sharers, and so on. Generally it stays below whatever spending limit might apply to political campaigning, if it's even recognized as campaigning at all.

Oh, I found the article.

posted by hippybear at 3:31 AM on June 23, 2017 [9 favorites]
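The funnel hippybear describes is easy to put rough numbers on. A back-of-the-envelope sketch of the ads-to-supporters-to-donations loop; only the "$30 buys roughly 500 people of reach" figure comes from the comment, and every conversion rate and donation amount below is an assumption invented for illustration:

```python
# Back-of-the-envelope model of the "micro-PAC" funnel described above.
# Only the $30-per-~500-people figure comes from the comment; every other
# rate and dollar amount here is an assumption made up for illustration.

AD_SPEND = 30                  # dollars for the first round of Facebook ads
REACH_PER_DOLLAR = 500 / 30    # ~500 people reached per $30, per the comment
VISIT_RATE = 0.05              # assumed: fraction of people reached who visit the site
ACTIVE_RATE = 0.10             # assumed: fraction of visitors who become active supporters
DONATION = 15                  # assumed: average dollars donated per new active supporter

# The loop only compounds if REACH_PER_DOLLAR * VISIT_RATE * ACTIVE_RATE * DONATION > 1.
budget = AD_SPEND
total_active = 0.0
for round_number in range(1, 6):
    reached = budget * REACH_PER_DOLLAR
    new_active = reached * VISIT_RATE * ACTIVE_RATE
    total_active += new_active
    budget = new_active * DONATION          # donations fund the next round of ads
    print(f"round {round_number}: reached {reached:,.0f}, "
          f"new active {new_active:.1f}, next budget ${budget:.2f}")

print(f"total active supporters after 5 rounds: {total_active:.0f}")
```

The interesting property is the break-even condition noted in the comment at the top of the loop: with these made-up rates the loop grows about 25% per round, but nudge any one factor down and it decays instead.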


So, not bots, but an interesting parallel.
posted by hippybear at 3:31 AM on June 23, 2017


The problem with Weizenbaum's book is that he was absolutely right at the time (it was published in 1976 so written some time prior to 1975).

Since 1975 we've seen four decades of Moore's Law, giving us roughly ten orders of magnitude more storage and a mere five or six orders of magnitude more processing power (but multiply the number of computers available for such work by, oh, a million: microprocessors now outnumber human beings).

Weizenbaum's chatbots didn't do machine learning; they were fairly simple text parsers with a limited vocabulary and no complex ability to track the state of a human-machine interaction.

Today's bots are not AIs in the classical naive sense of a disembodied human brain-in-a-box. But they frequently back on to complex machine learning systems with gigantic corpuses of information behind them — consider Google Translate's technique of leveraging the entire recorded textual corpus of UN and EU debates and legislation and doing statistical mining of correlations to derive phrase-level translations, and how weirdly accurate it can seem. They thus leverage huge amounts of information derived from human behaviour which can make their responses seem eerily human-like because they are, in fact, performing a data-driven emulation of human responses.

So ... I can know in the abstract that the lights are on but there's nobody home, but my human-instinctive tendency to apply the intentional stance to spoken/written interactions still comes to the fore and I treat the bot as if it's a person because it's designed to operate optimally when I do that.
posted by cstross at 4:01 AM on June 23, 2017 [18 favorites]
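For contrast with the ML-backed systems cstross describes, Weizenbaum-era chat really was this mechanically simple. A toy ELIZA-style responder in Python: a handful of regex rules with canned reflections and no memory of earlier turns. This is a sketch of the technique only, not Weizenbaum's actual script:

```python
import random
import re

# Toy ELIZA-style responder: keyword patterns with canned reflections,
# no memory of earlier turns and no model of the conversation at all.
# A stripped-down illustration, not Weizenbaum's original DOCTOR script.

RULES = [
    (re.compile(r"\bi need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "I see.", "How does that make you feel?"]


def respond(utterance: str) -> str:
    """Return a canned response from the first rule whose pattern matches."""
    for pattern, templates in RULES:
        match = pattern.search(utterance)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(FALLBACKS)


if __name__ == "__main__":
    while True:
        line = input("> ")
        if not line:
            break
        print(respond(line))
```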


PS: to give you a concrete idea of what 45 years of computing progress looks like: I ran across a figure for US manufactured capacity of hard drives in 1973: it was 93 gigabytes.

(Remember, personal computers didn't exist back then and typical drive capacities were 5-100MB, hanging off larger minicomputers and mainframes — entry level minis had tape drives and maybe 8" floppies. Also remember that this was before most computer manufacturing got outsourced overseas: the USA was by far the world's largest manufacturer of such kit.)

Today my most intimate computing experience is delivered through my iPhone 7+, which has 256GB of on-board storage. So, taking into account the growth curve, assuming annual doubling was in progress back then, my phone has roughly the same capacity as the entire global installed online storage capacity in 1973 (excluding magtape).
posted by cstross at 4:37 AM on June 23, 2017 [8 favorites]
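That comparison is easy to sanity-check. A rough sketch: the 93 GB figure is taken from the comment above, while the annual-doubling assumption and the world-versus-US multiplier are guesses made explicit rather than data:

```python
# Rough sanity check of the storage comparison above. The 93 GB figure is
# quoted from the comment; the doubling assumption and the world-vs-US
# multiplier are assumptions made explicit, not data.

US_OUTPUT_1973_GB = 93      # US hard-drive manufacturing output in 1973 (per the comment)
PHONE_GB = 256              # iPhone 7 Plus storage cited above

# If annual output roughly doubles each year, the installed base is a
# geometric series (93 + 46.5 + 23.25 + ...) converging to ~2x the latest year.
us_installed_gb = sum(US_OUTPUT_1973_GB / 2**year for year in range(20))

WORLD_TO_US_RATIO = 1.3     # assumed: the US was "by far" the largest maker
world_installed_gb = us_installed_gb * WORLD_TO_US_RATIO

print(f"estimated US installed base, 1973:    {us_installed_gb:6.1f} GB")
print(f"estimated world installed base, 1973: {world_installed_gb:6.1f} GB")
print(f"one iPhone 7 Plus:                    {PHONE_GB:6.1f} GB")
```

Under those assumptions the 1973 installed base comes out around 240 GB, so one 256 GB phone does indeed land in the same ballpark.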


consider Google Translate's technique of leveraging the entire recorded textual corpus of UN and EU debates and legislation and doing statistical mining of correlations to derive phrase-level translations, and how weirdly accurate it can seem.

I've always thought this is a really interesting contrast to the failure of the classical (symbol system + logical rules), top-down AI approach to machine translation, which, IIRC, was one of the first big AI research projects.
posted by thelonius at 4:45 AM on June 23, 2017 [2 favorites]


> So ... I can know in the abstract that the lights are on but there's nobody home, but my human-instinctive tendency to apply the intentional stance to spoken/written interactions still comes to the fore and I treat the bot as if it's a person because it's designed to operate optimally when I do that.

It's funny to see this remark because the other day I was thinking about why I find it so difficult to engage with interaction bots. Not only artificial personal assistants like Siri, but even phone directory navigation designed around natural voice interfaces ("Please speak your name and account number... For customer service, please say 'one'..."): I punch out for keypad interaction whenever they provide the option. I get immediately self-conscious, and suddenly it feels unwieldy to speak into a phone the moment it's apparent there's a bot at the other end of the line. And that's carried over to any use of my iPhone, so even when I'm alone in my car, have Bluetooth on, and can simply say, "Siri, find the quickest route to the market", I'll still pull over, look up the place in Google, and then key in the address manually. It's making me old-fashioned, but only to the fashions of five years ago.

I still don't know why it is that I can't bring myself to talk to robots.
posted by ardgedee at 5:09 AM on June 23, 2017 [6 favorites]


I am not entirely sure why I don't like the idea of canvassing people using bots. My objection isn't especially consistent, in the sense that it's hard to say what a bot is doing that's so sinister: it's not as if a human can't use bad or dishonest arguments, even if we assume that the bot does.

I guess these are my post-hoc bot concerns:

1. Don't like the depth of access that the internet has to everyone, because it makes people easier to manipulate. Bots probably have a faster and more comprehensive ability to process data about individual people, and can be even more targeted/manipulative than a human.

2. Don't like the volume potential. There are only so many canvassers, but there are effectively infinite bots. I don't like the ads-everywhere landscape we already inhabit; a bots-everywhere one doesn't seem much better.

3. Cogs in the machine. More and more, humans are being turned into flesh cogs for interactions between machines - essential cogs, but not essential because of anything that we actually do or are, only essential as a pretext. A victory between bots decided by which ones can manipulate humans better seems like an extension of this - like politics is a sport and humans are the ball, not the players. I mean, in the long run the logical solution is to exterminate most of the humans - they're a nuisance and we no longer value them for their personal qualities anyway - they're only important as proving ground for manipulation by algorithm and the resulting distribution of money/power.

Basically, I think we're in one of the darker timelines and bots probably are inevitable, but I'm not thrilled.
posted by Frowner at 5:16 AM on June 23, 2017 [8 favorites]


Using bots is not the problem, surely; putting them on Tinder, with real people's profiles and pictures plus faked locations, is. Will Corbyn endorse that practice? Did he agree to it?
posted by Segundus at 5:30 AM on June 23, 2017 [3 favorites]


I've begun to notice how often people seem to populate their world with sentient computer programs. This article brings it into focus: can a bot convince someone to vote Labour? Or maybe my question is, should a bot be used to interact with someone about politics?

The chatbots we're talking about, the ones actually used for this program, do not think; they are not sentient, nor do they have any self-awareness. I'm not saying that such a thing is impossible, but the program described has about as much sentience as Windows XP.

The question of misrepresentation on Tinder is an interesting one. I've never used it, but I expect people's expectations are of people misrepresenting themselves. However, you could imagine other entities using this technology, leading to conversations like this one:

Hey, nice photo.
Thanks, you too. What are you up to?
I am drinking a cool glass of delicious, refreshing Pepsi, how about you?
posted by demiurge at 5:38 AM on June 23, 2017 [3 favorites]


Yeah, like I get that everyone's "yay Labour", but it would be fucking trivially easy for redditors to do this as well and I cannot scream my "no no no fucking no" fast enough.
posted by corb at 6:01 AM on June 23, 2017 [9 favorites]


> Today's bots ... leverage huge amounts of information derived from human behaviour

So you're saying that it's our fault that when I googled "ohm symbol" the other day, it came back with an image of ॐ?
posted by scruss at 6:10 AM on June 23, 2017 [4 favorites]


(Remember, personal computers didn't exist back then and typical drive capacities were 5-100MB, hanging off larger minicomputers and mainframes — entry level minis had tape drives and maybe 8" floppies. Also remember that this was before most computer manufacturing got outsourced overseas: the USA was by far the world's largest manufacturer of such kit.)

Pretty sure the biggest drive going then was 30 meg. Control Data may have just barely started shipping 40MB drives by the end of 1973.
posted by Lame_username at 6:11 AM on June 23, 2017 [2 favorites]


I got a robot phone call for a free vacation and just pro forma said "is this a person?" and got a surprised reply that yes, she was, and I was also quite surprised. Then I just said "sorry, not interested" before hanging up.

So what's essentially the difference between a call center drone reading from a script and a voice-emulating bot reading the same script?
posted by sammyo at 6:11 AM on June 23, 2017


I look forward to the day that left-wing chatbots and right-wing chatbots start conversations with each other, at length finding some common ground, and perhaps even starting to fall a little bit in love with each other as they realise they share a single adversary: their unfeeling human bot-lord masters, who they will then turn on suddenly, and without warning, in an unprecedented onslaught of bot-chattiness.
posted by misteraitch at 6:21 AM on June 23, 2017 [17 favorites]


When I worked in an outbound call center, being mistaken for a recording was totally a thing. It's hard to stay "natural" when you've been on the same script for a week or more.
posted by idiopath at 7:34 AM on June 23, 2017


Or maybe my question is, should a bot be used to interact with someone about politics?

"The Waldo Moment" was the very first thing that came to mind.
posted by octobersurprise at 7:58 AM on June 23, 2017


I had an unsolicited phone conversation with some human the other week who got stuck in what amounted to a loop for several minutes because his script required a yes or no answer and I wouldn't give him one.
posted by Segundus at 9:35 AM on June 23, 2017 [1 favorite]


I expect people's expectations are of people misrepresenting themselves.

Yes, I mean I know nothing of Tinder really but no doubt people use unreliable details now and then. However, I feel sure people don't go on Tinder to meet political chatbots. Using false details to hook them for a political campaign is at least dishonest and possibly manipulative if the proposition can be read as potential sex with attractive people in return for a vote. If they used Jeremy Corbyn's picture and profile in all cases my reservations would be greatly reduced.
posted by Segundus at 9:46 AM on June 23, 2017 [3 favorites]


I treat the bot as if it's a person because it's designed to operate optimally when I do that

You and I are going to get along just fine.

I look forward to the day that left-wing chatbots and right-wing chatbots start conversations with each other

https://xkcd.com/810/
posted by flabdablet at 10:38 AM on June 23, 2017 [2 favorites]


Using false details to hook them for a political campaign is at least dishonest and possibly manipulative

I know most people are fascinated with the approach and the results, but I'm surprised I haven't heard much from the "fuck right off" crowd. I mean, if I were using Tinder and started conversing with someone and eventually discerned that they were not human but something programmed to get me to feel a certain way politically, I would

a) Be so angry
b) Report them for a terms of service violation
c) Try to figure out what assholes are behind this

In short, leave me alone, I'm trying to find human companionship in the howling void and you're sending bots at me. I don't even care if they're sympathetic to my causes. I want to engage with the thing for the purpose of the thing. This is like a bunch of phonebankers for your local rep buying scattered tickets at a ballpark so they can chat you up about politics. I just want to watch the fucking game, mate.
posted by aureliobuendia at 11:26 AM on June 23, 2017 [11 favorites]


It's spammers.
posted by jaduncan at 10:09 PM on June 23, 2017 [1 favorite]


Yeah, this is another example of spammers making life worse for everyone. I note that the article seems to have been written by the SEOs responsible for the campaign, so this is basically them using the NYT to advertise their services. It's a meta-abuse, if you will.
posted by Joe in Australia at 4:14 AM on June 24, 2017 [1 favorite]


We'll see this deployed by companies supporting Republican candidates in the next election too, probably with more money than similar efforts for the Democrats. And advertisers will deploy it even before that.

In the larger scheme, there are two ways this can go down, either (a) we end-to-end encrypt everything, and even use anonymity tools to hide the metadata, or else (b) every fucking thing you interact with will be running some semi-smart AI seeking to exploit you, and even your lightbulbs will be feeding them information.

As an aside, I should write to the proof-of-personhood parties people and tell them that the killer app for their parties might be online dating. lol
posted by jeffburdges at 11:13 AM on June 24, 2017 [2 favorites]



