When fakes are common, liars have cover
June 14, 2019 7:27 AM   Subscribe

Deepfakes are troubling. But disinformation doesn’t have to be high tech to be damaging. The deepfake is becoming more common, but the recent fake video of a "drunken" Pelosi was more of a cheapfake, and it was just as damaging. Numerous news organizations quickly debunked the video, but the damage to public trust was already done. Many will never know the video was a fake, but the advantages it gave to pundits will echo into the future. It’s a recent example of what legal theorists Bobby Chesney and Danielle Citron call the liar’s dividend: those wishing to deny the truth can create disinformation to support their lie, while those caught behaving badly can write off the evidence of bad behavior as disinformation.

Since Facebook is refusing to take the video down, some people are fighting fire with fire.
On Tuesday, Canny, an advertising company, posted a faked video of Mark Zuckerberg to Instagram. With the help of artists and a proprietary video dialogue replacement model, Canny produced a video of Zuckerberg talking about how he had amassed power and control through Spectre, a thinly veiled stand-in for Facebook and the name of the evil organization in the James Bond franchise.

The A.I.-generated “deepfake” video implicitly but unmistakably calls for Facebook to make a public statement on its content moderation policies. The platform has long been criticized for permitting the spread of disinformation and harassment, but the criticism became particularly acute recently, when the company said that it would not remove the “Drunk Pelosi” video.
posted by Homo neanderthalensis (20 comments total) 11 users marked this as a favorite
 
Quick, someone do a crazy Trump mashup next. Oh wait... Not necessary I guess.
posted by jkaczor at 7:42 AM on June 14, 2019


It is truly a head-scratcher to me that Facebook wouldn’t take down the FakePelosi video. For many years I thought their bad decisions on moderation were about ego (“we’re as important to free speech as the government”) and maybe boxing out potential competitors (never leave room for someone to come in and scoop up even your most odious customers).

Now it just seems like pure bad faith, focusing on their profit streams, and maybe some grudge-holding against the people and organizations that bring this stuff to their attention. Stupider and stupider. It’s as blinkered as Trump saying he’d welcome any help from foreign governments.
posted by sallybrown at 7:43 AM on June 14, 2019 [3 favorites]


This is also an illustration of the problem with Section 230 as it stands. No newspaper or broadcaster would act as Facebook is, because doing so would open themselves up to immediate liability. But since we've granted Facebook blanket indemnity, there is no reason for them to pull the video, since there's no liability for them in letting it remain up.
posted by NoxAeternum at 7:44 AM on June 14, 2019 [19 favorites]


Yup. Was coming in to rend my garments and yell about liability. It’s probably the most practical fix that we could implement relatively quickly.

Theoretically, anyway.

Tech needs to be regulated into the ground.
posted by schadenfrau at 7:50 AM on June 14, 2019 [6 favorites]


You know what the Democrats could do, if attention wasn’t such a precious commodity and in such limited supply and they weren’t terrified about the election?

Hold lots and lots of hearings about whether the big tech companies have been infiltrated by white male supremacists. Use that congressional subpoena power. Turn over allll the rocks.

I don’t even fucking care about the obvious comparison anymore. These people are propagandizing for fucking nazis; I’m over handwringing about McCarthyism. No, it’s not the fucking same. Christ.
posted by schadenfrau at 7:56 AM on June 14, 2019 [14 favorites]


Good article. Deepfakes are certainly troubling, but it's useful here to distinguish between possible threats and most likely threats. You could use deepfake technology to make a "drunk Pelosi" video (or worse), but it's probably a lot easier to just do some deceptive editing in iMovie and get the same result. (Relevant xkcd.) As always, it's the human element that's easiest to exploit -- in this case, Mark Zuckerberg's beautiful mind.
posted by Cash4Lead at 8:02 AM on June 14, 2019 [3 favorites]


Donald Trump will absolutely, positively call footage of himself clearly saying something horrible on live national television a "deepfake" at some point in the campaign. He's already laid the groundwork for this.
posted by Cookiebastard at 8:19 AM on June 14, 2019 [13 favorites]


It's all bullshit, it's all just a joke. Here we are, FFS.
posted by Meatbomb at 9:01 AM on June 14, 2019


There was a congressional hearing on this yesterday, "The National Security Challenge of Artificial Intelligence, Manipulated Media, and 'Deepfakes'." That link contains a link to a recording of the livestream, which I think is still available to watch. It ran about two hours and change. I watched it live and don't have my notes on me, but some of the questions were astounding--a representative from Ohio was very keen on pushing a question about extradition to the US for people who create political deepfakes overseas--and it's highly worth a watch.

They did discuss at length both how you can prove something is a deepfake and how you can prove something is not, specifically with regard to political actors who are being represented in deepfakes and political actors who claim something is a deepfake when it is not.
posted by sockermom at 9:20 AM on June 14, 2019 [7 favorites]


I refuse to believe there are no fakes that could be used successfully against Trump. It would take a lot of creativity to hit one of the weak spots of his base's worldview, like a perfect recreation of the discredited claim about an old interview where he said he should run as a Republican because Rep voters are idiots. But then, it wouldn't take doing any damage to his 27% base to make him lose by a landslide and take the Senate with him.
posted by oneswellfoop at 9:35 AM on June 14, 2019


very keen on pushing a question about extradition to the US for people who create political deepfakes overseas

Bwahahaaaa.

Obviously the people who create deepfakes for political purposes are non-Americans attempting to influence Americans.


Americans won't ever get that C21 isn't about nation-states, because they've been told that they live in the nation-state that won C20.


The people with the skills and resources to utilise this tech are the Oligarchs, and they don't care about locations. They'll still reap profits, and they'll have the same freedom-of-movement that very rich people have always had*.


*OK, maybe a bit more than during the Cold War
posted by pompomtom at 9:42 AM on June 14, 2019 [1 favorite]


It would take a lot of creativity to hit one of the weak spots of his base's worldview

You'd not aim for one weak spot. A Katyusha approach would be best.
posted by pompomtom at 9:45 AM on June 14, 2019 [3 favorites]


It's all bullshit, it's all just a joke. Here we are, FFS.

Many years ago, the end of 1987, I think, I wrote a think-piece for a local music mag about various cultural-societal trends I'd noticed over the year. One thing that got noted was the "reality" that the tech now existed for photographs to be faked digitally in such fine detail that the average person couldn't notice the fake. In other words, all photo-evidence of anything was now officially doubtful ... or certainly, it would be soon enough. I think the example I used was Wayne Gretzky mainlining heroin. This felt like a simple extrapolation to make and yeah, given the punk rock tone of the magazine in question, I wasn't afraid to hint at vast apocalyptic possibilities.

Anyway, video inevitably followed still images down the fakery hole (and quite a while ago, it's worth noting), and now here we are, so deep into the confusion of it all that, it seems, vast numbers of people don't even care anymore. It's all just fake yadda-yadda-yadda, "... unless I agree with it, of course."
posted by philip-random at 10:10 AM on June 14, 2019


I've always thought that if Dump could be caught talking contemptuously about his base à la A Face in the Crowd, it might be something that turns them against him--hearing that he thinks of them as rubes and marks (which I have no doubt he does) might do the trick. I always felt it would take someone in his inner circle (i.e. family) recording him on the sly, but I guess that isn't needed anymore if it could just be manufactured.
posted by agatha_magatha at 12:06 PM on June 14, 2019 [1 favorite]


Image manipulation has been around for a long time, and yet I can't recall any headline news where manipulated images actually had an impact.

The conspiracy nuts who say "the moon landing was faked" - well, we have the technology to do that now! Why aren't more people faking more things?

The Iraq war was based on lies about WMD. But that was a government using lies to do as it liked. I feel like that's different. I feel like there is a big opportunity, and people just.... aren't taking it.

Which makes me wonder, how much power do faked images actually have?
posted by rebent at 1:07 PM on June 14, 2019 [3 favorites]


I mean, for what it's worth fake images are great at making me feel bad about my body!

Joking aside, it's weird that we're still discussing edited images as a basis for fake news when they're better for re-editing the past à la Stalin having people removed from photos. It's not really a new thing. Maybe someone else can remember a front-page photo that's later been shown to be a fake?
posted by Braeburn at 1:44 PM on June 14, 2019 [1 favorite]


There's a bunch of news right now about Adobe's new algorithms that can detect manipulated photos ("I can see the pixels!"). I'm sure something very much like them could be applied to video as well.
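Real detectors like Adobe's are learned models, but the flavor of one classic manual forensic check--copy-move detection, which flags regions that were pasted from elsewhere in the same image--can be sketched naively. Everything here is illustrative (a tiny integer grid stands in for an image):

```python
from collections import defaultdict

def find_duplicate_blocks(image, block=2):
    """Naive copy-move forensics: hash every block x block patch and
    report patch patterns that appear more than once, a common artifact
    of copy-paste manipulation."""
    seen = defaultdict(list)
    h, w = len(image), len(image[0])
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            patch = tuple(
                tuple(image[y + dy][x + dx] for dx in range(block))
                for dy in range(block)
            )
            seen[patch].append((y, x))
    return {p: locs for p, locs in seen.items() if len(locs) > 1}

# A tiny "image" where the 2x2 region at the top-left was pasted
# again at the bottom-right.
img = [
    [9, 9, 1, 2],
    [9, 9, 3, 4],
    [5, 6, 9, 9],
    [7, 8, 9, 9],
]
dupes = find_duplicate_blocks(img)
print(f"{len(dupes)} duplicated patch pattern(s) found")  # → 1
```

Real implementations work on overlapping DCT or feature descriptors with tolerance for recompression noise; exact matching like this only catches lossless copy-paste.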

But it still requires trust.

In between "a picture is worth a thousand words, so what about video?" and technology ever outpacing larger segments of the public, I can see deepfake videos doing a lot of harm until society develops some kind of mental vaccine against them.
posted by porpoise at 3:46 PM on June 14, 2019


Interesting suggestion for responding to deepfakes in this Karl Schroeder short story.
posted by doctornemo at 12:42 PM on June 15, 2019


It's not really a new thing.

I don't have ready access to a copy, but I dimly remember Errol Morris exploring the 19th century history of photography by arguing that fakes were there from the start. Believing Is Seeing: Observations on the Mysteries of Photography (2011).
posted by doctornemo at 12:43 PM on June 15, 2019 [1 favorite]


Any algorithm that detects fakes can be run adversarially: its output becomes the metric the deepfake algorithm minimizes on every iteration, so each improvement in detection trains a better fake.
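That adversarial loop can be sketched in miniature. Everything here is a toy stand-in: a "detector" that scores distance from a fixed real-data profile, and a hill-climbing "faker" that keeps any random nudge the detector scores lower--the same feedback structure as a GAN, without the neural networks:

```python
import random

random.seed(0)  # deterministic for the example

# Toy "real" signal: the detector flags anything far from this profile.
REAL_PROFILE = [0.5, 0.5, 0.5, 0.5]

def detector_score(sample):
    """Return a fakeness score: mean distance from the real profile."""
    return sum(abs(s - r) for s, r in zip(sample, REAL_PROFILE)) / len(sample)

def improve_fake(fake, steps=200, step_size=0.05):
    """Hill-climb: randomly nudge the fake, keeping any change that
    lowers the detector's score. The detector is used as the metric."""
    best = list(fake)
    best_score = detector_score(best)
    for _ in range(steps):
        candidate = [x + random.uniform(-step_size, step_size) for x in best]
        score = detector_score(candidate)
        if score < best_score:  # detector is fooled a bit more
            best, best_score = candidate, score
    return best, best_score

crude_fake = [0.9, 0.1, 0.8, 0.2]
refined, score = improve_fake(crude_fake)
print(f"initial fakeness: {detector_score(crude_fake):.3f}")
print(f"refined fakeness: {score:.3f}")
```

The punchline is the asymmetry: publishing a better detector hands the faker a better loss function for free.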

One of the scary things on the horizon is personalities created whole cloth with the goal of becoming influencers. Create 10,000 with different parameters, A/B test the heck out of them, and keep the ones that are the most loved; use the passable rest as bots to keep the popularity algorithms gamed for the influencers. Now you have a digital “base” of support just like Donald Trump.

Instagram influencers spend almost their entire day DMing and replying to fans to keep them engaged, because personal engagement is Patreon bucks.

Up-and-coming politicians also spend hours and hours every day pumping donors for cash. This is a true bottleneck for politicians, and 98% of those conversations follow a strict phone script à la the Google Duplex AI.

The team behind the GPT-2 bot has hesitated to share its most sophisticated model because it could be used to churn out fake news--except some undergrad recreated their work and is going to release it anyway.

What I’m talking about is the Black Mirror episode with the stationary bikes, except it will be used to farm Alt Right troll blogs instead of entertainment.

Here’s the thing: Yeah, we’ve had Industrial Light and Magic and imperceptibly real CGI for a while. But just because it’s a difference in quantity rather than quality doesn’t mean it won’t be a problem. Quantity is also disruptive. Just ask Gutenberg.
posted by Skwirl at 3:34 PM on June 15, 2019




This thread has been archived and is closed to new comments