And you may ask yourself, well / How did I get here?
May 24, 2019 10:08 AM

Samsung researchers have developed a GAN-based method for building "talking heads" or animated figures from one or a handful of still photographs (paper).

The linked video includes reanimations of long-dead celebrities, including Marilyn Monroe, Albert Einstein, and Fyodor Mikhailovich Dostoevsky, as well as the Mona Lisa. Interestingly, using training "landmarks" taken from someone other than the person in the source image can impose a distinct "personality" on the result.
posted by They sucked his brains out! (41 comments total) 18 users marked this as a favorite
 
Why?
posted by Pendragon at 10:11 AM on May 24, 2019 [1 favorite]


Great. Deepfakes are going to get "better", and by "better" I mean worse.
posted by NoxAeternum at 10:11 AM on May 24, 2019 [6 favorites]


Yeah, this is definitely a "could" but "should not have" moment.
posted by grumpybear69 at 10:13 AM on May 24, 2019 [8 favorites]


way to read the room, samsung
posted by Atom Eyes at 10:15 AM on May 24, 2019 [9 favorites]


Disney just shat themselves with glee, realizing that, having gotten out of paying screenwriters and storyboarders, with a clever enough contract arrangement they won't even have to pay actors now.
posted by griphus at 10:16 AM on May 24, 2019 [5 favorites]


There's definitely no way this can backfire.
posted by littlerobothead at 10:20 AM on May 24, 2019 [4 favorites]


I liked the cheery soundtrack they chose.
posted by umber vowel at 10:24 AM on May 24, 2019 [2 favorites]


Just wait until you realize that your friends and family have been replaced by this...
posted by happyroach at 10:27 AM on May 24, 2019 [1 favorite]


this is a cool thing that will be used to create the dumbest garbage you have ever seen
posted by prize bull octorok at 10:28 AM on May 24, 2019 [24 favorites]


Is there any legitimate use for this? Deception seems to be the only reason for its existence.
posted by dobbs at 10:28 AM on May 24, 2019


The animation of famous paintings at the end was mesmerizing.
posted by hurdy gurdy girl at 10:33 AM on May 24, 2019 [3 favorites]


Why do NONE of these videos EVER discuss the ethics of such endeavours?!

Because the tech industry is the poster child for the Ian Malcolm Problem.
posted by NoxAeternum at 10:40 AM on May 24, 2019 [21 favorites]


Well, it made the Mona Lisa look like Christopher Lambert's Tarzan, so whatever apocalypse this is bringing with it won't be ready to launch until next week. Gather ye rosebuds while ye may.
posted by Sing Or Swim at 10:43 AM on May 24, 2019 [1 favorite]


Has anyone asked Max Headroom for comment on this continued slide into technological dystopia?
posted by nubs at 10:43 AM on May 24, 2019 [8 favorites]


Is there any legitimate use for this?

Lots of ways to misuse this, sure. But as one consumer application, I could imagine a digital picture frame with animated videos of deceased relatives. It sounds creepy but maybe it would appeal to future generations more comfortable with artifice. Artists could use this to bring characters in some paintings to life, maybe tell some interesting stories along the way.

Has anyone asked Max Headroom for comment on this continued slide into technological dystopia?

Maybe: "And then there's politicians. It's easy to tell when a politician is lying: their lips move."
posted by They sucked his brains out! at 10:47 AM on May 24, 2019 [1 favorite]


Eh. There’s a video of Nancy Pelosi doing the rounds on Facebook right now that has been heavily edited to make her look intoxicated. This software is not going to make things worse than they already are.
posted by um at 10:48 AM on May 24, 2019


It sounds creepy but maybe it would appeal to future generations more comfortable with artifice.

I'm gonna quit making future generations unless they promise not to turn me into a god damned permanent Jib Jab when I leave this world
posted by prize bull octorok at 10:51 AM on May 24, 2019 [9 favorites]


Yeah, this is definitely a "could" but "should not have" moment.

On the contrary: doing research on this is the only way to make it easier to detect such fakery.
posted by supercres at 10:57 AM on May 24, 2019 [12 favorites]


Remember when you were a kid alone in a room, and got freaked out because you thought grandma/grandpa's eyes in the photo were following you around the room? Like that, but worse, because they are!
posted by cynical pinnacle at 11:04 AM on May 24, 2019 [2 favorites]


I'm gonna quit making future generations unless they promise not to turn me into a god damned permanent Jib Jab when I leave this world

Be good to your kids! Statistically, they get to decide how you are memorialized.
posted by They sucked his brains out! at 11:05 AM on May 24, 2019


Is there any legitimate use for this?

Video data compression. Presumably you could get very high fidelity on, say, FaceTime, even on a very low-bandwidth connection.

(No, of course it's not worth it, but I'm pretty firmly into Luddite territory these days.)
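
A rough back-of-the-envelope sketch (my numbers, nothing official): if the receiver can synthesize the face locally, you only need to send landmark coordinates per frame, which is orders of magnitude less data than the frames themselves.

```python
# Back-of-the-envelope comparison (illustrative numbers only): sending raw
# video frames vs. sending just facial landmark coordinates and letting a
# receiver-side model synthesize the face.

def raw_frame_bytes(width=1280, height=720, bytes_per_pixel=3):
    """Size of one uncompressed RGB frame."""
    return width * height * bytes_per_pixel

def landmark_frame_bytes(n_landmarks=68, coords=2, bytes_per_coord=2):
    """One frame of (x, y) landmarks, e.g. the common 68-point face set."""
    return n_landmarks * coords * bytes_per_coord

raw = raw_frame_bytes()       # 2,764,800 bytes per 720p frame
lm = landmark_frame_bytes()   # 272 bytes per frame
print(f"raw: {raw} B/frame, landmarks: {lm} B/frame, ~{raw // lm}x smaller")
```

Real codecs already compress heavily, of course, so the honest comparison is against H.264 rather than raw frames, but even then the landmark stream is tiny.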
posted by ragtag at 11:07 AM on May 24, 2019 [2 favorites]


This technology may actually reduce the damaging impact of the surveillance state. Basically, video footage can no longer be trusted, so all of those surveillance cameras in public spaces will be useless in providing damning evidence in court.

The Shaggy defense, "It wasn't me!", will win the day.
posted by rocket88 at 11:21 AM on May 24, 2019 [6 favorites]


There are artifacts in the artificially generated videos that are clear giveaways, but they don't look that dissimilar from the artifacts produced by poor-quality video compression. I can imagine being fooled by one of these if I assumed the clues that it's not a real video were just compression artifacts.
posted by biogeo at 11:22 AM on May 24, 2019 [1 favorite]


I guess we have to go back to building institutions we can trust, huh?
posted by clawsoon at 11:32 AM on May 24, 2019 [6 favorites]


I guess we have to go back to building institutions we can trust, huh?

The guillotine is very trustworthy.
posted by maxwelton at 11:45 AM on May 24, 2019 [10 favorites]


I did notice that the characteristics of the base model person show through and look off (especially on Marilyn Monroe). So I guess one bright spot: lots of work now for celebrity lookalikes. It will be easier to reanimate Elvis to talk to you about your cat food choices if you use an Elvis lookalike as the base model.

haha sob
posted by emjaybee at 11:45 AM on May 24, 2019 [2 favorites]


OTOH, one could argue that such techniques would be discovered anyway, only by the GRU or the Russian Mafiya or some shadowy “public opinion management” contractor whose clients are mostly tyrants and aspiring tyrants, and the first time the public saw the result, they’d have no concept of what they were seeing.

Revealing it this way at least goes some of the way towards inoculating the public against its effects; imagine the damage it could do in a society that still believed the adage that the camera never lies.
posted by acb at 11:46 AM on May 24, 2019 [5 favorites]


Yeah tech dystopia etc etc but on the other hand we can have Harry Potter portraits! So that'd be neat.
posted by Wretch729 at 11:54 AM on May 24, 2019 [1 favorite]


Is there any legitimate use for this?

Adding talking-wolf faces to those Boston Dynamics robots as if they weren't already disturbing enough?
posted by Greg_Ace at 12:01 PM on May 24, 2019 [2 favorites]


The guillotine is very trustworthy.

Any tool is only as trustworthy as its operator.
posted by Greg_Ace at 12:11 PM on May 24, 2019 [4 favorites]


Remember that famous video footage of Eric Garner pulling a gun from his waistband a split second before police heroically wrestled him to the ground and incapacitated him?

You will.
posted by Atom Eyes at 12:25 PM on May 24, 2019


As a reminder, at least half a dozen Western democratic elections in the past 5 years have been swayed using nothing more than the absolute shittiest web design, bog-standard "bots" that aren't much more complicated than a Markov generator, money and persistence, and a knowledge of how to effectively weaponize white supremacy. Roughly 33% of the American population has a worldview that is nearly completely disconnected from reality outside of the right-wing hate vortex, and within that 33% there's a significant number who believe an even more detached conspiracy theory about messages hidden in primetime reality TV broadcasts. I think that widespread dissemination of machine learning tools is bad - especially in the case of consumer-level tech like DeepFakes - but I also think that the ability of a populace to be misled rests on deeper cultural and structural problems than the technology used to do it.

That being said, I think the real onus now lies on social media companies, which should be answerable to democratic processes but are instead opaque oligarchies. As soon as YouTube knows about technology like this, it should be a full-throttle arms race to employ tools that recognize and destroy it, or at the very least to have social mechanisms by which harmful applications of the tech can reliably be removed - sort of like how the World Health Organization is constantly on the lookout for new diseases and has a research wing to respond to them. Unfortunately, our current networked society lies in the hands of a few folks who actually have disincentives against moderating their platforms.
posted by codacorolla at 12:38 PM on May 24, 2019 [16 favorites]


Soon, you will have VR ocular implants so your croutons can have faces, that smile at you when you pet them. And because everything you see is streamed wirelessly to your new eyes and shared to the cloud, your close friends Elvis and Marilyn can pet them with you.
posted by otherchaz at 1:18 PM on May 24, 2019


A tough problem is that politicians and celebrities, the likeliest targets of this technology, have lots of video footage of themselves in public settings, which can supply the landmarks used to drive the model. The researchers observe that, with those training inputs and as few as 32 photographs, they can make decently convincing fakes.

The nightmare Eric Garner scenario is interesting. A determined state actor could perhaps use surveillance camera footage to build landmarks, but there is usually less video footage of private citizens. The Mona Lisa demo shows that using different landmarks gives results that are discernible from one another as "different", or having different "personalities". Reuse of landmark sets could create artifacts that make it obvious that such footage is likely faked.
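
A toy sketch of that pipeline (hypothetical names, nothing from the actual paper): an embedder condenses the handful of source photos into one identity vector, and a generator conditions on that vector plus the driving landmarks.

```python
# Toy sketch of the few-shot idea (stand-in functions, not the paper's code).
# An "embedder" turns each of the k source photos into a vector and averages
# them; a "generator" then maps (embedding, driving landmarks) to a frame.

from statistics import mean

def embed(photo):
    # Stand-in for the embedder network: reduce a photo (here just a list
    # of pixel values) to a single summary number.
    return mean(photo)

def few_shot_embedding(photos):
    # Average the per-photo embeddings, as in the k-shot setup (k = 1..32).
    return mean(embed(p) for p in photos)

def generate(embedding, landmarks):
    # Stand-in for the generator: condition the output frame on the
    # identity embedding and the driving landmarks.
    return [embedding + x for x in landmarks]

photos = [[1, 2, 3], [3, 4, 5]]   # k = 2 source photos
landmarks = [0, 10, 20]           # one frame of driving landmarks
frame = generate(few_shot_embedding(photos), landmarks)
# frame == [3, 13, 23]
```

The real networks are deep CNNs trained adversarially; the stand-ins only show the data flow: more source photos refine the identity embedding, while the landmarks alone carry the motion.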

We've been fortunate that efforts at malicious fake footage have been lazy so far, such as what Trump's people did to video footage of Pelosi, which garnered front-page coverage in the NYT and WaPo. The work of determined members of the intelligence community, with state-level resources, would likely be harder to discern as false.

We live in a fascinating and terrifying world. I appreciate that these papers are openly published. Better to know what is possible and prepare for it.
posted by They sucked his brains out! at 1:22 PM on May 24, 2019 [2 favorites]


re: pelosi's 'dumb fake'
posted by kliuless at 2:16 PM on May 24, 2019 [1 favorite]


Wow, that Dostoevsky.
posted by doctornemo at 8:20 PM on May 24, 2019


I'm less interested in what will happen when incriminating deepfake videos start to be presented as real. I'm more interested in what will happen when real incriminating videos start to be claimed as fake.
posted by dephlogisticated at 8:46 PM on May 24, 2019 [4 favorites]


> Great. Deepfakes are going to get "better", and by "better" I mean worse.
...
> Yeah, this is definitely a "could" but "should not have" moment.

(etc)

These concerns are repeated every time this technology takes a small step forward. Printing was invented and people believed anything that was printed, until the technology was disseminated widely enough for it to be obvious that anyone could print anything. Photography was invented and people believed photographic evidence, until the technology to fake it was disseminated widely enough for it to be obvious that anyone could make a photograph of anything. Television was invented and people believed anything that was on TV, until the technology was disseminated widely enough for it to be obvious that anyone could make a programme about anything.

Soon this technology will be disseminated widely enough for it to be obvious to everyone that no video can be trusted. Anything that makes that happen more quickly is a good thing as far as I can see - we need to get past trust in video as rapidly as possible.
posted by merlynkline at 4:05 AM on May 25, 2019 [3 favorites]


Counterpoint: A significant number of people still believe whatever spews out of Fox News, with not-insignificant consequences.
posted by Greg_Ace at 3:15 PM on May 25, 2019


Finland is winning the war on fake news. What it's learned may be crucial to Western democracy - "The exercises include examining claims found in YouTube videos and social media posts, comparing media bias in an array of different 'clickbait' articles, probing how misinformation preys on readers' emotions, and even getting students to try their hand at writing fake news stories themselves."
The course is part of an anti-fake news initiative launched by Finland’s government in 2014 – two years before Russia meddled in the US elections – aimed at teaching residents, students, journalists and politicians how to counter false information designed to sow division.

The initiative is just one layer of a multi-pronged, cross-sector approach the country is taking to prepare citizens of all ages for the complex digital landscape of today – and tomorrow. The Nordic country, which shares an 832-mile border with Russia, is acutely aware of what’s at stake if it doesn’t.

Finland has faced down Kremlin-backed propaganda campaigns ever since it declared independence from Russia 101 years ago. But in 2014, after Moscow annexed Crimea and backed rebels in eastern Ukraine, it became obvious that the battlefield had shifted: information warfare was moving online.
posted by kliuless at 11:22 PM on May 30, 2019


Anything that makes that happen more quickly is a good thing as far as I can see - we need to get past trust in video as rapidly as possible.

This is not healthy for us as a society. Trust is very much foundational, and that erosion of trust has been very problematic.
posted by NoxAeternum at 9:35 AM on May 31, 2019




This thread has been archived and is closed to new comments