Hi everyone. I'm happy to share with you an announcement about Lyrebird.
December 10, 2017 1:35 PM

Researchers at the Montreal Institute for Learning Algorithms present ObamaNet, the first architecture that generates both audio and synchronized photo-realistic lip-sync videos from any new text. Contrary to other published lip-sync approaches, theirs is only composed of fully trainable neural modules and does not rely on any traditional computer graphics methods.

They use three main modules to generate these videos, for which only one minute of audio data is needed: a text-to-speech network based on Char2Wav, a time-delayed LSTM to generate mouth-keypoints synced to the audio, and a network based on Pix2Pix to generate the video frames conditioned on the keypoints.
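(For the curious: here is a rough conceptual sketch of that three-stage pipeline in PyTorch. This is not the authors' code; every module size, name, and hyperparameter below is an illustrative assumption, and the pix2pix frame-generation stage is left out. It only shows the data flow: characters to pseudo audio features, then audio features to time-delayed mouth keypoints.)

    # Conceptual sketch only; shapes and names are made up for illustration.
    import torch
    import torch.nn as nn

    class TextToSpeech(nn.Module):
        """Stand-in for the Char2Wav-style text-to-speech module."""
        def __init__(self, vocab_size=64, hidden=256, audio_dim=80):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, hidden)
            self.rnn = nn.GRU(hidden, hidden, batch_first=True)
            self.to_audio = nn.Linear(hidden, audio_dim)  # e.g. spectrogram-like frames

        def forward(self, char_ids):
            h, _ = self.rnn(self.embed(char_ids))
            return self.to_audio(h)

    class AudioToKeypoints(nn.Module):
        """Stand-in for the time-delayed LSTM mapping audio to mouth keypoints."""
        def __init__(self, audio_dim=80, hidden=128, n_keypoints=20, delay=5):
            super().__init__()
            self.delay = delay  # assumed look-ahead, in frames
            self.lstm = nn.LSTM(audio_dim, hidden, batch_first=True)
            self.to_kp = nn.Linear(hidden, n_keypoints * 2)  # (x, y) per keypoint

        def forward(self, audio_feats):
            h, _ = self.lstm(audio_feats)
            return self.to_kp(h)[:, self.delay:, :]  # drop the delayed prefix

    # The third stage (keypoints + cropped face -> full frame) would be a
    # pix2pix-style image-to-image generator conditioned on the keypoints; omitted here.
    chars = torch.randint(0, 64, (1, 50))   # a fake 50-character utterance
    audio = TextToSpeech()(chars)           # (1, 50, 80) pseudo audio features
    keypoints = AudioToKeypoints()(audio)   # (1, 45, 40) mouth keypoint coordinates
    print(audio.shape, keypoints.shape)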
posted by sockermom (74 comments total) 21 users marked this as a favorite
 
What could possibly go wrong?
posted by escape from the potato planet at 1:39 PM on December 10, 2017 [13 favorites]


This could indeed be an amazing step to help people who cannot speak.

I hadn't realized before that -- since I am a public speaker and there are at least a dozen hours of audio recordings of me speaking, we're now past the point where someone could fake me saying something abhorrent. Not just people I think of as actual public figures, celebrities and politicians, but me. What kind of verification stamp on audio and video recordings will people come to trust? What would I trust?
posted by brainwane at 1:51 PM on December 10, 2017 [3 favorites]


So, I get the voice synthesis part, but how does creating altered video help people who cannot speak?

Is this also meant to let them make fuller use of something like FaceTime?
posted by oddman at 1:55 PM on December 10, 2017


So much technology with obviously dystopian uses is justified by its creators with some humanitarian fantasy. This, they say, will "change the world by helping those who lost their voice to a disease." Nano-drones funded by the DoD are for search and rescue. Etc. etc.

These researchers are either naive, malevolent, or they are so wrapped up in wankery around Solving Hard Problems (tm) that they can't see past their navels. Put down your matrices and look around!

It will be a supreme irony when their dream of helping those who are literally voiceless is rendered meaningless by the impossibility of reliable communication.
posted by cichlid ceilidh at 2:07 PM on December 10, 2017 [28 favorites]


I mean, this is amazing technology, and it could certainly be used for constructive applications – but it'll very obviously be put to evil applications pretty much immediately upon release.

The choice of Obama for the demo video only underscores this. This will be a potent tool for propagandists, who are already doing incredible damage to democracy. People believe what they want to believe – and this tech won't have to be refined much (if at all) before ideologues will be happy to take the results at face value.

You can be sure that Russian disinfo agents (among others) are already writing the script for the video where Obama calls on American leftists to rise up and impose sharia law.

Not only that, but the existence of this tech will make it even easier for people to dismiss real video footage. Trump (or someone in his orbit) said something terrifyingly fascist? Or admitted to committing a crime? Well, no he didn't – that video is obviously fake!

I mean, people are already doing a bang-up job of denying reality even without this tech. But this is certainly not going to help.

Sorry for the catastrophizing. Just...I feel like we've passed through some event horizon, and technology is now evolving way faster than we can figure out how to deal with the consequences of technological developments.
posted by escape from the potato planet at 2:09 PM on December 10, 2017 [43 favorites]


Oh, dear.
posted by Johnny Wallflower at 2:12 PM on December 10, 2017 [3 favorites]


I thought the new Black Mirror wasn’t coming out until December 29?
posted by ejs at 2:17 PM on December 10, 2017 [7 favorites]


You can tell that isn't Obama's voice; the endings are off and it is mechanical and lacking in base identity. But run! I am already aware I don't discuss anything that matters on the phone.
posted by Oyéah at 2:17 PM on December 10, 2017


> These researchers are either naive, malevolent, or they are so wrapped up in wankery around Solving Hard Problems (tm) that they can't see past their navels.

I think the most likely scenario is they're looking for a buyout from google or amazon or something, and a cover story is part of the product. I think that ought not be generalized as simply malevolent.
posted by I-Write-Essays at 2:18 PM on December 10, 2017 [3 favorites]


One way to get around potentially malicious uses of this would be to record your speeches and cryptographically sign them, like we do for digital signature verification now.
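A minimal sketch of that idea, using Python's cryptography package (an assumed dependency) and a hypothetical recording file: the speaker signs each recording with a private key and publishes the matching public key, so anyone can check that a clip really came from them and hasn't been altered.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Speaker side: generate a keypair once, then sign every recording released.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()           # published somewhere trusted

    with open("speech_2017-12-10.wav", "rb") as f:  # hypothetical recording
        recording = f.read()
    signature = private_key.sign(recording)          # distributed alongside the file

    # Verifier side: anyone holding the public key can check the file.
    try:
        public_key.verify(signature, recording)
        print("Recording matches the speaker's published signature.")
    except InvalidSignature:
        print("Recording was altered, or never came from this speaker.")

Of course this only covers recordings the speaker chose to sign; it can't flag a fake that never passed through their hands, which is the harder problem being discussed here.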
posted by dilaudid at 2:19 PM on December 10, 2017 [2 favorites]


Malevolent, but with venture capital.
posted by cichlid ceilidh at 2:20 PM on December 10, 2017 [1 favorite]


How does greed make it less malevolent? Am I misunderstanding?
posted by zjacreman at 2:21 PM on December 10, 2017


Speaking as an employee of a silicon valley startup, I'm pretty well convinced now that the Internet (maybe computing generally) was a mistake.
posted by zjacreman at 2:23 PM on December 10, 2017 [15 favorites]


Not less malevolent. There should be room for more than one type of malevolence so we can call people out for exactly what they're doing.
posted by I-Write-Essays at 2:24 PM on December 10, 2017 [1 favorite]


Looking at the paper itself, it's interesting/worth calling out just how much of this is incremental improvement.
Our motivation behind the choice of method to perform video generation is the recent success of pix2pix (Isola et al. (2016)) as a general-purpose solution for image-to-image translation problems. This task falls within our purview, as our objective here is to translate an input face image with cropped mouth area, to an output image with in-painted mouth area, conditioned on the mouth shape representation.
Pix2Pix previously, Original Pix2Pix paper

I mean, I'm definitely all for greater application of ethics when it comes to CS, but these are tools which have been out for anyone to pick up for years. The same research which gets us new Snapchat filters and Pixar movies (seriously, Disney does a *lot* of original research in graphics rendering/modeling) is what gets us this, just differing in application/sample-set.

Or, to reframe back to the original point, the capabilities have been around, but I suspect getting this out there so more people are aware of what's possible is the first step to figuring out what sort of social/regulatory/technological antibodies/countermeasures can be put in place. I'm not sure that this being primarily just within the purview of megacorps & state actors is a safer state to be in.
posted by CrystalDave at 2:28 PM on December 10, 2017 [3 favorites]


I'm very uncomfortable with how this thread is a debate about whether a bunch of Canadian computer scientists are mad with avarice, or just abstractly evil
posted by theodolite at 2:28 PM on December 10, 2017 [5 favorites]


The door was left open for hopelessly naive, too.
posted by zjacreman at 2:43 PM on December 10, 2017 [9 favorites]


Nah, I'm with CrystalDave and theodolite. This tool will be used for grave evil, but Lyrebird didn't really have anything to do with that, just the relentless march of time. Kind of fascinating to see that it's finally here tbh.

(I've had a long time to panic about this and part of me is... not... sure... how much this is going to change, at least right away... because photography is ubiquitously and easily fakeable already, and yet we do pretty well with that, relatively speaking.)
posted by peppercorn at 2:46 PM on December 10, 2017 [2 favorites]


I mean, I'm definitely all for greater application of ethics when it comes to CS, but these are tools which have been out for anyone to pick up for years. And: I'm very uncomfortable with how this thread is a debate about whether a bunch of Canadian computer scientists are mad with avarice, or just abstractly evil.

I would be vastly more comforted by this tech and think more kindly of its creators if they had clearly acknowledged* the potential harm it could cause and exactly how they intend to mitigate it. Especially since their video press release demonstrates a clear example of how damaging it can be, and particularly since the ethical lapses of Facebook, Twitter, and Google have been all over the international press in recent months. The fact that their development builds upon existing tech does not comfort me, nor, imo, does it give them a pass for not addressing upfront its potential--and obvious--harmful applications.

I also notice that someone from Google is on their faculty and Facebook invested $7 million in their group in September.

Does NOT fill me with confidence.

* Perhaps they acknowledge it somewhere but either I missed that or didn't click the right links.
posted by skye.dancer at 2:48 PM on December 10, 2017 [9 favorites]


On the bright side, I can now begin working on the virtual me that will live forever!
posted by grumpybear69 at 2:48 PM on December 10, 2017


These researchers are either naive, malevolent, or they are so wrapped up in wankery around Solving Hard Problems (tm) that they can't see past their navels. Put down your matrices and look around!

You end up with that sort of naivete by telling whole generations that if you want a good job, literature and philosophy are worthless to you, focus on STEM!

We have a whole culture that treats thinking about the long-term consequences of new technology as literally not on the table at all.

I mean, how else do you get an Israeli company selling Ethiopia software that gives them NSA-like capabilities, and then when it's shown they're using that software in an abusive manner against their own people (and people in other countries no less) their fucking canned response is this:

Reached for comment, Cyberbit said they were not responsible for what others do with their software, arguing that "governmental authorities and law enforcement agencies are responsible to ensure that they are legally authorized to use the products in their jurisdictions."

This is one hundred fucking percent what you get when you don't include ethics, philosophy, and literature in the education of people in STEM fields.
posted by deadaluspark at 3:09 PM on December 10, 2017 [25 favorites]


I suppose an alternately useful exercise would be:
If we take this as a point by which Man Should Not Have Meddled, then we probably should've stopped before this point & identified a point of research where the logical conclusion would have clearly led up to this. And given the chain of paper cites, one probably could create exactly such a chain and go about figuring out where that clear-in-hindsight point should've been. That's a bit more involved than I want to chase back myself at the moment, but the steps are there.

Question is: When was that point we should've coordinated at? Adobe's been doing interesting/unsettling similar work from a different tech-base on voice generation, and their influence traces all the way back through Photoshop. I'm not sure exactly when Photoshop was advanced enough to attain the concerning threshold of "photorealistic", but I'm guessing not at its initial 1990 release, and in the time since then we've had both concern that it would lead to the death of truth and techniques for detecting fakes.

(This relatively-recent high-drive push towards compartmentalized reality/"Fake News!" is worrying on this front, but given how crude some photomanipulations I've seen on Snopes are, someone in a darkroom could meet the same bar. And again, state actors have had this sort of thing within their capacity for years and years)

It's a lot like the debates over encryption, I think. Diffie-Hellman came out in 1976 (though GCHQ had figured out the concept by 1970 as far as they've published) & Merkle Trees were out by 1979; after the cat was out of the bag there it took some time before PGP popped up initially for anti-nuclear activism use and proliferation/accessibility really took off, but even then the USG wasn't able to keep it contained and nobody really knew what sort of social impacts this would bring.
Could a 1980s MetaFilter have predicted that Ralph Merkle should've halted with that paper because now, as a direct result, Bitcoin is threatening to accelerate anthropogenic climate change?
Probably not, by my guess. (If so, hats off to you though)

That said, I entirely agree that being in that transitional phase before solutions end up figured out is going to be worrying best-case, but given the informational inertia leading up to this, I'm not sure where the "Here be Dragons" line could've been usefully drawn, and keeping this buried so only the groups most motivated to use it have access to it/know to defend against it prevents the global research community from being able to work on fingerprinting/detection so that transitional gap isn't so wide.

Which, again, none of this is to say "we don't need rigorous education & applications of ethics in Computer Science" (though given this came out of Canada where Professional Engineering is a protected field, they likely went through a lot more training/oaths than US counterparts would have), just that by the time the viability/potential applications of research become clear, it's likely way too late to have bolted the barn doors closed.
posted by CrystalDave at 3:25 PM on December 10, 2017 [5 favorites]


Researchers: "We CAN do this thing, so obviously we SHOULD do this thing."

Reality: 😐
posted by paco758 at 3:26 PM on December 10, 2017 [4 favorites]


> These researchers are either naive, malevolent, or they are so wrapped up in wankery around Solving Hard Problems (tm) that they can't see past their navels.

I think the most likely scenario is they're looking for a buyout from google or amazon or something, and a cover story is part of the product. I think that ought not be generalized as simply malevolent.


Anecdotal, but I know a guy who is working on developing high-resolution night vision, and whenever he talks about it, he talks about its application for search and rescue. Not as a cover story, but because he’s really excited to solve engineering problems and imagine that it will help people. When someone asked him if he’s thought about how his research could be used by the military, his face dropped and he said, after a pause, “I hadn’t thought about that, but yeah, I guess the military would be interested.”

Maybe a year ago one of my classes had a guest speaker, an alumnus who worked for the DoD, who came to talk to us about how DARPA funds major science and engineering projects on campus. He believed that this was a good thing, but that people aren’t aware of how the military can direct academic research without any public knowledge. By choosing which projects get funding, they get to dictate the direction of academic research in STEM fields. The point of the lecture was to point out that there is no such thing as purely academic research in these fields, because even at the theoretical level it’s being manipulated by military interests. So an academic project seems, to its researchers, like just an interesting engineering puzzle with the potential for eventual military use; but in fact, it may have crossed that line ages ago.

I told this story to another friend, and he said “wait, DARPA visited our lab.” He’s a neuroscientist.

I’m sure that applies to research in countries like Canada, as well. In other words, I suspect that this has always been a military product on some level.
posted by shapes that haunt the dusk at 3:27 PM on December 10, 2017 [16 favorites]


Does this software solution explain the things the president has been saying this year?

I mean, I knew something was kind of *off* with the thing’s mouth, but now it all makes a lot more sense — at least it’s a better explanation than the terrifying alternative. Donald Trump in politics, as if.

All experience is somebody’s artifice.
posted by Construction Concern at 3:45 PM on December 10, 2017


Two years ago, I would have naively believed that world shaking propaganda existed only in the fifties. Now, I know better, and the world shakes.

Unfortunately, this will go to horrible uses.

Inevitably, a proof of concept spurs others to race toward the same result, so even if the 'good folk' do it first, the bad folk will do it anyway.
posted by filtergik at 4:39 PM on December 10, 2017


Someone come get me when it's time for the Butlerian Jihad. I'll be cowering in my basement until then.
posted by soren_lorensen at 4:50 PM on December 10, 2017 [6 favorites]


I think the most likely scenario is they're looking for a buyout from google or amazon or something

So all we need to have happen is have their own software produce a 30 second video of one of the Lyrebird C?Os purportedly saying something awful the week before the sale. We'll never hear of them again.
posted by scruss at 5:32 PM on December 10, 2017 [1 favorite]


Somewhere, somehow - a thumbs up from Roger Ebert.
posted by davebush at 5:33 PM on December 10, 2017 [1 favorite]


The first thing I imagine happening is all those text posts of Obama saying unusual things in people's dreams will become an audio reality.
posted by solarion at 5:56 PM on December 10, 2017


Soon, They will say what it takes
To put you away
Your silence will not protect you.
Your privilege of poverty, stillness
Inconsequence will exist only
If resources allow it.
Your self driving car will only roll
On approved thorofares,
You may only be driven to heap praise
Or spend the sum of your days
For a better's pleasure.
Those words you didn't say,
Before they took you away
Speak clearly from the simulacrum
Of your own seeming mouth.
Shoo fly don't bother me
Shoo fly don't bother me
Robo fly watching
Big brother watching out for
Nothing short of,
A man sized heap of agreement
Or internment for reluctant caddies.
Yes we'll be washing your balls, sir!
We see your son repeated this,
We hear less than eagerness from
That girl of yours,
Not so happy at this baptism
Of firings. The things we said
You said, tend to stand out,
In the records generated at
The occasion of your seeming speech.
My silent doubts nevertheless nag,
Shoo fly don't buzz in me.
posted by Oyéah at 6:05 PM on December 10, 2017 [2 favorites]


I don't understand... This sounds awful and not like Obama at all? Why should I be scared?
posted by xyzzy at 6:06 PM on December 10, 2017


I think they used the tape they generated of Obama to show off the technology because he is so well known: stylistically, linguistically, hated or beloved, a very known quantity.
posted by Oyéah at 6:10 PM on December 10, 2017


Why should you be afraid? Because we are not there yet, but we will be. Ten years ago voice recognition was a problem we had not yet cracked. We could not reliably turn speech into text without human intervention. Now everyone with a smartphone has this technology in their pocket.

And yes on the sociology of science thing. Most scientific research is funded by federal grants. Granting agencies set the priorities in science. Your college professors were in bed with the federal government, whether they admitted it or not.
posted by sockermom at 6:18 PM on December 10, 2017 [1 favorite]


I don't understand... This sounds awful and not like Obama at all? Why should I be scared?
Because the future it portends is one where anyone can easily make anyone say anything they like, eventually in a way that can't (at least easily and/or definitively) be distinguished from reality.

Only recently could this be done at all, at enormous cost and with a team of effects artists (e.g. Paul Walker). This will revolutionize the ability to create disinformation, and to do so quickly and easily.
posted by ArmandoAkimbo at 6:20 PM on December 10, 2017


>Your college professors were in bed with the federal government, whether they admitted it or not.

Come again? How could they not admit it? The grants are public knowledge and celebrated right on the damn website. I would guess that around the world, 99% of scholarship runs on money from respective governments.

And yeah, the voices sound hella fake. And I am partial to a little fearmongering myself, but come on, comrades, have we learned nothing from 2017? You don't need sophistication to create disinformation. Crude methods and constant repetition work better and cost less.
posted by tirutiru at 6:35 PM on December 10, 2017 [2 favorites]


Maybe... I guess? When movie people have hundreds of thousands of dollars to spend and still can't pull off anything convincing... I'm just not really worried yet.
posted by xyzzy at 6:54 PM on December 10, 2017


Radiolab did a bit about this kind of technology a few months back. I don't remember the guest's name, but they interviewed an engineer whose work is on the cutting edge. They explicitly asked her about the ethical quandaries involved and her response was essentially, "it's not my job to think about that, I just figure stuff out."

We need to do a better job helping people become complete humans.

("we" here being universities and communities and support networks and what-have-you. Which is to say: all of us)
posted by deadbilly at 7:09 PM on December 10, 2017 [6 favorites]


On second read-through, what deadaluspark said.
posted by deadbilly at 7:13 PM on December 10, 2017


Come again? How could they not admit it? The grants are public knowledge and celebrated right on the damn website.
Because academics like to think of themselves as free thinkers, often as radical ones who don't trust the government. It's easy to not think critically about the mechanisms that make funding possible. People like to think highly of themselves and often don't apply critical thought to the dissonance in their own lives. This is true of researchers and academics. "Sure, I get government funding," they say, "but this is my project and I'm driving the bus, and the government just happened to be funding in my research area this time around." Academics don't want to admit that their research agenda is shaped by the government. The way grants are administered obscures the role that funding plays in shaping agendas as well. (Source: I'm an academic and I see this happen constantly.)
posted by sockermom at 8:05 PM on December 10, 2017 [1 favorite]


If we take this as a point by which Man Should Not Have Meddled, then we probably should've stopped before this point & identified a point of research where the logical conclusion would have clearly led up to this.
Remember when Prometheus stole fire to give to mankind, and in return Zeus was like "They like gifts, huh? Pandora, bring them a bunch of 'gifts'."

I think that was the last point of research where we might have stopped.

But to get back to this specific example:

The alternative universe everyone's imagining right now, where after a few researchers figure out how to fake convincing video, they decide not to make that knowledge public after all, and then it goes away forever? That universe is a simple-minded fantasy, a thing which logically cannot exist and which only holds any place in our minds because we are willing to clutch at even a false hypothetical of escapist relief.

Remember when Alexander Graham Bell beat Elisha Gray to the patent office by a few hours? Most technology isn't that much of a horse race, but it's close. There is no magical secret which is available to a few Canadian scientists but which is somehow out of reach of the FSB, the CIA, or any other organization blessed with more budget than conscience. Our possibilities here were "this capability gets publicly announced before nefarious actors start using it" or "this capability does not get publicly announced before nefarious actors start using it", and as awful as the first option is, it's not nearly as bad as the second.
posted by roystgnr at 8:06 PM on December 10, 2017 [6 favorites]


Absolutely. And I love the conceit that these researchers could stop this catastrophe, if only they had taken an ethics course in college!
posted by tirutiru at 8:21 PM on December 10, 2017 [6 favorites]


Last year's Face2Face was better for video and Adobe's VoCo is better for audio. This demo could be more realistic.
posted by Monochrome at 8:29 PM on December 10, 2017


I don't think anyone's so conceited that we think it could stop this whole process.

It's more that we're sickened when we continuously ask these people about the obvious, glaring problems that immediately jump to our mind, and we're always kind of shocked and sad to find out they haven't considered it, and beyond that, that they act like you're rude for suggesting that they should consider it.

That's the part that could be fixed by proper education. The wheel of technological progress is always turning, but it helps when the people developing it are at least attempting to consider the ramifications of their projects in good faith instead of being like:

"I'm a libertarian, so it's your problem to deal with the fact that the product I just sold to your neighbor means he can spy on you at all times. You have to solve that with him, not me. The fact that he got the technology to do it from me means nothing. Guns don't kill people, people kill people." Etc. etc. and so on.

As the great Eddie Izzard once said:

“They say that 'Guns don't kill people, people kill people.' Well, I think the gun helps. If you just stood there and yelled BANG, I don't think you'd kill too many people.”

I mean, I'm sorry, or did you miss the helpful link where a company selling NSA-tier software to spy on people says they're not responsible for governments abusing the software and not using it within the law? Perhaps there should be a little more regulation of such things when it's being sold willy-nilly to every government on the planet with no purpose other than profit?
posted by deadaluspark at 8:29 PM on December 10, 2017 [8 favorites]


If we take this as a point by which Man Should Not Have Meddled, then we probably should've stopped before this point & identified a point of research where the logical conclusion would have clearly led up to this.

just that by the time the viability/potential applications of research become clear, it's likely way too late to have bolted the barn doors closed.

This situation is not a complicated philosophical puzzle.

You do not need a crystal ball to envision how your system might be hacked/exploited before you push your code to prod. Rather, you need to actually think through the implications of your tech when you design it: to not only think about how nifty/cool it is or how much money you can make on it.

If it is obvious to a bunch of random people on the internet that this system has a clear and dangerous vulnerability, then it should have occurred to the computer science researchers who designed, built, and launched it. Developers regularly guard against SQL injection hacks and the like: they can and should also take the time/effort to think about how best to prevent their systems from being misused.

Now, maybe these researchers attempted to fix the bug; perhaps they have inserted some kind of signature into the video data that makes it obvious, upon some kind of inspection, that the data are synthetic. If they did, then great, that type of development would be one step towards making the tech less malignant.
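To make the shape of that concrete, here is a toy, stdlib-only sketch (not anything Lyrebird is known to ship) of a generator stamping its output with a machine-checkable "synthetic" tag; a real scheme would need public-key signatures or a watermark embedded in the pixels themselves rather than a keyed hash in a sidecar file.

    import hashlib, hmac, json, pathlib

    VENDOR_KEY = b"demo-only-secret"   # assumption: a key held by the tool's vendor

    def stamp_as_synthetic(video_path: str) -> None:
        """Write a sidecar manifest declaring the file machine-generated."""
        data = pathlib.Path(video_path).read_bytes()
        tag = hmac.new(VENDOR_KEY, hashlib.sha256(data).digest(), "sha256").hexdigest()
        manifest = {"file": video_path, "synthetic": True, "tag": tag}
        pathlib.Path(video_path + ".manifest.json").write_text(json.dumps(manifest))

    def verified_synthetic(video_path: str) -> bool:
        """Return True only if an intact vendor manifest accompanies the file."""
        manifest_path = pathlib.Path(video_path + ".manifest.json")
        if not manifest_path.exists():
            return False   # absence proves nothing, which is the scheme's weakness
        manifest = json.loads(manifest_path.read_text())
        data = pathlib.Path(video_path).read_bytes()
        expected = hmac.new(VENDOR_KEY, hashlib.sha256(data).digest(), "sha256").hexdigest()
        return hmac.compare_digest(expected, manifest["tag"])

The weakness is visible even in the toy version: the tag sits beside the pixels rather than in them, so a bad actor simply strips the manifest, which is part of why a signature alone is not enough.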

But it's not the only step. The existence of a real/fake signature won't reduce the harm caused to someone who has their image and their words misused, nor will it reduce the harm caused to public discourse by adding more poor-quality, false, and/or misleading video content into an already saturated system.

This is exactly the kind of failure mode exhibited by Google when they thought that combining Gmail and social media sounded like a great idea. It's why Facebook couldn't seem to understand why people might want to be pseudo-anonymous online, and how Airbnb ran afoul of fair housing laws. Etc. Money + cool ideas got ahead of reflection.

It is not quashing research or engaging in some quixotic endeavor to ask that developers think before they launch. And to make it clear, being responsible here does not imply not doing the research. It means installing safeguards, taking steps to minimize harm.
posted by skye.dancer at 8:34 PM on December 10, 2017 [3 favorites]


Photoshop is used to generate disinformation the world over. What safeguards does it have?

Presently the videos are very obviously synthetic. In fact they are not even close to the cutting edge of studio effects. But when they do become realistic, we will continue to vet them exactly as we do the thousands of fake images that crowd WhatsApp and Facebook right now. Technology helps detect when an image has been tampered with, but for the most part we rely on comparison, consistency, and common sense.

(if we want to distinguish real from fake at all, which is not a given)
posted by tirutiru at 9:00 PM on December 10, 2017


Photoshop is used to generate disinformation the world over. What safeguards does it have?

Just one example, but it won't let you scan currency, obviously for the purpose of preventing counterfeiting.

Just because you're oblivious to the safeguards doesn't mean they don't exist.

Surely, you can work around it. However, putting the basic safeguard in place prevents the majority of people from even wasting their time.
posted by deadaluspark at 9:02 PM on December 10, 2017 [2 favorites]


The alternative universe everyone's imagining right now, where after a few researchers figure out how to fake convincing video, they decide not to make that knowledge public after all, and then it goes away forever? That universe is a simple-minded fantasy, a thing which logically cannot exist and which only holds any place in our minds because we are willing to clutch at even a false hypothetical of escapist relief.

The alternative universe I'm imagining is one where the researchers who best understand what new technology can and can't do feel some responsibility for considering the ways that technology might be used to hurt people and what steps could be taken to minimize the likelihood of it being used harmfully or to mitigate the harm done. And to consider whether their position on the cutting edge doesn't give them some power to help us take those steps.
posted by straight at 9:40 PM on December 10, 2017 [3 favorites]


Radiolab did a bit about this kind of technology a few months back. I don't remember the guest's name, but they interviewed an engineer whose work is on the cutting edge. They explicitly asked her about the ethical quandaries involved and her response was essentially, "it's not my job to think about that, I just figure stuff out."

Oh yeah, I remember that. Nearly made me want to throw something at the wall too.
posted by cendawanita at 1:14 AM on December 11, 2017 [1 favorite]


I don't know what these researchers could have done to be more ethical beyond installing a watermark that won't be in the version of this software a troll farm runs.
posted by zymil at 4:42 AM on December 11, 2017


You can be sure that Russian disinfo agents (among others) are already writing the script for the video where Obama calls on American leftists to rise up and impose sharia law.

I expect that the medium-term response is that people who generate those things, and the platforms that transmit them, receive very, very expensive lawsuits. And while the FSB-front that made one might not care, facebook doesn't like having to pay money.
posted by GCU Sweet and Full of Grace at 5:23 AM on December 11, 2017


I don't know what these researchers could have done to be more ethical beyond installing a watermark

There are many things they could have done/could do, starting with considering the kinds of extra-technical workflows and social systems in which this code might be deployed and acting to address possible misuse in those contexts.

For example: they could limit the kinds of words that can be synthesized, making sure that hate speech or pejorative speech is minimized. They could choose not to sell the software to social media firms or traditional media outlets or prevent the use of their videos within social media frameworks. They could limit the use of video content only to people who have been dead for X years, or people who have provided a signed statement that they have okayed their likeness to be used, or prevent the use of video content featuring children. Etc.
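The first of those would be little more than a gate in front of the synthesizer. A minimal sketch, where the phrase list and the synthesize() call are hypothetical stand-ins, not part of any real product:

    # Hypothetical guard in front of a text-to-speech call.
    BLOCKED_PHRASES = {"kill", "bomb"}   # illustrative only; a real list would be curated

    def synthesize(text: str, voice_id: str) -> bytes:
        """Stand-in for the real synthesis call; returns placeholder bytes."""
        return f"[audio of {voice_id} saying: {text}]".encode()

    def guarded_synthesize(text: str, voice_id: str) -> bytes:
        hits = [p for p in BLOCKED_PHRASES if p in text.lower()]
        if hits:
            raise ValueError(f"refusing to synthesize; blocked phrases: {hits}")
        return synthesize(text, voice_id)

As zymil notes further down, a determined bad actor can always run a build without the gate, so something like this mitigates casual misuse rather than determined misuse.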

They could implement the kinds of basic safety features that other industries do before selling their products. They might even have to buy some liability insurance to address unexpected product misuse. Again, just like other industries have to do.

The misuse possibilities of this tech are not obscure. White supremacists will sample black activists' content and create faked gotcha videos of them saying "Kill whitey" and so forth. Anti-Muslim bigots will create faked videos of Muslim coworkers and acquaintances threatening terrorist acts. Swatting will get taken up a notch as griefers troll social media feeds and create faked videos of vulnerable people like transgender persons and outspoken women online. Disgruntled exes will sample their former partners' Instagram and YouTube posts to create fake vids insinuating child neglect or child exploitation. Bullies and stalkers will create vids suggesting that their victims asked for it.

It is obvious that this is what will come of this tech if it has no safeguards.

And hey, maybe these computer scientists have already thought of all these vulnerabilities and plan to address them before selling the system. That would be great if they have. And in my opinion, it should be expected for them to do so.

40 years ago, enthusiastic software folk might have been excused for not thinking through the implications of their work. But today, after so many high profile failures, it is the lowest possible bar to expect for them to be cognizant of these issues and to act decisively to mitigate them. Note that I say mitigate here; I'm not expecting them to have a perfect solution, just to have made some effort to contain and minimize the harm.
posted by skye.dancer at 5:42 AM on December 11, 2017 [6 favorites]


There's a pretty good Radiolab episode on this with another example - the voice is much closer, the video poorer quality.

Adobe is working on this too [related], and IIRC they haven't even released the current version of the tool to anyone external, for many reasons, including the fact that it is incredibly accurate and they have no method of analyzing the waveforms to determine authenticity.

This Canadian lab isn't the only group working on this, this tech is way better than what we are seeing publicly, and we should all be terrified.
posted by lazaruslong at 5:51 AM on December 11, 2017 [1 favorite]


That Radiolab episode made me furious.

We are fast moving into a world where everything can be faked, to a high degree of fidelity, by anyone. Human brains are not built to cope with this level of uncertainty. Doubting everything, even what your eyes see and ears hear, is exhausting. Most people reject that level of skepticism in favor of just assimilating information that confirms their own biases and I can't really blame them if the only other option is "assume everything is fake and that nothing is real." That's a terrifying way to live.
posted by soren_lorensen at 6:30 AM on December 11, 2017 [2 favorites]


>Just because you're oblivious to the safeguards doesn't mean they don't exist.

Your own link shows that even that safeguard fails more than half the time! And without any attempt at circumvention.

We have reached the point where you can often search for a fake photo that meets your needs - someone's already done the photoshopping. For example, I looked for 'barack obama black panther' and here it is. Ready for dissemination.

It's easier than ever to check the veracity of quotes and we're still awash in obvious forgeries. I bet every single language supported by whatsapp has its own sub-culture of fake Hitler quotes.

>Most people reject that level of skepticism in favor of just assimilating information that confirms their own biases

Even worse: most people don't care when they do find out that something was fake. They just shrug and move on.
posted by tirutiru at 6:55 AM on December 11, 2017 [4 favorites]


Even worse: most people don't care when they do find out that something was fake. They just shrug and move on.

No one wants to think of themselves as having been a dupe. All of human nature is working against the reality we're currently living in. Too much information, too much complex information that doesn't fit into the patterns our brains are wired to try and recognize, too many choices, too much uncertainty, too many people working hard to trick other people, and every time someone gets fooled, they have already posted their initial reaction on social media for everyone to see. How many people do you know are capable of making a profound and public error and then gracefully walking that back when shown contrary evidence? No one can lie and say they always knew it was a fake anymore because they're all over Facebook telling everyone they know that OBAMA IS A BLACK PANTHER I SAW THE PICTURE!!!! one nanosecond after it crossed their eyeballs.

tl;dr everything is terrible, our technological reach has exceeded our neuropsychological grasp and I don't know where we go from here.
posted by soren_lorensen at 7:12 AM on December 11, 2017 [3 favorites]


...I came in here to provide software to impersonate other people and preach ethics, and looks like I'm all out of ethics.
posted by Nanukthedog at 7:41 AM on December 11, 2017 [7 favorites]


Even worse: most people don't care when they do find out that something was fake. They just shrug and move on.

I don't see why any of this lets developers off the hook for working to improve their systems, services, and products.

The fact that Photoshop does a poor job of thwarting counterfeiting means that Adobe should try harder, not throw up their hands.

Security vulnerabilities often creep into software with each new release, and we expect developers to fix them, not shrug and say “Oh, well, not my job.” Several times per year MetaFilter has FPPs that publicize some security hole in a popular mobile phone, computer, or embedded system. And in those threads people often lament how obvious the holes are, or how complicated they are to thwart, etc. But people in those threads seem fairly united in expecting developers to patch the holes.

What I want (and what I think other folks in this thread are also asking for) is for researchers and systems developers to treat considering how their code behaves in the real world, and how people actually use it or will use it, as part of their damned jobs. The exact same way that fixing bugs and patching security holes is their job.

None of this requires that developers take advanced ethics classes or pass their code through IRBs before launch. What it does do is require them to be aware of more than purely technical issues of network latency, compute speed, memory use, etc. and consider the social and cultural contexts in which their products will be deployed. And the consequences of bad/malignant system design are not mysteries known only to a few in arcane academic specialties: the popular press is full of decades worth of case studies.
posted by skye.dancer at 7:46 AM on December 11, 2017 [3 favorites]


This technology will make it impossible to tell reality from fiction, just like computer graphics, or photoshop, or fax machines, or cameras, or painting, or books or oral history or...
posted by runcibleshaw at 8:32 AM on December 11, 2017 [3 favorites]


I suppose we will eventually come to regard videos the way we regard a person telling a story -- might be true, might be false, gotta consider the source and maybe ask for corroborating evidence.

But it's gonna be a very rough ride between now and when everybody defaults to that level of skepticism. Maybe that's the best harm mitigation we can hope for? If so, the researchers demonstrating their technology with Obama, making it clear how harmful this could be, getting us ready to stop assuming video is real, is one of the best things they could do.

But I really do expect them to think about it and take some responsibility, and if that demo was a purposeful choice, they should explicitly say so.
posted by straight at 8:43 AM on December 11, 2017 [2 favorites]


This technology will make it impossible to tell reality from fiction, just like computer graphics, or photoshop, or fax machines, or cameras, or painting, or books or oral history or...

Huh?
posted by lazaruslong at 10:18 AM on December 11, 2017



Huh?

Translation: New technology and media in the past never blurred reality and fiction, so it will never happen ever in the future.

Which, obvs, I think is wrong, on both assertions.
posted by soren_lorensen at 10:25 AM on December 11, 2017 [3 favorites]


It really doesn't serve anyone to be totally unconcerned about new developments in technology. How many times have we heard people say "well, we survived the Cold War and MAD, so I think we can survive this" about global climate change? It's entirely possible for a new situation to be totally unprecedented, and to present new challenges we haven't had to deal with before.
posted by shapes that haunt the dusk at 10:33 AM on December 11, 2017 [2 favorites]


@tirutiru

So because code can endlessly be manipulated (hacked), we should just give up on worrying about whether people can hack it and stop using things like encryption altogether (which exists as a safeguard to prevent data theft)? Because it seems that's what you're arguing.

Because the fact that Photoshop doesn't stop everyone isn't the point; the point is that it stops massive numbers of people from doing it all at once, producing a massive, confusing issue where suddenly everyone is using counterfeit bills, because it's so simple every teenager could do it without thinking. (And teenagers tend not to have a lot of deep thoughts about risk or safeguards, so having thousands of them counterfeiting all at once is actually a totally plausible scenario if one kid in a high school figures out you can scan money anytime in Photoshop.)

No, any code can be hacked, manipulated, whatever. Every encryption can be broken. But we don't stop using them just because they're vulnerable. We don't do every banking transaction in cleartext for obvious fucking reasons.

But the argument of "it fails half the time, so why do it" is basically screaming "if the database can be hacked anyway, why bother securing it?" (Which is, I guess, the attitude Equifax must have had.)

You don't do it because it's some permanent solution that can never be worked around. You do it because there's no possible way for society to function properly if everyone is counterfeiting money with Photoshop, nobody is using encryption for anything, and nobody knows if money is real or if their bank statements are even real, because they are all in cleartext and endlessly manipulated. A world where no nation can conduct warfare or diplomacy, because they all have complete access to each other's communications. (Once again returning to Equifax, I have talked to exactly zero people who still have confidence that their credit score actually means anything anymore and isn't potentially being manipulated by people who have stolen our personal data. That's the kind of fear and uncertainty in institutions that a lack of safeguards creates.)

Because sure, they currently do get manipulated/hacked, and sure, even if they chose not to sell their product to someone, that doesn't mean it can't be pirated. But by making it a difficult process, fewer will do it.

Unless you're really going to try to argue that encryption is somehow materially different from this?
posted by deadaluspark at 10:57 AM on December 11, 2017 [2 favorites]


I didn't listen to more than a few seconds of the video because I can't find my headphones, but I have a couple of comments.

1) I don't think it's fair to say that the researchers haven't considered the implications of their work at all. The lyrebird website has a section titled "Ethics" with a (very cursory) explanation--that they want to release the software before it's ready for prime time so that people have a chance to develop safeguards. You can still argue about whether that is sincere or a good idea, but to say that no one on the team has thought about it just isn't true.

2) Most people I know who did a STEM degree in Canada were required to take humanities and social science courses.
posted by quaking fajita at 1:40 PM on December 11, 2017 [3 favorites]


My simple point was that Photoshop, working exactly as intended, serves to generate a torrent of false information. In the face of all that, I really don't see how a tiny protection against loading images of currency is noteworthy, even if it did work (which it doesn't).

Coming back to this prototype, which is pretty awful at present, I just don't understand some of the reactions. Everything you are afraid of is already here. No badly synced video is going to unleash some fresh malevolence.

People who use the web/social media primarily in English have seriously no idea of the alternate universes out there. A Pakistani journalist once tweeted that there is a separate 'Malala expozed as CIA agent' to fill every niche on WhatsApp.
posted by tirutiru at 1:41 PM on December 11, 2017


I think the real issue here is we've been talking past each other, obviously non-purposefully. I think I understand where you're coming from a bit more now, but I also think you're making assumptions about some of my positions here, although some of those assumptions you are correct in.

My reaction isn't that this is some new terrifying thing that we should be afraid of and that is going to destroy mankind. My reaction is that I wish people developing tech took more time to have thoughtful ethical and philosophical positions about their technology. And as quaking fajita pointed out, I was crass to assume so, because they have.

I suppose a lot of that is personal bias from years of living and working with programmers, and having always dabbled in it a tiny bit myself. Because yes, you are correct, I primarily use the internet in English, so I am in quite a bubble. I suppose that fear is driven by the fact that I live in the United States, which approaches its systems of public education in a wildly different manner than a lot of the rest of the world.

For instance, I'm sure more humanities and literature and philosophy and ethics are taught to people who pursue tech fields in other parts of the world, whose societies have better values and sense.

My bias is I grew up around a generation of men who have absolutely decided it is not their responsibility to consider anything about the bigger picture. If we wanted to create compliant sheep who are glad to be just another cog in the wheel of progress without considering their position, well, in America we got it. We got it through a labor market that is so competitive, through driving wages to the bottom and requiring degrees to get a foot in the door, that kids are practically being trained from the get-go to be compliant, good employees, but not necessarily thoughtful citizens. We got it from a toxic bro culture that thinks they're more thoughtful and sophisticated than they are simply because they develop technology, despite the fact that they won't address their own industry's massive sexism. Which is obviously a philosophical and ethical issue in itself. We got it from an education system that has been captured by corporations, so that what is considered "worthwhile" to become educated in is dictated by what makes a profit in business, instead of the purpose of an education being to create a fully well-rounded individual who can participate in all facets of civic life. It is rough to have been bombarded basically my entire life with the idea that the things I chose to pursue study in were "worthless" because they didn't increase my likelihood of getting a high-paying job. Which is fucking stupid. I live in a stupid fucking culture.

So, do forgive me. Perhaps the rest of the world is not like this so much. I spend my time fearing this because I know so many code crunchers who are busy prepping their bunkers for the apocalypse and talking about states' rights that I can't approach this subject without unfortunately thinking about the uniquely American experience.

The American experience is scary.

Anyway, I don't think the world of misinformation is here just because of this video. I know full well it's been here a long time, but I also know full well that people are working on such safeguards, and I'm glad for the ones that do create them and think about them.
posted by deadaluspark at 2:34 PM on December 11, 2017 [2 favorites]


Lately I've been watching a lot of Penn & Teller's Fool Us. One of the funniest things about the show is that the magic that most surprises them is when a trick that could easily be done with a computer is done with a no-tech photograph. Point being, we have come full circle: we cannot believe the technical solution and must resort to Luddite methodologies to establish trust.
posted by Nanukthedog at 2:57 PM on December 11, 2017 [1 favorite]


Here we have an example of the American experience:

Palihapitiya’s criticisms were aimed not only at Facebook, but the wider online ecosystem. “The short-term, dopamine-driven feedback loops we’ve created are destroying how society works,” he said, referring to online interactions driven by “hearts, likes, thumbs-up.” “No civil discourse, no cooperation; misinformation, mistruth. And it’s not an American problem — this is not about Russian ads. This is a global problem.”

So this is a man who seems to sincerely regret not having considered these ramifications sooner. He isn't the first to talk about this in regard to Facebook and social networks in general, and they are always referring to stuff like likes and upvotes and favorites, because we do get verifiable dopamine ticks when we get those. (He certainly won't be the last, either. I'm sure more and more Americans who slowly figure out that the ethics of a lot of the technology they've worked on isn't exactly sound will come forward. I think Edward Snowden might have just been the first in an eventual flood of people finally taking more consideration of what they're doing after seeing the real-world results and effects.)

Exploiting human psychology wasn't maliciousness in the design. It was just a lot of young people trying to find ways to democratically make the "cream of the crop" rise to the top, without perhaps realizing or understanding the innate human psychology they were exploiting. The ones who grow up to regret it probably ended up learning about ethics and philosophy and the like, but they probably wish they had learned about these things sooner and had the capacity to approach them in tandem with their capacity to create technology, instead of having one skill outpace the other, leading to this sort of problem. There are also those who have no scruples, are just as ready to exploit human psychology to make a quick buck, and understand what they are doing very well. I am personally of the mind that some of those people and their actions can be mitigated through proper education that includes humanities/philosophy/ethics.

Once again, this is the American experience, where we teach them to be proficient coders at a young age but do not instill in them the critical thinking skills to approach that skill in a socially healthy way.
posted by deadaluspark at 3:28 PM on December 11, 2017


Also this.

Which is obviously another example of how the negative is already coming to light, despite people making similar technology having the ethics and philosophy to approach it in a sensible manner.

While Lyrebird might have considered it and have their best intentions in mind, it's very obvious others do not.

I still maintain better education (at least here in the states) could help combat this.
posted by deadaluspark at 3:45 PM on December 11, 2017


For example: they could limit the kinds of words that can be synthesized, making sure that hate speech or pejorative speech is minimized. They could choose not to sell the software to social media firms or traditional media outlets or prevent the use of their videos within social media frameworks. They could limit the use of video content only to people who have been dead for X years, or people who have provided a signed statement that they have okayed their likeness to be used, or prevent the use of video content featuring children. Etc.

I'm not sure what benefit any of this has over a watermark, as both options ensure that this specific software implementation won't be the one used by bad actors.

If what they were doing was difficult and novel and limited to people with access to large computing resources then controls like this would make sense because you could meaningfully slow the access troll farms or scammers have to these techniques.
posted by zymil at 4:37 PM on December 11, 2017 [2 favorites]


@ deadaluspark
Thank you for expanding on your comments. A quick response to a couple of them:

more humanities and literature and philosophy and ethics are taught to people who pursue tech fields in other parts of the world,

Exactly the opposite is true, and I am surprised that this is expressed so frequently. The American 4 year liberal arts degree is very much an anomaly (had to look up wiki for other examples). In India, China and most of Europe it's a 3 year Bachelor of Science with a couple of token humanities/language courses thrown in, if any. And believe me many of those graduates would consider an additional year of courses in philosophy, literature, history etc to be a complete waste of time. I don't mean that they necessarily think the disciplines are a waste but that passing courses in them is not worth the effort. On a related note, a medical education starts after the school-leaving examination and does not require an undergraduate degree.

As you can tell I cannot identify with this 'woe unto us' spirit of self-deprecation that liberal Americans have about their higher education. As a fortunate beneficiary, I think it does pretty well on the whole.

Re: this program: as zymil said, these are now readily available tools running on regular hardware. Much of the software is even open-source. That means many of the remedies suggested earlier in the thread are laughably irrelevant.

In the end the spread of disinformation is a global tragedy. The solutions will rely on old-fashioned networks of trust and verification. The enemy as always, is apathy.
posted by tirutiru at 6:13 PM on December 11, 2017 [3 favorites]


You really probably don't even need to take a course to understand that making fictitious representations in order to deceive people is wrong. Little children understand that idea.
posted by thelonius at 6:18 PM on December 11, 2017 [1 favorite]


Absolutely. And kids figure out that it can be fun to fib when you get away with it...
posted by tirutiru at 6:23 PM on December 11, 2017




This thread has been archived and is closed to new comments