A social experiment without consent, oversight or regulation
June 12, 2019 10:17 AM   Subscribe

 
Don't? Be Evil!
posted by cmfletcher at 10:21 AM on June 12, 2019 [4 favorites]


This is a vitally important topic (Russian trolling) of vast immediate consequence. What would have been a better way to research and investigate?
posted by PhineasGage at 10:25 AM on June 12, 2019 [8 favorites]


This is not a social experiment. It's security research. No different than "this guy says he can crack any server, I'll pay him to crack my own server and watch what he does".
posted by allegedly at 10:36 AM on June 12, 2019 [28 favorites]


Reading the article, Jigsaw paid $250 for an attack on a target they created - an anti-Stalin website. They're not setting up campaigns against unknowing, real activists, which is what I'd expected from the FPP.

The attackers weren't consenting, but I don't see how they could be.

And yes, it feels to me like security research.
posted by bagel at 10:38 AM on June 12, 2019 [27 favorites]


As a matter of national security, I would hope it would be the myriad security organisations whose job it is, or should be. That the temporary beneficiary of Russian trolling is obstructing and outright denying any kind of investigation, other than investigating those doing the investigating, is troublesome. But to allow private companies to go ahead and do human experiments without restraint is far more troublesome.
posted by adept256 at 10:38 AM on June 12, 2019 [3 favorites]


The problem as I understand it is that it wasn't hermetically sealed research. There was leakage as the accounts attacked other accounts outside of the experiment.
posted by rhizome at 10:38 AM on June 12, 2019 [4 favorites]


What was learned from this research? We didn't discover that these networks exist, or the scale of the damage they can cause. We just found out the going rate.
posted by cmfletcher at 10:41 AM on June 12, 2019 [6 favorites]


This is a vitally important topic (Russian trolling) of vast immediate consequence. What would have been a better way to research and investigate?
Given that disinformation is a known tactic used by Google/Alphabet in their war against regulation, I'd prefer that such research be conducted by almost literally anybody else.

There are also ethical issues with how this was done.
posted by Fish Sauce at 10:41 AM on June 12, 2019 [4 favorites]


One of my favorite parts:

Strangely, neither Jigsaw nor the security firm hired for the experiment said they were able to provide WIRED with more than a couple of samples of the campaign's posts, due to a lack of records of the experiment from a year ago.

So not actual security "research", but market shopping to determine the going rate then, no?

$250, same as in town...

At least now Alphabet knows what they should be charging when they release their upcoming (short-lived of course) service offering, "iMeddle".
posted by jkaczor at 10:56 AM on June 12, 2019 [5 favorites]


They're not setting up campaigns against unknowing, real activists, which is what I'd expected from the FPP.

Sure, but they end up paying for pro-Stalin astroturfing, which isn't great.
posted by paper chromatographologist at 11:13 AM on June 12, 2019 [1 favorite]


At least now Alphabet knows what they should be charging when they release their upcoming (short-lived of course) service offering, "iMeddle".

Oh come on you can't be serious...

It'd be gMeddle.
posted by avalonian at 11:38 AM on June 12, 2019 [9 favorites]


cmfletcher: We just found out the going rate.

... for an approximate quantity and quality of astroturf disinfo --
Two weeks later, SEOTweet reported back to Jigsaw that it had posted 730 Russian-language tweets attacking the anti-Stalin site from 25 different Twitter accounts, as well as 100 posts to forums and blog comment sections of seemingly random sites, from regional news sites to automotive and arts-and-crafts forums. Jigsaw says a significant number of the tweets and comments appeared to be original posts written by humans, rather than simple copy-paste bots. "These aren't large numbers, and that’s intentional," says Jigsaw's Gully. "We weren’t trying to create a worldwide disinformation campaign about this. We just wanted to see if threat actors could provide a proof of concept."
Earlier, the article also noted that these broad disinfo efforts are not sold explicitly, but when black-hat SEO work is offered, you can probably expect that political disinfo is available "behind the counter."


jkaczor: One of my favorite parts:

Strangely, neither Jigsaw nor the security firm hired for the experiment said they were able to provide WIRED with more than a couple of samples of the campaign's posts, due to a lack of records of the experiment from a year ago.


You didn't include the next line: "The 25 Twitter accounts used in the campaign have since all been suspended by Twitter." But the article doesn't state whether that's because Jigsaw/Alphabet informed Twitter, or if they were suspended for generally being flagged as scammy accounts.

As for the issues -- in addition to this being something that is probably best done by anyone other than Alphabet/Google (or Facebook, or Twitter), their choice of subject (Stalin) wasn't without its own issues:
Even as Jigsaw exposes the potential for cheap, easily accessible trolling campaigns, its experiment has also garnered criticism of Jigsaw itself. The company, after all, didn't just pay a shady service for a series of posts that further polluted political discourse online. It did so with messages in support of one of the worst genocidal dictators of the 20th century, not to mention the unsolicited posts in support of Vladimir Putin.

"Buying and engaging in a disinformation operation in Russia, even if it’s very small, that in the first place is an extremely controversial and risky thing to do," says Johns Hopkins University political scientist Thomas Rid, the author of a forthcoming book on disinformation titled Active Measures.

Even worse may be the potential for how Russians and the Russia media could perceive—or spin—the experiment, Rid says. The subject is especially fraught given Jigsaw's ties to Alphabet and Google. "The biggest risk is that this experiment could be spun as 'Google meddles in Russian culture and politics.' It fits anti-American clichés perfectly," Rid says. "Didn't they see they were tapping right into that narrative?"
posted by filthy light thief at 12:01 PM on June 12, 2019 [6 favorites]


A moderation direction from the political tragedies post inspired this anemic post:

[If folks want to continue on the Jigsaw/Alphabet/Russia thing it should probably move into its own thread.]

If the title is inaccurate, make your case. I'm sick of tech company 'experiments'. As someone has pointed out, this isn't exactly scientifically rigorous or ethically conscious. We've already seen the devastating effects of misinformation campaigns, and while this one seems rather mediocre, it's alarming that this was done with impunity and a gross disregard for ethical responsibilities.

Facebook did an experiment testing the hypothesis that promoting negative content towards some users would alter their mood. How dare they. Other industries are subject to strict regulations on human experimentation, for reasons that are plain to all. In the case of the tech companies there are no such restrictions; they seem to be a law unto themselves.

I think that's unacceptable and something that requires action from our political leaders.
posted by adept256 at 12:03 PM on June 12, 2019 [10 favorites]


I mean, they named the company after the antagonist of the Saw movies...
posted by dirigibleman at 12:05 PM on June 12, 2019 [4 favorites]


There was leakage as the accounts attacked other accounts outside of the experiment.

...which also shouldn't be surprising. If you're paying Russian trolls, you shouldn't be surprised that they're also being paid to attack elsewhere and will use some of the same accounts to do so. I mean, it's sorta their thing?
posted by aramaic at 12:18 PM on June 12, 2019 [1 favorite]


you didn't include the next line

Well - I agree, that also seems to indicate a lack of rigor in this research. (Juggling how much to quote is always fun.) One would think there would be records, monitoring, follow-up, and some effort put into communicating with Twitter - it's not like the two internet giants don't communicate with each other; they probably have far better formal and back channels between them than any of us "users/consumers" have.

...I mean, they named the company...

Heh... also, isn't "Alphabet" essentially "Umbrella Corporation"? At one point they owned Boston Dynamics, so... cmfletcher nails it...
posted by jkaczor at 12:31 PM on June 12, 2019


I mean, it's sorta their thing?

And once you are funding them, it's now your thing. You don't get to avert your eyes just because it's outside the original scope.
posted by NoxAeternum at 12:31 PM on June 12, 2019 [1 favorite]


There is certainly something about this bit of research that worries me, but I think I'll disagree with adept256 a little, at least when it comes to the details. The Facebook study (in Proceedings of the National Academy of Sciences, back in 2014; here's the paper) is, to my mind, a much clearer example of ignoring human subjects regulation. Whether this work violates it or not is much harder to say, because it's the kind of research that's way beyond what the regulation encompasses.

The relevant regulation in the US is 45 CFR 46, often referred to as the Common Rule. Very broadly, the Common Rule requires, for any research with human participants, that the participants provide informed consent, that there is commensurate benefit, and that the research has been assessed as to the likely harm that may be inflicted on the participant. However, the Common Rule applies to research supported by Health and Human Services subordinate agencies (e.g., the National Institutes of Health), and is usually expanded, at least in academic settings, to encompass all human participants research at the institution that would be covered by the Rule if the researchers were funded by an agency that mandates it. So, for example, even though my research is funded by an external corporate sponsor at my university, I still have to comply with the Common Rule for my research.

The Facebook study fell afoul, in my opinion, of all of the basic tests for human subjects protection (a point I've talked about on Metafilter before). Even if all I'm going to have you do is look at cat pictures, I'm not allowed to just have a clickthrough consent the way they did. There wasn't, as far as I can tell, an effort to determine the potential harms or the potential benefits of the work, and none of that is OK. That said, it's (to my mind) an obvious failure of the Institutional Review Board and the existing regulation - it was an experiment involving human participants, and as a result, it should have been held to a reasonable standard. It wasn't, and that's shameful.

This piece of work by Jigsaw is way, way outside the bounds imagined by 45 CFR 46, even in its updated form (it was updated a couple of years ago, and the new version went into effect at the beginning of 2019). Now, should we have regulation for this kind of work, probably growing out of the Common Rule (and, more broadly, out of the Helsinki Declaration on medical research)? Yes. Absolutely.

What does that regulation look like? That's a much harder problem, because the idea that you can, for example, get meaningful consent from every human involved here is simply unworkable. At the same time, it's research that needs doing (albeit more carefully than this), and there should be an effort to do it well. More broadly, I'd like to see a much stronger culture of participant protections in the space where computer science and engineering intersect with people and how they behave. The human side of these problems is essential, deeply dangerous to get wrong, and yet we need to study it in a way that's safe, or at least monitored. We've got some of the groundwork already, but it's a case of both regulation and culture, and it's a hard problem.
posted by Making You Bored For Science at 12:35 PM on June 12, 2019 [27 favorites]


Thanks for the nuanced comments MYBFS.
posted by PhineasGage at 1:03 PM on June 12, 2019


On the one hand, I see why someone thought this was an interesting idea. In addition to determining the going rate, this kind of experiment can potentially teach you other things about the behavior of firms that offer these kinds of services. From the article:

> SEOTweet reported back to Jigsaw that it had posted 730 Russian-language tweets attacking the anti-Stalin site from 25 different Twitter accounts, as well as 100 posts to forums and blog comment sections of seemingly random sites, from regional news sites to automotive and arts-and-crafts forums.

> Jigsaw says a significant number of the tweets and comments appeared to be original posts written by humans, rather than simple copy-paste bots.

> Without any guidance from Jigsaw, SEOTweet assumed that the fight over the anti-Stalin website was actually about contemporary Russian politics, and the country's upcoming presidential elections.

There are actual useful lessons here. It's interesting that the firm they contracted with used mostly human-written disinformation rather than automating, that they took the initiative to link the anti-Stalin site to the Presidential elections, and that they took it in a pro-Putin direction. If this was a pattern of behavior for other Russian firms offering similar services, it could be a strong signal that firms in this space have a common agenda themselves, and that we can't just regard them as mercenaries. I can also see the pattern of tweets and posts being helpful in improving detection tools.

Like, I do think that someone should be doing this research. I'm not sure of the best way to structure it, but if this were a university research team working closely with the State Department (and under a sane presidential administration) I'd hardly bat an eye. Instead, a private company with an extremely questionable track record is doing human subjects research with no fucking oversight!

To be honest, I feel really disappointed in the Jigsaw team who I have generally very much approved of. They've built a lot of excellent and useful tools like Project Shield and the self-hosted Outline VPN, which have provided real improvements in the security of journalists and human rights groups. This kind of horrible behavior will rightly!!! taint their reputation, and we'll be fucking worse off.
posted by a device for making your enemy change his mind at 1:21 PM on June 12, 2019 [8 favorites]


I genuinely don't understand the degree of outrage. Every single day Google's search team is running countless "experiments" and measuring user responses. This particular project engaged outsider involvement, of necessity, and was managed imperfectly. But this wasn't the Milgram experiment, and I have yet to see anyone here state how research into this very challenging real life situation with Russian trolls could have been done better and still have actually secured information. "There should have been oversight" is fine; but now, if we assume oversight, how should the actual experimental design have been different?
posted by PhineasGage at 1:37 PM on June 12, 2019 [6 favorites]


It's interesting that Milgram's experiment is brought up, where test subjects did not know or consent to what was really being measured.

Milgram's experiment, the Stanford Prison Experiment, and medical testing on Nazi concentration camp prisoners all sit in this infamous category.

These experiments and others are why we today have schools of bioethics and IRBs, which discuss, examine, and place restrictions on and guidelines around experiments and experimental conduct, to avoid deception and exploitation.

Unfortunately, private companies like Alphabet/Google and Facebook do not have to answer to authorities before conducting their experiments on people.

I'd say just be thankful that Alphabet/Google hasn't gone into medicine, except that the main shareholders are heavily invested in genomics and aging research.

If only we had a government that could hold private companies to account for ethics violations, let alone violations of the law.

Maybe we need to regulate social media like a controlled drug, given that social media affects people in ways that are proving detrimental to society at large.

This or any similarly rational readjustment in perspective might be what is needed to bring about the regulatory regime required to put an end to experimenting on people.
posted by They sucked his brains out! at 2:45 PM on June 12, 2019


If this is human subjects research, are the subjects the Russian troll farm operators? If Jigsaw was performing competitive market research on troll farms to determine which one gave the best value for their $250, is it still human subjects research?
posted by allegedly at 2:47 PM on June 12, 2019


The calls for Alphabet to be held accountable (read: by the U.S. where they are based) or for there to be oversight of them reminds me of times when the power is out at the house and I keep flipping on the lights before remembering lights don't work without power.
posted by avalonian at 2:57 PM on June 12, 2019 [3 favorites]


Other industries are subject to strict regulations on human experimentation, for reasons that are plain to all.

Yes, but the analogy would be experimenting on humans with a product that is already being sold and used by humans. I'm not saying I don't have some problems with how this research was conducted, but it does need to be conducted.
posted by xammerboy at 4:06 PM on June 12, 2019


Roughly 800 posts for a $250, two-week campaign against an anti-Stalin site is pretty freaking low stakes. At least in this experiment, there is too little execution time to see what organically gets brought into the mix.

Those who have pointed out that this is a security test are correct. There is very little here to build on in terms of long-term effects or a multi-media campaign. This only establishes that they could obtain a channel, issue an ambiguous direction, and get a minimum viable troll farm constructed.

I'd imagine that this is probably more lucrative than gold farming. And yeah, for $250 the quality bar of what you get is pretty low... what could they have gotten...

From a reverse-engineering perspective, this tells you what negative examples to feed the NLP models that screen out this kind of attack. Likewise, they have explicit insight into which IP addresses the attackers posted from (or spoofed); there is a lot of fingerprinting you could get out of this, precisely because Alphabet controlled both sides of the data request.
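
To make the fingerprinting point concrete, here is a minimal, hypothetical sketch of the kind of aggregation you could do when you control both the purchased campaign and the target site. The data model, field names, and thresholds are invented for illustration; nothing here reflects Jigsaw's actual tooling.

    # Hypothetical sketch only: field names and thresholds are invented.
    from collections import Counter
    from dataclasses import dataclass
    import hashlib

    @dataclass
    class ObservedPost:
        account: str     # handle the vendor posted from
        source_ip: str   # IP seen by the target site you control
        text: str

    def fingerprint(posts):
        """Aggregate simple indicators for blocklists, or as negative
        training examples for a spam/disinfo classifier."""
        ip_counts = Counter(p.source_ip for p in posts)
        accounts = {p.account for p in posts}
        # Hash normalized text to spot copy-paste content; human-written
        # posts should mostly hash to unique values.
        text_hashes = Counter(
            hashlib.sha1(" ".join(p.text.lower().split()).encode()).hexdigest()
            for p in posts
        )
        return {
            "suspect_ips": [ip for ip, n in ip_counts.items() if n > 3],
            "suspect_accounts": sorted(accounts),
            "copy_paste_ratio": 1 - len(text_hashes) / max(len(posts), 1),
        }

Even something that crude, run over the ~800 posts from 25 accounts described above, would surface the reused accounts and IP ranges before you ever got to fancier NLP.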

Honestly, I wouldn't show anyone this work if I were Google either. But you'd damn better believe that I'd have built in internal screening.
posted by Nanukthedog at 4:27 PM on June 12, 2019 [1 favorite]


If people think this is a problem, wait until they hear about the last 70+ years of the advertising and PR industries.
posted by A Thousand Baited Hooks at 4:49 PM on June 12, 2019 [4 favorites]


If people think this is a problem, wait until they hear about the last 70+ years of the advertising and PR industries.

If you are unsure whether people see this as a real problem: why would regulators go after cigarette and vaping companies for marketing to children, for example?
posted by They sucked his brains out! at 5:19 PM on June 12, 2019 [1 favorite]


Well, how effective was the $250 campaign? Did they not have any metrics to gauge effectiveness?
posted by talos at 6:06 PM on June 12, 2019


It's interesting that Milgram's experiment is brought up, where test subjects did not know or consent to what was really being measured.

That's why the Milgram experiment was brought up: as a widely agreed-upon example of unethical human-subjects research to use as a contrast.

The Facebook "experiment" was Milgram stuff, for sure. I also think this one doesn't doesn't feel too far removed from the the sort of thing I'd see a guy with a security blog doing and say "wow, interesting." The part where it gets a bit shady is that it's a huge multinational company messing around in politics in another country, even if in a minor way.
posted by atoxyl at 8:02 PM on June 12, 2019 [2 favorites]


Since the campaign was executed in context of Russian politics, one could argue that it's a similar kind of shady to when security services do things like this.
posted by atoxyl at 8:07 PM on June 12, 2019


I think there are really interesting conversations to be had about the scope of IRB-style oversight of digital experiments, and the differences between university-led research and industry "research" (in quotes because it's often something else disguised as research).

I can't tell you the number of times I've been in meetings at our university with potential industry partners who propose super shady shit that just would not fly with our IRB (like, hey, this digital health app we are building, how about it secretly records a bunch of personal info and then we sell that to interested companies to fund the research? Or what if it detects whether users are underage and then stealth-notifies their caregivers and doctors about their health information?). When we explain that not only would these things have to go through IRB, but there is no way in hell they would be approved, the partners shrug and say that they aren't bound by IRB, and thanks for the discussion but they guess they'll now do the project without us.

There are also a number of ex-academics I know who have been quite open about how relieved they are, now that they work in industry, to be able to do the kind of work that would have required IRB approval, without having to go through that process.

I know there are regional differences as to how IRB processes work, what scope they have, and whether they even exist at all. That also becomes interesting when we are talking about multinational organisations. Even if there was some kind of ethics oversight process, who would administer it?

Here in Australia it is not the case that the IRB-equivalent (we call it Ethics Approval) is only about consent and human subjects. It is about all kinds of ethical risks. Our university (and most others here now) requires ethics review and approval even for research like scraping public Twitter and analysing the tweets. It doesn't mean they are going to say you have to get consent from each user whose tweet you use (although they might make that decision). It's about making sure the researcher identifies all possible risks and ethical concerns about what they are doing, determines that the benefits (not just scientific, but to the broader community) outweigh any such risks, and also puts in place plans to mitigate any identified risks or concerns.

While the actual process of application can be intensely irritating and hoop-jumpy, I think there is massive value to everyone including the researchers in how it forces us to think through the ethical implications of what we are doing, and public-good/private-good trade-offs. I wish that somehow industry professionals could be subject to a process that forced them to have these discussions too. But I don't know how it would practically come about.

Idealistically we could hope to build that kind of thinking into the education and training process for technology professionals, so that they would voluntarily choose to reflect on these kinds of ethical issues. But my experience with companies like Google is that you can have a situation where the vast majority of individual engineers (at least the ones I know there) do think critically and deeply about the ethical implications of the tech industry generally and their own projects specifically, yet that kind of thinking is quickly discouraged or disregarded when it conflicts with the company's financial incentives.
posted by lollusc at 5:16 AM on June 13, 2019 [6 favorites]


Reading lollusc's comment, I've had very similar conversations. One interesting subtype is how these conversations have varied between colleagues and collaborators in different departments - I'm a postdoc in what is, essentially, a computer science department, but the lab I'm in is essentially a visual perception / psychophysics lab, which means we hold to Psychology standards on what needs to go to the IRB, and we're pretty scrupulous about consent, assessing the risks/benefits of research and trying to do ethical research. We have it drilled into us from when we first do training for human subjects research, and it's regularly refreshed. However, my colleagues and collaborators who are computer scientists or engineers just don't have the culture of research review and consent the way Psychology and Neuroscience do.

I'm fully in favor of much broader application of our existing human subjects regulation for both academic research and industry research. Frankly, our existing regulatory structure in the US is, at least, a good starting point for asking the right sorts of questions - what's the harm, what's the benefit, what are the potential consequences? Even inside the academy, I've flat-out told friends and potential collaborators that if they're running experiments without IRB approval, I cannot work with them as a matter of professional ethics.

How we translate some version of this culture to industry is a really weird question, because as lollusc points out, building an IRB or ethics board that can deal with international issues (and is independent from the companies it is overseeing) is deeply tricky, since the relevant regulation varies dramatically by locality. Done wrong, we could wind up with something like the "flag of convenience" problem in maritime shipping, where registering in a lax jurisdiction becomes a dodge on regulation in the countries where the ship actually does much of its business. I don't have an answer for this problem, but it's a problem we need to think about.

One small piece of it might be for universities to push the ethics review point much harder in working with industry sponsors; in essence, if they want the prestige of working with a university (which is a significant factor in why they fund university research, along with access to a pipeline of bright students), require more aggressive oversight, even where it hasn't been previously required. That provides pressure on one small piece of the problem, but it might be a start.
posted by Making You Bored For Science at 6:30 AM on June 13, 2019 [1 favorite]


A potentially useful point of leverage is that at least some industry sponsors really, really want to work with universities on research so that they can use university IRBs rather than having to pay commercial IRBs. Obviously this only applies to research where IRB oversight is required, which is only a piece; but many companies absolutely hate paying commercial IRBs or dealing with their own IRB stuff at all, and are desperate to get universities to handle it. They also, as noted above, want access to a pipeline of grad students.

Sometimes the IRBs I work with have had success in using that as a point of leverage to wrangle the research around to a point where it's approvable by our ethics committees. Sometimes the companies flounce off and god only knows what they do after that, but it's not "collaborating on research with us."

There are definitely disciplinary differences in what qualifies as human subjects research, and also vast gaps between "what the regulations were built for" and "the kind of research that is being done using techniques and raising ethical issues that the government doesn't even begin to understand." A lot of IRBs are out on their own trying to invent the appropriate ethical determinations from whole cloth, with help from other universities, but very little guidance from the regulatory agencies.
posted by Stacey at 8:46 AM on June 13, 2019


If it's affordable for me to hire a group to spread lies of my choice, the world has a very big problem.
posted by GoblinHoney at 1:48 PM on June 13, 2019



