The appropriate place for regulation is where there is market failure
April 19, 2019 1:01 AM

A Regulatory Framework for the Internet - "There are, in Internet parlance, three types of 'free'... Facebook and YouTube offer 'free as in speech' in conjunction with 'free as in beer': content can be created and proliferated without any responsibility, including cost. Might it be better if content that society deemed problematic were still 'free as in speech', but also 'free as in puppy' — that is, with costs to the supplier that aligned with the costs to society?"
Start with this precept: the Internet ought to be available to anyone without any restriction. This means banning content blocking or throttling at the ISP level with regulation designed for the Internet. It also means that platform providers generally speaking should continue to not be liable for content posted on their services (platform providers include everything from AWS to Azure to shared hosts, and everything in-between); these platform providers can, though, choose to not host content suppliers they do not want to, whether because of their own corporate values or because they fear boycott from other customers.

I think, though, that platform providers that primarily monetize through advertising should be in their own category: as I noted above, because these platform providers separate monetization from content supply and consumption, there is no price or payment mechanism to incentivize them to be concerned with problematic content; in fact, the incentives of an advertising business drive them to focus on engagement, i.e. giving users what they want, no matter how noxious...

“Free as in speech” is guaranteed at the infrastructure level, the market polices platform providers generally (i.e. “free as in puppy”), while regulation is narrowly limited to businesses that are primarily monetized through advertising (i.e. “free as in beer”) and thus impervious to traditional content marketplace pressures.

This framework, to be clear, leaves many unanswered questions: what regulations, for example, are appropriate for companies like YouTube and Facebook? Are they even constitutional in the United States? Should we be concerned about the lack of competition in these regulated categories, or encouraged that there will now be a significant incentive to build competitive services that do not rely on advertising? What about VC-funded companies that have not yet specified their business models?
Free-speech issues in the context of privately operated platforms - "If someone is already operating a platform that makes editorial decisions, asking them to make such decisions with the same magnitude but with more pro-social criteria seems like a very reasonable thing to do."
Online platforms such as Facebook, Twitter and YouTube already engage in active selection through algorithms that influence what people are more likely to be recommended. Typically, they do this for selfish reasons, setting up their algorithms to maximize “engagement” with their platform, often with unintended byproducts like promoting flat earth conspiracy theories. So given that these platforms are already engaging in (automated) selective presentation, it seems eminently reasonable to criticize them for not directing these same levers toward more pro-social objectives, or at the least pro-social objectives that all major reasonable political tribes agree on (e.g. quality intellectual discourse).
also btw...
  • The Privacy Project - "Companies and governments are gaining new powers to follow people across the internet and around the world, and even to peer into their genomes. The benefits of such advances have been apparent for years; the costs — in anonymity, even autonomy — are now becoming clearer. The boundaries of privacy are in dispute, and its future is in doubt. Citizens, politicians and business leaders are asking if societies are making the wisest tradeoffs. The Times is embarking on this monthslong project to explore the technology and where it's taking us, and to convene debate about how it can best help realize human potential."
  • Like It Or Not Facial Recognition Is Already Here. These Are The Industries It Will Transform First - "From screening patients for clinical trials to assessing the emotional state of drivers, we dive in to how facial recognition technology is shaping the future."
  • Feeling Safe in the Surveillance State - "In China, facial recognition cameras are celebrated, and many citizens believe the rest of the world is dangerous without them."
  • One Month, 500,000 Face Scans: How China Is Using A.I. to Profile a Minority - "In a major ethical leap for the tech world, Chinese start-ups have built algorithms that the government uses to track members of a largely Muslim minority group."
  • The U.S. Is Losing a Major Front to China in the New Cold War - "A swathe of the world is adopting China's vision for a tightly controlled internet over the unfettered American approach, a stunning ideological coup for Beijing that would have been unthinkable less than a decade ago."
  • This is an opportunistic infection - "When megacorps ate the pluralist internet, it became untenable for nation-states not to respond. The US wouldn't tolerate WeChat intermediating most economic activity between Americans, regardless of technical excellence. Why should others?"
  • Understanding China's AI Strategy - "Like the Soviet Union during the Cold War, China today is engaged in an extensive campaign to harvest technological and scientific information from the rest of the world, using both legal and illegal means. Unlike the Soviet Union, China's efforts have prioritized using such access to build industries that are competitive in global markets and research institutions that lead the world in strategic fields."
  • Tracking Phones, Google Is a Dragnet for the Police - "The tech giant records people's locations worldwide. Now, investigators are using it to find suspects and witnesses near crimes, running the risk of snaring the innocent."
What is it [decentralization] good for?
Decentralization is valuable as a buzzword because it signifies other things, other virtues. First and foremost in my mind, the “decentralized technology” movement aspires (however much it fails), to react against the leeching away of human agency that is the signal social fact of an increasingly large scale, technologically mediated world. “Decentralized” systems claim to be “open” and “permissionless”. What that means, or ought to mean in my view, is that human beings — as generally as possible, not just some special technosophisticate caste — should be able to use this technology to act in ways that are socially and economically meaningful for themselves and their own communities, and that are not restricted to patterns and templates sketched out by distant “tech entrepreneurs” or by anyone else.

We are very, very far from that world. A whole industry of “blockchain skeptics” has emerged, quite reasonably, to point that out. And yet this problem, that we are building a world in which, however paradoxically, the great power unlocked by advancing technology and large scale specialization and trade leaves most of us feeling ever less powerful, ever more at the mercy of distant and inchoate forces with respect to the circumstances of our own lives and families, is dire. You come to the counterrevolution with the technologies that you have, not those that you might wish to have. The current generation of overhyped, overspeculated, underdeveloped “decentralized tech” is close to the only game in town, from a human agency perspective. We’ve watched the internet itself, which was supposed to be a great equalizer, become a space more rapidly and efficiently consolidated, more disempowering from an economic perspective, than almost any sphere in the predigital world. It seems unlikely that we will undo the internet’s great achievement of stitching a naturally pluralistic world into a single gigantic economy ripe for domination. To restore some hope for human agency, we’ll need tools that let humans create and defend their own spaces, which must be economic as well as creative if they are to be sustainable.
more, etc.
  • Getting over Privacy - "My admittedly controversial view that privacy is incompatible with technological progress. We need strategies other than privacy to remain free in the digital age."
  • World After Capital: Getting over Privacy (Intro) - "While I understand that we have a lot of work to do to create a world in which broad public sharing of health information is compatible with freedom, this is the direction we should be embarking on."
  • World After Capital: Getting Over Privacy (Cont'd) - "We can't really protect privacy without handing control of technology into the hands of a few and conversely decentralized innovation requires reduced privacy. So what should we do? The answer, I think, is to embrace a post-privacy world. We should work to protect people and their freedom, instead of protecting data and privacy. In other words allowing more information to become public but strengthening individual freedom to stand against the potential consequences."
  • World After Capital: Getting Over Privacy (Finish) - "Much of the fear about private information being revealed results from potential economic consequences. For instance, if you are worried that you might lose your job and not be able to pay your rent if your employer finds out that you wrote a blog post about struggling with depression, you are much less likely to do so... Here the economic freedom conferred by a Universal Basic Income would protect you from going destitute because of discrimination, and by tightening the labor market, it would also make it harder for employers to decide to systematically refuse to hire certain groups of people. Further, we could enact laws that require sufficient transparency on the part of organizations, so that we could better track how decisions have been made and detect more easily if it appears that discrimination is taking place."
posted by kliuless (51 comments total) 34 users marked this as a favorite
 
Can anyone clarify the distinction being drawn in the first link between the non-ad-supported platforms and the ad-supported platforms?

It also means that platform providers generally speaking should continue to not be liable for content posted on their services (platform providers include everything from AWS to Azure to shared hosts, and everything in-between); these platform providers can, though, choose to not host content suppliers they do not want to, whether because of their own corporate values or because they fear boycott from other customers.

I think, though, that platform providers that primarily monetize through advertising should be in their own category: as I noted above, because these platform providers separate monetization from content supply and consumption, there is no price or payment mechanism to incentivize them to be concerned with problematic content; in fact, the incentives of an advertising business drive them to focus on engagement, i.e. giving users what they want, no matter how noxious.


I understand that an AWS customer has the power to leave AWS if there is a boycott, and that means less money in Amazon's pocket. But Facebook users also have the power to leave Facebook if there is a boycott*, which means fewer ad impressions, which means less money in Facebook's pocket. What's the difference?

*Maybe a difference is that Facebook has a big network effect, so it's harder to create competition for Facebook. But that doesn't seem to be part of the author's argument.
posted by value of information at 3:22 AM on April 19, 2019


Yeah, I'm not sure that distinction is as meaningful as the author is positing. By his logic, Angelfire or Lycos (or other, actually existing free hosts) would fall in a category alongside Facebook and Twitter, not with other hosting providers, and that seems illogical and counter-intuitive to me. Why should a free host be subject to different rules in that regard than one that charges a fee?
posted by Dysk at 4:15 AM on April 19, 2019 [1 favorite]


As I understand it, the ad-supported providers are incentivized to serve anti-social content so long as users stay engaged (e.g. flat earth stories).

OTOH, platform providers' revenue is not aligned with their customers' anti-social behavior in the same way as FB, YT, etc.
posted by askmehow at 4:37 AM on April 19, 2019 [1 favorite]


OTOH, platform providers' revenue is not aligned with their customers' anti-social behavior in the same way as FB, YT, etc.

Why not? If I'm AWS, and Stormfront wants to host their website on my platform, I might like them to do it and take their money. The more traffic Stormfront gets, the happier I am. It seems the same as if I'm Facebook, and Stormfront wants to post propaganda on their Facebook page, and a bunch of bozos want to read it; I might like them to do so and watch a bunch of ads, thereby taking advertisers' money. The more they read it, the happier I am.
posted by value of information at 4:55 AM on April 19, 2019 [3 favorites]


So given that these platforms are already engaging in (automated) selective presentation, it seems eminently reasonable to criticize them for not directing these same levers toward more pro-social objectives, or at the least pro-social objectives that all major reasonable political tribes agree on (e.g. quality intellectual discourse).

Yes, this is something I've been saying for a while now. Looking forward to reading the rest of this.
posted by tobascodagama at 5:27 AM on April 19, 2019 [1 favorite]


OTOH, platform providers' revenue is not aligned with their customers' anti-social behavior in the same way as FB, YT, etc.

Why not?


I would say there is a useful distinction to be made between attention and bandwidth: while the incentives for bandwidth providers to court controversy in search of clicks may exist, the causal chain is much less direct than it is for advertising businesses. I.e. Facebook and YouTube want to keep you en(g/r)aged, while Comcast is much happier selling you the gigabytes of bandwidth for a sitcom than the few measly bytes for a few Facebook comments.
posted by ropeladder at 5:50 AM on April 19, 2019 [3 favorites]


I would say there is a useful distinction to be made between attention and bandwidth: while the incentives for bandwidth providers to court controversy in search of clicks may exist, the causal chain is much less direct than it is for advertising businesses. I.e. Facebook and YouTube want to keep you en(g/r)aged, while Comcast is much happier selling you the gigabytes of bandwidth for a sitcom than the few measly bytes for a few Facebook comments.

It still comes back to "engagement" - that is, Comcast knows that to sell you that bandwidth, they need to provide content that engages the end user, which in turn selects for content designed to engage (which is why Fox News is a basic cable staple).

Also, can I just say that if you come to the conclusion that privacy and technological advancement are mutually exclusive, and you then conclude that means we should chuck privacy, you are a bad person who needs to recheck your work?
posted by NoxAeternum at 6:24 AM on April 19, 2019 [5 favorites]


I think it comes down to the services that use sorting algorithms and the services that don’t.
posted by nikaspark at 6:31 AM on April 19, 2019


Start with this precept: the Internet ought to be available to anyone without any restriction. This means banning content blocking or throttling at the ISP level with regulation designed for the Internet. It also means that platform providers generally speaking should continue to not be liable for content posted on their services (platform providers include everything from AWS to Azure to shared hosts, and everything in-between); these platform providers can, though, choose to not host content suppliers they do not want to, whether because of their own corporate values or because they fear boycott from other customers.

This fails what I'm calling the "revenge porn test" - does the system allow someone to legally operate a platform that knowingly hosts revenge porn? If so, the system is broken and needs to be rethought.

A swathe of the world is adopting China's vision for a tightly controlled internet over the unfettered American approach, a stunning ideological coup for Beijing that would have been unthinkable less than a decade ago.

Because it turns out that the American approach is what enabled the alt-right, and thus people are understandably rejecting it. When the head of CloudFlare publicly argues that he has to work with Nazis and terrorist groups because of freedom, it's time to take a step back and rethink things.
posted by NoxAeternum at 6:31 AM on April 19, 2019 [8 favorites]


Also, after reading that "Getting Over Privacy" piece - it is one of the clearest demonstrations of Engineer's Disease that I have ever seen. The author really should go take some courses in philosophy to realize that you can't engineer your way out of social problems.
posted by NoxAeternum at 6:44 AM on April 19, 2019 [6 favorites]


Historically when the US has dabbled with censorship it has not been to promote justice but instead to protect the status quo power hierarchy and persecute left-aligned viewpoints. Why would we expect a new try at it to be any different, especially now that the alt-right is in power?
posted by Pyry at 6:51 AM on April 19, 2019 [9 favorites]


Historically when the US has dabbled with censorship it has not been to promote justice but instead to protect the status quo power hierarchy and persecute left-aligned viewpoints. Why would we expect a new try at it to be any different, especially now that the alt-right is in power?

So, in short, we're screwed either way. Because, if you haven't noticed, the modern Internet is already one of widespread censorship - it's just that we don't call it that. When minorities are forced to choose between having a voice and having safety online - that's censorship. When platforms routinely place their thumb on the scales to amplify alt-right voices and diminish leftist ones - that's censorship. In a very real way, what we have today is worse than what we would have with government involvement, because for as obtuse as the government can be, it's nothing compared to the opacity of Silicon Valley.
posted by NoxAeternum at 7:40 AM on April 19, 2019 [7 favorites]


Can anyone clarify the distinction being drawn in the first link between the non-ad-supported platforms and the ad-supported platforms?

It's a distinction without a difference.

How many times have you heard that if you're not paying for the product, then you are the product? That shibboleth is the product of an organized campaign against ad-supported platforms by corporate interests like Comcast and Microsoft.

But we know that's hogwash, because even when you pay for the product you're still the product. Cable providers used to get sued for selling customer data, but now they've lobbied to make it legal.

So even after you pay $200 a month to your cable provider, they still saturate every show with commercials, track your viewing habits, profile and segment you, and sell your data to the highest bidder.

If I have to live in a world where giant corporations harvest my data and sell it to advertisers who annoy me and to propagandists who destroy democracy, at least don't make me pay for the privilege.
posted by ascii at 7:40 AM on April 19, 2019 [1 favorite]


I would argue that the state of things has less to do with the specific nuances in the incentives of platforms, and much more to do with what humans will click on when given a choice.

The recommendation algorithms are propelled by engagement metrics - like watch time - which ultimately come down to aggregated human preferences. It's unclear to me what magic metrics are available to non-ad-supported platforms that will magic away the recommendation problems...
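
To make that concrete, here's a toy sketch (every name and number invented - this is not any real platform's code) of what "propelled by engagement metrics" means in practice: score each candidate by a predicted-engagement signal like expected watch time, serve the top scorers, and nothing else enters the loop.

# Toy engagement-ranked feed. All names and numbers are invented.

def predicted_watch_seconds(user, item):
    """Stand-in for a learned model: estimate how long `user` would
    watch `item`. A real system would train this on logged plays."""
    return user["affinity"].get(item["topic"], 0.0) * item["avg_watch_seconds"]

def rank_feed(user, candidates, k=10):
    """Order candidates purely by predicted engagement. Note what is
    absent: nothing asks whether an item is true, healthy, or
    pro-social - only whether the user will keep watching it."""
    return sorted(candidates,
                  key=lambda item: predicted_watch_seconds(user, item),
                  reverse=True)[:k]

user = {"affinity": {"conspiracy": 0.9, "cooking": 0.3}}
candidates = [
    {"topic": "conspiracy", "avg_watch_seconds": 600},
    {"topic": "cooking", "avg_watch_seconds": 400},
]
print(rank_feed(user, candidates, k=1))  # the conspiracy video wins: 540 > 120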
posted by kaibutsu at 7:45 AM on April 19, 2019


It's unclear to me what magic metrics are available to non-ad-supported platforms that will magic away the recommendation problems...

Curation rather than purely metrics-driven algorithms?
posted by Dysk at 8:05 AM on April 19, 2019 [1 favorite]


It's unclear to me what magic metrics are available to non-ad-supported platforms that will magic away the recommendation problems...

I'm increasingly dissatisfied with the simple approaches companies take to their recommender systems. Clicks and likes are immediate, abundant, low-latency feedback signals, and so are easy to optimize for with traditional techniques, but the advent of more robust reinforcement learning techniques should bring longer-term metrics into reach.

I'm not certain exactly what they are, but they should at least be able to optimize for long-term instead of immediate engagement, which might be a proxy for more emotionally satisfying content. It would also be a good proxy for addiction, so I'm not totally happy with the idea.

I'd really like to see a major service try directly surveying their users for "how much did your last day on our platform leave you feeling fulfilled?" We should at least understand the ways in which it's infeasible to directly optimize that metric.
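
As a rough sketch of the kind of blended objective I mean - weights and names all made up, and picking the weights is of course the entire policy question - you could mix the cheap immediate click signal with a slower long-term one (return visits, or that survey answer):

from typing import Optional

def blended_reward(clicked: bool,
                   returned_within_week: bool,
                   survey_fulfillment: Optional[float]) -> float:
    """Reward for one recommendation episode. survey_fulfillment is a
    0-1 answer to the fulfillment survey, or None when (as usual) the
    user skipped it."""
    immediate = 1.0 if clicked else 0.0
    long_term = 1.0 if returned_within_week else 0.0
    if survey_fulfillment is not None:
        long_term = 0.5 * long_term + 0.5 * survey_fulfillment
    return 0.2 * immediate + 0.8 * long_term

# A click that never brings the user back scores below a non-click
# from someone who returns and reports feeling fulfilled:
print(blended_reward(True, False, None))   # 0.2
print(blended_reward(False, True, 0.9))    # 0.76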
posted by AstroCatCommander at 8:34 AM on April 19, 2019


With this discussion, I'm also reminded of this piece criticizing New Zealand ISPs for blocking 4chan and 8chan after the Christchurch shooting over their refusal to remove the shooting video. What struck me was how incoherent the argument wound up being - it was a mass of non sequiturs that never once addressed why the ISPs had done so.
posted by NoxAeternum at 9:40 AM on April 19, 2019 [1 favorite]


Curation rather than purely metrics-driven algorithms?

Who curates? How do they make their curation choices? Will they allow queer expression? When are nipples allowed? I think if you really follow that rabbit, you end up with something like a publication model again, with more-or-less niche interests represented in a particular corner of the web, and an ecosystem of editors and gatekeepers. (and, to be sure, you still have breitbart and stormfront off in the corner making their own editorial decisions...)

IMO, open platforms (indeed, the open web itself) are founded on a basic optimism in the people that interact with them, whether as content makers (making interesting content that tells stories from new perspectives, or at least helping us figure out how to unclog a drain) or as consumers (preferring to consume the 'good stuff,' leaving substantive comments, etc). I think that as a society we're slowly giving up on that premise; all the stuff about business models is a distraction.

And ultimately I think that's a perfectly reasonable decision to make. It's fine to step back and say that open platforms are/were a dead end; it puts you in a better place to try to salvage the good parts of open platforms.

[...] more robust reinforcement learning techniques should bring longer-term metrics into reach.

What do you mean by longer-term metrics? Long term watch time? If so, you're just increasing the relative importance of what people who use the site a lot prefer... which still may be trash.

Consider that the algorithm needs to decide how $CONTENT (amongst millions of choices) will impact $METRIC of $USER (amongst millions of users). The survey approach is going to require literal boatloads of surveys that approximately no one will want to fill out. And, again, maybe a user's sense of $FULFILLMENT has a high covariance with whether they saw PewDiePie make another racist joke today - the unwashed masses are still in the loop, and are happy to move the needle in directions that enlightened overlords find uncouth.
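
Back-of-envelope, with invented but plausible numbers, on why surveys can only ever be a sparse signal:

users = 1_000_000              # a mid-sized platform
items_per_user_per_day = 50    # recommendations served daily
response_rate = 0.01           # fraction answering a daily survey

daily_impressions = users * items_per_user_per_day   # 50,000,000
daily_answers = users * response_rate                # 10,000

print(daily_answers / daily_impressions)  # 0.0002: one label per 5,000 impressions
# ...and each answer is a day-level signal, not a per-item label.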
posted by kaibutsu at 10:48 AM on April 19, 2019 [1 favorite]


Who curates?

The same people who effectively do so at the moment, through their decisions about how to use and control the recommendation algorithms.

Curation can exist on open platforms. I posited it as an alternative to the aforementioned recommendation algorithms, which are not essential to the Web as an open platform in the first place.
posted by Dysk at 11:07 AM on April 19, 2019 [1 favorite]


I love the phrase "free as in puppy," but it breaks down in this context when you think about it. The people getting the puppy for free are not the same as the people who need to be responsible for the puppy in this case. Then again, maybe that's part of the problem.
posted by adamrice at 11:09 AM on April 19, 2019 [1 favorite]


Part of the way he wants to regulate the Internet can be seen in light of the fact that Stratechery is one of several Apple-aligned pundit websites, like Daring Fireball and others. They see any action that would hurt Apple as bad. See this article about Senator Warren's regulation proposals, which reads as "Yes the App Store is a monopoly, but everything would be worse if Apple didn't have a monopoly". This drawing of the line with "free as in puppy" might as well read "free as in not Apple".
posted by zabuni at 11:41 AM on April 19, 2019


IMO, open platforms (indeed, the open web itself) are founded on a basic optimism in the people that interact with them, whether as content makers (making interesting content that tells stories from new perspectives, or at least helping us figure out how to unclog a drain) or as consumers (preferring to consume the 'good stuff,' leaving substantive comments, etc). I think that as a society we're slowly giving up on that premise; all the stuff about business models is a distraction.

The problem with open platforms is that if you want them to be worth a damn, they need to be curated - and once that happens, you get a vocal contingent claiming that they are no longer open. A large part of the problem is that we have people who believe that the concept of openness means that they have to allow hate, bigotry, and abuse - and they are too often the same people running these platforms.
posted by NoxAeternum at 11:49 AM on April 19, 2019 [4 favorites]


Also, after reading that "Getting Over Privacy" piece - it is one of the clearest demonstrations of Engineer's Disease that I have ever seen. The author really should go take some courses in philosophy to realize that you can't engineer your way out of social problems.

Around 4:30 in the embedded video he says "I'd like to see more things that empower individuals but I'd also like to see fewer things that empower people who are already in power. It turns out that's not going to come from the technology. I believe that the idea that we're going to solve that by building even more technology, I just think is fundamentally flawed. Instead, that comes from society, from the political process, from being engaged in the political process, from fixing democracy."
posted by XMLicious at 12:16 PM on April 19, 2019


Around 4:30 in the embedded video he says "I'd like to see more things that empower individuals but I'd also like to see fewer things that empower people who are already in power. It turns out that's not going to come from the technology. I believe that the idea that we're going to solve that by building even more technology, I just think is fundamentally flawed. Instead, that comes from society, from the political process, from being engaged in the political process, from fixing democracy."

And yet in his longform piece, we have this gem:
This power comes at a price: Protecting your digital x-ray image from others who might wish to see it is virtually impossible. Every doctor who looks at your image could make a copy (for free, instantly and with perfect fidelity) and then send that to someone else. The same goes for others who might have access to the image, such as your insurance company.

Now, critics will make all sorts of claims about how we can prevent unauthorized use of your image using encryption. But as we will see, those claims come with important caveats and are dangerous if pursued to their ultimate conclusion (preview: you cannot have general purpose computing).
Note that he treats this as a technological question (doctor/insurer can send file) rather than the social question that it is (doctor/insurer is abusing a position of trust). And, in the US, that's how we deal with this - yes, you can do this - and if you are caught, you are incredibly fucked legally (fines, jail time, and suspension/revocation of your license). The piece is dotted throughout with this sort of thing - another example is that his answer to making everyone's medical records public is UBI, so people don't have to worry about disclosure impacting their ability to maintain themselves (missing the massive social ramifications that disclosing medical conditions can incur).

His problem is stated clearly in his opening - he doesn't believe in privacy for privacy's sake. And from that faulty premise, the rest of his argument fails.
posted by NoxAeternum at 12:30 PM on April 19, 2019 [5 favorites]


The third part of this old documentary might be worth your time if privacy in the age of mass collection of data is of interest to you.
posted by wierdo at 4:13 PM on April 19, 2019 [1 favorite]


Note that he treats this as a technological question (doctor/insurer can send file) rather than the social question that it is (doctor/insurer is abusing a position of trust).

But they aren't separate questions, particularly because abuse of trust isn't the only locus of the problem. If there isn't an actual technological method implemented for “protecting your digital x-ray image from others who might wish to see” then it doesn't matter how many “social solutions” you come up with—that “protection” will turn out to be quite porous if there's any substance to it at all, but the social pretense that it's not possible will also have negative consequences.

So for example although I would never consent to my DNA being given to a private dealer, most of it can be reconstructed from samples which close relatives have submitted to companies whose business is collecting DNA data, HIPAA be damned; I have no control over it.

Or for another example “right to be forgotten” laws accomplish nothing of the sort and at best change the results of a handful of large search engine providers. But some people really appear to believe that information out of sight in that very limited way is genuinely gone or more private.

In this podcast episode from last year, the researcher and entrepreneur interviewed claims that she's working on an MRI-like technology implemented with inexpensive retail camera sensors, based on ambient high-energy “ballistic photons” which pass through your body. I have no expertise to evaluate how realistic her predictions for the viability of her technology are but if those predictions were true, at some point medical privacy laws aren't going to be worth the paper they're printed on and all the philosophy courses in the world won't keep the same information as is in your medical records' x-rays out of the hands of anyone who wants it: those “massive social ramifications” won't be preventable.

Not without the kinds of universal limits on what computers can do that Wenger describes, at least. (And implementation of those limits will probably not work very well anyways for actually controlling personal medical information, but will work super well for unrelated purposes powerful interests in society will have for placing limits on computing devices with the force of law.)

Regardless of whether that particular prediction is true, yes the social nature of problems is preeminent, but the solutions have to be aligned with what's actually technologically possible. Otherwise we'll either end up in a society-wide pantomime where we pretend people have privacy but don't, and the divergence between pretense and reality will be easily exploited by the likes of the DNA collectors and those in power... or we'll end up with slightly more genuine (yet still false) privacy but as a result of dystopian top-down control and surveillance—significantly more exploitable by the powers that be even than the alternative of pretend privacy.

I guess a pivotal question is how likely we are to get a hypothetical future society which passes your “revenge porn test” by deeming the possession or dissemination of revenge porn illegal, but is completely unable to actually prevent it from happening. It seems almost certain to me because I think we'll have about as much success as we did completely prohibiting pornography in general, and will experience worse side-effects because we'll marshal much more invasive surveillance powers than in, say, the 1950s U.S. I'm all for legally prohibiting revenge porn while we're still pretending privacy exists but I think that basic approach is unsustainable, and that chasing it too far without laying the foundations to deal with the social consequences of ubiquitous surveillance in alternative ways is something we'll regret in the end.
posted by XMLicious at 5:25 AM on April 20, 2019 [1 favorite]


Your argument still seems to be that because things are technically easy, we might as well throw up our hands and legalise them. I don't think that holds water at all, but regardless, if that's what we're doing, let's start by legalising piracy before we look at privacy.
posted by Dysk at 7:30 AM on April 20, 2019


Well why even have the concept of a “public place” in the first place, then? Why not, since it's only “technically easy” to look at other people in public with your eyeballs and hear their conversations with your ears, just mandate that everyone ignore other people and say that we all have “privacy” everywhere?

The thing that makes that untenable now is the same thing that will prevent any meaning of “privacy” resembling its historical meaning from existing.

We already have lives permeated with the eyes and ears of the Apples and Googles and Amazons of the world. You imply that it's illegal but they're listening anyways. All of the social disapproval of Google Glass didn't actually make it illegal.

In that interview I linked to above the researcher is talking about MRI-like sensors small and cheap enough that they could be integrated into clothing and continuously monitor the wearer for medical problems. She also gave a TED talk in 2013 (direct .mp4 link) in which she presented clips of fuzzy video depicting what people were seeing out of their eyes, reconstructed from realtime MRI scans of their brains. (From current-technology non-miniaturized MRI machines, though.)

If you can think of some way to actually preserve privacy in a future world so saturated with network-connected sensors of all types that they're embedded in clothing, instead of just being carried around everywhere as phones and other devices, I'd certainly like to hear it. But unless it's a really good idea and we get our butts moving on implementing it, “private” is just going to mean something like “only accessible to Apple, Amazon, Google, their clients and business partners, and you” and that's going to change society in its own ways, and reinforce every inequity and asymmetric power dynamic IMO, no matter how much we stick our heads in the sand about it.
posted by XMLicious at 8:34 AM on April 20, 2019


So for example although I would never consent to my DNA being given to a private dealer, most of it can be reconstructed from samples which close relatives have submitted to companies whose business is collecting DNA data, HIPAA be damned; I have no control over it.

And after a number of public cases showing how such entities can be abused, people are now discussing how to regulate them because of that potential.

We already have lives permeated with the eyes and ears of the Apples and Googles and Amazons of the world. You imply that it's illegal but they're listening anyways. All of the social disapproval of Google Glass didn't actually make it illegal.

Actually, several aspects of Google Glass were potentially illegal based on existing law (for example, in places where recording a conversation requires the permission of all parties, using Glass could very well fall afoul of such laws). Beyond that, the only reason it wasn't made illegal was because it had such small penetration - and even then, I'd imagine that the social concerns did have legislators looking the matter over. Furthermore, it's worth looking at what happened with "social disapproval" - Glass users were, either through rules stated by venues or by social opprobrium, pushed out of social venues. Just because something isn't illegal doesn't mean that doing it isn't going to make people want to not associate with you.

You're falling into the hole I see a lot with techies - that it's either perfect protection or no protection. This is the argument I see all the time with DRM - people say that "it doesn't work" because eventually someone will crack it, while ignoring that companies actually aren't looking for permanent, total protection - they just want to slow down cracking enough to protect the initial sales window where they get most of their revenue for the product. And in that context, it does actually work - which is why companies use it. We don't expect laws to be a perfect shield - heaven knows we've seen that HIPAA doesn't stop breaches, both intentional and not. But it has, overall, improved the protection of medical records. I fully expect laws in the future to require that individuals wearing such sensors do so in a manner considerate of the public at large - or face legal penalty. And for the vast majority of people, that will be enough to prevent abuse.
posted by NoxAeternum at 4:19 PM on April 20, 2019 [1 favorite]


Is there anything wrong with mass surveillance in China, then? Or am I just a "techie" who supposedly hasn't taken enough philosophy courses so as to leave me believing that anything they think of as privacy, with the government constantly watching, is not really privacy?

By looking at a world of always-on constant monitoring, including my hypothetical one where sensors continuously see inside your body and can look out through your own eyes, and deeming that to merely be a not-perfect form of privacy, and handwavily saying that some self-definitionally adequate limitations will magically arise to constrain what can be done with surveillance data, while it's being recorded anyways—and that counts as privacy—you are doing exactly what Wenger says: you're "getting over privacy" but in a way that preserves some part of your ego that insists on maintaining the pretense.

While helping out all of the exploiters and abusers of power who will cry "ah-ah! We must have privacy now!" when it suits their purposes, but for the rest of us and especially the marginalized people of society, we will be prey for a system that knows our every thought and action. And the thoughts and actions of anyone we might want elected to office some day, as is playing out in Hong Kong right now through authorities' exploitation of the similarly useful "one country, two systems" fiction.

p.s. As someone gifted with a superior non-techie intellect, who presumably has taken oh-so-many philosophy courses, you know what the ad hominem fallacy is, right?
posted by XMLicious at 11:53 PM on April 20, 2019


Nobody is defending the situation in China. But China doesn't magically have tech that the rest of the world doesn't have access to. China is how it can go, not how it will. If it were the latter, you'd be talking about everywhere, not one country.
posted by Dysk at 12:26 AM on April 21, 2019 [2 favorites]


(Also the notion that one country two systems is a fiction is hilarious. There might be points on which the mainland exerts influence, but the idea that there aren't two separate and distinct systems is simply untrue.)
posted by Dysk at 12:28 AM on April 21, 2019


p.s. As someone gifted with a superior non-techie intellect, who presumably has taken oh-so-many philosophy courses, you know what the ad hominem fallacy is, right?

You've just demonstrated it very clearly for anyone who doesn't.
posted by Dysk at 12:29 AM on April 21, 2019 [1 favorite]


When you mention that China is using the same tech as the rest of the world, you are very, very correct: perhaps you have more personal experience with this than me, but as far as I know the social credit system, for example, is implemented by companies, not directly by the government, using approaches that differ very little from what's done by the tech industry outside of China. That's exactly why it's easy to accidentally defend the practices in the PRC when one does cognitive backflips and reaches for a way to call what's happening elsewhere now and in the near future “privacy”.

Y'all seem to me to be saying that the remedy for ubiquitous surveillance is something far, far after the fact: all of the surveillance happens, for sure, Google and Apple and Amazon and everyone else does their listening and sensing and recording, then maybe further along the way some specific bad things happen, then maybe some of those things can be traced back to the surveillance activity in an actionable way, then maybe a societal response will occur eventually where we get our shit together and do something, perhaps equivalent to the improvement in security and privacy between the beginning of formal medical record-keeping in the U.S. and the 1990s-era HIPAA law.

That's all if we're lucky and such an improvement is even technologically and socially possible, and if the delayed response actually matches up with whatever the subsequent environment is, by the time the response comes. Perhaps regulation of “ancestry profiling” DNA brokers will come along around the time (during the next decade) scientist-professor-entrepreneur George Church thinks (podcast PDF transcripts 1, 2, 3) cheap mass-produced sensors capable of simply picking everyone's DNA straight out of the environment will become possible.

The thing is that the only substantial difference between China and the preceding scenario is the final step where our wonderful “free-world” stochastically-well-meaning democracies can be trusted to do something substantial waaaaay after the fact, long after all the surveillance has already happened.

If you are defending the steps preceding the last one—all of the pervasive surveillance actually proceeding to happen and the actions taken based on surveillance data and societal mechanisms actuated by it before maybe it getting connected to some negative outcome—and some far-off remedy being implemented, and calling all of that “privacy”, just “not perfect” privacy, then yes you are giving cover to categorize pretty much any completely Orwellian system as one that furnishes “privacy” to those subject to it, if they can be convinced to trust that final step. The word “privacy” becomes completely meaningless if it refers to minor details of what occurs long after the “private” events—fully open to inspection and recording by the powers in society and in all practicality fully exploitable by those powers—occur.

Rather than Newspeak-ifying the word, far better to stop pretending that anything of that sort remotely resembles privacy. So that we can get on in dealing with the real societal implications that those first few steps have, the pervasive surveillance and sophisticated, comprehensive mechanisms to leverage the surveillance product.
posted by XMLicious at 5:18 AM on April 22, 2019


If you are defending the steps preceding the last one—all of the pervasive surveillance actually proceeding to happen and the actions taken based on surveillance data and societal mechanisms actuated by it before maybe it getting connected to some negative outcome

Nobody is defending this, so you can stop beating that strawman now. What is getting pointed out is that all of what you described is ultimately a social issue, not a technological one. It's telling that you keep coming back to China - the problem there is that the Chinese government seeks a very high level of social control, and will leverage any and all tools at their disposal to do so. Google and Facebook hoover up data in part because of profit, but also in part because they have developed cultures where acquisition of data is viewed as a social and moral good (remember, Google's mission statement is to organize all of humanity's knowledge.) That's what you have to fix first - otherwise anything else is just going to be a band-aid.
posted by NoxAeternum at 6:21 AM on April 22, 2019 [2 favorites]


I just can't see why anyone who believes that "privacy will inevitably go away, get used to it" hasn't released all their personal data into the wild already; emails, texts, medical info, bank statements...

Why is this a "This is inevitable, you all need to get used to it" situation, and not an "I embrace this change" one? I mean the argument is that we need to adapt, but I don't see any examples of how.
posted by happyroach at 1:57 PM on April 22, 2019 [3 favorites]


How the hell do the behaviors of Google and Facebook not involve social control? No one would spend money on advertising, businesses wouldn't engage with the Google interfaces that offer to tell them the hourly foot traffic in their retail stores, and Facebook wouldn't be an object of Russian military intelligence operations if they weren't conduits of social control.

“We're going to fix the culture of data acquisition / profit motive first” (first before even beginning to think of how to actually, practically deal with the saturation of surveillance sensors and the reality of “private” moments being under surveillance even in extant circumstances, I guess? And how would you go about fixing those things?) seems like much more of an insubstantial non-serious argument to me than my pointing out that if you're acceding to the current state of affairs as a form of “not perfect” privacy you indeed are construing a level of surveillance as high as or higher than any totalitarian state that has yet existed, as a form of privacy.

No need to stick to China, that's just the best documented and obviously closest in practice to the rest of the 21st-century world. The security cameras and camera-microphone-sensor packages everyone carries around in the form of mobile phones easily entail more surveillance than the human observers who would sit at a desk on every floor of a Soviet hotel.

happyroach, what you're saying doesn't follow. Acknowledging the reality of climate change does not require that you set your home on fire or flood it with seawater.

I'm not saying it's good that the advocates of privacy have been so asleep at the wheel that legal acceptance of public surveillance has already gone irreversibly far without any serious plan or attempt to counter it or even substantially limit it. See for example the last several decades of accepting, in the United States, “Cyberspace is a legal grey area!” handwaving without a finger lifted to try applying to email the same protections afforded to postal mail. Or, for another example, to have a single set of laws comprehensively governing wiretapping instead of fifty-plus different ones. We've had so many opportunities to set ourselves on another path, or even just delay these consequences, but no one ever seems to want to even put in the effort to think through the steps that would be necessary to preserve our supposedly-valued privacy.

I would definitely personally prefer that privacy was still possible, at least in a hypothetical world with symmetry of consequences for the privileged and for the disadvantaged and marginalized. But on top of the constant tech industry shenanigans which have been going on for the entirety of the current century, the Snowden revelations happened half a decade ago now and no one even seemed to blink, much less rise up to demand privacy.

However if you want an example of people following principles of radical transparency down to the average citizen's level, here's the relevant citation from the Wenger piece:
While the Panama Papers have forced British politicians to reveal tax details that are traditionally kept private, and U.S. presidential candidates are under pressure to do likewise, most Nordic citizens’ tax returns are freely available.
posted by XMLicious at 11:27 PM on April 22, 2019


most Nordic citizens’ tax returns are freely available.

This is not radical transparency. This is basic minimum accountability for the tax system and employers. I'm of the opinion that tax records should be public, but that's specifically because taxation is a public endeavour. That is rather different to your emails, your biometrics, your day-to-day actions, etc.

The security cameras and camera-microphone-sensor packages everyone carries around in the form of mobile phones easily entail more surveillance than the human observers who would sit at a desk on every floor of a Soviet hotel.

That they can doesn't mean that they do.

How the hell do the behaviors of Google and Facebook not involve social control?

There's a world of difference between Facebook and the Chinese government, in both the scope of what they can influence and how they can do it - and that's before we get into the way you collapse the terms "privacy" and "public" into one another.

In fact, I think this is the fundamental issue here. Having my shit be public is, to me and I'd wager most people, incomparably not the same as having Google have access to it. Google have access to my emails (in a limited capacity, legally, but in a practical sense they have total access) but that does not mean that they are public. I still have a good degree of privacy. My neighbours can't just look them up if they're bored. Nor can advertisers, even. Google employees face legal consequences if they're found to do so. Yes, a bunch of machine learning derived data is probably provided to advertisers, based on their machine scans of my emails. That's not ideal, but it bothers me less. You seem to be implying that these things are all the same. That because Google have my emails, my emails are not in any sense private. This is nonsense.

I'm not saying it's good that the advocates of privacy have been so asleep at the wheel that legal acceptance of public surveillance has already gone irreversibly far

Irreversibly? How is it irreversible? This is just hyperbole with no basis in anything.
posted by Dysk at 1:20 AM on April 23, 2019 [2 favorites]


this is unsettling:
I yelled into my phone “I’m pregnant” for 5 minutes on Sunday to see which apps would start advertising baby things. Definitely NOT pregnant. Zero babies in my sphere. Didn’t get any ads, but just received these free formula samples in the mail, which is creepier.
posted by kliuless at 11:59 PM on April 23, 2019 [1 favorite]


I'm of the opinion that tax records should be public, but that's specifically because taxation is a public endeavour.

Taxes: A Public Record -- Pro and Con! :P
posted by kliuless at 12:21 AM on April 24, 2019


Google have access to my emails (in a limited capacity, legally, but in a practical sense they have total access) but that does not mean that they are public. I still have a good degree of privacy. My neighbours can't just look them up if they're bored. Nor can advertisers, even. Google employees face legal consequences if they're found to do so. Yes, a bunch of machine learning derived data is probably provided to advertisers, based on their machine scans of my emails. That's not ideal, but it bothers me less. You seem to be implying that these things are all the same. That because Google have my emails, my emails are not in any sense private. This is nonsense.

Do you see that the same thing holds in China and other totalitarian regimes? Even though the government conducts intrusive, pervasive surveillance, the results of that surveillance aren't automatically available to the average citizen. But I would not call that privacy.

Your assertion that you have privacy seems to rest on your trust in the agglomeration of “public” governmental entities and “private entities” like Google and Facebook and infrastructure providers which rule us, and on your confidence in an ability to predict their behavior.

I do think that your trust is misplaced. But more importantly I don't think we should make any identification between what would be called privacy in a 20th-century “free-world” sense, and a state of affairs where everything is observed and recorded but not immediately available to the other people one interacts with on a day-to-day basis and where the future secrecy of those recordings, and the guarantee that third-party access to them will not affect your future life, hangs on the thread of things like “data retention policies” and successful computer security practices and successful resistance to law enforcement “inquiries”, not to mention things like the goodwill of the entities conducting surveillance, their sense of responsibility, and their willingness to preserve their brand and reputation.

I don't think it's nonsense to say that the latter is not a version or “imperfect” variant of the former, but that they are substantially and materially different things. I also think that as time goes on and 5G networks and their successors are deployed, and the web of surveillance sensors becomes broader, denser, and more granular, the latter will continue to superficially resemble the former, privacy in its pre-surveillance sense, less and less.
posted by XMLicious at 4:02 AM on April 24, 2019


You seem to think that I'm relying on Google et al to protect me from an evil government? I look at it very much the other way round. The government sets laws which limit what Google (and everyone else) can do. You're starting from the premise that government and law enforcement are evil, whereas I'm saying that they are the good thing that can be leveraged to limit what Google et al can collect, what they can do with it, how long they can keep it, etc.
posted by Dysk at 4:12 AM on April 24, 2019 [2 favorites]


Whereas your argument seems to be that the very existence of these technologies means their total adoption is inevitable? I'm just not buying that.
posted by Dysk at 4:22 AM on April 24, 2019 [1 favorite]


You seem to think that I'm relying on Google et al to protect me from an evil government?

No—by agglomeration of ‘public’ governmental entities and ‘private entities’ like Google and Facebook and infrastructure providers which rule us I'm saying that the distinction between “government” and large hegemonic “private” organizations that exert control over us is an artificial distinction and isn't the way things really work in practice. I think that to historians of the future it will seem no more valid than distinguishing between “church” and “state” in most eras of human history.

Regardless of that, though, you must know that Google does a great deal more than look at your emails. If you examine the source code of this web page you'll see instructions pulling in Google Analytics scripts: Google gets “clickstream” telemetry for MeFi and knows which pages you look at (if you don't have an ad blocker which blocks those scripts - though even then, if you still load the Google fonts they get some amount of information on your activities), as it does for most of the sites on the Internet via one method or another.

As far as adoption rates go—I'd be curious to know which surveillance capabilities deployed in totalitarian regimes, or just which technologically possible capabilities in general, you think have not been deployed in other developed countries.

One example which has stood out to me in the past few years: Google's foot traffic estimation capabilities could theoretically be implemented solely with fully consenting people opting in providing just their own data willingly. But I would need to see some very comprehensive proof that this is the case before I'd believe it; it would be much easier if Google were using any device it has access to for dragnet watching and tracking of all of the other mobile phones and laptops any given device can see around it, the sort of approach a private investigator or law enforcement might use to track a particular person. I know that there are definitely some opt-in third-party apps which explicitly do this mass collection of every other visible device, and of course corporate-owned and government-owned networks which will do it from stationary equipment for a variety of legitimate and illegitimate reasons.

The GDPR, probably the best attempt by a government to influence data collection and storage practices, does not purport to apply to the processing of personal data for national security activities or law enforcement of the EU according to Wikipedia. Between that, the fact that Mark Zuckerberg considers it a very positive step for the Internet, and the fact that a whole whopping three notable GDPR-based fines are currently listed on Wikipedia, I definitely would not rely on Google, Facebook, or even the most woke governments, as it were, to protect you from pervasive surveillance.

As a final note I'd point out that you're the one bringing up evil: I've been talking about privacy and surveillance. Independent of this other stuff, the two cases where everything is observed and recorded (on one hand under totalitarian governments, on the other under corporations and governments that sometimes super duper pinky swear to successfully anonymize it, secure it, and eventually delete all the copies under a data retention policy) are much more similar to each other than either is to the historical case, at the other end of the spectrum, where concepts of privacy at least began with not being recorded or eavesdropped on at all.
posted by XMLicious at 6:28 AM on April 25, 2019




New institutions are needed for the digital age - "Better regulation of data-sharing is long overdue."
We are struggling to get the best out of data. Too much gets collected, not enough gets used. When it does, it is in ways that can harm both individuals and our societies and democracies. It concentrates money and power. We want data to be used to deliver personalised services, to make scientific discoveries, and to inform business planning and government policymaking. But we also worry about our data being used to target and discriminate against us. We distrust the organisations that collect it...

Society will need to create new institutions for this data age. Data clubs — where data is exchanged between a group of organisations for mutual benefit — already exist. The Open Data Institute and others are exploring data trusts — where control over data-sharing is transferred to an independent third party, legally bound to ensure its use for a defined purpose [a toy sketch of this gatekeeping follows the excerpt]. Data collected within a city, for example, might be placed under the control of a data trust, ensuring it can only be used in ways that benefit citizens, involving them in the decision making.

Similar investigation is under way of models for data co-operatives, which enable individuals to provide data that benefits us collectively. For example, gig economy workers might increase their negotiating power by pooling information about their working conditions. Patient-led data trusts could help people with rare health conditions to donate records for research.

But these initiatives cannot work on their own. We won’t know which of them to trust. How can we tell whether a data club is a cartel, embedding and exploiting existing market power with exclusive access? How can we tell whether a data trust is sharing information towards its purpose? How do people work out where to donate data to, not just based on a cause they agree with but on the security and use of that data?

We will need existing, trusted institutions to help us work this out, and professional bodies to create ethical codes and qualifications. We will need consumer rights organisations to assess which are worthy of trust and auditors to perform not just financial due diligence but compliance checks. And of course government regulators need powers to hold organisations to account against the law.
from the comments: "There is a lot of good to be derived from data. However, we have to believe and trust that our personal data will be secure and not exploited. With more use of IoT, the amount of data will expand even further. We're going to have to have good structures in place to protect us."
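
A toy sketch (all names invented) of the purpose-limitation idea behind such a trust: the trustee releases data only for uses that match the trust's defined purposes, and logs every request so an auditor can later check compliance.

    # Toy sketch of a data trust's gatekeeping; nothing here reflects any
    # real trust's implementation.
    ALLOWED_PURPOSES = {"public-health-research", "city-transport-planning"}
    audit_log = []

    def load_dataset_for(purpose):
        return f"<records approved for {purpose}>"  # stand-in for real data

    def request_data(requester, purpose):
        audit_log.append((requester, purpose))     # every request is auditable
        if purpose not in ALLOWED_PURPOSES:
            raise PermissionError(f"{purpose!r} is outside the trust's purpose")
        return load_dataset_for(purpose)

    print(request_data("university-lab", "public-health-research"))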

also btw...
Facebook is right to open up its data trove - "But it can do more to help researchers assess the political impact of social media."
posted by kliuless at 10:43 PM on April 29, 2019


Big Tech's health fixation spreads into private areas - "Recent years have seen problematic revelations about usage of data."
In the US, the Health Insurance Portability and Accountability Act (HIPAA) imposes criminal and civil penalties for breaching confidentiality of healthcare data. But the rules apply only to entities covered by HIPAA such as healthcare plans and providers, or clearing houses that process healthcare claims. They do not include most internet companies, or apps. These span everything from the major platform tech companies that watch what type of medical information we search for online, to websites such as WebMD or KidsHealth that have become hubs for people who turn to the internet, rather than a doctor, for health advice.

There are no restrictions that prevent the fitness tracker on your wrist from selling information about your health to a third party. The personal profiles created and sold by retailers such as Wal-Mart include information that company algorithms can quickly link to medical conditions such as depression or obesity.

Recent years have seen some highly problematic revelations about how such data are being used. Facebook came under fire in 2017 after it was revealed that the company was marketing its capacity to identify teenagers who were feeling “insecure”, “worthless”, or in “need of a confidence boost” to advertisers.

The mobile dating app Grindr has shared identifying information about members’ HIV status with other companies and faced no legal consequences. Indeed, US national security officials recently called for the Chinese company Beijing Kunlun Tech to divest itself of Grindr because of worries that China could use information about sexual preference from the site to blackmail people with security clearances.

It is a mark of how limited public awareness is of the risks within digital healthcare that these warnings came from defence experts worried about foreign espionage, rather than privacy groups worried about citizens being compromised at home. This is an area ripe for regulation.

Search engines can capture health-related information from people’s emails and social posts. Healthcare companies that should themselves be subject to privacy regulation can also exploit platforms’ data collection by using data scraping to extract medical records. Even data collected by companies covered by HIPAA itself can be sold if the information is provided in anonymous form. But studies show that algorithms can very often link such data back to specific individuals [a toy illustration follows this excerpt].

Surveillance of healthcare data is big business, representing a $76bn market that has grown by 379 per cent over the past two years, according to a recent survey by the Democratic strategy group Future Majority...
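
On the re-identification point above, a toy illustration (invented data) of why “anonymous” records often aren't: joining on quasi-identifiers like ZIP code, date of birth, and sex can re-attach names, in the spirit of Latanya Sweeney's well-known linkage of “de-identified” health records to voter rolls.

    # Toy linkage attack: join "anonymous" health records to a public
    # list on quasi-identifiers. All records below are invented.
    anonymized_health = [
        {"zip": "02138", "dob": "1945-07-31", "sex": "F", "diagnosis": "depression"},
    ]
    public_roll = [  # e.g. a voter registration list
        {"zip": "02138", "dob": "1945-07-31", "sex": "F", "name": "Jane Doe"},
    ]

    for record in anonymized_health:
        for person in public_roll:
            if all(record[k] == person[k] for k in ("zip", "dob", "sex")):
                print(f"{person['name']} -> {record['diagnosis']}")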
posted by kliuless at 10:56 PM on April 29, 2019 [1 favorite]


Think You're Discreet Online? Think Again - "Thanks to 'data inference' technology, companies know more about you than you disclose."

Retailers Are Tracking Where You Shop—and Where You Sleep - "Retailers are finding all kinds of uses for location data from customers' phones."

Deepfakes for good - "They include Radical, which turns 2D videos into 3D scenes; Auxuman, which has an AI-generated avatar that plays AI-generated music; and Dzomo, which wants to replace expensive stock photography with deepfake images... In a demo, British soccer legend David Beckham delivers a PSA about malaria in nine languages — most of which he does not actually speak." (This AI generates ultra-realistic fashion models from head to toe)
posted by kliuless at 11:19 PM on May 3, 2019


If you were in a certain crowd (or were one of their hangers-on) during the 90s, it was plainly clear that privacy, in the sense of being free from surveillance in day-to-day life, was dead. In the early days we relied on the limited connectivity between most corporate networks (at least the ones that held the data) and the global Internet, and on the diversity of operators of things like CCTV cameras and various web services, to keep it all a jumbled mess that took man-years to gather, catalog, review, and prepare for use against any one person, limiting the risk to only the highest-value targets. There was also a concerted effort to pollute databases with bullshit whenever possible, but that was really just a way to blow off steam rather than a useful technique.

Once it became clear that consolidation was inevitable, it also became clear that the concept of privacy had to be redefined if it was to maintain any meaning at all. Thus, we got the NSA's "it's not spying if we don't actually look at what we've hoovered up" policy, hated as it was. Sadly, that's just the reality of the situation going forward.

That's why I went ahead and put all my eggs in a basket that provides me a contractual means to have the data held by the platform of my choice deleted at my pleasure. Stronger legal protections against the use, disclosure, and dissemination of our private information would be far preferable, but we go to battle with the laws we have and the courts we have, which favor contract law over government enforcement. Absent any major change being possible, I'd like to see a law that creates significant statutory damages and provides for the recovery of costs and fees from companies that violate (at least) privacy-related terms of their agreement with users and bans forced arbitration in such cases.

No, it's not enough, but it would probably help at least somewhat, and it would be modest enough that some of the more anti-regulation, contracts-above-all libertarian types could be convinced to go along with it. It would be a start, at least. Better than the nearly complete free-for-all we have today.

Where it's going to get really sticky is at the intersection of sunshine laws and our increasing desire for privacy. So much of the government's data on us is available to anyone willing to write a check largely because of those anti-corruption laws.
posted by wierdo at 11:54 AM on May 6, 2019




Last week Human Rights Watch released a comprehensive report (announcement, full report) based on a tear-down of the government phone app used by Chinese state security forces in Xinjiang province in the far west of the country, where ethnic Uyghur Muslims are being rounded up into camps by the million and oppressed on the basis of their ethnicity and religion in many other ways (previously). The examination revealed the wide variety of specific types of surveillance information security forces are being directed to collect, so it might be of interest with respect to the kinds of information gathered in a totalitarian state.

Good coverage of the HRW report on Chinese state surveillance in Xinjiang on today's Democracy Now! (full episode, direct .mp4, alt link, torrents 1, 2, m.)
posted by XMLicious at 2:52 PM on May 7, 2019



