Privacy is not an end in itself
November 12, 2013 7:34 AM

"In 1967, The Public Interest, then a leading venue for highbrow policy debate, published a provocative essay by Paul Baran, one of the fathers of the data transmission method known as packet switching [and agent of RAND]. Titled “The Future Computer Utility," the essay speculated that someday a few big, centralized computers would provide 'information processing … the same way one now buys electricity. Highly sensitive personal and important business information will be stored in many of the contemplated systems … At present, nothing more than trust—or, at best, a lack of technical sophistication—stands in the way of a would-be eavesdropper.' To read Baran’s essay (just one of the many on utility computing published at the time) is to realize that our contemporary privacy problem is not contemporary. It’s not just a consequence of Mark Zuckerberg’s selling his soul and our profiles to the NSA. The problem was recognized early on, and little was done about it... It’s not enough for a website to prompt us to decide who should see our data. Instead it should reawaken our own imaginations. Designed right, sites would not nudge citizens to either guard or share their private information but would reveal the hidden political dimensions to various acts of information sharing." -- MIT Technology Review on The Real Privacy Problem
posted by Potomac Avenue (17 comments total) 16 users marked this as a favorite
 
It's not enough for websites to merely stop encouraging oversharing either; really, the web should be replaced by an application layer capable of securely providing end-to-end cryptography. JavaScript, Java, Flash, etc. cannot provide trustworthy encryption so long as they depend upon Certificate Authorities (CAs), which are now thoroughly compromised.

There are numerous projects like arkOS that hope to liberate people from centralized services, which simplifies matters. There is nothing preventing us from designing new social networking tools like Facebook that (a) run entirely in the cloud but (b) keep everything end-to-end encrypted. It's already been done for storage with Tahoe-LAFS, but the application layer on top hasn't been built yet.
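
To make the idea concrete, here's a minimal sketch of end-to-end encryption that never touches a CA, assuming Python with the PyNaCl library (libsodium bindings); the names and message are purely illustrative. The part a CA normally papers over, verifying that a public key really belongs to the person you think it does, still has to happen out of band.

    # Minimal sketch: end-to-end encryption with PyNaCl (libsodium bindings).
    # Assumes `pip install pynacl`; keys and the message are illustrative.
    from nacl.public import PrivateKey, Box

    # Each user generates a keypair locally; only public keys are exchanged.
    alice_key = PrivateKey.generate()
    bob_key = PrivateKey.generate()

    # Alice encrypts for Bob with her private key and Bob's public key.
    sending_box = Box(alice_key, bob_key.public_key)
    ciphertext = sending_box.encrypt(b"meet at noon")

    # Any relaying service only ever sees ciphertext; Bob decrypts locally.
    receiving_box = Box(bob_key, alice_key.public_key)
    assert receiving_box.decrypt(ciphertext) == b"meet at noon"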
posted by jeffburdges at 8:07 AM on November 12, 2013


Another one by Evgeny Morozov. Will I feel dirty after reading it?
posted by ocschwar at 8:12 AM on November 12, 2013


There is a separate problem: vastly more transparency must be forced upon both government and powerful corporations.

Why shouldn't I just be able to read the head of the FBI's emails or listen to his phone calls? Why not Steve Ballmer's work emails? Yeah, individual mails might be tagged with an open-investigation identifier so that they become available once the investigation closes.

Or better yet, why should we let any individual person run the FBI at all? Just build open source software that does all the necessary managerial work. Is there a case of internal discrimination against a whistleblower? Just examine the managerial software's records to determine who manipulated it into making those decisions.
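
Roughly, I mean something like the toy sketch below: every decision the managerial software takes is appended, along with its inputs, the rule applied, and who requested it, to a log anyone can read. (My own illustration in Python; the field names and the promotion rule are made up, not any real agency's system.)

    # Toy sketch of auditable managerial software: every decision is appended
    # to a public log with its inputs, the rule used, and who requested it.
    # All names and the rule itself are hypothetical.
    import datetime
    import json

    AUDIT_LOG = []  # in practice an append-only, publicly readable store

    def decide_promotion(candidate, years_served, complaints, requested_by):
        decision = years_served >= 5 and complaints == 0
        AUDIT_LOG.append({
            "time": datetime.datetime.utcnow().isoformat(),
            "rule": "years_served >= 5 and complaints == 0",
            "inputs": {"candidate": candidate,
                       "years_served": years_served,
                       "complaints": complaints},
            "requested_by": requested_by,  # who asked the system to decide
            "decision": decision,
        })
        return decision

    decide_promotion("agent_042", 7, 0, requested_by="deputy_director")
    print(json.dumps(AUDIT_LOG, indent=2))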

We desperately need to make progress on both privacy and transparency, because otherwise we'll find ourselves with an elite that invades all our private lives specifically to prevent us from exposing their abuses of power.

Yes, neither privacy nor transparency is an end in and of itself, but both massively facilitate our social, cultural, and technological progress. And that progress ultimately decreases our risks from real threats like climate change, asteroids, etc.
posted by jeffburdges at 8:22 AM on November 12, 2013 [1 favorite]


why should we let any individual person run the FBI at all? Just build open source software that does all the necessary managerial work.

That's a little terrifying to me. From the article:

"Thanks to smartphones or Google Glass, we can now be pinged whenever we are about to do something stupid, unhealthy, or unsound. We wouldn’t necessarily need to know why the action would be wrong: the system’s algorithms do the moral calculus on their own. Citizens take on the role of information machines that feed the techno-bureaucratic complex with our data. And why wouldn’t we, if we are promised slimmer waistlines, cleaner air, or longer (and safer) lives in return?[...] Instead of getting more context for decisions, we would get less; instead of seeing the logic driving our bureaucratic systems and making that logic more accurate and less Kafkaesque, we would get more confusion because decision making was becoming automated and no one knew how exactly the algorithms worked. We would perceive a murkier picture of what makes our social institutions work; despite the promise of greater personalization and empowerment, the interactive systems would provide only an illusion of more participation."
posted by Potomac Avenue at 8:28 AM on November 12, 2013 [1 favorite]


I proposed radical transparency there, including transparency for both the algorithms that do managerial tasks in government and the data they use, so nothing like Google's mysterious advertisement-placement engine.

Algorithms are not hard to understand. If the machine acts unjustly, and all its code and logs are public record, then you can figure out why it acted unjustly. I'll grant that so-called emergent behavior is tricky to understand, and software like neural networks creates emergent behavior, but:

(a) We're not necessarily talking about employing emergent behavior for managerial purposes; often much more deterministic tools do that job. And managerial work that seems less deterministic arguably benefits more from wikification anyway.

(b) I'd trust journalists trained in machine learning to reverse-engineer a computer's unjust emergent behavior before I'd trust a human manager to disclose their own unjust behavior. We're talking about an awfully corrupt social stratum here: political appointees, highly promoted FBI agents, etc.
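
Here's roughly what I mean by reverse engineering, as a crude Python sketch: feed the black box paired inputs that differ only in a protected attribute and count how often the outcome flips. (The "opaque" model here is a deliberately biased stand-in I made up for illustration, not anyone's real system.)

    # Crude outside-in audit: probe a black-box decision function with paired
    # inputs that differ only in a protected attribute, and measure how often
    # the decision flips. Everything here is a hypothetical toy.
    import random

    def audit_paired_inputs(decide, cases, protected_field, values):
        flips = 0
        for case in cases:
            outcomes = set()
            for v in values:
                probe = dict(case, **{protected_field: v})
                outcomes.add(decide(probe))
            flips += len(outcomes) > 1
        return flips / len(cases)

    # A deliberately biased toy model standing in for the opaque system.
    def opaque_decide(applicant):
        return applicant["score"] > 50 and applicant["group"] != "B"

    cases = [{"score": random.randint(0, 100), "group": "A"} for _ in range(1000)]
    print(audit_paired_inputs(opaque_decide, cases, "group", ["A", "B"]))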

I'd agree that privacy is a means, not an end, but transparency, a.k.a. no privacy for powerful organizations, is important for the same ends. You cannot enforce privacy except through encryption or transparency, or ideally both. And you cannot create major social changes such as transparency without privacy.
posted by jeffburdges at 9:22 AM on November 12, 2013


Will I feel dirty after reading it?

I mostly felt confused, because it takes him a long time to make a point that I didn't get a lot out of. I'm 100% on board with his statement of the problem and its importance. And then he kind of veers off into an incoherent rant and I got nothing out of it. Maybe I'm not smart enough to understand the fine article.

We desperately need some new way to think about privacy in the digital age, because what we're doing now is not working. With all due respect to European-style privacy laws, they're holding back the inevitable and, in the worst case, enabling a police state. The most provocative argument I've read for a new way is David Brin's The Transparent Society, which jeffburdges is sort of alluding to above. But it's got problems too.
posted by Nelson at 9:22 AM on November 12, 2013 [1 favorite]


I'd trust journalists trained in machine learning to reverse engineer a computer's unjust emergent behavior...

The kind of science journalism that tells us how cold fusion of junk DNA will cure cancer with God particles?
posted by justsomebodythatyouusedtoknow at 10:30 AM on November 12, 2013


I'm also having a hard time understanding the article's main point. So instead, I'm going to comment on some of the challenges I see in online privacy.

I have to strongly disagree with the article's assertion that "The problem was recognized early on, and little was done about it". There have been tons of research papers and systems looking at ways of protecting information, ranging from private information retrieval, to distributed systems (you keep your data on your servers), to access control systems, to homomorphic encryption systems where you can run mathematical operations on encrypted data.
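
To give a flavor of that last one, here is a toy example of additively homomorphic encryption, assuming Python and the python-paillier package (phe); the numbers are arbitrary, and real deployments are far more involved.

    # Toy illustration of (additively) homomorphic encryption using the
    # python-paillier package. Assumes `pip install phe`; values are arbitrary.
    from phe import paillier

    public_key, private_key = paillier.generate_paillier_keypair()

    # A client encrypts its values before handing them to a server.
    enc_a = public_key.encrypt(17)
    enc_b = public_key.encrypt(25)

    # The server can add the encrypted values without seeing the plaintexts.
    enc_sum = enc_a + enc_b

    # Only the client, holding the private key, can read the result.
    assert private_key.decrypt(enc_sum) == 42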

Very few of these ideas have been adopted by industry. Plus, even companies whose sole purpose was end-user privacy have failed (I'm thinking of Zero-Knowledge Systems in particular). Developers and end-users voted with their feet.

This lack of adoption happened for a number of reasons. One is poor system performance. Many of the research ideas above just don't work well in practice.

Another is that these approaches often make debugging and customer support much harder (it's not easy to debug a system or do a password reset if the data is under someone else's administrative domain).

Third, there's also the hassle of running and maintaining your own servers. Beyond the expertise needed to do so, there's the issue of keeping the software up to date, defending against attacks, etc. In many proposed solutions, you become your own system administrator. Oh yay.

Fourth, there's the potential lack of utility and usability if everything is opt-in rather than shared by default. A good example here is caller ID. If, by default, caller ID were turned off and people had to opt in (as some privacy advocates argued it should have been), then it would never have been adopted, and we'd have lost a useful service.

I know the above makes it sound like I am anti-privacy, but that isn't the case. I actually do a lot of research in computer privacy, and I give a lot of talks about privacy too (here are pointers to recent talks I gave at PopTech, IAPP, and the CBS Morning Show).

One big challenge here is that privacy is actually a number of different and distinct concepts, but they all fall under the generic term "privacy", and they each require different kinds of mechanisms. What's needed to offer control between you and your friends and co-workers (personal privacy) is very different from what's needed between you and Target or you and the government. Privacy also encompasses ideas like control and feedback over what information is being collected and how it is used, anonymity, being left alone, avoiding undesired social obligations, large scale data privacy (big data), not being annoyed by marketers and spam, projecting a desired persona, and more. These different concepts often make it hard for people to talk to one another, and for solutions to generalize.

Another big challenge is that there are often a large number of stakeholders. For example, my team has been studying smartphone privacy for the past few years. The main argument I make (which you can see in the PopTech slides) is that these kinds of ubiquitous computing technologies can offer tremendous benefit for society (reduced carbon footprints, healthcare, safety, information finding), but only if we can address legitimate privacy concerns.

In just the smartphone space, the stakeholders include app developers, app markets, the operating system developers, end-users, third parties (mostly advertisers, but also social networking sites), and public policy advocates. We've found that many developers don't know what they should be doing with respect to privacy (there are best practices for security, less so for privacy), and often aren't aware of what information their libraries use (again, advertising, social networking, etc.).

We've also found that many end-users actually are OK with data sharing if there is a clear purpose (even behavioral advertising is OK for most people, as long as it's up front that that's what's going on). App markets and OS people are also trying to figure out what they should be doing to help end-users, but in a way that doesn't significantly damage the market (e.g. if you remove all behavioral advertising, then it could kill off a lot of developer teams, leading to fewer interesting apps, which would be a net loss for everyone).

So rather than vague solutions like "sabotage the system" or "provocative digital systems", I'd suggest a more fruitful approach: separate these different dimensions of privacy and look at practical solutions to the specific challenges that stakeholders are having. These approaches can be guided by high-level principles, but should also start with the assumption that companies want and need different things than governments, that social networking sites are different from smartphone apps, and so on. Each has a different ecosystem, with different players and different knobs and levers, and understanding and using these points of leverage will be a more effective approach to addressing the problems we face, as individuals, as organizations, and as a society.
posted by jasonhong at 10:34 AM on November 12, 2013 [3 favorites]


This is the second time I've read the article and I'm still not sure what the author's getting at. Both readings triggered my memory of the last story in Asimov's I, Robot, though. The point he's trying to make may be that somehow we're creating a global, ad hoc administrative machine (government?) that can manage everything we hoped for but is far too complex to understand, much less manipulate safely. This machine runs on privacy the way slash-and-burn runs on trees.
posted by klarck at 11:30 AM on November 12, 2013 [1 favorite]


A simple website built to route insurance sales didn't work right on its release date. There's no way we could build a complex decision-making engine that we could trust _without_ all the logic being absolutely transparent and comprehensible to _everybody_.
posted by amtho at 12:06 PM on November 12, 2013


Proposal: In order to use a massive multi-site system like Google without your deep data being sold at auction to marketers you allow one other user, at random, to see your Google search history.
posted by Potomac Avenue at 2:04 PM on November 12, 2013


There is nothing hard about making the logic transparently available to everyone. Anyone who doesn't study the relevant mathematics and computer science depends upon journalists, and investigative journalism actually works really well when the relevant agencies actually answer their FOIA requests.

There is a difficult aspect of transparency in simultaneously allowing users to see their own data, so that they can in principle understand why the machine did what it did, while not allowing everyone to see all user data. Ideally, you achieve this by never even giving the service's machines access to the user data, just letting them deal with end-to-end encrypted chunks, while the user's machine submits suitably anonymized analysis. There is not necessarily any inherent statistical limitation that results from doing it this way; it just requires more careful thought.
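
One classic way a user's machine can submit "suitably anonymized analysis" is randomized response, sketched below in plain Python; the 30% true rate and the coin-flip parameters are just for illustration. The service recovers the aggregate rate without ever learning any individual's true answer.

    # Minimal randomized-response sketch: each user's machine adds noise
    # locally, so the service only ever sees plausibly deniable answers.
    # The 30% true rate below is an invented example.
    import random

    def randomized_response(true_answer: bool) -> bool:
        # With probability 1/2 report the truth, otherwise report a coin flip.
        if random.random() < 0.5:
            return true_answer
        return random.random() < 0.5

    def estimate_true_rate(reports):
        # The reported rate r relates to the true rate p by r = 0.5*p + 0.25,
        # so p can be recovered in aggregate without identifying anyone.
        r = sum(reports) / len(reports)
        return max(0.0, min(1.0, 2 * r - 0.5))

    population = [random.random() < 0.3 for _ in range(100_000)]  # 30% "yes"
    reports = [randomized_response(ans) for ans in population]
    print(round(estimate_true_rate(reports), 3))  # close to 0.3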

There isn't, afaik, any benefit in letting some random user see your data. Just hire a private data investigator to tell you why the service acted the way it did. Any systemic strange behavior should be detected by journalists using the system's internal, highly anonymized data.
posted by jeffburdges at 3:23 PM on November 12, 2013


From the article:
"A non-interpretable process might follow from a data-mining analysis which is not explainable in human language. Here, the software makes its selection decisions based upon multiple variables (even thousands) … It would be difficult for the government to provide a detailed response when asked why an individual was singled out to receive differentiated treatment by an automated recommendation system. The most the government could say is that this is what the algorithm found based on previous cases."

And by the way, populate said database via natural language processing and you end up with truly inscrutable logic. Transparency is useless to the individual.
posted by klarck at 3:41 PM on November 12, 2013


We already deal with exactly that problem in a criminal context, specifically all the wrongful convictions based upon DNA evidence. If you're going to do that, you need to treat the evidence as suspect, basically just a hunch. It's fine to direct an investigation that way, but it's not necessarily even probable cause for a warrant. Aside from the criminal context, I'd expect insurance companies will exploit this far more egregiously than most government agencies, except maybe welfare, unemployment, and tax agencies.

Also, I'd hardly call, say, a Bayesian text processor "inscrutable logic"; maybe an untrained individual cannot decipher it, but the distributions make perfect sense. If the system discriminates against a particular minority based upon their word choice, you can weed out and address that fact infinitely more easily than you can fire all the officials who arbitrarily make prejudiced decisions.
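
To see what I mean about the distributions making sense, here's a tiny sketch using scikit-learn's multinomial naive Bayes; the four-document corpus is obviously made up, but the point is that every word's learned weight is sitting right there to be inspected.

    # Hedged sketch: a Bayesian text classifier's per-word weights can simply
    # be printed and inspected. Assumes scikit-learn; the corpus is invented.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["approve the claim", "approve promptly", "deny the claim", "deny appeal"]
    labels = [1, 1, 0, 0]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)
    model = MultinomialNB().fit(X, labels)

    # Every word's learned log-probability under each class is available,
    # so undue weight on a particular term is visible on inspection.
    for word, idx in sorted(vectorizer.vocabulary_.items()):
        print(word, model.feature_log_prob_[:, idx])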

We're going to implement all this technological decision making regardless; it's just too damned efficient not to. It's both privacy and transparency that make the difference between oppression and progress.
posted by jeffburdges at 4:10 PM on November 12, 2013


The Government's Secret Plan to Shut Off Cellphones and the Internet, Explained

"This month, the United States District Court for the District of Columbia ruled that the Department of Homeland Security must make its plan to shut off the internet and cellphone communications available to the American public. You, of course, may now be thinking: What plan?!"
posted by jeffburdges at 6:09 AM on November 26, 2013




This thread has been archived and is closed to new comments