"the how of politics is as important as the what of politics"
July 27, 2014 7:03 PM   Subscribe

Evgeny Morozov, for The Guardian: The rise of data and the death of politics
This "smartification" of everyday life follows a familiar pattern: there's primary data – a list of what's in your smart fridge and your bin – and metadata – a log of how often you open either of these things or when they communicate with one another. Both produce interesting insights: cue smart mattresses – one recent model promises to track respiration and heart rates and how much you move during the night – and smart utensils that provide nutritional advice. In addition to making our lives more efficient, this smart world also presents us with an exciting political choice. If so much of our everyday behaviour is already captured, analysed and nudged, why stick with unempirical approaches to regulation? Why rely on laws when one has sensors and feedback mechanisms? If policy interventions are to be – to use the buzzwords of the day – "evidence-based" and "results-oriented," technology is here to help.

via Sore Eyes


Tim O'Reilly: Open Data and Algorithmic Regulation
Regulations, which specify how to execute those laws in much more detail, should be regarded in much the same way that programmers regard their code and algorithms, that is, as a constantly updated toolset to achieve the outcomes specified in the laws.

Increasingly, in today’s world, this kind of algorithmic regulation is more than a metaphor. Consider financial markets. New financial instruments are invented every day and implemented by algorithms that trade at electronic speed. How can these instruments be regulated except by programs and algorithms that track and manage them in their native element in much the same way that Google’s search quality algorithms, Google’s “regulations”, manage the constant attempts of spammers and black hat SEO experts to game the system?

Revelation after revelation of bad behavior by big banks demonstrates that periodic bouts of enforcement aren’t sufficient. Systemic malfeasance needs systemic regulation. It’s time for government to enter the age of big data. Algorithmic regulation is an idea whose time has come.
A brief exchange with Tim O’Reilly about “algorithmic regulation”

SSRN: Nudges.gov: Behavioral Economics and Regulation and Nudging Legally - On the Checks and Balances of Behavioural Regulation
posted by the man of twists and turns (28 comments total) 20 users marked this as a favorite
 
I guess the question is, now that we're capable of regulating things more precisely, how does this change the way we frame policy issues?
posted by zscore at 7:39 PM on July 27, 2014


This post is a wonderful point-counterpoint, and I wonder if OP chose Tim O'Reilly as a contrasting voice to Evgeny Morozov intentionally, since Morozov wrote a blistering takedown of Tim O'Reilly less than a year ago.
posted by cinoyter at 8:00 PM on July 27, 2014


I guess the question is, now that we're capable of regulating things more precisely, how does this change the way ~~we frame policy issues~~ the monied class buy elections, subsidies, tax breaks and legislation?

FTFY.
posted by ZenMasterThis at 8:02 PM on July 27, 2014 [9 favorites]


“Computer says no … <coughs>”
posted by scruss at 8:10 PM on July 27, 2014 [1 favorite]


As a method of rapidly revising the income tax act to react to newly discovered loopholes, the effects-based approach has promise. However...

This "measurement revolution" seeks to quantify the efficiency of various social programmes, as if the rationale behind the social nets that some of them provide was to achieve perfection of delivery. The actual rationale, of course, was to enable a fulfilling life by suppressing certain anxieties, so that citizens can pursue their life projects relatively undisturbed...However, as long as democracy is irreducible to a formula, its composite values will always lose this battle: they are much harder to quantify.

Looking at failures in government as optimization problems implies being able to identify which variables ought to be optimized. It also implies the existence of a disinterested technocracy, which is folly.
posted by justsomebodythatyouusedtoknow at 8:14 PM on July 27, 2014 [9 favorites]


All public policies affecting ordinary citizens should be tested by lottery, with the desired outcomes measured in everyone who applies to the lottery and compared between those who won and those who didn't. People would find randomization to groups repugnant, but millions of people love lotteries.

For example, if you want to convince me that school vouchers work, then offer a lottery for sign-up, draw lots, track the progress of the winners and the losers, and show me that those who won the lottery do better. It's such a simple idea, I don't know why it isn't standard procedure.
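A minimal sketch of the lottery evaluation described above. The applicant count, outcome model, and effect size are all invented for illustration; the point is only the mechanics of randomize, track, compare:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(0)

# Hypothetical: 1,000 families apply for vouchers; the lottery admits half.
applicants = list(range(1000))
random.shuffle(applicants)
winners, losers = applicants[:500], applicants[500:]

# Fabricated outcome model purely for illustration: a test score with a
# small assumed treatment effect for lottery winners.
def outcome(won):
    return random.gauss(72 + (2 if won else 0), 10)

won_scores = [outcome(True) for _ in winners]
lost_scores = [outcome(False) for _ in losers]

# Two-sample comparison: difference in means and its standard error.
diff = mean(won_scores) - mean(lost_scores)
se = sqrt(stdev(won_scores) ** 2 / len(won_scores) +
          stdev(lost_scores) ** 2 / len(lost_scores))
print(f"effect estimate: {diff:.2f} points (z = {diff / se:.2f})")
```

Because the lottery randomizes who wins, the simple difference in means is an unbiased estimate of the program's effect, which is the whole appeal of the proposal.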
posted by Mental Wimp at 8:18 PM on July 27, 2014


and I wonder if OP chose Tim O'Reilly as a contrasting voice to Evgeny Morozov intentionally, since Morozov wrote a blistering takedown of Tim O'Reilly less than a year ago.

On Metafilter, even!
posted by the man of twists and turns at 8:24 PM on July 27, 2014


This isn't how law and regulation works. Because it involves humans.
posted by Ironmouth at 8:34 PM on July 27, 2014 [1 favorite]


Measurement isn't a 21st century invention. Governments have been tracking data for hundreds of years if not longer and politics seems to have survived. Social science isn't a new invention either.

Historians probably have more to tell us about these things than futurists (and anti-futurists) do.
posted by leopard at 8:51 PM on July 27, 2014 [1 favorite]


Big banks have such a death grip on the existing political system as to prevent any attempt to legislate algorithmic enforcement of regulations from being enacted.
posted by newdaddy at 9:11 PM on July 27, 2014


Data can't set priorities.
posted by chrominance at 9:46 PM on July 27, 2014 [5 favorites]


This "smartification" of everyday life follows a familiar pattern: there's primary data – a list of what's in your smart fridge and your bin

Every fucking time with the smart fridge as the example of how our wired|big data|smart data|$insert_meme_du_jour world will impact you personally.
posted by MartinWisse at 10:55 PM on July 27, 2014 [1 favorite]


This article feels all over the place, to the point that I can't really grok its point. Is it claiming that technology can never fully replace politics? Well of course that's the case. At the very least, even if we had a supercomputer which could do anything we wanted, we would still need to tell it what we wanted. And that's actually the hardest part. Well.

So politics comes down to two things

1)How you prioritise different things in life
2)How you accomplish your priorities.

Now ideally 1 should be where the arguments are, i.e. I think that people should all have free healthcare for ideological reasons, you think they shouldn't for ideological reasons. In practice this isn't the case, and a lot of political arguments end up being about the practical. Lots of arguments for socialised healthcare are that it is, or can be, cheaper than the current system, which is a very "methods"-based argument. These kinds of arguments are frustrating, because politics really shouldn't enter into this: if your genuine empirical question is "what is the cheapest way, at a national level, for people to get healthcare?", then that has an empirical answer. Said answer might not be obvious, but it is not something that is actually going to be settled by people having an argument; it's going to be settled by people doing experiments!

Of course many political arguments boil down to people being unable, unwilling, or afraid of expressing what they actually want because being clear about ideology can lose votes. If you tell everyone that you think healthcare should be as cheap as possible then pretty much no-one will disagree, but if you say that you believe in it as a fundamental right some might start disagreeing.

At the highest, purest level, the idea of using technology to try and make the answer to these questions as clear as possible is a fairly good goal, which I am all for.
posted by Cannon Fodder at 1:23 AM on July 28, 2014 [2 favorites]


I see parallels with the eugenics movement of the 19th/20th centuries. Lots of data (testing), lots of science (some of statistics was invented just to investigate intelligence and class and race), lots of idealism (we can make better policy and end poverty.)

Meanwhile, we can't even agree on policies based on very clear things: climate change, minimum wages, phonics, harm reduction in drugs. Let alone complex things, like healthcare systems or taxation.
posted by alasdair at 2:28 AM on July 28, 2014 [1 favorite]


The point that hit hardest for me was the idea that, having privatised all other assets, selling data would become the next cash cow for governments. I had a moment of thinking "surely they wouldn't go there", before kicking myself for being such a sucker.
posted by pulposus at 4:25 AM on July 28, 2014 [1 favorite]


Politics has always been about who gets what, when and how. I don't see how data fundamentally changes this, considering that humans economize on attention and do not parse shit-tons of data to find out what their interests are.
posted by MisantropicPainforest at 5:02 AM on July 28, 2014


, lots of idealism (we can make better policy and end poverty.)

And the 'solving' of poverty is a perfect example. We know exactly what works. It has been demonstrated time and time again. Direct cash transfers to poor people. Just give them money. That helps impoverished people more than anything.

However, in terms of actual politics, this knowledge changes absolutely nothing.
posted by MisantropicPainforest at 5:05 AM on July 28, 2014 [2 favorites]


Or consider a May 2014 report from 2020health, another thinktank, proposing to extend tax rebates to Britons who give up smoking, stay slim or drink less. "We propose 'payment by results', a financial reward for people who become active partners in their health, whereby if you, for example, keep your blood sugar levels down, quit smoking, keep weight off, [or] take on more self-care, there will be a tax rebate or an end-of-year bonus," they state. Smart gadgets are the natural allies of such schemes: they document the results and can even help achieve them – by constantly nagging us to do what's expected.

And the 'solving' of poverty is a perfect example. We know exactly what works. It has been demonstrated time and time again. Direct cash transfers to poor people. Just give them money. That helps impoverished people more than anything.

No, don't you understand? What will defeat poverty will be metrics! And ankle monitors! For everyone! If you're poor, your monitor will nudge you out of bed at 6:30 to start looking for work or getting on the road to your second or third job; if you're unemployed, the monitor will track how many jobs you apply for online; it will track and sanction the consumption of "bad" foods and there will be constant bickering between McDonalds (which wants poor people to eat its food) and the government; it will track and sanction the failure to walk enough steps; it will track and sanction the failure to regulate blood pressure and cholesterol; it will track all expenditures and flag anything that looks like you're getting above your station or spending money on "foolish" indulgences. Labor camps for all! And of course, if you're not a "bad" (lazy, cake-eating, self-indulgent) Poor, why would you object to that? Shouldn't you be getting up early to clean your 'umble abode? Shouldn't you be regulating your very biochemical processes to make you a better, more obedient, less-self-having citizen?

In the future, only criminals will have high blood pressure.
posted by Frowner at 6:57 AM on July 28, 2014 [4 favorites]


Technology will always overcome folly. This will help us in much the same way that the War To End All Wars ended all wars. See, they came up with this...um...this here machine gun, and it made warfare so horrendous that they stopped...um...something. And gas. And H-bombs. Okay, maybe the drones will do it. Okay, they aren't talking about killing. This will help us the same way the lock and key helped us to stop theft. Okay, jails. Ah crap.

Keep trying, I'll wait.
posted by mule98J at 8:41 AM on July 28, 2014


Morozov's article is... difficult to read, in more than one sense. At the base, he's not focused enough on the non-tech industry desire to move toward evidence-based decision-making at the government level--that does harm to some of his judgments. Namely, "the nudging state" isn't just a money-hungry tech sector viewing government as a big, potential market.

Let's take toxicology, for instance, and the regulation of drugs and industrial chemicals based on it. There's been a large toxicological paradigm shift happening in the last decade, one that has been many decades in the making. Everyone calls it Tox21 now (short for 'toxicity testing for the 21st century'), but the entire thing boils down to making risk assessment based on data pulled from lots of very cheap, very high-throughput, very human-relevant assays that reflect a solid understanding of the biological mechanisms that end up causing undesirable effects when disturbed by chemical exposure. It was articulated in a big way in 2007 and has since become A Big Deal. Everyone agrees it's a good idea to move this way, in terms of: the research that we prioritize; the regulations that we revise; the budgeting we commit to in order to develop and evaluate new toxicity tests; and so on.

But changing regulations? That process is laborious and slow--by design. The procedural elements to revising regulations are very old-fashioned on purpose, because updating regulatory requirements already has (almost globally) a statutory element that asks, can you show that this revision isn't going to accidentally cost more/take longer/do more bad than good because it creates serious unintended consequences? So even when something like Tox21 comes up--something that pretty much everyone is on board with for many reasons--the regulations aren't going to change overnight.

O'Reilly seems to recognize that. Morozov doesn't.
posted by late afternoon dreaming hotel at 10:47 AM on July 28, 2014


This article is all over the place, with too many muddy half-truths that are worse than lies mixed in with momentary flecks of insight to be of any real interest.

"By assuming that the utopian world of infinite feedback loops is so efficient that it transcends politics, the proponents of algorithmic regulation fall into the same trap as the technocrats of the past."


In summary, the article reads as if Morozov thinks of an algorithm as a magical black box that spits out inevitably stupid answers. Is there anyone out there who has ever touched code who really thinks of an algorithm as "a utopian world of infinite feedback loops"? Who thinks that machine learning is anything other than the opposite of being "so efficient that it transcends politics"?

No, of course not; all systems are designed by people, and all systems have their effects on people; this kind of fear-mongering points the finger at the wrong thing, the technology, rather than the people controlling the technology. Morozov himself fulfills his own prophecy by setting up the algorithm in "algorithmic governance" as some sort of growing monolithic enemy to be necessarily defeated. The real issue is still in politics, as ever; still about who controls data, who gets to decide these things.

"Such systems, however, are toothless against the real culprits of tax evasion – the super-rich families who profit from various offshoring schemes or simply write outrageous tax exemptions into the law. Algorithmic regulation is perfect for enforcing the austerity agenda while leaving those responsible for the fiscal crisis off the hook. To understand whether such systems are working as expected, we need to modify O'Reilly's question: for whom are they working? If it's just the tax-evading plutocrats, the global financial institutions interested in balanced national budgets and the companies developing income-tracking software, then it's hardly a democratic success."

This is kind of a hilarious passage. TL;DR: "One specific type of system with bad data gives results that are not good enough." Morozov's conclusion: "I have proved that algorithmic regulation is a problem."

"... those injustices would still be nowhere to be seen, for they are not the kind of stuff that can be measured with a sensor."

And oh, come on. This is another technophobic argument of "people, not numbers", in which concepts like measurements or statistics are falsely juxtaposed against injustices. Is Morozov going to also say, "we can't measure poverty, so the WHO's research studies are irrelevant?" Is Morozov going to say, "police violence is never truly visualized, so citizen journalism is pointless?" Is Morozov going to say, "the water height of a river does not measure the health of agricultural production, so we don't need those sensors or information?"

The whole point of gathering data and sensors is not to blindly believe that a sensor directly measures some point, but to synthesize data using statistics and make informed judgments.

"For all those tracking apps, algorithms and sensors to work, databases need interoperability – which is what such pseudo-humanitarian organisations, with their ardent belief in open data, demand."

This is kind of a laughable sentence. Databases being interoperable and a call for open data are vaguely related but distinct concerns. Organizations want open data, want to make it easy to file FOIA requests, want government organizations with their own APIs so that it's easy for citizens and the public to see what is going on. The interoperability of databases for the purposes of tracking apps and algorithms is a technological problem of knitting together infrastructure. Open data is about political transparency; "interoperability" is something else entirely. Yet Morozov links the two together in the same breath as if it were natural and obvious that tracking apps and open data are somehow related.

I feel like I'm reading arguments that are hazy and don't quite fit together; it's as if someone argued that "smartphones need to have cameras because of our government's increasing need to perform surveillance on us; as a result, we are getting used to cameras being everywhere." Smartphone cameras and government surveillance are ultimately totally different things. Linking the two together feels like it could make sense, but it really doesn't.

Any of his more cogent points in the end can really be just covered by Gilles Deleuze's Postscript on the Societies of Control.

Otherwise, the article is pretty vapid and empty of real insight, just cherry-picked thoughts strung together with wikipedia-research about algorithms.
posted by suedehead at 12:32 PM on July 28, 2014 [2 favorites]


This is kind of a hilarious passage. TL;DR: "One specific type of system with bad data gives results that are not good enough." Morozov's conclusion: "I have proved that algorithmic regulation is a problem."

No, the issue he points out is that the system itself is flawed - the algorithm is tuned to seek out people who are living outside of their stated means, when the real problem is that the elite have rigged the system so they don't have to pay their fair share.

That's a huge problem, and not only is it one that an algorithm can't solve, it's one that algorithms can make worse, by covering up the problem. Yay, we're catching all the petty scofflaws who aren't disclosing all their income - and ignoring the bigger issues of inequality because we've got this one small win.
posted by NoxAeternum at 3:31 PM on July 28, 2014


No, the issue he points out is that the system itself is flawed - the algorithm is tuned to seek out people who are living outside of their stated means, when the real problem is that the elite have rigged the system so they don't have to pay their fair share.

I agree with your explanation of the algorithm, and this algorithm is flawed for this game, yes, but that does not immediately dismiss all algorithms. This is not ammo for an attack against algorithmic regulation. This is an explanation that a larger issue is present, and algorithmic regulation, in this case, wouldn't be aware of that larger issue.

The presence of loopholes that create offshore tax havens doesn't mean that we should immediately dismiss all tax laws and taxes, right? We'd say: "Let's change the tax laws, based on who they benefit." The same deal goes for any algorithm and any system. "Algorithmic regulation" is just some buzzword that provides a convenient scapegoat for these issues.

The fault is not with that amorphous entity but, first of all, with the absence of robust technology policy on the left – a policy that can counter the pro-innovation, pro-disruption, pro-privatisation agenda of Silicon Valley. In its absence, all these emerging political communities will operate with their wings clipped. Whether the next Occupy Wall Street would be able to occupy anything in a truly smart city remains to be seen: most likely, they would be out-censored and out-droned.

It's this penultimate paragraph that I think is pretty expressive of the incoherence of this article. Placing "pro-innovation, pro-disruption, and pro-privatization" against a vague, unexplained leftist technology policy is the worst kind of political simplification possible, collapsing a fertile landscape of potential collaborators and advocates into the usual tug-of-war of Left Versus Right.

What about open-source advocates? Swartz/Lessig-like technology activists? What about startups that want to make encrypted, secure, open-source email clients -- and earn some money doing it, too? What about the smartphone? What about Ustream or Livestream or Twitter or Celly, all for-profit startups, all pivotal for protest movements, Occupy amongst them?

Now. Having said all that,

O'Reilly's own analysis is pretty flawed too, most of all because he conceives of politics as an optimization problem given a series of existing data points. Rather, he should consider politics as an agent-based simulation, in which all agents operate under a series of given rules, and politics as a whole is a representation of the emergent behaviors of these agents, responding to each other. Or in other words, politics is a game-theory problem, in which the rules themselves have secondary and tertiary effects, like the Keynesian beauty contest.
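The Keynesian beauty contest mentioned above has a standard toy form, the "guess 2/3 of the average" game, in which each agent's best move depends on what every other agent does. A hypothetical sketch; the population size and the mix of reasoning depths are invented for illustration:

```python
import random

random.seed(1)

# "Guess 2/3 of the average": each player picks a number in [0, 100];
# whoever is closest to 2/3 of the group average wins. Fully rational
# players iterate this reasoning all the way down to 0; real populations
# don't, which is exactly why the "right" answer depends on the agents.

def level_k_guess(k):
    """A level-k reasoner: level 0 guesses at random; level k
    best-responds to an assumed population of level k-1 reasoners."""
    if k == 0:
        return random.uniform(0, 100)
    return (2 / 3) ** k * 50  # the expected level-0 average is 50

# Invented population: mostly shallow reasoners, a few deeper ones.
population = [0] * 40 + [1] * 30 + [2] * 20 + [3] * 10
guesses = [level_k_guess(k) for k in population]
average = sum(guesses) / len(guesses)
target = 2 / 3 * average
print(f"average guess: {average:.1f}, winning target: {target:.1f}")
```

The analogy to regulation: a regulator optimizing against yesterday's observed behavior is playing against agents who are simultaneously modeling the regulator, so the target keeps moving as the rule changes.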

Part of the harm of an 'algorithmic regulation' system is that regulation itself would be black-boxed, very much in the way that an artificial neural net is a not-really-understandable mass of links and weights. The problem with a black-boxed system is that you don't understand why things are the way they are, because the logic inside has become abstracted/encapsulated. And, like Deleuze argues in the paper I linked before, you internalize these mechanisms of control into yourself.

Externalized laws – "you should not cross the street at a red light" – may not be so effective at stopping people from jaywalking, but a law has other purposes: one of them is to signal to other people that a law is a law, to let those laws be discussed, and to make it possible to understand why the laws are the way they are, so that you can adjust your behavior accordingly. Imagine having a detection system that grabs you when you're about to jaywalk, but before you have, and penalizes you accordingly. It would be outrageous, unfair, and terrible (perhaps not unlike our growing police presence here in the US…).

Of course, unlike Morozov, I don't think that this invalidates "algorithmic regulation" outright. You could have algorithmic regulation that is clear and fair, and you can have "non-algorithmic regulation" that is convoluted and kafkaesque.
posted by suedehead at 4:35 PM on July 28, 2014




Erg, posting fail, that's the fpp again.
posted by eviemath at 5:16 AM on July 29, 2014


I agree with your explanation of the algorithm, and this algorithm is flawed for this game, yes, but that does not immediately dismiss all algorithms. This is not ammo for an attack against algorithmic regulation. This is an explanation that a larger issue is present, and algorithmic regulation, in this case, wouldn't be aware of that larger issue.

The problem is that those larger issues always exist. Moreover, the danger is that if you present an easy, clean "process", it will be overly attractive, no matter how poorly it actually works. Look at what's happening with evaluating teachers - you're seeing this search for some sort of magic calculation that can determine how well a teacher performs, when it's quite clear that actually evaluating a teacher is much more complex, and deals with issues that are not easily quantified.

You say that you can have an algorithmic system that is clear and fair, but I would argue that this is the sort of thing that Anatole France called out with his famous quote about the majesty of the law. "Fair", often, is in the eye of the beholder.
posted by NoxAeternum at 7:34 AM on July 29, 2014


Let's be clear. You can't have self-updating regulations because of notice. Due process requires notice of the regulations in order to be held to them.
posted by Ironmouth at 2:14 PM on July 29, 2014 [1 favorite]


BLDGBLOG: Right To Light
Parts of Copenhagen are being turned into an outdoor night-lighting experiment, aiming to determine exactly how—even to what extent—cities should be illuminated at night, not only to use resources most efficiently but to increase urban security.

A mix of context-sensitive and remotely controlled lighting systems will be deployed, and each light will have its own IP address for outside monitoring. "Sensors that track traffic density, air quality, noise, weather conditions and UV radiation will also be fitted throughout the site to see what sort of environment the lights are operating in," New Scientist explains. "All this will help work out which lights are making the biggest difference in terms of lowering costs and emissions."
posted by the man of twists and turns at 12:30 PM on August 14, 2014




This thread has been archived and is closed to new comments