Eigendemocracy: crowd-sourced deliberative democracy
June 23, 2014 4:56 AM

Scott Aaronson on building a 'PageRank' for (eigen)morality and (eigen)trust - "Now, would those with axes to grind try to subvert such a system the instant it went online? Certainly. For example, I assume that millions of people would rate Conservapedia as a more trustworthy source than Wikipedia—and would rate other people who had done so as, themselves, trustworthy sources, while rating as untrustworthy anyone who called Conservapedia untrustworthy. So there would arise a parallel world of trust and consensus and 'expertise', mutually-reinforcing yet nearly disjoint from the world of the real. But here's the thing: anyone would be able to see, with the click of a mouse, the extent to which this parallel world had diverged from the real one."
They'd see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe. The deniers and their think-tanks would be exposed to the sun; they'd lose their thin cover of legitimacy. It should go without saying that the same would happen to various charlatans on the left, and should go without saying that I'd cheer that outcome as well.

Some will object: but people who believe in pseudosciences—whether creationists or anti-vaxxers or climate change deniers—already know they're in a minority! And far from being worried about it, they treat it as a badge of honor. They think they're Galileo, that their belief in spite of a scientific consensus makes them heroes, while those in the giant central component of the trust graph are merely slavish followers.

I admit all this. But the point of an eigentrust system wouldn't be to convince everyone. As long as I'm fantasizing, the point would be that, once people's individual decisions did give rise to a giant connected trust component, the recommendations of that component could acquire the force of law. The formation of the giant component would be the signal that there's now enough of a consensus to warrant action, despite the continuing existence of a vocal dissenting minority—that the minority has, in effect, withdrawn itself from the main conversation and retreated into a different discourse. Conversely, it's essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority's objections might be enough to veto action, even if it was numerically small. This is still democracy; it's just democracy enhanced by linear algebra.
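(For the mechanically minded: the fixed point Aaronson is gesturing at is the leading eigenvector of a trust matrix, the same computation as PageRank. A minimal sketch in Python, with a made-up four-person trust matrix standing in for his hypothetical network:)

    # Minimal eigentrust sketch: each person's score is the trust-weighted
    # average of the scores of those who vouch for them; the fixed point is
    # the leading eigenvector of T. The matrix is invented for illustration.
    import numpy as np

    # T[i][j] = how much person i trusts person j (each row sums to 1)
    T = np.array([
        [0.0, 0.5, 0.5, 0.0],
        [0.5, 0.0, 0.5, 0.0],
        [0.5, 0.5, 0.0, 0.0],
        [0.3, 0.3, 0.4, 0.0],   # an outsider whom nobody trusts back
    ])

    scores = np.full(4, 0.25)      # start from uniform trust
    for _ in range(100):           # power iteration
        scores = scores @ T        # propagate trust along edges
        scores /= scores.sum()     # renormalize
    print(scores.round(3))         # the outsider's score goes to 0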
Read the whole thing, as they say; the introduction and setup of 'philosophical mechanics' is really interesting as is Aaronson's discussion of the state of climate change policy today:
"In the two previous comment threads, we got into a discussion of anthropogenic climate change, and of my own preferred way to address it and related threats to our civilization's survival, which is simply to tax every economic activity at a rate commensurate with the environmental damage that it does, and use the funds collected for cleanup, mitigation, and research into alternatives. (Obviously, such ideas are nonstarters in the current political climate of the US, but I'm not talking here about what's feasible, only about what's necessary.) As several commenters pointed out, my view raises an obvious question: who is to decide how much 'damage' each activity causes, and thus how much it should be taxed? Of course, this is merely a special case of the more general question: who is to decide on any question of public policy whatsoever?

"For the past few centuries, our main method for answering such questions—in those parts of the world where a king or dictator or Politburo doesn't decree the answer—has been representative democracy. Democracy is, arguably, the best decision-making method that our sorry species has ever managed to put into practice, at least outside the hard sciences. But in my view, representative democracy is now failing spectacularly at possibly the single most important problem it's ever faced: namely, that of leaving our descendants a livable planet. Even though, by and large, reasonable people mostly agree about what needs to be done—weaning ourselves off fossil fuels (especially the dirtier ones), switching to solar, wind, and nuclear, planting forests and stopping deforestation, etc.—after decades of debate we're still taking only limping, token steps toward those goals, and in many cases we're moving rapidly in the opposite direction. Those who, for financial, theological, or ideological reasons, deny the very existence of a problem, have proved that despite being a minority, they can push hard enough on the levers of democracy to prevent anything meaningful from happening.

"So what's the solution? To put the world under the thumb of an environmentalist dictator? Absolutely not. In all of history, I don't think any dictatorial system has ever shown itself robust against takeover by murderous tyrants (people who probably aren't too keen on alternative energy either). The problem, I think, is epistemological. Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn't yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress, or the minds of enough people to tip the scales of history. Because our domination of the earth's climate and biosphere is new and unfamiliar; because the evidence for rapid climate change is complicated and statistical; because the worst effects are still remote from us in time, space, or both; because the sacrifices needed to address the problem are real—for all of these reasons, the deniers have learned that they can subvert the Popperian process by which bad explanations are discarded and good explanations win. If you just repeat debunked ideas through a loud enough megaphone, it turns out, many onlookers won't be able to tell the difference between you and the people who have genuine knowledge—or they will eventually, but not until it's too late. If you have a few million dollars, you can even set up your own parody of the scientific process: your own phony experts, in their own phony think tanks, with their own phony publications, giving each other legitimacy by citing each other. (Of course, all this is a problem for many fields, not just climate change. Climate is special only because there, the future of life on earth might literally hinge on our ability to get epistemology right.)

"Yet for all that, I'm an optimist—sort of. For it seems to me that the Internet has given us new tools with which to try to fix our collective epistemology, without giving up on a democratic society. Google, Wikipedia, Quora, and so forth have already improved our situation, if only by a little. We could improve it a lot more..."
posted by kliuless (45 comments total) 48 users marked this as a favorite
 
Just like eigenfactor, these should definitely exist even if they never see widespread adoption; heck, maybe that'd make them more useful.
posted by jeffburdges at 5:13 AM on June 23, 2014


This has great resonance for me. Ever since I got online and started to discover what complicated things trust, truth, logic and belief were when one had direct contact with multiple strangers, I've wondered how to create an objective way to demonstrate correctness. It's one of the fundamental aims of the Enlightenment, after all, to approach as closely as possible an objective understanding of the situation as it is, and it is one of the fundamental discoveries of the Enlightenment that it's much harder than it looks.

One argument against this sort of approach is that it's a technology-first attempt to fix a human problem, and that this is what happens when technologists try to fix anything. History is mixed on the success of such endeavours, and over the years I've veered all over the spectrum in my trust in technology as the cure for this or that.

However, I'm completely in favour of trying this as an experiment. Empiricism and iteration are most definitely part of any winning strategy, and the Great Problem of bashing some bloody sense into skulls is by no means exempt. One of the really exciting things about being alive today is the sense that the seed crystal for the next revolution in thought could pop out of solution anywhere at any time, and that we'll be right there on the front row when it does.
posted by Devonian at 5:21 AM on June 23, 2014


There's a bunch of philosophy that relates to this. In fact, the whole field of social epistemology is sort of built around how groups can know things.

I'm thinking of Susan Haack's Crossword Puzzle

"Haack introduces the analogy of the crossword puzzle to serve as a way of understanding how there can be mutual support among beliefs (as there is mutual support among crossword entries) without vicious circularity. The analogy between the structure of evidence and the crossword puzzle helps with another problem too. The clues to a crossword are the analogue of a person's experiential evidence, and the already-completed intersecting entries the analogue of his reasons for a belief. She claims that her metaphor has proven particularly fruitful in her own work, and has been found useful by many readers, not only philosophers but also scientists, economists, legal scholars, etc."
posted by Just this guy, y'know at 5:26 AM on June 23, 2014 [3 favorites]


Mathematically, is there a problem of multiple stable equilibria? If for a simple example we take the US political system, where there are two large, roughly equal groups in opposition to each other, then it sounds like his algorithm would define one as trustworthy and one as untrustworthy in a pseudo-random manner (as a result of the 'moral people are people who cooperate with moral people' definition). Does that sound right to those of you who know more about these things?
posted by Ned G at 5:34 AM on June 23, 2014 [1 favorite]


This other recent thread seems relevant here.
posted by eviemath at 5:42 AM on June 23, 2014


Be sure to read down to the bottom; the two updates are meaty.

(Also this is fantastic, thanks!)
posted by Skorgu at 5:44 AM on June 23, 2014


Within physics and chemistry and climatology, the people who think anthropogenic climate change exists and is a serious problem have won the argument—but the news of their intellectual victory hasn't yet spread to the opinion page of the Wall Street Journal, or cable news, or the US Congress...
This is another instance where the both-sides-are-equivalent mindset blinkers the scientist -- especially one newly (it seems) dabbling in a field he presumably considers to not be a hard science (political science). It's not the case that "the news" hasn't spread to "the US Congress" -- it's the case that the news hasn't spread to one half of it. "Eigendemocracy", as he terms it, has in fact already been quite thoroughly done in political science. DW-Nominate (the left-right scale that we always see members of Congress plotted on) is essentially the first eigenvector of the roll-call vote matrix. People have done much the same with political donations, Twitter following, or blog links. All give rise to two main factions, one of which has nearly all the scientists and other empirical workers, and the other of which has conservapedia. I suppose it's the cross to bear for the "soft" sciences to have folks from the hard sciences constantly blundering in and thinking we've never heard of eigenvectors, so I guess I'm just happy to have more folks thinking about these things. I wonder what he thinks of my cool new theory about quantum computation...
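(To make that concrete: the toy roll-call matrix below is invented, and, per the nit downthread, real NOMINATE layers a maximum-likelihood voting model on top of this kind of starting point, but the first "eigen" dimension already separates the two blocs:)

    # Toy roll-call matrix: rows = legislators, cols = bills; +1 yea, -1 nay.
    # The leading singular vector of the centered matrix is the left-right scale.
    import numpy as np

    votes = np.array([
        [+1, +1, -1, -1, +1],   # bloc A
        [+1, +1, -1, -1, -1],
        [+1, +1, -1, +1, +1],
        [-1, -1, +1, +1, -1],   # bloc B
        [-1, -1, +1, +1, +1],
        [-1, +1, +1, +1, -1],
    ])

    centered = votes - votes.mean(axis=0)
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    print((U[:, 0] * S[0]).round(2))   # blocs A and B get opposite signs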
posted by chortly at 5:51 AM on June 23, 2014 [23 favorites]


This is certainly an interesting idea but referring to the phenomenon under examination as "morality" seems like a misnomer to me. It seems like he's designing a metric in which the most common behavior defines what "morality" is, whereas all of the moral systems I can think of involve advocacy of behavior that is explicitly different, often radically different, from whatever is most common.

And besides that, his metric is entirely based upon interpersonal interaction, whereas enough of our traditional concepts of what morality is are sufficiently inward-focused that we speak of someone's "moral fiber".
posted by XMLicious at 5:58 AM on June 23, 2014 [2 favorites]


Mathematically, is there a problem of multiple stable equilibria?

This isn't a multiple equilibria issue for a few reasons: 1) Trivially, there's no dynamics in these formulations, so no concept of an equilibrium. This is all about coming up with linear algebra equations (e.g. that one's rank is the average of the rank of those that link to you, plus a bit of intrinsic quality) that turn out to have unique (or nearly unique, up to arbitrary multiplication by a constant) self-consistent solutions. 2) There's no a priori assumption that there is agreement on any given issue, so part of the trick is coming up with a way to measure the existence of a consensus. He states that he's looking for the presence of a giant component in the network, which in this context would be a trust group that spans the majority of the network. Polarization, in the form of several large communities of roughly equal size, would then be interpreted as not having a clear majority and no "correctness" assignments would be made. The June 22 update at the bottom is clarifying here, too.
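(A sketch of that consensus test, with an invented graph; the 50% cutoff is arbitrary, not anything he specifies:)

    # Consensus as the presence of a giant component in the trust graph.
    import networkx as nx

    G = nx.Graph()
    G.add_edges_from([
        ("laureate", "actuary"), ("actuary", "planner"),
        ("planner", "physicist"), ("physicist", "laureate"),
        ("pundit", "think_tank"),          # small, disconnected cluster
    ])

    components = sorted(nx.connected_components(G), key=len, reverse=True)
    giant = components[0]
    consensus = len(giant) / G.number_of_nodes() > 0.5   # arbitrary threshold
    print(consensus, giant)   # True, the four-node "expert" component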

As with most things like this where the math is so straightforward (to those who do it), the question is the most interesting part. It's easy to read about a polling result, and easy to read about scientific consensus, but those are separate things. Is there a simple way to describe whether an idea is both popular and considered true by experts? The idea he suggests is that people rarely have totally independent derivations of the ideas they hold, so the relevant information can be captured in the question of who people trust on a given subject.

It's not the most novel idea — it's a pretty obvious extension of a lot of network ideas that have been out there in the social sciences for a long while — but it's also a blog post, not a journal article. (And I don't understand why he thinks that it would be a useful number for practical purposes, but that's a separate issue)
posted by Schismatic at 6:05 AM on June 23, 2014


Scott Alexander draws out the DW-Nominate connection explicitly.
posted by Proofs and Refutations at 6:13 AM on June 23, 2014 [2 favorites]


DW-Nominate (the left-right scale that we always see members of Congress plotted on) is essentially the first eigenvector of the roll-call vote matrix.

This doesn't really disagree with you, but I failed a saving throw against pedantry.

Nit: NOMINATE uses that to figure out starting values for a maximum-likelihood model with a probabilistic voting model built into it. If the model iterates for a long time, it might move pretty far from the simple matrix-math starting point.
posted by ROU_Xenophobe at 6:18 AM on June 23, 2014 [1 favorite]


Very interesting (what I've read so far; will keep reading!). I'll go a little farther than chortly, though: it's not even that the news hasn't spread to one half of Congress; it's that they don't care. Our long-term and, at this point seemingly inevitable, wrecking of our climate and environment is not as important to some very powerful people as holding onto their political and economic power. And what do you do about that problem?
posted by rtha at 6:20 AM on June 23, 2014 [4 favorites]


Hey, Schismatic - ok, I kind of get that: he's basically counting connections between nodes to quantify consensus, and in the case where there's one broad consensus I can see how that'd work. What I don't get is what would happen if there were a roughly 50-50 split in the consensus: would this algorithm state there's a 50-50 split, or would it decide one side had a stronger consensus? And if it did decide one side was better, would that decision be deterministic, or would it be sensitive to initial conditions (like a hill-climbing algorithm in the region of two local maxima - maybe multiple equilibria was the wrong choice of phrase)?
posted by Ned G at 6:36 AM on June 23, 2014 [1 favorite]


One of the first things that would happen, of course, is that the parallel discourse community identified would argue that the trust graph was rigged somehow and disinvest in believing in it. See also "unskewed polls" in the 2012 presidential election. The problem with implementing an eigendemocracy system would be that any system that did not substantially reproduce current political outcomes would have considerable problems achieving sufficient infrastructural support and initial popular buy-in, both in terms of active participation and in terms of defining the terms "relevant" and "expert." I wouldn't expect strictly Popperian definitions to triumph, for example.

You also have the "perfect vs. good" problem: expert consensus and the allied social priorities are only *more right* or *more likely right* for any *particular* moment or within a *limited* timeframe, but mistaken expert belief can have catastrophic and unforeseen long-term consequences.

The iterative element is meant to answer this, but shifts in the consensus can happen too late for meaningful action to reverse an incorrect, if informed, consensus. Certainly not everyone shares Aaronson's optimism that the effects of anthropogenic climate change are substantially reversible at present; the expert consensus may have taken too long to solidify and be disseminated, and the eigendemocracy mechanism has not yet arrived and is unlikely to become a real political system anywhere anytime soon.
posted by kewb at 6:48 AM on June 23, 2014


chortly is exactly right on this. This whole thing may very well be the purest example of Engineer's Disease I've ever encountered.
posted by graphnerd at 6:50 AM on June 23, 2014 [1 favorite]


(I am aware that Aaronson raises these objections himself at the end of his article, but I think they're worth looking at, especially as he admits he doesn't have good answers.)
posted by kewb at 6:50 AM on June 23, 2014


What about Arrow's theorem?
posted by grobstein at 7:14 AM on June 23, 2014


Umm...

My problem with this is that the group is dead wrong just often enough to make this a worrying prospect. The biggest flaw with democracy is that it is the only system with a built-in disadvantage for minorities. Just because 80% of the population believes a thing to be true and reliable does not make it so, but legislating it so means that the minority gets silenced.

Climate change is particularly an issue where political correctness comes into play. When my sister took a university-level course on climate change, hoping to get a clear understanding of the factors involved (not because she doubted climate change as a reality, but because she wanted to learn how reliable the different climate change predictions were compared to each other), she found that voicing any doubts about the studies drew a forceful negative reaction from the other students. If she asked, "How reliable is this?" she got shouted down. She felt like a pro-choice person speaking up during an anti-abortion rally.

Worse, many people simply go along with the majority. You've heard about the study (Asch's conformity experiments): if everyone in the room says a line three and a quarter inches long is shorter than a line three inches long, almost everybody will agree with the rest of the room instead of admitting what they really think. People are also more likely to agree with a high-status person over a low-status person, regardless of the evidence.

So to me this smacks of a device that will encourage the enforcement of political correctness. I don't want to be told who I should trust. I want to mistrust everyone and everything. I want to be able to listen to the argument of someone with unsavoury morals and assess his argument on its own merits without anyone telling me that I have to ignore what he says because he is unkind to his old mother and hates dogs.
posted by Jane the Brown at 7:21 AM on June 23, 2014 [4 favorites]


I'm missing the "democracy" part of "eigendemocracy"?
posted by eviemath at 7:22 AM on June 23, 2014 [1 favorite]


Yeah, the problem with this is that if simply looking at data and cold hard facts were enough to convince people, there wouldn't be a global warming debate. People reach their conclusions based on emotion and ideology, and then find ways to support those conclusions, rather than the other way around. See George Lakoff.
posted by natteringnabob at 7:23 AM on June 23, 2014 [1 favorite]


If such a system as he envisions actually were created, and it really did show, for example, huge networks of trust in global warming and tiny networks of trust in James Inhofe, conservatives would just say the math is skewed. If it was bad enough, they'd just refuse to participate -- and participation is required to make the system anything like a real measurement -- and call it an echo chamber.
posted by Flunkie at 7:35 AM on June 23, 2014


What about Arrow's theorem?

It remains in force. The author didn't describe his eigendemocracy very clearly, but it would still fail at least one of ST/U/P/I/D.
posted by ROU_Xenophobe at 7:39 AM on June 23, 2014


This is excellent!

[INSERT 3D SURFACE GRAPH I LEARNED TO MAKE IN MATHEMATICA BUT NEVER HAD A USE FOR]
posted by srboisvert at 7:51 AM on June 23, 2014 [1 favorite]


What I don't get is what would happen if there were a roughly 50-50 split in the consensus: would this algorithm state there's a 50-50 split, or would it decide one side had a stronger consensus?


I don't think he proposes a good answer to that. The natural assumption to me would be to say that issues that don't have giant components are not well-resolved by this (or that this measure would detect no "best" consensus).
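(To make your worry concrete with a toy matrix: with two disconnected trust blocs, the leading eigenvalue is degenerate, so the fixed point isn't unique; power iteration just preserves whatever mass each bloc started with, i.e. the answer depends on the initial conditions:)

    # Two disconnected trust blocs: the stationary "trust" vector is not
    # unique, and the result depends on the starting allocation of trust.
    import numpy as np

    T = np.array([
        [0.5, 0.5, 0.0, 0.0],   # bloc 1 trusts only bloc 1
        [0.5, 0.5, 0.0, 0.0],
        [0.0, 0.0, 0.5, 0.5],   # bloc 2 trusts only bloc 2
        [0.0, 0.0, 0.5, 0.5],
    ])

    for start in ([0.7, 0.1, 0.1, 0.1], [0.1, 0.1, 0.1, 0.7]):
        scores = np.array(start)
        for _ in range(100):
            scores = scores @ T   # rows are stochastic, mass is preserved
        print(scores.round(2))    # [0.4 0.4 0.1 0.1] vs [0.1 0.1 0.4 0.4]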

People are also more likely to agree with a high-status person over a low-status person, regardless of the evidence

The assumption/hope here is that the high-status/visibility person would themselves trust more expert people. Obviously this could be wrong, but it might be right often enough to be useful. The general idea is that trust should propagate: instead of asking what you think is the right answer to a question (say, climate change), you ask who you trust on questions of climate change. So even if that person isn't an expert themselves, if they know more about the topic than the first person, they'll have a better idea of who the experts are and will trust them. This is a very different question from asking people for their opinion itself. It's sort of polling by proxy, with the hope that this is a useful combination of expert-agreed-ness and population-agreed-ness. I do really like the idea of trying to merge the two into one number.

I can really see why this would be appealing to a scientist. If we want to learn something new, we might chat with a colleague about the topic, then read a few papers he or she suggests, then read the papers cited in those, etc. This is an intuitive extension of that. Applied to a more general group of people, though, I suspect it would have some very interesting failure modes and successes: some mentioned in the thread, but others that are probably less predictable.
posted by Schismatic at 8:02 AM on June 23, 2014 [1 favorite]


To state an emerging theme as two questions: How resilient are postpositivist trust-propagation networks against emotional appeals like FUD? And how much of trust as a social function actually emerges from a shared belief in Popperian postpositivism?
posted by kewb at 8:16 AM on June 23, 2014 [3 favorites]


Conversely, it's essential to note, if there were a dissenting minority, but that minority had strong trunks of topic-relevant trust pointing toward it from the main component (for example, because the minority contained a large fraction of the experts in the relevant field), then the minority's objections might be enough to veto action, even if it was numerically small.

Ermm, the way the Koch brothers have trust because of their involvement in the energy industry, or Wall Street CEOs have trust because of their involvement in the financial industry? Of course Aaronson would probably like to argue that the magical trust graph will firewall David Koch away from climate decisions, but those decisions will messily affect all of society, and once an algorithm takes the power to effect laws on specific topics away from the Kochs and Comcasts of the world, you're in a programmer-as-dictator situation where algorithm choices are determining policy.

Elites and their networks and structures of power exist in the world and have incredible impacts on the course of events - the idea that a benevolent dictator algorithm will put the rational scientists back in charge is an astonishingly naive view of politics.
posted by crayz at 8:18 AM on June 23, 2014 [3 favorites]


And what do you do about that problem?

expose them explicitly and objectively "with the click of a mouse [and perhaps] the force of law..."

maybe not even that! "16-year-old programmer develops plugin that automatically annotates politicians' names with their funders." :P

I don't want to be told who I should trust.

that's not what it would be doing; it'd be access to a (social) graph that showed connections between people -- a trust network -- and then you could (presumably) make up your own mind; a couple people have already mentioned aaronson's updates, which i'd also encourage everyone to read, but for clarity's sake:
Moving on to eigendemocracy, here I think the biggest problem is one pointed out by commenter Rahul. Namely, an essential aspect of how Google is able to work so well is that people have reasons for linking to webpages other than boosting those pages' Google rank. In other words, Google takes a link structure that already exists, independently of its ranking algorithm, and that (as the economists would put it) encodes people's "revealed preferences," and exploits that structure for its own purposes. Of course, now that Google is the main way many of us navigate the web, increasing Google rank has become a major reason for linking to a webpage, and an entire SEO industry has arisen to try to game the rankings. But Google still isn't the only reason for linking, so the link structure still contains real information.

By contrast, consider an eigendemocracy, with a giant network encoding who trusts whom on what subject. If the only reason why this trust network existed was to help make political decisions, then gaming the system would probably be rampant: people could simply decide first which political outcome they wanted, then choose the "experts" such that claiming to "trust" them would do the most for their favored outcome. It follows that this system can only improve on ordinary democracy if the trust network has some other purpose, so that the participants have an actual incentive to reveal the truth about who they trust. So, how would an eigendemocracy suss out the truth about who trusts whom on which subject? I don't have a very good answer to this, and am open to suggestions. The best idea so far is to use Facebook for this purpose, but I don't know exactly how.
(emphasis added ;) this is analogous to mefi's SEO woes (or wikipedia edit wars, but then there'd at least be a '[meta]talk' page everyone could reference) but in all cases more (meta)data _might_ break the impasse: "A logic is monotonic if the truth of a proposition does not change when new information (axioms) are added to the system. In contrast, a logic is non-monotonic if the truth of a proposition may change when new information (axioms) is added to or old information is deleted from the system [...] i.e., that kind of inference of everyday life in which reasoners draw conclusions tentatively, reserving the right to retract them in the light of further information." but then wouldn't that be the essence of 'deliberative democracy' (or 'participatory economics')?
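(the classic tweety example in miniature, purely illustrative -- a tentative conclusion gets retracted when new information arrives, which is exactly what a monotonic logic forbids:)

    # Non-monotonic (default) reasoning: adding a fact retracts a conclusion.
    facts = {"bird(tweety)"}

    def conclusions(facts):
        # default rule: birds fly, unless known to be a penguin
        if "bird(tweety)" in facts and "penguin(tweety)" not in facts:
            return {"flies(tweety)"}
        return set()

    print(conclusions(facts))     # {'flies(tweety)'}
    facts.add("penguin(tweety)")  # new information arrives...
    print(conclusions(facts))     # set() -- the conclusion is withdrawn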
posted by kliuless at 8:32 AM on June 23, 2014 [2 favorites]


Popperian postpositivism

Try saying that after a couple of drinks.
posted by Ned G at 8:38 AM on June 23, 2014 [1 favorite]


expose them explicitly and objectively "with the click of a mouse [and perhaps] the force of law..."

I admire the optimism in this, but we have many, many real-world examples of this kind of treatment not making a damn bit of difference. For one (big) thing, you will (and do) have people who will not accept that evidence is objective; for another, you have large swaths of people who simply do not care: the political and policy positions of their chosen politician or pundit make them feel good and righteous, and that is what is important to them.
posted by rtha at 8:59 AM on June 23, 2014


Does this mean that atheism would be made illegal?
posted by No Robots at 9:22 AM on June 23, 2014


They'd see that there was a huge, central connected component in the trust graph—including almost all of the Nobel laureates, physicists from the US nuclear weapons labs, military planners, actuaries, other hardheaded people—who all accepted the reality of humans warming the planet, and only tiny, isolated tendrils of trust reaching from that component into the component of Rush Limbaugh and James Inhofe. The deniers and their think-tanks would be exposed to the sun; they'd lose their thin cover of legitimacy

This piece made me feel old. Like, I would have been fascinated by this in high school or college.

But now... "Democrats" and "Republicans" are not mysterious eigenvectors whose existence can only be uncovered through slightly advanced mathematics, they're rather widely known. And if someone thought that Nobel laureates and physicists and actuaries had all the wisdom needed to run the country, they wouldn't be Republicans in the first place.
posted by leopard at 9:32 AM on June 23, 2014


This is all very interesting, but what I really need is some kind of score that will let me know if the person making a friend request on Facebook is going to spam their timeline with chemtrail and UN tanks in Georgia links.
posted by ob1quixote at 9:57 AM on June 23, 2014


Devonian: "an objective way to demonstrate correctness"

Good luck.
posted by dendrochronologizer at 9:59 AM on June 23, 2014


Good luck

Not to mention phenomenology. So, no, no illusions that such things are rigorously possible. But a sense of probabilities would do me fine...
posted by Devonian at 10:25 AM on June 23, 2014


So to me this smacks of a device that will encourage the enforcement of political correctness. I don't want to be told who I should trust. I want to mistrust everyone and everything. I want to be able to listen to the argument of someone with unsavoury morals and assess his argument on its own merits without anyone telling me that I have to ignore what he says because he is unkind to his old mother and hates dogs.

But in practice there are, at best, only a few issues that you have time to get to know in enough depth and detail to have a truly informed opinion. For most of the rest, we all have to rely on networks of trust. And I think it would be great if there were a way to make those networks a little more explicit.

I'm pretty sure I have a gray area of unknown size: opinions on subjects where I have no personal expertise, and where I don't really know whether I've paid attention to the advice of people I actually think are trustworthy on the subject, or just internalized an opinion from some comment by someone who sounded smart on MetaFilter.
posted by straight at 11:04 AM on June 23, 2014 [2 favorites]


And not to thread-sit, but completely independently I came across this Slideshare on social media and the rise of algorithmic knowing. Haven't digested it completely (and Slideshare does not replace the essay in getting arguments across) but it covers similar themes to the OP. In particular, what the nature of knowing and truth are to the machines that we create to find them, and how there's a rise in probabilistic notions of facts which may be rather more powerful than we're ready for.
posted by Devonian at 11:04 AM on June 23, 2014 [1 favorite]


For one (big) thing, you will (and do) have people who will not accept that evidence is objective; for another, you have large swaths of people who simply do not care: the political and policy positions of their chosen politician or pundit make them feel good and righteous, and that is what is important to them.

right, aaronson has stated as much upfront -- "As long as I'm fantasizing..." or "Obviously, such ideas are nonstarters in the current political climate of the US, but I'm not talking here about what's feasible, only about what's necessary." -- so your objections are duly noted: when (Popperian postpositivist) objectivity is itself 'subjective' and subject to motivated reasoning, and elites (those who have the resources for mass manipulation, or access to such), particularly those who don't have any particular fealty to the rest of humanity, future generations or The Truth (canonically climate change), can exploit that, then fuck it, game over.

like, to restate the problem epistemologically: what you're saying is that humanity is degenerate and irredeemable, and while i agree there are "many, many real-world examples of this," i'm not sure i'm with you about humanity _as a whole_ in that regard.

so that's the spirit in which i think aaronson is making his proposal, with a cue from pagerank's success:
eigenmorality can't serve as the ultimate ground of morality. But that's a bit like saying that Google rank can't serve as the ultimate ground of importance, because even just to design and evaluate their ranking algorithms, Google's engineers must have some prior notion of "importance" to serve as a standard. That's true, of course, but it omits to mention that Google rank is still useful—useful enough to have changed civilization in the space of a few years...
the gambit is that by making biases explicit (or at least more easily revealed), people can become more 'objective', particularly as new information is imparted, however it's received. i'm reminded of a neal stephenson talk from a couple years ago:
It's not my purpose to single out the environmental movement. But that does embody a certain mentality about risk that has become so tied up in intellectual knots that it has the net long-term effect of making things more risky.

It's my thesis that a small number of people have to be willing to shoulder greater risks in order to create changes that eventually reduce risk for civilization as a whole. And that when they stop fulfilling that responsibility, a decline sets in that may require some conscious effort to reverse.

It shows up most conspicuously in public attitudes towards science and technology. American culture from the very beginning has had a very powerful anti-elite, anti-intellectual strain. It's been well covered in books like Anti-Intellectualism in American Life by Hofstadter and a more recent book called--I can't remember the name of it. It's by Susan Jacoby. But it's about the decline of American rationalism. It came out a couple years ago.

Anyway, people who espouse that kind of mentality are markedly aggressive. And they are relentless. They never stop working to further their goals. To the extent that they tolerate progress at all, it's on a 'what have you done for me lately' sort of basis.

So through the first part of the 20th century, we were able to keep them off balance with spectacular advances that couldn't really be argued with.

Here's an airplane. Argue with that.

I just saved your kid's life with penicillin. Argue with that.

Here's a mushroom cloud.

Polio vaccine.

A guy walking around on the moon.

Argue with those.

But when those inarguable triumphs stop coming, the anti-science people come back and begin making inroads to a degree that educated people can't even comprehend, for example, by denying that the moon landings ever happened. So it's entirely plausible that 100 years from now, it may be believed by 99% of all the people in the world that the moon landings were a hoax. And the idea that they actually happened may have the status of a totally marginalized conspiracy theory. That is a totally achievable result. And there are people who are actively working to make that kind of thing happen.
or if you want a less libertarian 'makers vs. takers' vibe where everyone has something to contribute: "Private wealth is worth nothing if it exists alongside public squalor."
posted by kliuless at 11:11 AM on June 23, 2014 [5 favorites]


Ok, I think I see where a lot of people are stumbling on the idea of how trust should be calculated: people are looking at stated trust, when what matters is trust revealed indirectly through actions.

The key being that if someone wanted to game the system, or alternatively just did not want to appear politically incorrect or associated with a cultural identity other than their chosen tribal affiliation, then answers to a survey or their publicly stated trust associations would basically be useless for accurate calculations. This is where the scary NSA monitoring tactics would give you much better data to work from. You would look at the actions of the people in the trust web, versus their stated opinions (which often run completely counter to their actual behavior). It is widely documented that many people will hold conflicting stated opinions and then behave counter to their stated beliefs. So your actual trust network should be based on post-decision actions, not on stated beliefs about how one would behave in any given circumstance.

Again, this is something I've been thinking about for a few years now, especially because of the predominance of online "quizzes" to determine your IQ, personality, political affiliation, etc. There was that fascism quiz a few days ago (the one meant to ascertain fascist leanings of people in the late 1940s and early 1950s) that seemed to surprise a lot of mefites who self-identified as "conservative". The problem being that when asked about many things that have historically been defined as conservative/authoritarian, they find themselves cognitively against those stated stances; however, they may find, were they to have a third party catalog their behaviors and interactions with people over a long enough period of time, that their actions are completely counter to their own stated beliefs. This does not just apply to conservatives, either. Of course, it has been an often derided and derogatory accusation that "liberals" are hypocritical, but I think the more universal statement should be "while we may cognitively believe in anything, our actions often make liars of us all." That's probably a paraphrase of someone really smart or something, but it's the closest I can come to accepting that humans, as noble as they may view themselves, will always act in the moment based upon unrealized and unexamined emotional, cognitive, and temporal biases (temporal meaning that in the moment, it is impossible to make all decisions with total information awareness).

Also, of note, the linked FPP stuff is really talking about creating a map, which should not be confused for the actual thing itself. Something something map not the lay of the land or whatever. Every model is flawed, etc, etc, etc.

To reference someone mentioning the Koch brothers and energy executives: the stated opinions of those individuals are not the "expert" opinions that represent viable trust. Their behaviors and decisions are what determine their trustworthiness. The results of their actions and decisions are also key factors in building that trust.

And of course, just to end on a sour note, the hardest part of trust is knowing the difference between someone's stated beliefs, or public representation of those beliefs, and how those beliefs affect their actions and decisions. Many times the actions, and the results of those actions, are obscured either by externalized temporal factors, such as the effects of burning coal (which takes time to build up in the atmosphere, and thus does not appear, in a very short time window, to have much if any effect on the climate), or by the utility provided by the decisions (such as power generation, heat when it is cold, or locomotion, as in the widespread use of coal to power train boilers). Sadly, people in general are not very good at making decisions that do not have an immediate effect upon their current temporal disposition. Hence the reluctance of many people to do anything to mitigate or curb climate change, as the distance and time factors are outside of their day-to-day mental calculations and decision making. This is half the reason one of the primary means being attempted to change our behavior is taxation of carbon usage: to bring the cost factor into play in the daily calculation. It may or may not be the best way to do it, but in order to change individual behavior, some weight must be added to those individual decisions, on a wide enough scale to make those calculations factor in the longer-term and often spatial differences (most people who live west of a power plant couldn't care less what toxins are being spewed into the air, as their local pollution levels are not affected, just as people upstream have no problem dumping waste into rivers, because the water they have access to is still "clean").

Blah blah blah, tl;dr: DAQ blabbers about stated opinions versus recorded actions/decisions.
posted by daq at 11:18 AM on June 23, 2014


In the case of global warming, the experts are right. But experts can get things wrong spectacularly. Since 2008, for instance, the Very Important People of the world have been agreed that the best response to the economic crisis was to reduce spending and eliminate the social safety net— a response that has prolonged the crisis and created widespread misery. I'm not sure I'd trust the "expert" opinion on deregulation, either. Any civil rights movement initially faces a situation where every trusted institution is against it.

(Also, surely "the experts" don't accept Aaronson's eigendemocracy, and so by his own logic it should not be adopted?)

But anyway, if "representative democracy is now failing spectacularly" on one issue, why not figure out why it's failing and fix that, rather than suggest a vast, untested, contentious new system? In the US, the main culprits are pretty clear: gerrymandering and filibusters. The US system was designed to have a lot of veto points, and we've learned that a determined minority can abuse these to block governing as long as it likes. And the winner-take-all system, single-member districts, and district lines drawn by the majority are poor design decisions.
posted by zompist at 11:48 AM on June 23, 2014


> The US system was designed to have a lot of veto points, and we've learned that a
> determined minority can abuse these to block governing as long as it likes.

Is there some sort of fix for that that wouldn't count as governing?
posted by jfuller at 2:09 PM on June 23, 2014


In the US, the main culprits are pretty clear: gerrymandering and filibusters.
I agree that both of these are significant problems, but the Republicans would still have majority control of the House without their gerrymandering advantage (which has been independently estimated by multiple smart statisticians at approximately six seats, only a minor portion of their 34-seat advantage). With majority control of the House, the filibuster is essentially irrelevant*. Let the godless commies in the Senate pass whatever America-hating bill they want - it doesn't become law unless the True Patriots in the House pass it, too. So fix those two things, and you've got just as much -- and just as effective -- obstruction from Republicans as before.

The main culprits aren't gerrymandering and filibustering. The main culprits are Republicans.

*: Except for things like judicial appointments where the Senate doesn't need the House, but Democrats have gotten rid of the filibuster for that anyway.
posted by Flunkie at 4:29 PM on June 23, 2014


Since 2008, for instance, the Very Important People of the world have been agreed that the best response to the economic crisis was to reduce spending and eliminate the social safety net

In a thread involving epistemology, maybe it is useful to think about how this common belief compares to reality.

Government expense, % of GDP: G7 countries
Government expense, % of GDP: Portugal, Ireland, Greece, Spain.
Government expense, local currency: Greece, Portugal, Ireland.
Government expense, local currency: Larger countries where GDP is still below pre-recession peak.

Consider how many of them are below pre-crisis levels. Consider the rest of the world. If you're going to have an opinion about this, look at the numbers first.
posted by sfenders at 4:58 PM on June 23, 2014


And of course, just to end on a sour note, the hardest part of trust is knowing the difference between someone's stated beliefs, or public representation of those beliefs, and how those beliefs affect their actions and decisions.

Lately I've been reading Plato's Republic, because I always wanted to get some idea of what "justice" means and was reminded of it recently by metafilter. I'm now getting to the bit where good old Socrates is considering what kinds of music are good and virtuous, and coming to the decision that the mixolydian mode is the opposite of those and not worth playing or listening to. He's already expounded on theatre, poetry, and what kind of stories should be told about the gods, in ways that seem entirely nonsensical in a modern context.

Now I haven't decided how ironic I think it's all meant to be, or how allegorical, but I think it's safe to say he's probably just plain wrong about some things. Wrong, according to what we've learned since his time. Despite that, from experience I would tend to rate as more trustworthy, other things being equal, someone who habitually quotes from Plato and other ancient classics. I suspect that I would be with the consensus in thinking that way. So it seems to me that in this system of eigentrust, the writings of Plato would most likely end up highly ranked. And yet they are, although brilliant in parts, not all that reliable as plain descriptions of things that would, by consensus, be considered anywhere near the truth.

All this is without paying any attention to the writer's practical actions and decisions, so I think the system will have enough trouble without considering those at all. Trying to model human behaviour on top of the already ambitious goal is incredible and unnecessary. Going only by what the sources say already seems like it would require some pretty fancy AI.
posted by sfenders at 6:57 PM on June 23, 2014 [2 favorites]


Yes, I've looked at the numbers. They support what I said.

Take a look at some of the charts here. US government spending, per capita and adjusted for inflation, is in steep decline. Yes, we really do have an austerity policy. Yes, we really do have a huge output gap and high unemployment.

Note that looking at government expenditures as a percentage of GDP in a recession is highly misleading, because GDP went down. That's a cheap trick to make government spending look higher. (It's also misleading because people make more use of the safety net in a recession, so spending will go up because of that. It does not mean that OMIGOD GUMMINT IS TAKING OVER. In the US, federal-level austerity was minor (remember the sequester?), but state and local spending cuts were immense.)

Also note that before the crisis, contrary to what the Very Serious People remember, Spain and Ireland were running surpluses, not deficits. The problem in these countries was not excessive spending, and reducing it more does not help.

(On the plus side, I agree with you about Plato.)
posted by zompist at 7:11 PM on June 23, 2014 [1 favorite]


Note that looking at government expenditures as a percentage of GDP in a recession is highly misleading, because GDP went down.

I included the charts that aren't relative to GDP specifically so as not to be misleading on that point. I didn't include one for the USA because in that case it isn't misleading, since GDP is already at a record high as of the latest data on those charts. Good point about state and local government; I didn't think to try to include that.

To me it seems less potentially misleading than using "potential GDP", a concept I trust even less than I did before we had that gigantic "credit bubble" thing, which pushed its current value well above where it probably ought to be, at least in the estimation of people who believe that things had the "potential" to continue on just as they had been going, if only they hadn't. The Fed has already calculated values for potential GDP going out to the year 2024, and its present value doesn't seem very much influenced by the trend of actual GDP in recent years. But I wouldn't rate my line of thought there as particularly popular with the masses.

Even by that odd measure, however, according to the chart on Paul Krugman's blog, total US government expenditure relative to imaginary GDP looks to be back down only to about the same level as it was pre-crisis. So a huge wave of government spending, both automatic and otherwise, has come and gone. Krugman thinks it's gone too early, sure, and I suspect he may be partly right, but I think lack of sufficient stimulus is not the same as austerity. I do think they're going a bit too far, too fast in deficit reduction in the US. On the other hand, trying to (prematurely) judge by the outcome, I also believe a consensus is beginning to emerge that by conventional measures the US economy is doing reasonably well at the moment. Eventually there will come a time when the Fed will be correct in their continued forecasts that it will all get better next year, and that time may be now.

So how do we decide who's right without actually arguing about it for days? Could a computer algorithm really help? I guess it only works for questions on which there is a general consensus, and in this case to my knowledge there isn't one even among well-established experts. I'm not even sure the dividing line between groups of experts with opposing beliefs would be at all clear. There are many intermixed subjects of disagreement among economists.

Krugman might be another difficult case for the automated TrustRank evaluation algorithms. He is at least a difficult case for my own mind's trust evaluation algorithms. He obviously knows a thing or two about economics, but he writes a popular newspaper column in which he cannot apply quite the level of rigorous thinking he'd do in more academic work, and often ventures into topics that may not precisely align with his specific expertise. Might there be a substantial number of economists who respect him for his Nobel-prize-worthy work, but who don't have any idea what kind of things he writes on his blog? That kind of situation could throw off a naive algorithm easily. It seems safe to assume that the quality of his blog is less than the quality of his papers in more scientifically prestigious journals, but by how much? Humans would have a hard time quantifying it, and I'm not sure I trust an algorithm to be able to do it.

Does it even make sense to assign him one overall trust rating? If for any given work his trustworthiness as an author is modified according to which publication he's writing in, that presents another set of even worse problems; the New York Times is even less consistent in its degree of accuracy than is any single person. There's no escaping the need to assign trustworthiness not as one scalar value per source, but at least as one value per general area of subject matter per source.

Wikipedia may be generally excellent on the subject of physics, but I've stumbled across topics where it was completely inept and misleading. Or take me, sfenders, occasional metafilter commentator; sometimes I know stuff, other times I go completely wrong. Actually it would be a great tool for self-improvement if there were some kind of algorithm that could tell me which was which according to the omniscient AI. But to be really useful I think it would require not ranking sources, or even sources on each general topic, but individual statements and beliefs.
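(That last point as a data structure, with invented numbers: trust as one value per (source, topic) pair rather than one scalar per source:)

    # Topic-conditioned trust: one rating per (source, topic), with a
    # neutral prior for pairs we have no data on. All values invented.
    trust = {
        ("wikipedia", "physics"): 0.9,
        ("wikipedia", "some_obscure_topic"): 0.3,
        ("krugman_blog", "macroeconomics"): 0.7,
        ("krugman_papers", "trade_theory"): 0.95,
    }

    def trust_in(source, topic, default=0.5):
        """Look up topic-conditioned trust, falling back to a neutral prior."""
        return trust.get((source, topic), default)

    print(trust_in("wikipedia", "physics"))     # 0.9
    print(trust_in("wikipedia", "economics"))   # 0.5 (no data)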
posted by sfenders at 9:05 PM on June 23, 2014 [1 favorite]

