I spent a weekend at Google talking with nerds about charity.
August 12, 2015 5:15 AM   Subscribe

 
I knew GiveWell would probably be part of this as soon as I saw the name.
posted by grouse at 5:27 AM on August 12, 2015


Do they talk about how they're going to disrupt charity?
posted by symbioid at 5:34 AM on August 12, 2015 [3 favorites]


You joke, symbioid, but the guy quoted in there saying "I really do believe effective altruism could be the last social movement we ever need" is more or less advocating for disrupting charity.
posted by ActionPopulated at 5:46 AM on August 12, 2015 [1 favorite]


Cue my standard gripe about the shorthand 'nerds' for "smart, technology inclined people".
posted by signal at 5:53 AM on August 12, 2015 [1 favorite]


Cue my standard gripe about the shorthand 'nerds' for "smart, technology inclined people".

Especially when so many are clearly not smart in ways that matter.
posted by Dip Flash at 5:55 AM on August 12, 2015 [19 favorites]


In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse.

Their fancy math has convinced them that helping humanity avoid an extinction event is more effective altruism than fighting poverty today. (Lives lost to poverty is a "rounding error" when compared to the potential lives lost to an early extinction.)

I wonder how they view birth control. Or spilled seed.
posted by notyou at 5:59 AM on August 12, 2015 [10 favorites]


At the moment, EA is very white, very male, and dominated by tech industry workers. And it is increasingly obsessed with ideas and data that reflect the class position and interests of the movement's members rather than a desire to help actual people.

Cue my surprised face. I could never imagine that white male tech workers would end up in that corner. Again.

In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse.

Cue... my surprised face. My actual, surprised face.
posted by GenjiandProust at 6:00 AM on August 12, 2015 [18 favorites]


I wonder how they view birth control. Or spilled seed.

♫ Every sperm is sacred... ♫
posted by Cash4Lead at 6:01 AM on August 12, 2015 [2 favorites]


Two things:

1. Dylan Matthews is an eminently sensible human being.
2. Turning effective altruism into the Roko's Basilisk squad is all the evidence you need that intelligence is self-refuting.
posted by anotherpanacea at 6:02 AM on August 12, 2015 [17 favorites]


At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a "rounding error."

AHAHAHAHAHAAHAHAHAHAHAHAH

Oh man. This reminds me of the wonderful explanation that Fred Clark (Slacktivist) has for the Left Behind series and the mindset that produces them, how they ignore actual evil in this real world for imagined horrors (baby parts in our Pepsi!) because that allows the followers to become heroes without having to actually confront real evil or risk anything genuine.

So we can face a shadowy sci-fi what-if (and not even asteroids, they go for the Skynet AI, seriously!) rather than tackle the global poverty that comes from economic and political and social systems that have somehow coincidentally brought us to the near peak of prosperity.....
posted by dorothyisunderwood at 6:04 AM on August 12, 2015 [30 favorites]


I look forward to AI, because I'm pretty sure we have no chance whatsoever of controlling it. Either we will survive or we won't, because AI will scale so fast that we will not be able to negotiate as equals or superiors, so it's a pointless waste of time trying to prepare for AI, a sort of intellectual cargo cult. Better we try and figure out some solutions to problems we can solve.
posted by dorothyisunderwood at 6:07 AM on August 12, 2015 [1 favorite]


To be fair, GiveWell (which I know has a troubled history with MetaFilter) does not recommend spending money on existential risks.

Effective altruism has become (or always has been) tied up with the less wrong crowd. This article is an excellent rebuttal of the kind of arguments put forth by less wrong in favour of the kind of utility calculations that lead to you being overwhelmingly forced into choosing one action over another (they also beg the question somewhat, as you need a clear moral framework before you can write down a utility function). There's lots of sensible reasons to be wary of any Pascal's-mugging-type argument:

1) Utility calculation (multiply gains by probability) makes an implicit assumption that you can repeat this action lots and lots of times, and thus on average come out ahead. Well, we only live in one world, and we only get to spend that money once. So we should be worried if we think the probability of something occurring is very low.
2) As the article suggests, the numbers really end up being literally made up. That is, there is little empirical basis for them: we can actively measure how many lives mosquito nets will save; we can't actively measure how much paying some guy to think really hard about AI risk will help us.
posted by Cannon Fodder at 6:08 AM on August 12, 2015 [11 favorites]
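The expected-value reasoning described in the comment above — and why a tiny guessed probability attached to an astronomical payoff swamps anything measurable — can be sketched in a few lines of Python. Every probability and life count below is invented for illustration, which is exactly the objection being made:

```python
# Naive expected-value calculation: gains multiplied by probability.
# All numbers here are made up for illustration -- that is the point.

def expected_lives_saved(prob_success: float, lives_if_success: float) -> float:
    """Expected lives saved = probability of success x payoff if it succeeds."""
    return prob_success * lives_if_success

# Bednets: probability and payoff are empirically measurable.
bednets = expected_lives_saved(prob_success=1.0, lives_if_success=500)

# Existential-risk research: the probability is a guess, the payoff is
# "all potential future lives." Any guessed probability, however tiny,
# multiplied by 10^16 lives dominates anything measurable.
ai_risk = expected_lives_saved(prob_success=1e-10, lives_if_success=1e16)

print(bednets)            # 500.0
print(ai_risk > bednets)  # True -- driven entirely by the guessed numbers
```

The comparison flips on whichever probability you choose to plug in, which is why the lack of any empirical basis for the second calculation matters so much.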


Effective altruism is not a replacement for movements through which marginalized peoples seek their own liberation

Especially since, as they are overwhelmingly white men, stopping being shitty to marginalized people is not as compelling as a possible asteroid problem.
posted by GenjiandProust at 6:09 AM on August 12, 2015 [7 favorites]


they've been driven too much by the desire to feel good and too little by the cold, hard data necessary to prove what actually does good

Man, I bet no one was expecting that the cold, hard data would point towards never having to think about how much it sucks to be poor.
posted by teponaztli at 6:17 AM on August 12, 2015 [14 favorites]


They really are leaping over a lot of issues about potential lives, which Matthews talks about in one paragraph. To put it more starkly: one of the basic premises of effective altruism is that if someone is suffering and you have the means to stop it, you ought to do so. In the new existential risk framework, you're valuing the sum of all the potential future lives over the current suffering and death of an actual living human being. There's a difference between a conscious person dying and a future potential person never coming into existence (which is why it is especially frustrating to see them elide the impact of an extinction event on people it kills and the impact on people it prevents from coming into being).

The existence of "economic and political and social systems that have somehow coincidentally brought us to the near peak of prosperity" is a key thing that EA fails to grapple with. The hopeful discussion of reducing animal suffering at the end of the article struck me as an effective use of a current capitalist framework. If EA wants to "expand" its impact, there's a lot of potential in getting people to acknowledge and address the harms of the system in which they are already complicit. But part of the problem is measurability. It is easy to measure the impact of a life-saving measure, and fairly easy to measure the impact of treating a parasitic infection or a fistula. And there is a lot more life-saving and reduction of suffering to do on that level! But EA doesn't have a good framework for measuring other forms of suffering or good; the current thinking values preserving these hypothetical future lives over improving the lives of current humans.
posted by earth by april at 6:18 AM on August 12, 2015 [7 favorites]


Especially since, as they are overwhelmingly white men, stopping being shitty to marginalized people is not as compelling as a possible asteroid problem.

But the other key feature - economic equality means that some people become, proportionally, less rich, right? You can't exactly be an economic titan bestriding the landscape in a world of genuinely progressive tax policies, living wage laws and a generous social safety net, because then it's a lot harder to exploit people.

All this tech nonsense wouldn't even be happening if we had global living wage laws and effective environmental protection because it would no longer be cost effective to fuck over junior developers, to extract metals unsafely, to run sweatshops to produce all this junk and to dump the old versions in some desperate country where people will give themselves lung cancer burning the plastic off the metal for salvage.

It's people like this who make me - an anarchist - yearn for a strong state. These are the kind of people who give non-state systems a terrible name.
posted by Frowner at 6:20 AM on August 12, 2015 [15 favorites]


There's not one mention of climate change anywhere in this article's discussions of existential risks, which should show you just how useful these guys are going to end up being.
posted by Caduceus at 6:21 AM on August 12, 2015 [30 favorites]


Wait, so what is the metric for knowing that your efforts are effectively preventing the AI apocalypse? Is there someone somewhere who can say that for every dollar you spend on a project you eliminate 6 millicyberdynes or something?
posted by Panjandrum at 6:29 AM on August 12, 2015 [8 favorites]


The problem is that you could use this logic to defend just about anything. Imagine that a wizard showed up and said, "Humans are about to go extinct unless you give me $10 to cast a magical spell."

Actually, by coincidence... just PayPal me it. Soon.
posted by fearfulsymmetry at 6:32 AM on August 12, 2015 [2 favorites]


Also AI has the potential to be bigger / more populous, have more potential experience, last a hell of a lot longer etc. than us weak and feeble and dense fleshy ones. So the altruistic thing is to fund the singularity as quickly as possible.

Also save some starving kids, because.. come on!
posted by fearfulsymmetry at 6:34 AM on August 12, 2015


ITT: atheistic libertarian Silicon Valley engineers (re)discover religion.
posted by Avenger at 6:34 AM on August 12, 2015 [2 favorites]


This feels a lot like what the lesswrong kids are into. Is there overlap between these groups?
posted by theorique at 6:36 AM on August 12, 2015 [1 favorite]


If they actually believed AI was a serious existential risk, wouldn't they be advocating for a moratorium on further development? Then they could save us from the robots AND spend their money on actual problems happening now.
posted by congen at 6:43 AM on August 12, 2015 [7 favorites]


I saw the Dalai Lama speak, a long time ago. We had the chance to ask him questions, by writing them on index cards and submitting them. One of the selected questions was, basically, "What is the most important thing for human beings to be working for, on Earth, at this time?". And he needed some clarification, he didn't quite understand at first what was being asked for. He then drew himself up and said: "How.......how should I know that? There are so many things that need to be done." So, he told us, just do something, just try to help people how you can, and try to be a more compassionate person.
posted by thelonius at 6:51 AM on August 12, 2015 [22 favorites]


If they actually believed AI was a serious existential risk, wouldn't they be advocating for a moratorium on further development? Then they could save us from the robots AND spend their money on actual problems happening now.

I can't help but wonder whether there's a degree of fantasy in here - it's like these folks want to be worried about the threat of runaway AI, because that's the kind of thing that people in "the future" are worried about. Yet they're clearly not interested in actually doing anything about it, because that's not what happens next in the movie. (If that makes any sense, I may not have explained this terribly well.)
posted by ColdOfTheIsleOfMan at 6:51 AM on August 12, 2015 [6 favorites]


I honestly don't know what to think about any of this.

Option 1) Okay, rich tech nerds disappearing up their own asses means all that money isn't accomplishing anything, but it's at least neutral. They're not actually using their power to kill us like the Koch brothers.

Option 2) This is why, for all its problems, democracy is at least better than giving enormous power to a handful of more or less randomly selected individuals and trusting them to do the right thing with it. In the last gilded age we got a shit ton of libraries and colleges and sure those are good things - but maybe the people in those communities would have preferred to do something else with all that money. And this time we're just... making sure Terminators remain fictional. okay... thanks?

Option 3) What makes their particular silly interest worse than the author's? Isn't making sure a bunch of chickens and pigs don't suffer really more about "the desire to feel good"? I mean, sure, why not be kind to animals, but does it really save human lives?

I saw this on Vox yesterday and knew someone would post it, so I've been trying to clarify how I think about it since then, and it's just not working.
posted by Naberius at 6:53 AM on August 12, 2015 [1 favorite]


"Let's worry about extinction events!"
"Yes! Killer robots!"
"Er, I was thinking more, like, ocean acidification -"
"Killer robots!"
"Or even maybe megavolcano eruption or asteroid strike -"
"KILLER FUCKIN' ROBOTS!"
posted by mightygodking at 6:54 AM on August 12, 2015 [20 favorites]


But the other key feature - economic equality means that some people become, proportionally, less rich, right? You can't exactly be an economic titan bestriding the landscape in a world of genuinely progressive tax policies, living wage laws and a generous social safety net, because then it's a lot harder to exploit people.

The Effective Altruism people who don't have a weird Calvinist obsession with the singularity are way less unpleasant than you guys think. The actual argument is that you should make as much money as possible and then give as much as possible (without harming your ability to keep doing your job) to the developing world, in whatever ways are most effective.

They also have this argument where, if you do a traditionally "evil" job like finance, but do it with as much concern for ethics as you can, that's a good thing because you prevented whoever would have replaced you from doing a lot of harm. (The example being a rice futures trader who makes food prices less unstable than they otherwise would have been.) It sounds insane, but utilitarianism.

If someone made their kind of utilitarian argument to show that, like, reparations for colonialism, strict limits on capital markets, and partial state ownership of the means of production would maximize utility, many of them would probably shift their donations into political advocacy. Some of these people are already advocating for criminal justice reform and immigration reform in the US.

That's not to say I really like them. I basically don't trust utilitarians (especially not if I might be riding public transit with them). Most of the objections to Effective Altruism are basically objections to utilitarianism. And their movement is overwhelmingly white, male, and has no useful advice for anyone who isn't supremely privileged, but they at least claim to be aware of this and debate among themselves ways to fix it.

It's just funny that after all their ethical pondering, the moral imperative -- "earning to give" -- is the one that allows them to change almost nothing about their goals. They think the only way to not be worthless and totally fungible is to use your natural intellectual gifts to get a high status job, which coincidentally is what they and all their elite peers have been taught for their entire lives. (Greed isn't the real motivation, it's more about not getting beat by everyone else; it doesn't matter how you spend the money as long as you're good enough to make it.) I suspect that they'd rationalize their way out of a utilitarian imperative if it required them to lose status among their peers.

This feels a lot like what the lesswrong kids are into. Is there overlap between these groups?

Unfortunately, yes. People who buy into utilitarian arguments have a hard time not buying into the LessWrong stuff.
posted by vogon_poet at 7:07 AM on August 12, 2015 [25 favorites]


As the article suggests, the numbers really end up being literally made up.

I saw this sweet Batman statue yesterday that I want for my desk for $100. I could donate that money to feed starving kids, but then I ran the numbers.

If I don't buy it, I will be kind of bummed, which will result in a 96% chance of causing my marriage to fail. This will mean that my wife and I don't have the children we planned for, each of which has a 98% chance of saving 80 billion potential future lives.

Therefore, I owe it to humanity to buy that awesome statue. It feels good to help people.
posted by Sangermaine at 7:16 AM on August 12, 2015 [12 favorites]


For once the buzz marketers may have this correct by referring to this as disrupting charity.
posted by Nanukthedog at 7:17 AM on August 12, 2015


I find it a little ironic that this naive brand of utilitarianism is EXACTLY the one that in many many SF stories leads to an AI, trying to "minimize suffering", deciding to wipe out humanity. Literally exactly the same.
posted by Jon Mitchell at 7:23 AM on August 12, 2015 [6 favorites]


I thought utilitarians knew all about the time value of money, and would be able to apply it by analogy to people living now versus people living in the future.

In the time periods they're talking about, a single pair of breeding humans living now could produce millions/billions/trillions/quadrillions of descendants. So saving a single human now is worth saving quadrillions of humans far in the future.
posted by clawsoon at 7:32 AM on August 12, 2015 [4 favorites]
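The compounding argument in the comment above can be made concrete with a rough generational model. The family size and generation count below are arbitrary assumptions for illustration, not demographic data:

```python
# Rough sketch of "descendants compound like interest."
# children_per_couple and the generation count are invented assumptions.

def descendants(generations: int, children_per_couple: float = 3.0) -> float:
    """Total descendants of one couple after the given number of generations,
    assuming every couple has children_per_couple children who all pair off."""
    couples = 1.0
    total = 0.0
    for _ in range(generations):
        children = couples * children_per_couple
        total += children
        couples = children / 2.0  # children pair off into the next generation
    return total

# At roughly 25 years per generation, 10,000 years is about 400 generations.
print(f"{descendants(400):.2e}")  # astronomically large
```

The same exponential logic is what makes "potential future lives" dominate every far-future calculation — and, as the comment notes, it applies just as well to a life saved today.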


The Effective Altruism people who don't have a weird Calvinist obsession with the singularity are way less unpleasant than you guys think. The actual argument is that you should make as much money as possible and then give as much as possible (without harming your ability to keep doing your job) to the developing world, in whatever ways are most effective.

I wonder. Their entire argument is based on the idea that they - unlike most people - are totally transparent to themselves. They can be absolutely sure that their motivations aren't self-serving, there's no real desire to be the richest and most powerful that might get in the way of actual social change; no, they just want to be the best, which is satisfying, so that they can give more. I can't think of any historical figure ever for whom this has been true, so I'm a bit skeptical that there's a raft of tech one-percenters gunning for sainthood all of the sudden. I think it's far more likely that they're a standard self-serving technocratic elite, justifying its wealth and power in a way that is appropriate to its social class and political milieu.

I mean, I wouldn't trust anyone I know to set the agenda single-handedly for....well, let's be conservative and say for my neighborhood association, and that would include me. The mere thought of deciding that I could just...do some research and make some money and make a bunch of decisions for masses of people I'll never even meet - that gives me the cold chills.

If someone made their kind of utilitarian argument to show that, like, reparations for colonialism, strict limits on capital markets, and partial state ownership of the means of production would maximize utility, many of them would probably shift their donations into political advocacy. Some of these people are already advocating for criminal justice reform and immigration reform in the US.

Didn't a lot of intelligent and well-meaning people believe that the state would wither away and it would be soviets all the way down after the revolution? I surmise that if there were real threats to these people's money and power, we'd get our own little Kronstadt quickly enough.
posted by Frowner at 7:33 AM on August 12, 2015 [16 favorites]


Can't we do both? It seems like a no-brainer to clothe the poor and then put them in dark alleys in their fancy new clothes as part of our Terminator Early Warning System.
posted by rtimmel at 7:33 AM on August 12, 2015 [4 favorites]


I mean, one reason for small-d democracy is simply so that you don't have to trust any one person or small group of people to be altruists. You don't need to bet that someone who thinks he's an altruist really is; you don't need to bet that someone who is altruistic now will still be altruistic in five years when he's bored and has more to lose. It just seems like insane hubris to assume that making oneself as rich and powerful as possible in order to manipulate the fates of others carries no moral hazard.
posted by Frowner at 7:39 AM on August 12, 2015 [15 favorites]


This is all just so depressing... No one is listening anymore. We should start a fund to research long-term data storage, so we can build a time capsule that stores information for millions of years without degrading so that the aliens find something readable in the rubble when they get here and learn from our fuck-up.
posted by sardaukar at 8:00 AM on August 12, 2015 [1 favorite]


Oh man. This reminds me of the wonderful explanation that Fred Clark (Slacktivist) has for the Left Behind series and the mindset that produces them, how they ignore actual evil in this real world for imagined horrors (baby parts in our Pepsi!) because that allows the followers to become heroes without having to actually confront real evil or risk anything genuine.

This sentiment (as well as this whole "let's save ALL MANKIND FROM EXTINCTION WITH OUR GRAND PROJECT because feeding a hungry person only helps the hungry person" mentality) strikes me as very much related to the horrible practical problems of running a nonprofit that employs skilled professionals to feed hungry people, or provide legal assistance to one person facing eviction, or help one person with disabilities find and keep housing. No-one wants to pay the salary of those professionals--they want to buy the hungry kid an ice cream cone. No-one wants to buy modern computers for the professionals who do the work, they want to get a hug from a grateful person who is more pathetic than they are.

That's the same thing driving this over-grandiose plan to SAVE EVERYTHING. Because you don't want to be effective in aiding a real suffering person by enabling a professional with skills whose job it is to help that suffering person; you want to be Batman.
posted by crush-onastick at 8:03 AM on August 12, 2015 [8 favorites]


I mean, one reason for small-d democracy is simply so that you don't have to trust any one person or small group of people to be altruists. You don't need to bet that someone who thinks he's an altruist really is; you don't need to bet that someone who is altruistic now will still be altruistic in five years when he's bored and has more to lose. It just seems like insane hubris to assume that making oneself as rich and powerful as possible in order to manipulate the fates of others carries no moral hazard.

They do at least deliberate as a community (in public and via research organizations) over which causes are most important to support. And they at least perform awareness of their own biases and short-sightedness. The assumption is that they all prefer to act morally (haha) and so will prefer to act in accordance with the conclusion drawn by the community (hahahahaha). Unfortunately they seem to be deliberating their way into a science-fiction delusion, but fortunately not all of them.

But I agree with your main point. Their entire identity is that they are rational, moral actors, but actually there's no reason to think this is true of anyone, let alone people who make their wealth from an unjust system.

Just, most of them are sincere on an individual level, even if they're kind of willfully blind to their own ideology, and a lot really are doing good at the moment. Mistrust, criticism, absolutely, but I don't think they deserve the visceral contempt they seem to be getting. (Except the singularity people.)

Because you don't want to be effective in aiding a real suffering person by enabling a professional with skills whose job it is to help that suffering person; you want to be Batman.

Effective Altruists, before the weirdo AI people snuck in, were exactly the opposite of that. They were very clear that it is utterly depraved to do anything but spend your money where it is most effective. That your personal warm fuzzy feeling is worthless. That an untrained volunteer is probably less useful than money. Hence, you should donate to well-run organizations for malaria prevention, nutrition, and cash transfers to the developing world.
posted by vogon_poet at 8:12 AM on August 12, 2015 [5 favorites]


@vogon_poet - I apologize for boiling this down to a quote, but, from HBO's superb Silicon Valley:

"I don't want to live in a world where someone else makes the world a better place than we do."
posted by ColdOfTheIsleOfMan at 8:30 AM on August 12, 2015 [2 favorites]


wouldn't they be advocating for a moratorium on further development?

Moratoriums on dangerous technology development occasionally fail even when the development requires large facilities processing rare materials and there is a clear and well-known distinction between peaceful and dangerous R&D. How much more hopeless would a moratorium be if either of those facts weren't the case? Imagine if the danger of nuclear explosions hadn't been recognized until uranium centrifuges were already as plentiful and economically critical as computers. The Non-Proliferation Treaty would have just been page after page of tear-stained cursing.
posted by roystgnr at 8:35 AM on August 12, 2015 [2 favorites]


The thing I find tricky about this stuff is, measuring impact of charitable interventions is really hard. So evaluating where your money can do the most good is working with an incomplete picture to begin with, because a lot of organisations don't have a numerical way to demonstrate impact. Sure, malaria prevention looks really effective on paper but that's probably because people running food banks aren't providing massive amounts of statistics about the long-term effect of their interventions. They can probably tell you how many people they gave food to in a given period, but not whether that food made it possible for a kid to pass a test that means they get to move up a stream at school, or gave an adult the ability to concentrate at a job interview so they could get a job.

(That's not to say there shouldn't be more focus on measuring impact. I think a lot of charities are less effective than they could be, and they don't have the data available to help them work out how to be more effective, and they don't have the resources to gather that data or make sense of it, so it just goes round and round.)

But, for example, in the area I work in, interventions can be delivered by private providers, by the statutory sector, or by charities. The bulk is done by charities. All of those types of organisation can tell you how many people they worked with and that's a blunt metric for effectiveness, but assessing how effective that help was is another thing entirely - the private provider may see more people but deliver interventions of a lower quality. Part of my work at the moment is trying to implement a nationwide programme of outcome measures in one specific area, and it's an absolutely huge undertaking. Even once it's implemented, it'll be a really limited blunt way of assessing the effectiveness of services, and there'll still need to be a huge amount of contextual data to understand what makes the difference in quality of intervention, not just where it's better or worse.

Tech bros could do an immense amount of good by donating their actual currently existing skills to charities to help them to get impact data sorted out in the first place, which would then help them make their EA decisions, if you ask me (shout-out to Datakind who do do just that).
posted by theseldomseenkid at 8:37 AM on August 12, 2015 [6 favorites]


Datakind are building a project for us to help do that kind of tracking in a very specific and helpful way - really interesting volunteer organisation structure and so far, going well even with all the arguments over database structures.
posted by dorothyisunderwood at 8:46 AM on August 12, 2015 [1 favorite]


Oh awesome! I just linked up with them and have a call next month so am hoping for the same.
posted by theseldomseenkid at 8:51 AM on August 12, 2015


They can probably tell you how many people they gave food to in a given period, but not whether that food made it possible for a kid to pass a test that means they get to move up a stream at school, or gave an adult the ability to concentrate at a job interview so they could get a job.

But surely the reason for giving food in a food bank is so that people don't go hungry? I mean, I know there's all sorts of great effects of eating as well as not being hungry (though not being hungry is the important thing here, I feel), but it does strike me that if you were asked as a food bank organizer what you'd like help with you'd probably say getting more food and distributing it more efficiently as a first thing and very far down the list would be 'assessing the percentage of children who did better in math because they weren't starving as a rubric of our success.'
posted by lesbiassparrow at 8:55 AM on August 12, 2015 [2 favorites]


> "Their fancy math has convinced them that helping humanity avoid an extinction event is more effective altruism than fighting poverty today."

Well, I don't think it's an either/or issue, but if they've decided to dedicate themselves to helping deal with the climate change situation, then I can't say I --

> "Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse."

Wait. What?
posted by kyrademon at 9:20 AM on August 12, 2015 [3 favorites]


I am like 90% sure that in the next five or ten years outside of SF or maybe in the Bronx a bunch of these EA people will open up an Assistance For Deserving People Center where you have to pass an IQ test or god knows what else to get in. And inside it'll be great! Food and resume assistance and maybe job placement and clothing for interviews and all that stuff that impoverished people genuinely need.

And they will pat one another on the back for being able to contribute to charity in a genuinely worthwhile manner because the people they tested with their Objective Intelligence Batteries will be the ones who are proven to most benefit the rest of society by having these services provided to them.

Eventually they'll figure out how to get either VC funding or profit from these places and they'll spread. And local governments run by people like Scott Walker will say "well what the hell do we need state-funded services for" and take the funding for general services and instead give the Charitable Tech Bros a nice, fat subsidy to run more centers which, at this point, have become profit-oriented and aren't functioning nearly as efficiently as the original ones and now poor people have to pass an IQ test to get basic services from for-profit (or only nominally non-profit) centers.
posted by griphus at 9:20 AM on August 12, 2015 [13 favorites]


They think the only way to not be worthless and totally fungible is to use your natural intellectual gifts to get a high status job, which coincidentally is what they and all their elite peers have been taught for their entire lives.

Well, to be honest, I don't think that NGOs need any more aspiring Assistant Program Coordinators. What they do need is money and highly specialized professionals who can serve their mission on the ground. And the latter group is well represented among the local people, in many cases -- but they need to be hired and paid. So I think there is something useful in having upper middle class professionals learn how to best donate their money in a way that creates an infrastructure to help people. It's not simply that the world needs more social workers and teachers and civil engineers. It's that the world needs more money to HIRE social workers and teachers and civil engineers.
posted by deanc at 9:21 AM on August 12, 2015 [7 favorites]


Yes, definitely the main thing is stopping people being hungry! If you want to demonstrate effectiveness, you do need some sort of follow-up data beyond 'how many people' though. I could hand out 100 tins of tomatoes but unless you assess need and have some kind of follow-up, you don't know if handing out tins of tomatoes is actually helping anyone, if it's more cost-effective, if it's reaching the people who really need it, etc.

If anyone's from the UK and has been following the Kids' Company controversy, that's a really good example of where just counting 'people we helped' isn't enough once funders start wanting more details. There were claims that KC just handed out money to kids as a way to help them get better at budgeting, that some of them might have spent it on drugs, and that KC's attitude was "well, we're trusting them to make their own mistakes" - I actually think that's an interesting perspective to take as long as there are other support systems in place to encourage kids not to do that, or to make them fully aware of why that might not be the greatest choice. But I can also see why the government might not be psyched about continuing to fund that.

Part of the problem though is that (in the UK at least), a lot of charities are reliant on statutory funding that comes in from year to year and makes it really hard to be strategic about interventions. Ideally you'd have a multi-year strategy but if all your staff are only funded until March 2016, and you can't guarantee the money's coming in again next year, how can you possibly plan? So you end up firefighting and being in a position where you just plug immediate gaps in services rather than being able to take any kind of long-term position where you might be able to work for a world where your interventions aren't needed any more, and therefore all you can really do is count the number of people, and not whether you specifically did something helpful and worth repeating. If you had metrics to persuade wealthy independent funders that you were worth a punt on the basis of your effectiveness, that puts you in a better position (until they decide you're not as effective as someone else and withdraw their money which is why diversity of funding streams is also important).

I started my career at a big medical research charity that had a lot of independent funding so did work very strategically, had ten year plans, and was essentially working for its own closure, because if it achieved its vision, there would be no need for it to exist any more. But smaller charities don't have that luxury. (Part of the reason I left the medical research charity was that I felt uncomfortable with a goal that was about increasing the number of people on the planet and the longevity of those people when we exist within a system that doesn't adequately support the people we have now, so I freely admit I approach this subject at a completely different angle from the people in the article and possibly from many others.)
posted by theseldomseenkid at 9:49 AM on August 12, 2015 [5 favorites]


> "So I think there is something useful in having upper middle class professionals learn how to best donate their money in a way that creates an infrastructure to help people."

And yet, so many of these upper middle class jobs where one makes a lot of money are actually bad for society - if Beyonce wants to become an Effective Altruist, that's one thing, but some tech dude in a dude-centric, misogynist industry that's completely enmeshed in the Uberization of working class jobs, the growth of the security state, the financialization of everything, etc....I admit that I've said this here elsewhere, but by that same logic I should move upstate and become a prison guard, because I'd make more money and could give more to charity. Sure, I'd spend my days, you know, supporting the prison industrial complex, but I'd also be preventing malaria! Or possibly AI attacks!

It's like you're giving with one hand and taking with your remaining hand and both feet.
posted by Frowner at 10:05 AM on August 12, 2015 [7 favorites]


I'm on the edge of the effective altruism movement, but I have friends who are in the middle. I asked one of them about whether this Vox article is accurate. He told me that the article *is* an accurate description of what happened at the EA Global conference ... but that the EA movement more broadly is *not* focused on the AI apocalypse thing.

I think (though I don't pretend to have survey data!) that most EA people are committed to donating to global health charities. The people who focus on the AI apocalypse are noisy but few.

(Why was the AI apocalypse so much discussed at EA Global? Perhaps because it was held in Silicon Valley and because the big celebrity in attendance was Elon Musk...)
posted by HoraceH at 10:25 AM on August 12, 2015 [5 favorites]


As circle jerks go, this one seems less dangerous than, say, Davos, because at least there is some small chance that this group will actually realize that the danger to humanity is them.
posted by OHenryPacey at 10:26 AM on August 12, 2015 [1 favorite]


> "by that same logic I should move upstate and become a prison guard, because I'd make more money and could give more to charity"

No, don't be silly. What you ought to do is make sure to only wear one synthetic polymer jumpsuit all the time while drinking your "meals" out of a polystyrene cup and, above all, not pooping. That's how you save the world.
posted by Panjandrum at 11:53 AM on August 12, 2015 [4 favorites]


> "I think (though I don't pretend to have survey data!)"

Then let's get some survey data! Note that this survey is doing "convenience sampling", i.e. probably but not definitely better than nothing.

> "that most EA people are committed to donating to global health charities."

This is true. "The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013 -- AMF, SCI, and GiveDirectly." The first two of those are the Against Malaria Foundation and the Schistosomiasis Control Initiative.

> "The people who focus on the AI apocalypse are noisy but few."

This depends on your definition of "few". "MIRI was the fourth largest donation target", and got donations from nearly 10% of people who self-described as Effective Altruists.
posted by roystgnr at 11:57 AM on August 12, 2015 [6 favorites]


Thanks, roystgnr! This survey is very interesting.
posted by HoraceH at 12:11 PM on August 12, 2015


Two semi-related articles I was tempted to FPP (but they didn't quite qualify for my 'full endorsement', so YMMV):

'Spiritual' Nonsense Is Just As Common in Silicon Valley as Elsewhere

Here's How the Richest of the Rich Are Being Charitable
posted by oneswellfoop at 12:16 PM on August 12, 2015


They also have this argument where, if you do a traditionally "evil" job like finance, but do it with as much concern for ethics as you can, that's a good thing because you prevented whoever would have replaced you from doing a lot of harm.

This is called the Vichy fallacy.
posted by bukvich at 3:13 PM on August 12, 2015 [7 favorites]


I work in tech, read EA stuff, and give money to GiveWell. I would consider giving money to "X-risk" type stuff, but I haven't found much of it persuasive. I don't think my company is doing anything "evil" (our app sends messages).

What should I be doing differently? Like, I find it hard to believe that by quitting my job and devoting my work to some other cause, I would have a more positive impact on the world than what I'm doing now. But if someone has a specific idea for what I should be doing, I'd be interested to hear it. Until then, I'm gonna figure I'm doing OK at my ethical obligation to the world.

I don't think very many EA people can be said to be "some tech dude in a dude-centric, misogynist industry that's completely enmeshed in the Uberization of working class jobs, the growth of the security state, the financialization of everything, etc.." Like, uh, the vast majority of us EA folk work jobs providing similar services to other jobs, and have nothing to do with the security state/finance/uber/whatever, and for us, EA is not some circle jerk self-worship thing—it's just what it says on the tin.
posted by andrewpcone at 3:26 PM on August 12, 2015 [2 favorites]


theorique: "This feels a lot like what the lesswrong kids are into. Is there overlap between these groups?"

Given that, according to the survey that roystgnr linked to, MIRI was the 4th most popular charity among their respondents and that both MIRI and lesswrong were co-founded by Eliezer Yudkowsky, I'd say that the lesswrong folks are intimately mixed up in this whole thing.
posted by mhum at 5:52 PM on August 12, 2015


There are so many arguments against this sort of super-long-term logic that the mind boggles. Matthews gives a few of them, any one of which is sufficient to rebut most of this long-term BS, not even getting into the AI nonsense. These guys really don't understand how uncertainty affects expectation values. And they really don't understand the paradoxes generated by valuing potential lives as highly as actual lives (or in fact, valuing them at all).

In any case, given that Bostrom et al believe in the multiverse, who cares about the quadrillions of future humans in this universe? Given that there are an infinite number of copies of myself, making those copies even epsilon happier or longer lived outweighs the entirety of future humanity in this universe even if we fill it instantly to the brim.

(And for those who would argue that there are an infinite number of copies of those other humans too, well, those two infinities are of the same cardinality, so there's no reason to prefer an infinitude of 10^100 humans to the infinitude of 1.)

Being a bit more serious, given how uncertain the world is, I think there's a good argument to be made for why it's better to make someone 1 unit happier today rather than 5 units happier even 5 years in the future. Who the hell knows what the future will bring? Fight the currently burning fires first, with a little bit of sensible long-term prevention along the way (eg, CO2 mitigation, welfare, healthcare, etc). But anything much more long-term and complex than that is psychohistory.
posted by chortly at 8:08 PM on August 12, 2015 [2 favorites]


So, i didn't attend this conference, and didn't know about it before today. But some people who did seem to think this article overemphasizes the singularity folks, mostly because of the popularity of some speakers like Elon Musk drawing a lot of that crowd. Perhaps a discussion based more on what some of these EA people are saying and doing, and less on secondhand sources like the article, would be a bit more productive. Sure, I know, piling on rich white self-centered males is fun and all that, but perhaps not all of these folks fall so neatly into that category.

Here's a blog post by a speaker at the conference, a written version of the talk he hadn't known he was to give: http://www.jefftk.com/p/why-global-poverty

At least, it stimulated me to donate to deworming efforts today (yes, one of GiveWell's selected charities, hate on me if you must). Does seem like an effective use of some of the extra money I am fortunate enough to have. Perhaps better than buying a new pair of shoes I don't need, or getting extra minutes on my cell phone plan. Is giving to charity, even if not from the purest of motives, really all that bad?
posted by dougfelt at 11:04 PM on August 12, 2015 [1 favorite]


It's reducing charity to a consumer choice: I have $X, where can I spend it best to buy myself the self-image of a good person (with EA, that self-image is one of being intelligent and practical, not emotionally driven).

When charity can be a way to name and fight and remake the structures of the world, to reach out to people who are different and vulnerable and damaged in solidarity instead of pity and distance, and to basically do much much more than soothe our guilt over ignoring the pain in the world.

Donations do a lot, money means a lot - but money isn't the only metric.
posted by dorothyisunderwood at 1:24 AM on August 13, 2015


> "It's reducing charity to a consumer choice: I have $X, where can I spend it best to buy myself the self-image of a good person (with EA, that self-image is one of being intelligent and practical, not emotionally driven)."

That seems like the perfect way to do charity for technical SV denizens - metrics, dashboards, something with an API and data streams and sparklines showing a nice hockey stick graph of "orphans rescued" or whatever. None of that fussy "human" stuff, just data and results. In a lot of cases, this is probably a good thing. So-called "charities" that manage to spend 80% of their budget on executive salaries and administration aren't necessarily getting the best impact on the world.

It gets a little weird (OK, a lot weird) when it gets too abstract and distant. When you estimate a (e.g., numbers made up) 1-in-one-billion probability of saving 10^27 hypothetical future humans and that's so much more important than doing anything for people here and now. It reads more as a technically-driven religious faith than a practical charity. Which, again, seems to fit perfectly the ethos of lesswrong and similar intellectual experiments (thanks for the citation, mhum!)
posted by theorique at 3:45 AM on August 13, 2015
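[A minimal Python sketch of the expected-value arithmetic in the comment above, using theorique's explicitly made-up numbers; the probability and payoff are purely illustrative assumptions, not anyone's actual estimates:]

```python
# Illustrative only, using the made-up numbers from the comment above:
# a 1-in-one-billion chance of saving 10^27 hypothetical future humans,
# versus the certainty of saving one life here and now.
p_speculative = 1e-9   # assumed probability the speculative intervention works
future_lives = 1e27    # assumed hypothetical future lives saved if it does

ev_speculative = p_speculative * future_lives  # expected lives saved
ev_certain = 1.0                               # one life, with certainty

# The naive expectation favors the speculative bet by roughly 18 orders
# of magnitude, even though the payoff is vanishingly unlikely -- which
# is exactly the Pascal's-mugging-style reasoning being criticized.
print(f"speculative EV: {ev_speculative:.1e} lives, certain EV: {ev_certain}")
```

The point of the sketch is that a naive expected-value comparison is completely dominated by the size of the hypothetical payoff, no matter how tiny or uncertain the probability attached to it.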


Shorter EA community: "Sorry, we're too busy building a fence around the pond to save your drowning children."

Except they're not even building a fence, they're building a laser cannon to shoot hypothetical giant mutant killer ducks.
posted by logopetria at 7:21 AM on August 13, 2015 [2 favorites]




This thread has been archived and is closed to new comments