“I still wanted to help. But I didn’t know what the hell I was doing.”
April 18, 2024 8:11 AM   Subscribe

The Deaths of Effective Altruism [archive] by Leif Wenar is a critical assessment of the effective altruism movement, taking in Sam Bankman-Fried and billionaires, Peter Singer and other philosophers, and GiveWell and the wider network of charities working off effective altruist ideas.
posted by Kattullus (82 comments total) 46 users marked this as a favorite
 
I'm going back through my blog, from a time in my life when I would have been susceptible to the pitch, and... I don't see the review of the one Peter Singer book I thought I'd written (probably because it was so bad I couldn't be bothered), but I do see that, even then, I mentioned that the only way he could be taken seriously is if we framed what he was writing as some sort of satirical performance art.

I suspect that it's partially that no one is a more fervent preacher than the former drunk, but I look around at a whole lot of people who are tied up in their philosophical models of the world so deeply that they can't see what's right in front of them. In my business life, I think this is impacting LLM adoption, where people are happy to burn down the planet because if they phrase the prompt just right they can get text that reads like it was written by an over-confident 8th grader, ignoring all of the times it gets things wrong. In so many places I see people thinking they can reduce decisions not just to simple microeconomics, but in ways that remove choice from others because it's allegedly in their best interest.

And I look around and flail helplessly at my friends who pitch universal college as a solution for this, because, so far as I can tell, this willingness to indulge in bullshit seems correlated with the prestige of the university they attended.

Anyway, good read. Thanks.
posted by straw at 8:37 AM on April 18 [24 favorites]


There's a good Dunt/Lynskey Origin Story podcast on the topic as well if you like your criticism a bit sweary.
posted by pipeski at 8:55 AM on April 18 [4 favorites]


I've said this about "effective altruism" before elsewhere, but:

There's a particular strain of technically-competent, ideologically-adrift cryptointellectual who somehow just stops asking questions the moment they've got a mental model that "makes sense" to them. You've met them on the hellbird and they've got a prominent hive over on the orange website, but you'll find them in any techno-libertarian circle where a dash of cleverness mixed with the facile reasoning that comes with graphs in Econ 101 is praised until it becomes a self-reinforcing cycle, a nerd-posturing Coriolis force churning all these ideas together as they circle the bowl, and the undernourished ideological plasticity of these minds is where the whole idea of EA does its best work.

As an ideology, it's a pile of nonsense. It's what you'd end up with if you started with Scientology and replaced "thetans" with "dollars." But as a tool of the wealthy and powerful, it's fantastic: precision-designed to target people who are capable of understanding and changing complex systems and completely neuter them as a threat to that wealth and power.

What effective altruism is most effective at is turning smart kids into political NPCs. It's effective at making its adherents ideologically irrelevant, the world's smartest, most useful idiots.
posted by mhoye at 9:02 AM on April 18 [107 favorites]


Ah, Givewell ...
posted by Melismata at 9:09 AM on April 18 [21 favorites]


Thanks mhoye -- I opened this thread intending to say something along those lines, but you were more eloquent than anything I would've dreamed up.
posted by Pedantzilla at 9:17 AM on April 18 [4 favorites]


I started reading the article expecting to find a modified Upton Sinclair reference about how it's even harder to convince a man to understand something when his self-perception as a good and moral person depends on not understanding it, and was not disappointed.
posted by allegedly at 9:18 AM on April 18 [24 favorites]


Oh my god this makes me hate effective altruism even more, something I thought wasn't possible. Where is the fucking guillotine when you need one?

Like me a dozen years earlier, Ord was excited by Peter Singer’s “shallow pond” argument. What he added to it, he said, was a way of measuring how many people’s lives he could save. The simple version goes like this. Say there’s a pill that adds a year of life to anyone who takes it. If Ord gives $50 to an aid charity, it will give out 50 pills to poor foreigners. So with his donation, he has added a total of 50 years of life. And adding 50 years is like saving the life of one child drowning in a pond. So by giving $50, he has “saved the life” of one poor child.

Only a rich person who never, ever faces even the tiniest consequence for their actions would start from "humans are completely fungible". This guy will never, ever have anyone important to him miss out on anything important because someone else was deemed more worthy; his family will never be fungible. Just other people's.

Vanity, stupidity, arrogance. Greed for things that aren't money - fame, admiration, power.

And also, none of these people had any sustained contact with activists, ever, because the most standard thing you hear when you're doing any kind of on-the-ground work is that projects must be, as much as possible, directed by the people who need them. There are obvious limits to how possible that is, and there's always human failure as well, but it does at least provide some kind of curb on the whole arrogant savior mindset.

Also, frankly, if your life as an "altruist" (or a politician, or a non-profit administrator, or a grant-giver, etc) is basically about you feeling like you're a terrific person who has all this leadership power and is doing great things that could never be accomplished without you and your ideas are amazing (although of course you're humble too), then you're doing it wrong and your values are wrong. Doing good in the world is seldom comfortable and often comes with "oh my god while I was doing A I was also neglecting/messing up B" or "I accidentally slighted this person or group because I was ignorant or tired" or "what if I had done X instead, that probably would have been better now I'm worried". That's what is normal when you're trying to do your best in the world.
posted by Frowner at 9:46 AM on April 18 [57 favorites]
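
To spell out the accounting in the passage Frowner quotes, here is a minimal sketch in Python. Every number in it ($1 per pill, one life-year per pill, 50 life-years counted as "one life saved") is the article's illustrative assumption, not real cost-effectiveness data, and the function name is invented for this sketch.

    # Toy version of the quoted accounting. All constants are the article's
    # illustrative assumptions, not real figures.
    COST_PER_PILL = 1.0      # dollars per pill (assumed)
    YEARS_PER_PILL = 1.0     # life-years added per pill (assumed)
    YEARS_PER_LIFE = 50.0    # life-years the model equates with "one life saved" (assumed)

    def lives_saved(donation_dollars: float) -> float:
        """'Lives saved' under the toy model's bookkeeping."""
        pills = donation_dollars / COST_PER_PILL
        life_years = pills * YEARS_PER_PILL
        return life_years / YEARS_PER_LIFE

    print(lives_saved(50))   # 1.0 -- the "$50 saves one child" claim

The rest of the thread's objection is precisely that real aid does not reduce to this arithmetic.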


EA is a guilt-reduction model.

It allows TechBros (and they are ALL Bros!) to live the life of extreme capitalism they were always going to live while vaguely waving their hands at 'doing good', thus relieving themselves of any vague sense of guilt they might feel for being such a raging capitalist.
posted by Frayed Knot at 9:54 AM on April 18 [30 favorites]


I'm reading a book on longtermism (involuntarily) and it reminded me of my thoughts on abortion. It's easy to advocate for future humans, as a class, because they don't exist yet and so they can never disagree with you, they always exist exactly as you imagine them. Maybe longtermism is an escape from reality for the EA folks the way that abortion is escape from more difficult social charity for religious folks.
posted by fleacircus at 10:04 AM on April 18 [82 favorites]


That's a very good insight, fleacircus.
posted by tavella at 10:08 AM on April 18 [6 favorites]


I was so confused about effective altruism. I used to think it was that thing where you give someone in need $1,000 instead of $10 because relatively small increases in giving offer far greater life changing effects. I think this was referenced in one of the later seasons of the Good Place where everyone remarks about how the majority of their life's problems could've been solved if only they had access to a few grand to pay debts or cover a security deposit.

But then I found out 'effective altruism' was just the name a bunch of techbros gave to justify their shitty, selfish behavior.
posted by RonButNotStupid at 10:18 AM on April 18 [16 favorites]


The EA movement is terrible for all the reasons listed very thoroughly in the article, no question about it. But I've seen this type of (again, correct) analysis used to write off the Shallow Pond/Drowning Child thought experiment, and I'm not ok with that.

The problem comes from mathing it out like a deranged economist. The point is not to add up the fractional kids you've saved and go for a high score by tweaking your stats. The point is to convince people to expend effort and resources to help people that they wouldn't otherwise try to help. If you're ignoring the harms and trade-offs, you are fundamentally not engaging with the core concept.

I might be overly sensitive to this because I've never met a longtermist, but I have used the thought experiment to talk people into helping where they can, and these TechBro leeches are starting to make that less effective by lumping the Drowning Child in with Roko's Basilisk as dumb and dangerous ideas that don't deserve to be engaged with.
posted by Garm at 10:20 AM on April 18 [17 favorites]


Which is more fun: making lots of money, calling the shots and getting hailed for your genius, or doing political work, probably alongside your regular job, to expand democracy and give other people more power over their own lives?

If you're helping imaginary future humans on a different planet 20,000 years from now, yes, exactly - they will never say "so why don't I have more agency, why are my leaders crooked, why are all the polluting industries in the poor part of town, why do we spend all our money tearing up homeless encampments" and they certainly won't say "gee Mr. Investment Banker, I think you probably shouldn't in fact take home millions of dollars a year and use tax schemes to reduce your tax burden while our city is full of unhoused people".

If you're rich and privileged, a better society will pretty much certainly mean that you won't have as much discretionary income or power. You will probably have better health, you'll have a meaningful safety net and your life will probably still be pretty sweet, plus you won't have to avert your eyes while your security removes desperate unhoused people from your path. But where's the fun in advocating for more democracy when it means that you'll just be in the top 15% instead of in the top 5%?
posted by Frowner at 10:21 AM on April 18 [27 favorites]


If "effective altruism" is going to be used to refer to the morally-toxic utilitarianism practiced by Sam Bankman-Fried, we're going to need a new term to refer to the idea that giving poor people money or installing clean water supplies in developing countries is a better use of money than donating more to Harvard's endowment to get your name on a building
posted by 0xFCAF at 10:24 AM on April 18 [34 favorites]


"What EA pushes is expected value as a life hack for morality."

This is the single best summary of the Effective Altruism "movement" that I've ever heard.
posted by tom_r at 10:34 AM on April 18 [16 favorites]


Welp, society gave "depending on the good ideas and generosity of the rich" a legit try.

How about next we try "taxing billionaires out of existence and reallocating their wealth"?
posted by DirtyOldTown at 10:39 AM on April 18 [39 favorites]


"Only a rich person who never, ever faces even the tiniest consequence for their actions would start from "humans are completely fungible"."

That's the least charitable way to express the idea.

A notion at the core of traditional (western) deontological and utilitarian theories is not that people are fungible but, rather, that no one person has more intrinsic moral standing than any other. Which is to say that, in a strict moral sense, your family's intrinsic value (their right to live, so to speak) is no greater than the intrinsic moral value of anyone else's family.

Recently, in philosophical terms, care ethics (and/or feminist ethics, they sometimes get intertwined) has criticized traditional theories for ignoring the value of emotions and specific relationships in moral calculations. They argue that philosophers have been working with an incomplete set of criteria when it comes to moral decision making. However, while Virginia Held, for example, would argue that ignoring these details leads to making bad choices (or accepting bad theories), she does not, as far as I know, insist that different people ought to be considered to have different moral value.
posted by oddman at 11:03 AM on April 18 [13 favorites]


When I originally heard of effective altruism, many years ago, I thought it meant stuff like 1. evaluating charities to see they were doing the work instead of spending a lot of money on overhead and 2. spending money in places with a lot of poor people so a little money could go a long way, especially on things like public sanitation, public health, and other infrastructure-oriented projects. What a good idea!

How naive I was.
posted by gentlyepigrams at 11:12 AM on April 18 [24 favorites]


I appreciate that this article covers both parts of EA that are bad: picking a questionable moral theory and methodology, and executing it badly. And it does so in a concrete way, with reference to the specific bad numbers and bad projects. It's too easy to mock longtermism, but harder to tackle the original sin of thinking you can math out poverty reduction without valuing human agency and thus be virtuous: the "hero mindset" the article describes, the lack of epistemic humility, the self-importance.
posted by lookoutbelow at 11:13 AM on April 18 [7 favorites]


So, let's take a bunch of folks with hardcore Engineer's Disease, use an allegory cribbed from Schindler's List to convince them that they can change the world from afar with their bank accounts, and give them just enough information to make them feel good about what they're doing and feel hostile to any evidence to the contrary.

Then, once we've got them on the hook there, we can convince them that anything they do to get more money is de facto good, since they can theoretically use the money better than anybody else could.

That could work, right?

On Preview: Sorry, I just created EA again. Don't know what I was thinking there.
posted by Navelgazer at 11:23 AM on April 18 [11 favorites]


When I originally heard of effective altruism, many years ago, I thought it meant stuff like 1. evaluating charities to see they were doing the work instead of spending a lot of money on overhead and 2. spending money in places with a lot of poor people so a little money could go a long way, especially on things like public sanitation, public health, and other infrastructure-oriented projects. What a good idea!

Wasn't there some original branch of what came to be EA that was this, beyond charity-navigator type efficiency, where 2 was centered, before it became infected with the seeds of "future hypothetical techbro electronic souls are more important than extant brown people"?
posted by lalochezia at 11:47 AM on April 18 [1 favorite]


....never mind, the article covers some of this.
posted by lalochezia at 11:51 AM on April 18


I have used the thought experiment to talk people into helping where they can, and these TechBro leeches are starting to make that less effective by lumping the Drowning Child in with Roko's Basilisk as dumb and dangerous ideas that don't deserve to be engaged with.

The term for that is "poisoning the well." It's been standard operating procedure for the Conservative Movement for decades now, and more recently the shitlib Dems have picked up the strategy to co-opt social justice, economic equality, and environmental movements to defang them and continue their pretense of "ThErE'sNoThInGwEcAnDo!1!"
posted by Pedantzilla at 12:21 PM on April 18 [7 favorites]


Garm: But I've seen this type of (again, correct) analysis to write off the Shallow Pond/Drowning Child experiment, and I'm not ok with that.

The problem with Singer’s thought experiment is that he reduces everything down to money. The cost accrued for the benefit of saving a child is ruining a pair of shoes, and somehow it’s supposed to follow that the equivalent amount of money can magically save a child at a distance.

First of all, human beings don’t operate on cost/benefit analysis; most people, when they rush to save a child, don’t give any thought to their shoes. And donating the value of the shoes is not going to have the magical effect Singer ascribes to money.

All that said, it’s an effective way to get people to think about the value of charity, so that aspect of it is good, and I understand why you want to keep hold of it.
posted by Kattullus at 12:48 PM on April 18 [9 favorites]


I give significantly to Givewell and Givedirectly, so I was interested in reading his critique, to see what I might be able to do differently or better. Apparently, I'm supposed to take up surfing, travel to Indonesia regularly (which would seem to be costly in time, money, and environmental damage), befriend a local, and work closely with him to improve his community.

But seriously, what is the takeaway for the average individual who wants to maximize the benefits of their donation? If not the EA organizations I donate to, which ones? Should I just give up and spend more on myself? He does suggest ways Givewell could improve, and I'll encourage them to consider his suggestions, but other than that, what?
posted by Mr.Know-it-some at 1:08 PM on April 18 [4 favorites]


Something like Charity Navigator which assesses the downsides as well as the upsides of donating to a specific charity?
posted by joannemerriam at 1:27 PM on April 18 [5 favorites]


IMO ways to improve giving:

1. Use social media to become more familiar with projects that seem worthwhile to you. I'm assuming here relatively small projects so that their social media is likely to be less managed. For instance, I got involved with something I do currently because I followed someone who worked on it online, then learned more about the project, then followed the project and other volunteers, then joined. Similarly, I use social media to find out more about immigrants' rights organizing and such few assistance projects as are able to work in Gaza, and donate accordingly. It takes a different kind of effort than going to a website and reading about an org, but it is extremely informative in its own way.

I would say that this takes more effort than just looking at a charity rater website, but it's not that much more effort, it's a little bit every day rather than hours and hours and it's rewarding. Expanding my social media in these ways has made my life better - and how often can you say that?

2. Donate directly to individuals when you can do so. I donate more to GoFundMes now when they are vetted.

3. Donate locally - it's a lot easier to get a sense of whether, eg, your public library system is doing a good job than something somewhere far away. You can also easily find small organizations whose effectiveness is easy to understand - there may be horrible knock-on effects to funding a tenants' aid organization but they are likely to be no worse than the horrible knock-on effects of buying a coffee.

4. Donate to political campaigns that directly or indirectly increase people's power over their own circumstances. Call and write in, protest and donate about US meddling in other countries. People in colonized/post-colonized societies will probably always be objects of charity as long as their countries are being jerked around by the US, the IMF, their patron states, etc, and as long as democratic organizations in-country are being choked and starved.

The root cause of this whole mess (and I don't mean "the root cause of all human evil", just "the root cause of white saviorism/failed aid") is colonialism and anti-democracy (small-d democracy). It's not that one can't do a little charitable giving while trying to stop the whole ugly situation, but the biggie has to be anti-colonialism. Just like we're not going to eradicate racial inequality in this country through cultural initiatives and representation on TV - not that those things don't have their upsides, but while people are still prevented from voting and while the prison industrial complex is sucking our lifeblood, we just aren't going to get things fixed.

~~
And that's why I don't think the pond analogy is very good as a piece of reasoning (and why I think effective altruism is bad). You have to say something like "know that the child is drowning in a pond where fifty children drown every year, but the pond belongs to a billionaire who refuses to fence it off and has actively dug the pond deeper and made the edges slipperier; also we suspect the billionaire of making and collecting snuff videos of drowning children as their personal hobby; also they fund the local sheriff's office and are best friends with their Senator, so complaints go nowhere" in order for it to make sense as a real-world analogy.

The kid is drowning and the other kids are getting malaria because it's profitable to maintain societies where children drown and get sick. It's like, it's great to raise money to keep kids out of the orphan-grinding machine, but as long as we've got the orphan-grinding machine our lives will be one long emergency.
posted by Frowner at 1:34 PM on April 18 [29 favorites]


it’s supposed to follow that the equivalent amount of money can magically save a child at a distance

How would you describe the causal chain of events where a charity gets organized, gets money, uses the money to buy vaccines and travel to a developing country, gives the vaccines to kids, and then the kids don't die from the disease they got vaccinated against?

I get that it's stochastic, not every dollar will be directly tied to an averted death, but there actually are fewer deaths when this happens, and if it weren't for the money, it wouldn't have happened. What makes this "magical"?
posted by 0xFCAF at 1:45 PM on April 18 [12 favorites]


The Sam Bankman-Fried trial coverage on The Verge has this quote, which is quite haunting:


Kaplan lingered on Caroline Ellison’s testimony about Bankman-Fried’s character; specifically, he told her that if there was a coin where tails destroyed the world and heads made the world twice as good, he’d gamble on flipping the coin.


That's the mindset of the kind of people who are enamoured with Effective Altruism.

posted by Faintdreams at 1:48 PM on April 18 [11 favorites]


There are some great points here about specific problems with GiveWell handwaving away the poor quality of the research it has used to promote specific strategies like mosquito nets, as well as well-landed critiques of the dangerous ideas promoted by longtermism and its valuing of non-existent future lives over people living now.

But there's also -- in both this article and the discussion here -- some parts that seem to attack the idea of charity in general. It's not a large jump from "charitable donations enable bad governments to not spend money where they should" to the conservative ethos of "charitable/social programs promote laziness and disincentivize people pulling themselves up by their own bootstraps". Don't let the poisoned well of longtermism seep into the actual value of helping people now.
posted by Theiform at 1:49 PM on April 18 [10 favorites]


0xFCAF: How would you describe the causal chain of events where a charity gets organized, gets money, uses the money to buy vaccines and travel to a developing country, gives the vaccines to kids, and then the kids don't die from the disease they got vaccinated against?

I’d describe it as not involving magic in any way.

I’m not criticizing charitable giving, I’m criticizing the logic of Singer’s pond allegory.
posted by Kattullus at 1:52 PM on April 18 [4 favorites]


If Ord gives $50 to an aid charity, it will give out 50 pills to poor foreigners. So with his donation, he has added a total of 50 years of life. And adding 50 years is like saving the life of one child drowning in a pond. So by giving $50, he has “saved the life” of one poor child.

Although... the thought that the death and redistribution of wealth of one billionaire could save the lives of millions of people does have its charms...
posted by clawsoon at 1:55 PM on April 18 [5 favorites]


a coin where tails destroyed the world and heads made the world twice as good

Literally Gavin Belson from Silicon Valley ... "I don't want to live in a world where someone else makes the world a better place better than we do."
posted by credulous at 2:07 PM on April 18 [2 favorites]


I’m criticizing the logic of Singer’s pond allegory.

Which part of the logic is wrong, though? Singer's argument is:
- You can spend money to save lives
- You can spend money to buy more shoes
- Most people would gladly forego an extra pair of shoes to possibly save a life
- Not buying additional shoes is the same as foregoing a pair you already have
- At some point you will be faced with the option to buy an unnecessary marginal pair of shoes
- You should weigh the "save a life" vs "have more shoes" decision the same as you would if faced with the pond, because they are equivalent

Which specific premise is untrue? You seem to dismiss it out of hand as obviously wrong, but I don't see how you're actually engaging with the argument.
posted by 0xFCAF at 2:08 PM on April 18 [4 favorites]


EA's (and charity's) grandest aspirations are to soften the sharpest edges of the meat grinder. Its mechanism is premised on the presumed necessity of the meat grinder and the belief that it would be impossible to dismantle.

It's kinda like how the church was seen as the primary mechanism for lifting up the serfs back when we believed in the permanence of the divine right of kings.
posted by Richard Saunders at 2:16 PM on April 18 [8 favorites]


Ah, Givewell...

This is exactly what I thought when I saw this. I see their name come up from time to time and I remember how they behaved around here and just think "Oh yeah, those assholes."
posted by East14thTaco at 2:18 PM on April 18 [5 favorites]


Which specific premise is untrue?

I don't know if this is what Kattullus is saying, but to my eyes, the whole point of the article is that Singer's allegory is false because it assumes money will save lives as simply and directly as running into the pond.

The article argues that resources given at a distance and directed by outsiders will not, in fact, save lives in a simple and direct way.
posted by joyceanmachine at 2:37 PM on April 18 [16 favorites]


How can he write "Extreme poverty is not about me, and it’s not about you. It’s about people facing daily challenges that most of us can hardly imagine. If we decide to intervene in poor people's lives, we should do so responsibly—ideally by shifting our power to them..." and not mention GiveDirectly, which aims to do exactly that by giving extremely poor people money to spend as they wish?
posted by Mr.Know-it-some at 2:47 PM on April 18 [3 favorites]


Which specific premise is untrue?

I don't want to put any words in anyone else's mouth, but I'd say that in the broadest possible interpretation - "Money given charitably can potentially save lives, which are worth more than shoes" - it's hard to argue with, yeah.

But where it breaks down is also where Singer's allegory gets its hooks in people, who run with it from there:

It raises the idea of a dollar amount that equals "Saving a life," for one thing. I can give Singer the benefit of the doubt that he was doing this as an intellectual exercise, using the "new shoes" as the yardstick there because it helped make his point clearer, but no, we don't know how much "saving a life" costs. But in the case of EA, that idea is so sticky and seductive that it set them off a-runnin' making up data to say that putting money into Thing X over here equals Most Lives Saved Per Dollar. Which is already spurious reasoning, as much as I agree with the grounding that all lives are worth the same, but it gets worse from there because:

If you take the analogy just a little bit further, you have a pretty good grasp of the situation with the kid in the pond. Without intervention, the kid drowns, and there are no clear externalities in play. When the kid in question is on the other side of the world, someplace you've never yourself seen, and you're sending your New Shoe Fund there, chances are that you don't really have an understanding of the situation, and that there are a lot of externalities at play. It's possibly still a net positive effect, but it's much harder to know that. You can research to know the situation and externalities better, and accept that the perfect is the enemy of the good and try to do the good anyway. Or you can do what EA does and refer to your formulas to credit yourself with however-many-lives-saved based on dollars, and blow off any reports of said externalities.

Which leads us to the situation where these folks aren't at all interested in, say, solving the housing crisis in the Bay Area. Because that would be very expensive, and living in the area, they are familiar with all the red tape that would be involved in that. But if you send money elsewhere and make it very clear that you don't want to hear anything but good news about what great work it's done, then you can imagine yourself a superhero without having to get your hands dirty. But there's gonna be "red tape" anywhere you send your charitable contributions. Sometimes it'll be bureaucratic and sometimes it'll be payoffs to local potentates or whatever. Which isn't to say that we shouldn't contribute aid to foreign places, but the crises closer to home are likely to be ones that you have a better understanding of, and can see the results of your intervention in more clearly.
posted by Navelgazer at 2:53 PM on April 18 [25 favorites]


I'm reading a book on longtermism (involuntarily) and it reminded me of my thoughts on abortion. It's easy to advocate for future humans, as a class, because they don't exist yet and so they can never disagree with you, they always exist exactly as you imagine them.

I've long thought this. They also can't make any demands on you beyond the ones you imagine, so if you aren't inclined to imagine anything uncomfortable, you're all good.
posted by praemunire at 2:58 PM on April 18 [10 favorites]


But there's also -- in both this article and the discussion here -- some parts that seem to attack the idea of charity in general.

Well, here's the thing - in the end charity is a poor replacement for good governance. Does this mean that charity is intrinsically bad? No, of course not. But it does tend to indicate points of failure in governance more often than not, and if not treated thoughtfully it can wind up entrenching the very problems it was meant to alleviate, or even introducing new ones.
posted by NoxAeternum at 3:03 PM on April 18 [34 favorites]


Well, here's the thing - in the end charity is a poor replacement for good governance.

To put it another way, your buddy getting his medical bills covered through a gofundme is not an argument against universal healthcare.
posted by East14thTaco at 3:11 PM on April 18 [27 favorites]


Which specific premise is untrue?

I don't know if this is what Kattullus is saying, but to my eyes, the whole point of the article is that Singer's allegory is false because it assumes money will save lives as simply and directly as running into the pond.

I don't think Singer has ever seen someone save a drowning child.

There is an immediacy and an urgency when someone is dying right in front of you, and especially when that someone is a child. People will step in front of a truck to save a child. People will not just sacrifice a pair of shoes; they will sacrifice themselves.

Man drowns saving two children in lake. Father drowns after saving three children. Man dies after saving son. Man drowns trying to save son. Maybe you're old enough to remember when that airplane crashed into the Potomac River in 1982, and there were just six people left alive in the icy water. One of them was Arland D. Williams Jr., who was, according to the rescue crew, "the most alert". Five times the rescue helicopter came, and five times did Arland tie the rescue rope around someone else, and five times did the helicopter drag that person to safety. When the helicopter returned for the sixth time, Arland D. Williams was already gone beneath the waves.

So when Peter Singer asks if you would get your shoes wet to save a drowning child, it just makes me want to smack him. Singer has many bad habits, and one of the most dangerous of them all is his tendency to ask exactly the wrong questions that then lead him to construct the most ridiculous tottering towers of "logic" that I've ever seen outside of Ayn Rand.

The road to hell is paved with intellectual exercises, and the roads to our own personal damnation are painted in the hubris of our own delusions that because our motives are pure, our actions are unimpeachable. And that's how SBF buys himself a luxury compound in the Bahamas.

Ehh, most philosophy is bunk, anyways, suitable only for sophomores and sophists. You want to do good in the world? Then do good, in your own way, as best you can. Don't try to maximize it. Just do what you can. Be kind. Be better. And be goddamn wary of those who think they've already got it all figured out.
posted by fuzzy.little.sock at 3:31 PM on April 18 [44 favorites]


This is reminding me of the post not too long ago about the question of why Superman bothers saving cats and spending an afternoon talking a suicidal teenager off the ledge when, given his basically limitless capacities, he could be doing much more "good" elsewhere. And the answer (to me, at least) is that the nature of Superman is that he can't pass up the chance to do the good right in front of him.
posted by Navelgazer at 3:39 PM on April 18 [10 favorites]


IMO ways to improve giving:

1. Use social media to become more familiar with projects that seem worthwhile to you.


Alternately, for us olds - use reliable news sources to become familiar with projects and problems, and donate, participate, or advocate as appropriate.
posted by mistersix at 3:51 PM on April 18 [2 favorites]


And this may be the latent Catholic in me but it is incredibly tacky hearing people brag about their charitable donations. Bringing down the hammer on that is one of the few things Jesus got right and yet in this nation of Judeo-Christian values...
posted by East14thTaco at 3:54 PM on April 18 [8 favorites]


Which leads us to the situation where these folks aren't at all interested in, say, solving the housing crisis in the Bay Area. Because that would be very expensive, and living in the area, they are familiar with all the red tape that would be involved in that. But if you send money elsewhere and make it very clear that you don't want to hear anything but good news about what great work it's done, then you can imagine yourself a superhero without having to get your hands dirty.

This is a great way of describing the problem with both sending money across the world and spending money on what one imagines is longtermism - you don't have to confirm that your consequentialism is having the consequences you've intended or have been promised.

(Still a consequentialist though. And not someone who's going to argue against giving to the right causes - but maybe going to argue about what the right causes are.)
posted by mistersix at 3:58 PM on April 18 [6 favorites]


"Ehh, most philosophy is bunk, anyways, suitable only for sophomores and sophists. You want to do good in the world? Then do good, in your own way, as best you can. Don't try to maximize it. Just do what you can. Be kind. Be better. And be goddamn wary of those who think they've already got it all figured out."

Please define good and kind.

Be better than whom? Should we stop being better at some point or always strive to be better?

Why should we be wary? What is the good of being wary?

Do you believe that you have it all figured out when you say that "most philosophy is bunk, anyways, suitable only for sophomores and sophists?" If not, why should we agree with you? If so, why should we agree with you?

Is the philosophy that formed the foundations of modern democracy, the end of chattel slavery in the US, equal rights movements, physics, and computer programming, to offer just a sampling of philosophy's fruits, included in your "mostly bunk" category?
posted by oddman at 4:22 PM on April 18 [8 favorites]


The common problem I see with the shallow pond and SBF's coin flip is that they ignore the fact that, for life, zero is very, very different from any other number.

The people who don't get the 1-year-extra pill continue to have possibilities to defy expectations. The dead child in the pond does not. The earth not being made twice as good still has possibilities. The destroyed earth does not.

It's a mathematical game where the only allowable operation is multiplication, and once you hit zero the game ends, but they're playing it like it's an addition game.
posted by clawsoon at 4:48 PM on April 18 [10 favorites]
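
To make clawsoon's multiplication-versus-addition point concrete, here is a minimal illustrative sketch, assuming the coin from the Ellison testimony quoted earlier (tails destroys the world, heads doubles its value); the simulation and its parameters are hypothetical, not from the article. With these numbers the per-flip arithmetic expected value is break-even (0.5 × 0 + 0.5 × 2 = 1), and positive if heads pays anything more than double, yet the chance the world survives n flips is only (1/2)^n, because zero is absorbing.

    import random

    def survival_probability(n_flips: int) -> float:
        # Chance the world has not hit zero after n independent flips.
        return 0.5 ** n_flips

    def simulate(n_flips: int, trials: int = 100_000, seed: int = 0) -> float:
        # Fraction of simulated worlds that survive all n flips.
        rng = random.Random(seed)
        survived = 0
        for _ in range(trials):
            value = 1.0
            for _ in range(n_flips):
                value = value * 2 if rng.random() < 0.5 else 0.0  # heads doubles, tails zeroes
                if value == 0.0:
                    break
            if value > 0.0:
                survived += 1
        return survived / trials

    for n in (1, 5, 10, 20):
        print(n, survival_probability(n), simulate(n))

The survivors' average value keeps the "expected value" respectable on paper, but almost every simulated world ends at zero, which is the game-over that addition-style reasoning ignores.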


Is the philosophy that formed the foundations to modern democracy,

Had more to do with encounters with other, more democratic cultures; philosophers followed behind.

the end of chattel slavery in the US,

Had more to do with religious fanatics; philosophers followed behind.

equal rights movements,

Again, more to do with encounters with more egalitarian cultures; philosophers followed behind.

physics

I'd argue that it was experimentalists who didn't care much about philosophy who awakened the world to empiricism, and philosophers followed behind to theorize about what they did.

and computer programing

Fine, you can have one.
posted by clawsoon at 5:00 PM on April 18 [5 favorites]


"Followed behind" that's a cute bit of rhetoric obscuring a lot of important analysis, argument and conceptual development.

But, hey, you dig those ideological heals right in as strongly as you'd like.
posted by oddman at 5:08 PM on April 18 [1 favorite]


I guess I'm going to have to make a profile page edit to add, “If there is to be a future, the TESCREAL cretins must be opposed at all costs by any means necessary.”
posted by ob1quixote at 5:58 PM on April 18 [1 favorite]


The core concept that effective altruism tried to build on isn't actually bad. We have limited resources and large problems, putting the resources we have into getting the most good done is reasonable.

But they ran into the brick wall of it being close to impossible to actually determine what "the most good" even means, much less how to address the biggest, most problematic problems. So they fell back on that techbro "it makes sense to me" way of thinking that, naturally, prioritized the things techbros need and like.

You can use motivated reasoning to argue that almost anything is the most good done per dollar if you're intellectually dishonest and don't care about the truth.

The real problem, of course, is that most of our biggest problems are the result of the very capitalist system which made them rich. And they weren't going to say that it'd be a good idea to overthrow capitalism, or even to get it out of certain sectors.

We don't have hunger because we lack food. We make plenty of food, every single human on Earth could eat well based on current food production. No new crops, no new farms, no new nothing. Just the food we make right this second. We have hunger because we decided, collectively, to have hunger. Once you make food part of capitalism, some people will starve and a lot of food will be wasted.

We don't have homelessness because we lack housing. There really are about 6 empty homes for every homeless person in America. Of course, for a lot of homeless people, just stuffing them into a house won't actually resolve the problem that made them homeless, but it's a start and it would work for quite a few homeless people. And the bigger point is that if we have the resources to have six empty homes for every homeless person, then we clearly have the resources to build the necessary shelters and hire the necessary staff to actually help all the homeless people. But we choose not to because we put capitalism into housing.

Effective altruism would be looking at investing in new building techniques as a means of fixing homelessness. Or funding research into genetic engineering for greater crop yield for resolving hunger. Clearly both of those would be good, but neither would actually be fixing the problem.

Basically, effective altruism had a STRONG niftiness and tech bias. If solution X is flashy and cool and impractical and not yet fully developed, and solution Y is dull, practical, available, and could be implemented, they'll go for X every time. And they'd tell you it's because X would be more effective in the long run, but we all know they'd be lying. They just like flashy stuff. And, of course, funding flashy stuff means people like them get money....
posted by sotonohito at 6:02 PM on April 18 [12 favorites]


Is the philosophy that formed the foundations of modern democracy, the end of chattel slavery in the US, equal rights movements, physics, and computer programming, to offer just a sampling of philosophy's fruits, included in your "mostly bunk" category?

... or perhaps I mean to include in my "mostly bunk" category the philosophy of social Darwinism (which formed the foundation to eugenics), or Aristotle's "natural slave" idea which descends directly to the Enlightenment's justification of slavery as a necessary evil for the inferior races, or just the entire field of Marxist beliefs which led to the most devastating policies of the 20th century if not all time (90 million deaths and still counting).

To tie this back to the original article, the problem with many forms of philosophy, the problem that makes it all so much bunk, is that we messy humans can draw from it just what we need to justify what we were going to do anyways. Or, use it to justify what we are going to do to other people. Or, use it to justify that mansion in Bermuda because it'll make us more comfortable and hence better able to save the billions of unborn future babies from the dangers of Roko's Basilisk.

I'm not here to argue an attack on all philosophy. I too have found myself falling under the sway of Plato and his cursed forms, Saint Thomas Aquinas's separation of the soul and the body, Rousseau's mythology of the noble savage (neither noble nor savage, it turns out), and so much more. God help me, in high school it was Nietzsche and Ayn Rand. And so often I would be so easily swayed this way and that by these seductive theories of life and mind.

Oh, you could tell me the fault is not in these stars and these black suns of philosophy, but in the fissures of my own heart, so quickly swamped by one idea after another. I give you that. But I thank God that I was never so completely sunk that I actually believed in them for more than brief spurts here and there, because that's the real danger, that's where your own mind tricks you into believing that your way is the true way, whether by religion or science or philosophy it does not matter, because that's when you start to make those immoral calculations that this life is worth only this much, and then you're spending millions not on better roads but on better AIs.

It's exactly when the worst are full of passionate intensity that things fall apart and mere anarchy is loosed upon the world.
posted by fuzzy.little.sock at 6:11 PM on April 18 [6 favorites]


But, hey, you dig those ideological heals right in as strongly as you'd like.

I'm arguing that a lot of different kinds of people contributed to making the modern world what it is, for good and for bad. Philosophers were only one of those groups of people.

If I were feeling provocative, I might argue that the anonymous Dutch opticians who put a couple of lenses together to create the telescope and microscope transformed our approach to knowledge in a way that no philosopher ever did or could. Plenty of philosophers have had the idea that there's secret knowledge that can only be accessed via special procedures. (And so have plenty of theologians, and astrologers, and mystics, and everyday cranks.) But not one of them thought to find those special procedures by playing around with glass.

And that's why the telescope and microscope - and the general approach to knowledge acquisition that they inspired, of building precise apparatuses to detect the unseen - were able to show that most of the philosophizing about the natural world up to that point was bullshit.

But there's a tendency of philosophy that's darker than that. It can become an invitation to leave empathy behind and focus only on the "rational". If you're a clever enough philosopher, you can show that it's only rational that those millions of Irish or tens of millions of Indians starve to death. There are plenty of philosophers who don't do this, but the impression I've formed is that it's the ones who have the highest opinion of the power of pure philosophy who are the most likely to head in that direction.
posted by clawsoon at 6:47 PM on April 18 [6 favorites]


This is a deep topic that I'm only a little bit familiar with (there are EA folks in my communities who can't seem to talk about anything else) and I won't pretend that I can completely grasp all of the arguments being made here. I'm seeing strong arguments describing the flaws in the philosophy behind the EA movement and strong evidence that the calculus EA folks tend to use to make decisions about their charitable works is flawed and often self-serving. On the other side, though, are answers to a question people are attempting to ask here which is, "how can we determine how best to make a difference in the world," such as Frowner's list and other similar responses. These responses I will sum up as saying, with a cynical tone, "there's no accurate way to measure your impact, so don't try. Instead, stick to this sensible folk-wisdom (such as giving locally or donating to politics)". Even more doubtful to me is this "protestant work ethic" sort of idea that if you're enjoying giving, you're doing it wrong. If you're giving confidently, you're doing it wrong. If you feel accomplished, you're doing it wrong.

True Charity should be uncomfortable, confusing and leave you with a sense of guilt that you've done it wrong. It's particularly important to remember that the decision you just made caused somebody to suffer.

This is one of the most tragically cynical viewpoints I've encountered, but also one I've seen no justification or evidence of, as though it should be self-evident. And I say this as a person whose mind is pathologically drawn to cynicism. It smacks of religious dogma, the idea that one must suffer with nothing but faith that your righteousness will be rewarded in an afterlife. I don't think the reckless optimism of the EA movement prescribes this as the remedy.

None of the EA folks in my community are particularly rich, but yes, they all have minds that are analytical, focused on tech, and whose motivational and reward-systems are highly geared towards problem-solving. Most have strong personality traits that are neuro-atypical and are also somewhat naive. From the tone of this discussion, and if I didn't know any better, I would think that a lot of you see people like that as inherently malignant "tech-bros". I would also bet money that most of those people never thought much or engaged seriously in charity until they caught wind of a movement that pressed their specific buttons, similar to gamification. These are people who, while currently caught up in a severely problematic movement, are primed to spend their money and attention on doing good in the world.

What can you offer them as an alternative, other than "your impact is unknowable" and "doing good should be miserable"?
posted by WaylandSmith at 7:37 PM on April 18 [13 favorites]


As someone who has been involved in advocacy for my entire career, my advice is this: advocating for change is a skill that takes time and practice to become more effective at; giving money is not a skill that improves with the same.

This does not mean that no one should give money, but that taking the time they would have spent working (assuming taking time off is an option, which I know isn’t a given but in these circles tends to have more leeway; you can also learn to advocate within your job) on learning to be an advocate will have compounding effects across your lifespan in a way that spending money won’t. The skills I learned running a rag-tag group of college kids who wanted to get Safe Zone stickers for dorm doors were immensely helpful ten years later advocating for the development and implementation of standards for gender-affirming care in a large hospital.

Also, money is necessary but not sufficient for change. Most of the issues I have worked on were not ones that could have been solved by throwing more money at the problem. It would probably have made things easier, sure, but I would rather have had one person who knew how to advocate giving me 5 hours of their time per week for a year than a single $10k donation from a donor with zero on-the-ground perspective on what we’re working on. Like, I will totally also take the $10k, but if you want to know how to do the most good and you believe in humans as beings of positive potential, you should probably invest your time and energy into becoming a human being who is really good at making change. Because that’s a skill you will use over and over and over, and it will compound throughout your life, and will also more effectively help you understand how advocacy and change projects work and therefore better direct any financial resources you may also want to donate.
posted by brook horse at 8:11 PM on April 18 [12 favorites]


Metafilter: opposed at all costs by any means necessary.
posted by chromecow at 9:39 PM on April 18


The reason that doing good isn't going to be rollicking fun for the ol' self esteem is that it's difficult. Also it involves other people and paying close attention to their wants and needs. It's not that feeling good about yourself is bad; it's that it is unlikely to be the primary result of doing good, and the type of "good" which substantially produces "gee I'm swell, I'm a great leader, I'm setting the course of history" is fake or counterproductive.

I mean, you also have to suffer if you want to replace the plumbing in an old house or sit down to do estate planning or do night-time feeds for a baby or care for a parent in failing health, because those things are difficult. They may be satisfying to have accomplished, they may be important to do, they may be a way of living your values or strengthening emotional bonds, but frankly a lot of the work is just going to suck. And you're going to second-guess yourself a lot, because it's important to get right but also complex and requires attention to others' needs and wants. It's not religious sentiment which leads one to feel that fixing plumbing or caring for a parent who can no longer get out of bed is difficult; it's the nature of the task.

Every few years, someone in my extended activist circles says something like "we're never going to make any change if people can't have fun! Protests should be fun! All our meetings should be less than an hour!" I've even said this. And you know what? It's 90% wrong. Most of the things you have to do to accomplish goals are not going to be fun-forward. They may involve fun, you may discover that you enjoy them over time, you may find a way to make them fun, you may connect with the people you work with, you may learn things about yourself, etc - but if you are looking for something that will be majority-fun, you're looking for a hobby. Not that there's anything wrong with that!

Further, if you're running any kind of complicated project and you want to run it on at all democratic lines, your meetings (or some other form of ongoing group decision-making, whether that's email, Signal or whatever) will take quite a lot of time, much of which will be serious and some of which will be boring.

I compare the time I spent in a cultural project that was basically a hobby dressed up as activism with the time I've spent in projects where we actually had to do a set of specific things consistently, thoughtfully and well or things would be obviously, definitely worse for the people we served. The hobby was a lot of fun, I met a lot of cool people and made friends I have to this day. But it didn't matter that much - if I didn't do my bit, or if our meetings were a chaos of in-jokes and oneupmanship, nothing was really lost. The stuff I'm doing now is, frankly, more stress and less fun. I have less in common with the people I work with. When we're doing the project, we work the whole time - not a lot of time to lean, unlike at the hobby. I second guess myself a lot at the project because the work has a lot of moving parts and I have to consider the needs and wants of other people who are very different from me. I would not say that I do this because it's fun, exactly. I do it because I think it's important, because I want to live at least some of my values, because I hate to see this work going undone, because I have learned that I'm a type of person who has to at least try to improve our collective situation or I feel useless - but it's not really possible to do this work and feel sure that one is a sterling person who is awesome and a leader and making Big Decisions That Are Correct Because Of My Analytical Brain.

I'm old now. I've done a lot of volunteer work and a lot of activism. I've known some truly accomplished and admirable people whose work I've seen to be effective over years. I've never met anyone who did good work who was able to go around thinking "what fun this is, what a leader I am, how easy and pleasant it is to hand out lots of money and tell people what to do", because that's not how change is made. It's how careers can be made, sure, and I guess being part of the charity-industrial complex is better than working for Lockheed Martin, but it's not how change is made.

The reason to support some transparent smaller projects and local projects is not that effects are unknowable; it's that they are likely to be more knowable when you're looking at something smaller, partly because the projects are easier to understand (aha, I guess I'm just dumb, right? If only I could understand the future of humanity out to 20,000 years like an intelligent person.) and partly because their public presence isn't managed by a large social media team. Also because they're less likely to be beholden to, eg, the type of foundation that yanks your funding if you don't toe the line on Israel.
posted by Frowner at 9:44 PM on April 18 [25 favorites]


This:

What can you offer them as an alternative

Is a really important question. Because EA has shifted from encouraging "earn to give" (SBF) to convincing college students and new graduates to commit their careers to 1) doing something they are good at that helps people, 2) in an EA way, most prominently through the EA organization 80,000 Hours. For item 1, doing good, see for example this article. Having some concrete advice for young people who are convinced of 1 and becoming skeptical of 2 is an opportunity to have a significant impact.
posted by lookoutbelow at 11:04 PM on April 18 [5 favorites]


This article isn't really fair to Peter Singer, in that most of the zany ideas that EA is notorious for (longtermism, etc.) come from Nick Bostrom. I think that the drowning child analogy is basically a good one - there are a lot of moral choices for us in developed nations that are matters of minor personal inconvenience for us but matters of life and death for those in developing countries. It doesn't imply that every moral choice can be quantified in terms of dollars. Where EA goes off the rails, IMO, is this version of utilitarianism that allows offsetting of harms with (often hypothetical) goods. For example, the idea of "earn-to-give" is really a license to do harm to others (because that's what's usually involved in making large amounts of money) as long as you make up for it by donating a part of it. Same thing with longtermism: inflicting harm on actual humans now is justified in terms of the good that could be done for hypothetical humans in the far future. It's a distortion of utilitarianism and not really what Singer advocates for (I believe he has specifically criticised longtermism). Also, if you're going to donate money, isn't it just common sense that you should evaluate where to donate in terms of where the money does the most good? Obviously this should be done with humility and consideration of the perspective of the recipients, but it seems unavoidable if you are to give money at all.
posted by L.P. Hatecraft at 11:08 PM on April 18 [12 favorites]


Related to clawsoon’s most recent comment: there is even specifically a thing in the shared EA/rationalist world where they dismantle empathy, with a set of techniques and rationalisations for it. This might be mentioned in the article, I haven’t read it yet as I’m tired. Will go dig up the stuff if anyone wants.
posted by lokta at 3:22 AM on April 19 [6 favorites]


A notion at the core of traditional (western) deontological and utilitarian theories is not that people are fungible, but rather that no one person has more intrinsic moral standing than any other person. Which is to say that, in a strict moral sense, your family's intrinsic value (their right to live, so to speak) is no greater than the intrinsic moral value of anyone else's family.

To my way of thinking, the fault in this viewpoint lies not in anything it has to say about the moral standing of persons, but in what it ignores about who is making that judgement call. By framing the argument in terms of intrinsic moral value, it completely ignores that it's people who make moral judgements and thereby assign relative moral value, and that people do ascribe a higher moral value to those they perceive as close than to those they don't, and that people are diverse.

We just do care about some things, and some people, more than we care about other things and other people. This happens not because we are morally deficient, but because each of us has just the one life and none of us is omniscient. We just do act more favourably toward those close to us than we do toward those less so, and there is nothing morally wrong with that. There exists no objective standard by which one person's claim on another's support can be measured. The only applicable standards are personal, subjective and as diverse as people.

Giving as much as you care to give to such causes as you care about cannot be wrong, and the corrective for under-served causes is not hotshot MBA types armed with one-size-fucks-all metrics; it's awareness campaigns in the first instance and public policy improvements in the longer term.
posted by flabdablet at 4:25 AM on April 19 [3 favorites]


"True Charity should be uncomfortable, confusing and leave you with a sense of guilt that you've done it wrong. It's particularly important to remember that the decision you just made caused somebody to suffer."

This is one of the most tragically cynical viewpoints I've encountered, but also one I've seen no justification or evidence of, as though it should be self-evident.


In choosing to prioritise A, you are choosing not to prioritise B. This is a choice to let B suffer. You cannot opt out either, that would be a choice to let A and B suffer. Unless you have limitless means and never have to choose, I guess.



Giving as much as you care to give to such causes as you care about cannot be wrong

Yes it can. I don't care how fervently someone may actually believe in e.g. Nazism or Trump, giving to those causes is always wrong.
posted by Dysk at 4:45 AM on April 19 [3 favorites]


Fair point, but it's wrong not because a Nazi or MAGAhat has any kind of objectively lower intrinsic moral worth as a person. Also arguable that political death cults don't legitimately count as "causes" in the traditional sense of possible recipients of charity, which is the context here.

In choosing to prioritise A, you are choosing not to prioritise B. This is a choice to let B suffer.

Not if you have a reasonable belief that other people will choose to prioritize B. None of us can fix all of the world's problems on our own and it's self-destructive to try.

A bit of collective action on the part of the billionaire class - mainly ceasing to evade or avoid paying their fair share of taxes - could fix 99% of them, though; and it's exactly because most of the world's problems are not close to the daily concerns of the billionaire class that dragging them kicking and screaming in that direction is going to be the only way they actually do it.
posted by flabdablet at 5:47 AM on April 19 [4 favorites]


In choosing to prioritise A, you are choosing not to prioritise B. This is a choice to let B suffer. You cannot opt out either, that would be a choice to let A and B suffer.

Doug Forcett called. He wants to know if you'd like a glass of water.
posted by RonButNotStupid at 6:01 AM on April 19 [2 favorites]


Sounds like everyone would like a way to make their.. altruism.. more.. effective.
posted by Aethelwulf at 6:30 AM on April 19 [4 favorites]


"We just do act more favourably toward those close to us than we do toward those less so, and there is nothing morally wrong with that. "

Generally I agree with your analysis of the psychology of moral decision making, but the tricky part is in finding the balance between being realistic about what people actually consider when they make moral decisions and proposing moral norms that aspire to minimize the bad elements of "the way things are."

For example, the point I quoted might be taken to give free rein to cronyism and nepotism. Surely we don't want to say that there is nothing morally wrong with cronyism, do we?

I don't have a simple solution to the dilemma, but we ought to acknowledge that there is one.
posted by oddman at 6:42 AM on April 19 [5 favorites]


Not if you have a reasonable belief that other people will choose to prioritize B. None of us can fix all of the world's problems on our own and it's self-destructive to try.

I just want to highlight this because I think it's really important.

If you treat these questions like each person is acting in a vacuum, there's really no way to be moral (because you can't do everything—even if your resources were unlimited, your time is not). But if you can have a reasonable expectation that if you work on your stuff, and other people work on what is important to them, and from time to time there's some sort of "hey, is anybody falling through the cracks?" assessment, then collectively we can act in a moral way.
posted by joannemerriam at 8:20 AM on April 19 [7 favorites]


It's almost like focusing on mutual aid and solidarity projects (which also need funding) rather than charity per se (which by definition doesn't involve building reciprocal solidarity-based relationships) would elide a number of potentially problematic issues.
posted by eviemath at 8:23 AM on April 19 [7 favorites]


Also arguable that political death cults don't legitimately count as "causes" in the traditional sense of possible recipients of charity, which is the context here.

Political parties of all stripes are pretty eager to accept donations. And if you're a true believer, anything that advances the cause is a moral good.
posted by Dysk at 8:59 AM on April 19


Not if you have a reasonable belief that other people will choose to prioritize B.

And at some point you're left choosing between the things that you can't reasonably assume that about, and the dilemma is the same. You do what good you can, but any decision is always a decision not to do everything else. I'm saying this bit not to suggest that it's futile, but to suggest that it's worth giving the decisions some thought.
posted by Dysk at 9:02 AM on April 19 [1 favorite]


This was the topic of a recent Sam Harris podcast. I was a bit shocked that SBF got 25 years, myself.
posted by daHIFI at 9:41 AM on April 19


I do appreciate that some EA organizations contributed to popularizing GiveDirectly and unconditional cash transfers to people experiencing poverty, and to spreading information concerning evidence of GiveDirectly's effectiveness. But as GiveDirectly points out, GiveWell no longer listing GiveDirectly as a top charity reflects a difference of opinion on the value of human agency:
Ultimately, this results in a spreadsheet. This framework combines the views of a relatively small number of stakeholders and then applies those outcomes to millions of people.

GiveDirectly believes that the weights that should count the most are those of the specific people we’re trying to help. Each individual will have their own specific needs, preferences, and aspirations. We have yet to see a place we worked in (village, county, country) where everyone made the same investments, so why prescribe the same solution for everyone? Why not treat each individual person living in poverty as exactly that, respecting their individuality and allowing them the dignity of pursuing their own goals?
posted by lookoutbelow at 9:50 AM on April 19 [5 favorites]


Which specific premise is untrue?

I'm gonna point to two:

- You can spend money to save lives: maybe, as many other people in this thread have pointed out, this depends a lot on the model for what happens when and how you spend money. If I buy a pair of shoes, am I offering someone who needs it employment? Does that participation in an economy result in more or less value than just giving my efforts away? Is money a medium of exchange? A medium for control of capital? Something else? There's a whole lot that we wrap up in money.

- Most people would gladly forego an extra pair of shoes to possibly save a life: I'm active in street safety advocacy. Most people won't walk fifty feet (like, park around a corner to allow a protected bike lane to be built) to save a stochastically fractional life. And the model for that is pretty well established, while the model for changing your consumption patterns has a lot more uncertainty in it.

But both of those, especially the one about the nature of money, are way too nuanced for your average twenty-something guy who's had smoke blown up his ass for all of his life so far (let alone a Princeton philosophy professor) to question.
posted by straw at 12:39 PM on April 19 [8 favorites]




Nick Bostrom’s Future of Humanity Institute closed this week in what the Swedish-born philosopher says was ‘death by bureaucracy’

And nothing of value was lost. Well played, that bureaucrat.
posted by flabdablet at 6:03 AM on April 20 [3 favorites]


The thing that has always stopped me cold with EA is its name. Every form of philanthropy strives to be effective, by definition; they can vary wildly according to the honesty of the particular altruist in terms of what they want to accomplish (i.e. contributing to the infrastructure of institutions of higher learning, but also putting their names on those buildings to keep them in the mouths of future generations), how well and thoroughly they examine the institutions by which they seek to accomplish those ends, what alternatives exist (and the possibility of creating new alternatives), and, yes, potential drawbacks. Way back in the eighties, there was some criticism of the Band Aid/USA For Africa/Live Aid juggernaut in terms of how much of that aid actually went to help people and how much of it went to the local warlords/dictators, and IIRC Bob Geldof's response was, well, what's the alternative? The aid workers interviewed by this article's writer are quite well aware of the problems of the charities that they work with/for/through. You know, it's a perennial problem that meta-charities such as Charity Navigator seek to deal with.

But EA tries to short-circuit both this examination and any criticism of their actual efforts by insisting that they're doing things the smart way, and if you don't get it, well, you're just not on their level, fella.
posted by Halloween Jack at 10:47 AM on April 20 [5 favorites]


But if you can have a reasonable expectation that if you work on your stuff, and other people work on what is important to them, and from time to time there's some sort of "hey is anybody falling through the cracks?" assessment, then collectively we can act in a moral way.

Well...

Just as an easy and hopefully not-deeply-fraught example, I've done a lot of work on ending exploitative practices by financial institutions. (This is not, strictly speaking, charitable work, but I have certainly forgone opportunities for significantly better-paid jobs in doing this one, so the opposite of "earn to give." My first job out of the private sector paid 18%, literally, of what my last job in the private sector paid.)

You will often hear defenses of these practices by well-paid mouthpieces for those institutions to the effect of "well, if we didn't lend at 50%, then people would just go to the loansharks at 100% plus risk of broken legs upon default." And a lot of it, most of it, is self-justifying bullshit (because the underlying idea is that any approach that doesn't absolutely maximize profits is unthinkable for a corporation to undertake). But there is, at the day-to-day level, some reality to it. The corporations won't choose not to maximize profits, and I can't reform the entire financial system at once to provide for people otherwise. So maybe, if my organization succeeds in ending this particular practice, some people really won't be able to, e.g., afford to buy a mattress this year. On the whole, it's better if we do.

But we are making (or asking courts or Congress to make) judgments for other people because we believe in the end it's better for them and for society as a whole, and they might not always agree, and they might not always be wrong about that. Trying to simultaneously cultivate a litigator's killer mindset and appropriate humility about your policy goals is not easy. But necessary.
posted by praemunire at 7:02 PM on April 20 [6 favorites]


There is a moral problem related to EA that I think of in terms of a discussion I once had on Metafilter with Eyebrows McGee. (I tell this story to other people, but I'm a little afraid of telling it here!)

My own summary of the question we were discussing: "If you're an upper middle class parent, is it morally preferable to send your kid to the school you think will be best for that kid, or to send your kid to the school where the presence of your family will do the most good for the school and the other kids there?" (This wasn't a hypothetical question for either of us. I think that's part of what made the discussion feel different than other moral arguments I've had on the internet.)

Anyway, I argued the "You have a special duty to your own kid" side, and Eyebrows argued the "Other people's kids have just as much moral value as your own, and the net good you do in the world is much greater if you refuse to participate in White Flight" side. (I hope that is a fair summary!) But my purpose now is not to re-litigate that question--I am no longer at all confident that I know the answer. I just want to use it as an example of a type of moral dilemma, between "special duty to your own" and "other people's kids have just as much moral value."

As flabdablet says above "We just do care about some things, and some people, more than we care about other things and other people. This happens not because we are morally deficient, but because each of us has just the one life and none of us is omniscient. We just do act more favourably toward those close to us than we do toward those less so, and there is nothing morally wrong with that. There exists no objective standard by which one person's claim on another's support can be measured. The only applicable standards are personal, subjective and as diverse as people."

(Honestly I think this kind of "family loyalty" morality is instinctive behavior for most humans, what the book "Moral Tribes" describes as moral behavior in automatic mode, rather than moral reasoning as such.)

But as oddman says, this could be "taken to give free rein to cronyism and nepotism. Surely we don't want to say that there is nothing morally wrong with cronyism, do we?" It also gives rise to all kinds of in-group and out-group, my tribe vs the Other dynamics that lead to a lot of tragedy in the world.

But it shows up in everyday life with questions like "Is it moral to give my child a birthday present when I could have spent that same money on a bed net for a child in a malaria-stricken country?"

I find this type of question really troubling, because as much as my instincts say one thing, my logical reasoning mind says the other. I guess I mostly go with my instincts in daily life, but I find it troubling that I can't really logically justify that. And I think about it a lot. This kind of question actually seems to come up all the time.

The closest I have been able to come to an answer about what one "should" do is "subsidiarity." I think this is a Catholic concept, but I got it from one of my favorite philosophers -- blogger Fred Clark, who writes the blog "Slacktivist." Here's one of his discussions of it:
Our roles and responsibilities differ — they may be direct or indirect, sometimes several steps removed. But everything is connected. If I abdicate my direct responsibilities, I will end up placing a heavier burden on those with indirect responsibilities — forcing them to play a more direct role. If I neglect my indirect responsibilities, I will end up placing a heavier burden on those who bear a more direct responsibility — possibly causing them to fall under the weight of it. This mutuality is, as King said, inescapable. Others affect me and I affect others, inescapably.
Basically, I have a primary responsibility to my own children. But I also have a responsibility to other people's children (that is to say, to all people) to pick up the slack when those with primary responsibility can't or won't meet their needs. But it's not just two layers, there are these widening circles of responsibility. When parents can't meet their kids' needs, the extended family picks up the slack. And if they still can't meet those needs, the local community steps in. And then local institutions, like schools and county government and possibly the local church. And then more distant groups... The state government. Multi-state charity groups. And then the national government, and international aid organizations... Each more distant group trying to fill the gaps in the previous layer of support.

I don't think it's a complete solution to the problem - it still doesn't provide a direct answer to the question Eyebrows and I originally debated, for instance. But it's a way of thinking about the problem that I think is better than the flattened "you have just as much responsibility to the kid on another continent who might die of malaria as you do to the kid drowning in front of you" framing that effective altruists use. You do have a special responsibility to the people who are closer to you, if only for practical reasons -- you are in the best position to help. But that doesn't mean you have ZERO responsibility to the distant child with malaria either. You can at least donate a few dollars.

One thing the link in the FPP added to this understanding for me is that the people closest to the situation are also the ones who can best be held accountable when their attempts to help go wrong. I think that's a really important insight. Of course things are going to go wrong sometimes. The hardest part of trying to be "effective" is not knowing what the effects of your actions will really be. Or you could say the problem with consequentialism is that we never really know all of the consequences of anything we choose to do before we do it. But choosing to "act local" does mitigate that somewhat, I guess. Because you're there to see the consequences, and learn from them. This seems to put some logical support under the idea of supporting those close to you more, I guess... But in a "subsidiarity" way, not an all-or-nothing way.
posted by OnceUponATime at 6:24 AM on April 21 [10 favorites]


it's a way of thinking about the problem that I think is better than the flattened "you have just as much responsibility to the kid on another continent who might die of malaria as you do to the kid drowning in front of you" framing that effective altruists use

and certainly way better than the Greatest Good For The Greatest Number line that the longtermist wing of EA pushes, where they just make up whatever future Number would be Greatest enough to swamp any difficult value judgements that might present themselves in the here and now.
posted by flabdablet at 7:11 AM on April 21 [7 favorites]


ob1quixote: “‘If there is to be a future, the TESCREAL cretins must be opposed at all costs by any means necessary.’”
“Obi-Wan, aren't you being the tiniest bit hyperbolic?”

Absolutely not.

“The Tech Baron Seeking to ‘Ethnically Cleanse’ San Francisco,” Gil Duran, The New Republic, 26 April 2024
posted by ob1quixote at 7:29 PM on April 27 [4 favorites]

