Effective obfuscation
November 27, 2023 11:36 AM

Web3 is Going Just Great creator Molly White writes in her Citation Needed newsletter on effective altruism and effective accelerationism: As Sam Bankman-Fried rose and fell, people outside of Silicon Valley began to hear about “effective altruism” (EA) for the first time. Then rifts emerged within OpenAI with the ouster and then reinstatement of CEO Sam Altman, and the newer phrase “effective accelerationism” (often abbreviated to “e/acc” on Twitter) began to enter the mainstream. Both ideologies ostensibly center on improving the fate of humanity, offering anyone who adopts the label an easy way to brand themselves as a deep-thinking do-gooder.

At the most surface level, both sound reasonable. Who wouldn’t want to be effective in their altruism, after all? And surely it’s just a simple fact that technological development would accelerate given that newer advances build off the old, right? But scratching the surface of both reveals their true form: a twisted morass of Silicon Valley techno-utopianism, inflated egos, and greed.

... Effective accelerationists in particular also like to suggest that their ideas are grounded in scientific concepts like thermodynamics and biological adaptation, a strategy that seems designed to woo the technologist types who are primed to put more stock in something that sounds scientific, even if it’s nonsense. For example, the inaugural Substack post defining effective accelerationism’s “principles and tenets” name-drops the “Jarzynski-Crooks fluctuation dissipation theorem” and suggests that “thermodynamic bias” will ensure that only positive outcomes reward those who insatiably pursue technological development.

... It is interesting, isn’t it, that these supposedly deeply considered philosophical movements that emerge from Silicon Valley all happen to align with their adherents becoming disgustingly wealthy.
posted by Bella Donna (59 comments total) 32 users marked this as a favorite
 
Is there a difference between effective accelerationism and longtermism? Because I've already read a bunch of great takedowns of longtermism.
posted by subdee at 11:49 AM on November 27, 2023 [1 favorite]


2. This is a view known as longtermism.

Answered my own question...
posted by subdee at 11:50 AM on November 27, 2023 [7 favorites]


At least in science fiction like The Three Body Problem, the framing isn't a nebulous "we will be better off as a species if we use earth's natural resources to allow rich individuals in rich nations to pursue more and more technological development," but a much more specific "if we DON'T develop technologically, someone else will develop more FIRST and annihilate us."
posted by subdee at 11:54 AM on November 27, 2023 [6 favorites]


This was as smart and wry a piece as she usually writes, but it was also a really handy primer to compare & contrast these two bad ideologies.

I really like the point that it's not either/or, and that we should be looking for other ways to frame the issue of AI:
Some have fallen into the trap, particularly in the wake of the OpenAI saga, of framing the so-called “AI debate” as a face-off between the effective altruists and the effective accelerationists. Despite the incredibly powerful and wealthy people who are either self-professed members of either camp, or whose ideologies align quite closely, it’s important to remember that there are far more than two sides to this story.
posted by wenestvedt at 11:59 AM on November 27, 2023 [8 favorites]


I've been thinking about this a lot lately, and reading the article, I think the obvious disdain (largely justified) the author has for some proponents of Effective Altruism has kind of blinded them to an extremely significant difference between the philosophies. Yes, "Just like effective altruism, effective accelerationism can be used to justify nearly any course of action an adherent wants to take," but that is also true of a huge number of publicly popular philosophies. Philosophy often starts as a purely intellectual pursuit, but a critical mass of people will only publicize and support a philosophy if it justifies actions they think are important. People are inherently somewhat self-serving, so of course movements like EA will get somewhat co-opted by individuals who do not actually care about the needs of other people. Most of the early activities of EA were clearly positive for the world (buying mosquito nets, etc.) but that has deteriorated over time.

One of the main goals of Effective Altruism is to encourage people who are not naturally compassionate toward strangers to think about the world in a more compassionate way. Engineers (I am one) and venture capitalists tend to not have a ton of natural empathy, and all humans are kind of bad at being compassionate towards strangers. We have to learn how to be good to others and EA is one attempt to teach a certain group of people how to be good. It hasn't always succeeded but it definitely helped some people and led to actual improvements in some areas.

Effective Accelerationism has literally 0 room for compassion in its philosophy. There is no time to think about what might actually be good for the general public because the goal is to get to the future as quickly as possible. The assumption is that progress is more important than any other consideration, and trying to maximize something like that is incredibly dangerous. I am actually fairly optimistic about technology in general, but we are already moving into the future at an extremely fast rate and trying to go even faster is just insane. Effective Accelerationism is a deliberate attempt to co-opt the theoretical positivity of EA and use it to justify making things even worse.
posted by JZig at 12:01 PM on November 27, 2023 [9 favorites]


I appreciated this article for laying things out clearly (and I've been enjoying following Molly White on Masto). I've been vaguely aware of the whole TESCREAL thing for a while—there was a book years ago titled Great Mambo Chicken that made the extropians out to be a bunch of harmless cranks. It seems that they've acquired an aura of legitimacy and actual clout more recently, and they are not so harmless.

The distinction between the zoomers and doomers seems like the distinction between Protestants and Catholics from the perspective of a non-Christian.
posted by adamrice at 12:01 PM on November 27, 2023 [6 favorites]


My favourite part of TFA (the fucking article) appears at the very end, in Note 6, when the author refers to Marc Andreessen thusly:

I must at this point remind you that this is a man who built a web browser, not goddamn Beowulf.
posted by Bella Donna at 12:09 PM on November 27, 2023 [36 favorites]


@adamrice whoah deep cut on the Great Mambo Chicken! I remember finding that book in middle school and getting my mind blown.
posted by PikeMatchbox at 12:12 PM on November 27, 2023 [1 favorite]


One of the main goals of Effective Altruism is to encourage people who are not naturally compassionate toward strangers to think about the world in a more compassionate way.

Ehhhhh… gonna disagree on that one.

Firstly EA leans misogynist and white supremacist, so who they are “compassionate” about tends to be limited by who they consider people.

And the “compassion” is often directed to a bunch of extremely hypothetical future people, and always turns out to mean concentrating wealth among a handful of individuals and ignoring climate change - both of which are enormously harmful in the short term and don’t leave much room for a long term.

In short both kinds of idiot are insane and harmful.
posted by Artw at 12:14 PM on November 27, 2023 [44 favorites]


Getting as rich as possible as fast as possible, no matter the consequences, isn't altruism.

I've worked in philanthropy. It is, at its best, a weak and expensive solution to problems that shouldn't exist. From where I'm sitting, having a handful of people sit on Scrooge McDuck-style vaults of money that they parcel out bit by bit is nowhere near as effective and efficient as a world in which people are more equal to begin with.

If VCs want to do good in the world, they could start by paying taxes and making sure that everyone gets paid as well as their engineers.

And the “compassion” is often directed to a bunch of extremely hypothetical future people
This part. Learning how to be good to other people, by necessity, means learning about and talking to those people, not skipping that step in favor of hypothesizing about the future with your equally cloistered rich friends.
posted by evidenceofabsence at 12:24 PM on November 27, 2023 [51 favorites]




Engineers tend to not have a ton of natural empathy

Citation needed... I hire and train engineers. I think spreading this trope that engineers are somehow psychologically special is one of the reasons Silicon Valley has become so toxic.
posted by tofu_crouton at 12:30 PM on November 27, 2023 [71 favorites]


I vaguely remember "effective altruism" meaning "go back and check whether your attempts to help worked, for whom, and for how long. Try to improve."

I don’t think it had even acquired capitalization yet.
posted by clew at 12:30 PM on November 27, 2023 [8 favorites]


Effective Altruism came about as a reaction to the pure-profit and growth-at-all-costs nature of Silicon Valley and was founded by people who said they genuinely wanted to help, and those people then spent a lot of money trying to help people instead of buying a yacht or whatever. I'm not saying that EA is objectively great, but it is much better than the default philosophy of Silicon Valley (basically the same as Effective Accelerationism) that it grew out of. I definitely don't support what EA is today; the movement has been very damaged by SBF and the AI doomers.

Firstly EA leans misogynist and white supremacist, so who they are “compassionate” about tends to be limited by who they consider people.

Yes, that is clearly true, but that is related to the inherent nature of how compassion works in the human brain. Our current best understanding of human psychology is that human biology gave us an intrinsic love of our family but we have had to use cultural evolution to expand that to larger social groups over time. We are not born with an inherent love and respect for people who are significantly different than us; if that were true then we wouldn't need to spend so much time teaching it to our children. There's no evidence that we are born hating people who are different (as long as they don't conflict with the needs of our own family) but we are naturally indifferent to the needs of strangers.
posted by JZig at 12:30 PM on November 27, 2023 [3 favorites]


The entire point of both "effective" philosophies is to be able to say they're helping the world in a way that means they won't have to pay more taxes.

Ensuring health care? Funding education? Building cheap houses for people? Feeding the poor? HA! Instead, they will build this elaborate money machine in the name of (airy gesture) helping people, one that actually funnels money into their accounts, with exactly one more gear than a statistical average of people will be able to follow.

You want to know who has less time to devote to untangling money machine guts than that statistical average of people? Politicians, the very people who might be able to raise their taxes, who often have to delegate that understanding to experts. But if they can indoctrinate enough experts into their hellish ideology, then enough of the people those politicians talk to will tell them that their convoluted helping system is Good Actually, and they may be able to keep their taxes low.

The amazing thing about it is how obvious it all is. Maybe it's just because of where I'm sitting, comfortably outside of their big-money-no-whammies hustle, but I don't even see how LLMs, as we now understand them, can do anything other than produce bulk amounts of bullshit, ruin most websites, and provide an inexpensive way for companies to get their answers at least as wrong as they currently do. It's so obvious, to me, but no one listens to me, or any of us.

In the future, everyone will be Cassandra for 15 minutes.
posted by JHarris at 12:31 PM on November 27, 2023 [41 favorites]


I guess now we will have to start using the term 'actual altruism' to refer to anything that is actually a selfless act to aid others.
posted by snofoam at 12:36 PM on November 27, 2023 [16 favorites]


“Acoustic altruism,” perhaps.
posted by migurski at 12:43 PM on November 27, 2023 [11 favorites]


In the future, everyone will be Cassandra for 15 minutes.

Says you.
posted by The Bellman at 12:45 PM on November 27, 2023 [12 favorites]


If you have to come up with an entire philosophy in order to explain that what you're doing and how you're doing it is capital-G Good, then you're doing it wrong.

I also get the impression that a lot of effective altruism--the parts about applying largess to causes that do the most good--is a carefully constructed excuse to justify limited support. Having a whole philosophy that supposedly finds the maximum good for them to put their money towards also provides a way of shutting down critics who might object to a billionaire only donating a few million towards a cause.

Effective Altruism provides an imaginary force-multiplier: this donation may only be a million dollars, but it's such an efficient donation it will provide the same quality-adjusted life years as a hundred million given to some other cause. It's a philosophy geared towards rewarding people for doing less.
posted by RonButNotStupid at 12:51 PM on November 27, 2023 [19 favorites]


JHarris: "Maybe it's just because of where I'm sitting, comfortably outside of their big-money-no-whammies hustle, but I don't even see how LLMs, as we now understand them, can do anything other than produce bulk amounts of bullshit, ruin most websites, and provide an inexpensive way for companies to get their answers at least as wrong as they currently do. "

It's not just you. These guys are convinced their giant-chicken breeding program will produce an airplane.
posted by adamrice at 12:55 PM on November 27, 2023 [13 favorites]


Firstly EA leans misogynist and white supremacist, so who they are “compassionate” about tends to be limited by who they consider people.

Yes, that is clearly true but that is related to the inherent nature of how compassion works in the human brain


Misogyny and white supremacy aren't inherent to our biology, and any self-declared genius who leans in that direction is fully and entirely responsible for doing so.
posted by evidenceofabsence at 12:58 PM on November 27, 2023 [24 favorites]


“Acoustic altruism,” perhaps.

Artisanal altruism?
posted by kirkaracha at 1:03 PM on November 27, 2023 [1 favorite]


"Free Range Altruism"?

...any self-declared genius who leans in that direction is fully and entirely responsible for doing so.

If they are young and/or ignorant they maybe get one forgiveness -- but then after they have had it pointed out to them again and again, yes!!
posted by wenestvedt at 1:05 PM on November 27, 2023 [2 favorites]


I think the truth is that EA is the SV version of robber baron philanthropy on one end and an engineering-brain targeted version of (upper) middle class philanthropy on the other. If those sound like the same thing to you - well, they fit together pretty nicely, but I think there are somewhat distinct mindsets.

EAcc ultimately seems like the mask off/gloves off/whatever you want to call it version but I don’t have an issue with suggesting that it’s probably worse, because the way they treat it as a big joke strikes me as a particularly bad mindset with which to approach this stuff.
posted by atoxyl at 1:09 PM on November 27, 2023 [5 favorites]


applying largess to causes that do the most good--is a carefully constructed excuse to justify limited support

How do you decide where to direct your finite ability to improve the world?

Billionaires gonna billionaire until we fix them, but I have some money and time to help until then and I do prefer that it be more rather than less helpful.
posted by clew at 1:15 PM on November 27, 2023 [1 favorite]


I actually went deep into EA charities once, to help a friend who wanted to know how to best donate. He wanted to be effective, and here was a website that ranked charities, making it easy! He asked me to help decide between the different options there, so I looked into them.

The first problem is that, by their nature, sites like these are only going to find and rate things like big-name NGOs. Each dollar I donate to a school board candidate in a book-banning state probably has more direct impact, but that type of thing wouldn't meet the criteria to even get rated. You're starting from a mix of places that have high overhead and then rating them by which has the least overhead.

The greater problem is that we don't really know how to measure impact. If we did, the charitable giving landscape wouldn't be confusing to start with. For example, look at the methodology for rating The Humane League. The Humane League does corporate advocacy to get corporations to sign on to pledges saying they will use eggs that are labeled as cage-free. Whether or not that is the most effective cause for you to spend your money on is partially a personal decision. Does it matter that a place that uses child labor has cage free eggs? Does it matter that the chicken that's going to be slaughtered has more room in the barn? Different people will answer those questions differently. Giving What We Can avoids that problem by ranking it only against similar charities. Why does The Humane League come out on top? Because they focus on chickens (versus, say, pigs), and they focus on countries that they think are more open to change. Each animal type and country is given a value for criteria that GWWC invented. These criteria allow GWWC to quantify the charities, but at the end of the day does it really add up to "efficacy"?
posted by tofu_crouton at 2:02 PM on November 27, 2023 [15 favorites]


How do you decide where to direct your finite ability to improve the world?

Yeah see here is the part that appeals to the rank and file technical/professional sort, who has a bit of money to give away and knows they are fortunate to be in that position, and I don’t really have a fundamental problem with it either, granting that I am exactly that kind of person.

The bolder self-justification built into a fair amount of EA discourse is the line of logic that goes:
- I am good
- I can make a lot of money
- If I make a lot of money I will give it to a good cause, because I am good
- If I don’t make a lot of money, somebody else will, and they may not be so good
- Therefore I am justified in doing whatever I can to make as much money as possible (so I can give it to a good cause)
posted by atoxyl at 2:05 PM on November 27, 2023 [9 favorites]


I'd extend that the self-justification baked very deeply into EA is also

- Money spent to help people in the global south benefits future humanity less than money spent to maximize the ability of those already with relative wealth to contribute to the betterment of future humanity.
- Therefore a dollar spent to prevent a poor child in a developing nation from starving is a dollar wasted.

That's the immoral core of EA that most of the EA folks only say very, very quietly.
posted by tclark at 2:13 PM on November 27, 2023 [23 favorites]


If you have to come up with an entire philosophy in order to explain that what you're doing and how you're doing it is capital-G Good, then you're doing it wrong.

It's not even that. When you read these people's writings, it's clear that they aren't smart, wise, well-read, and thoughtful people who have grappled with the truth. Their philosophical ideas are roughly at the level of first-year dorm conversations. You can tell this because, if you follow any of their ideas for more than a few steps, you find out that it's stupid. None of it holds up under scrutiny because it's not real; it's just a fig leaf for their own selfish urges.

These guys are just the larvae who want desperately to pupate not into philosophers or philanthropists, but into Elon Musk.
posted by GenjiandProust at 2:27 PM on November 27, 2023 [16 favorites]


Money spent to help people in the global south benefits future humanity less than money spent to maximize the ability of those already with relative wealth to contribute to the betterment of future humanity.

One can argue that parts of the movement have ended up there, but it’s documented that its original stated principles were almost exactly the opposite and I don’t think it’s nearly so simple as that being a lie all along. So I don’t think asserting that there’s a secret elitist core is enough to understand what happened.

(I think what happened was a combination of the disproportionate influence of certain individuals and a core acceptance of an overly-literal brand of utilitarianism that opened the door for all kinds of perverse, self-serving conclusions.)
posted by atoxyl at 2:28 PM on November 27, 2023 [11 favorites]


Even if you had some way of ranking the absolute "effectiveness" of every single cause in the world, there's going to be such a long tail of things that are all effectively the same when it comes to impact that it wouldn't really matter if you chose one over another.

Effective Altruism makes the mistake of assuming that there's some determinable maximum you can optimize for, and that obsession seems to drive a lot of terrible rationalizations.
posted by RonButNotStupid at 2:31 PM on November 27, 2023 [3 favorites]


Many EA charities do help the global south because "lots of people below the poverty line" is one thing that's easily quantifiable.
posted by tofu_crouton at 2:36 PM on November 27, 2023 [3 favorites]


“Acoustic altruism,” perhaps.

Artisanal altruism?

"Free Range Altruism"?


Altruism Unplugged™
posted by They sucked his brains out! at 2:41 PM on November 27, 2023 [3 favorites]


This is a great article if you want to stoke your (reasonable!) dislike of tech bros and crypto fraudsters while using the worst exemplars to tar lots of people with guilt by distant association. It's also a great article if you want to sneer at some weirdos and philosophers without really engaging with any of their arguments or if you don't really care whether you have a good understanding of the people or views being mocked.

If, on the other hand, you want serious engagement with arguments, charitable interpretation, reasoned criticism, empirical evidence, engagement with relevant professional literature, and the like ... look elsewhere.
posted by Jonathan Livengood at 2:49 PM on November 27, 2023 [4 favorites]


One can argue that parts of the movement have ended up there, but it’s documented that its original stated principles were almost exactly the opposite

I dunno, this gets awful close to a No True Scotsman argument, especially because Peter Singer (who basically started EA) leans so hard on bare-metal utilitarianism as to not give straight answers to questions that would reveal his philosophy's eugenicist implications.
posted by tclark at 2:59 PM on November 27, 2023 [14 favorites]


If, on the other hand, you want serious engagement with arguments, charitable interpretation, reasoned criticism, empirical evidence, engagement with relevant professional literature, and the like ... look elsewhere.

Sure, come in and shit on the article and then flounce out without any links to such articles. Also, why should EA or e/acc get a charitable interpretation? They’ve in fact been given one over and over and come up lacking every single time. One more will get them over the edge, I’m sure!
posted by Uncle at 3:11 PM on November 27, 2023 [16 favorites]


If, on the other hand, you want serious engagement with arguments, charitable interpretation, reasoned criticism, empirical evidence, engagement with relevant professional literature, and the like ... look elsewhere.

Can you recommend somewhere specific to look, or is this comment like Effective Altruism itself, where the good stuff outweighs the bad stuff, but all the good stuff is somewhere in a notional future (paywalled, maybe?) and the bad stuff is here now?
posted by Pickman's Next Top Model at 3:26 PM on November 27, 2023 [18 favorites]


or if you don't really care whether you have a good understanding of the people or views being mocked

The incestuous nature of these AI companies is interesting. For example, Holden Karnofsky (of GiveWell fame) being on the board of OpenAI, married to the cofounder and president of Anthropic. It's definitely a unique time in history when philosophizing in the abstract about morality gets turned into selling investors on a dream of the future, basically turning raw huckstering into a means for raising rounds of capital. It seems clear it has been about the money and has always been about the money, larded generously with academic trappings for a veneer of legitimacy and novelty.
posted by They sucked his brains out! at 3:34 PM on November 27, 2023 [8 favorites]


This is a great article if you want to stoke your (reasonable!) dislike of tech bros and crypto fraudsters while using the worst exemplars to tar lots of people with guilt by distant association.

Yes, this is good.

It's also a great article if you want to sneer at some weirdos and philosophers without really engaging with any of their arguments or if you don't really care whether you have a good understanding of the people or views being mocked.

Also a good and worthwhile thing to do. These people are hugely self important losers and their views and arguments are garbage.

If, on the other hand, you want serious engagement with arguments, charitable interpretation, reasoned criticism, empirical evidence, engagement with relevant professional literature, and the like ... look elsewhere.

Absolutely not worth your time. This stuff is all hollow garbage and the people involved deeply unpleasant. The only possible reason you should want to delve in further is to mock it in more detail and with more accuracy.
posted by Artw at 3:35 PM on November 27, 2023 [30 favorites]


I understand why donors might be attracted to something like EA; figuring out how to best help people is very, very difficult. Do you help a few people significantly, or many more marginally? Is raising awareness actually valuable? How about the arts? etc. If you try to rank these things against each other, you'll just lose your mind. And that's a core problem with so many of these charity-ranking exercises: essentially they're pitting all these causes against each other in a kind of altruism deathmatch. It doesn't work! Your giving is going to end up being an extension of your values. Let it be that.

EA ends up being for people who don't come around to that realization, or seek justification because they don't like their own values.
posted by phooky at 3:41 PM on November 27, 2023 [5 favorites]


I dunno, this gets awful close to a No True Scotsman argument

“Awful close” is covering an awful lot of distance, here, because my comment was not even a little bit about policing who ought to be included under the banner of EA when evaluating whether EA is good or bad. I’m fine with the conclusion that it’s bad. It’s just also a fact that many of its founding documents (including some of those contributed by Peter Singer) were explicitly arguing essentially the inverse of this:

Therefore a dollar spent to prevent a poor child in a developing nation from starving is a dollar wasted.

As such I think the question of how it got from Point A to Point B in 15 years or so is a serious one. Even if the answer is just some combination of “Peter Singer’s notion of utilitarianism (eugenicist on an individual level, humanitarian on a global level?) is bad” and “the movement came to rely too heavily on SV money men (and aspirants) who are more interested in the prospect of the Singularity than in the humanity of poor children.”
posted by atoxyl at 4:45 PM on November 27, 2023 [3 favorites]


As such I think the question of how it got from Point A to Point B in 15 years or so is a serious one.

Doesn't much matter what the founding documents say if their own authors more or less disavow them in their contemporaneous interviews and writing. Your point, as much as I can determine, is actually an indictment that it was keeping two sets of books from day one.

Point B was already there at Point A, and it's just a matter which part was the quiet part. The answer, I contend, is that EA was always a stalking horse for a utilitarianism so high on its own farts and so easily co-opted into just another iteration of plutophilic onanism that it looks like it was purpose-built for precisely that.
posted by tclark at 4:59 PM on November 27, 2023 [6 favorites]


Whatever anybody says these things are or wherever they come from, the whole scene has "trickle-down" written all over it.
posted by rhizome at 5:17 PM on November 27, 2023 [6 favorites]


When I was young, I had a "Cyrano" description list in my Usenet signature that included the phrase, "God/Emperor-in-Training and Imperialist Yankee Running-Dog." Like most people I grew out of sophomoric ideas and getting my philosophy from The Notebooks of Lazarus Long. It's embarrassing to have thought one was so much smarter than everyone else that the world would be better if they decided what everyone should do.
posted by ob1quixote at 5:33 PM on November 27, 2023 [2 favorites]


Do you want firing squads? Because this is how you get firing squads.
posted by misterpatrick at 5:51 PM on November 27, 2023 [1 favorite]


“Making God,” Emily F. Gorcenski, 25 November 2023
posted by ob1quixote at 6:18 PM on November 27, 2023 [9 favorites]


Your point, as much as I can determine, is actually an indictment that it was keeping two sets of books from day one.

My point was that I find “keeping two sets of books from day one” to be an overly facile way of looking at it, but then again I find “a utilitarianism high on its own farts and easily co-opted” to be more or less on target so maybe it’s just semantics.
posted by atoxyl at 6:27 PM on November 27, 2023 [3 favorites]


Holden Karnofsky (of GiveWell fame)

I can never hear "effective altruism" without thinking of the MetaFilter GiveWell affair, which was 16 years ago! Grift from the beginning.
posted by grouse at 8:08 PM on November 27, 2023 [16 favorites]


From Emily Gorcenski’s “Making God”, linked by ob1quixote above:

Effective altruism is a political gift to the wealthy, packaged absolution that gives them moral permission to extract as much as they want. It is also perilously close to the edge of the cliff of fascism.

Marc Andreessen, the famous venture capitalist, took a flying swan dive off that cliff last month. In a rambling “techno-optimist” manifesto, he references both longtermist ideas as well as neoreactionary and classically fascist ones. […] One may as well summarize the entire philosophy with fourteen simple words: “we must secure the existence of our people and a future for our children.” This is just one small change away from a different 14 words, but simply look at some pictures of these philosophers and ask yourself to whom “our” refers.
posted by azarbayejani at 8:56 PM on November 27, 2023 [7 favorites]


I can never hear "effective altruism" without thinking of the MetaFilter GiveWell affair, which was 16 years ago!

Oh, wow.

(Didn’t know about that one but kinda miss the days when this was a hot place to astroturf)
posted by atoxyl at 9:37 PM on November 27, 2023 [4 favorites]


Defective altruism.
posted by Termite at 12:52 AM on November 28, 2023 [9 favorites]


The illogical reasoning resembles that of Putin: in order to save the “brothers and sisters” in Ukraine, they need to be destroyed first. It’s the logic and language of greedy imperialist assholes.

Of course Silicon Valley needs children to mine rare minerals in Africa, Chinese teenagers to screw devices together 24/7 and young adults to view and sort disgusting content in Southeast Asia — how else will they accumulate enough profit to make a token contribution to fight child poverty there?

Designed in California, Made in Desperation.

It’s the White Man’s Burden over and over and over again. Just go away already.
posted by UN at 1:45 AM on November 28, 2023 [3 favorites]


Wow. That piece by Emily Gorcenski that's linked above is really worth a read. It pinpoints the way these weak and incoherent 'philosophies' use an eschatological frame to enact an ideological project.* Worth taking the time to read, genuinely.

* Let's think of 'philosophy' as something that you do, and 'ideology' as something that is done. The difference between thought and action.
posted by prismatic7 at 2:23 AM on November 28, 2023 [3 favorites]


I suppose "cruelty free" altruism is too much to hope for.
posted by JohnFromGR at 3:22 AM on November 28, 2023 [2 favorites]


I vaguely remember "effective altruism" meaning "go back and check whether your attempts to help worked, for whom, and for how long. Try to improve."


It was called "program evaluation" back in the day.
posted by srboisvert at 3:34 AM on November 28, 2023 [3 favorites]


I can never hear "effective altruism" without thinking of the MetaFilter GiveWell affair, which was 16 years ago! Grift from the beginning.

Indeed, grift from the beginning, by a grifter. I should have used the term infamy, rather than fame, but yes, it could prove useful for proponents to consider drawing a timeline back for some of these folks.
posted by They sucked his brains out! at 9:22 AM on November 28, 2023 [2 favorites]


A problem (one of them) with EA is the (apparent, at times) comfort with secret motives that differ from publicized explanations for behaviour.
posted by lookoutbelow at 11:19 AM on November 28, 2023 [3 favorites]


Supporters of both EA and the other faction seem happy enough with capitalism but pretty grumpy about democracy. Or is that just my imagination?
posted by Bella Donna at 2:18 PM on November 28, 2023 [2 favorites]


phooky: Your giving is going to end up being an extension of your values. Let it be that.

The values are above criticism because they're raw rational thought. None of it is biased assumption, all of it is cosmologically empirical and universally so (far) right that you can't argue with it.

misterpatrick: Do you want firing squads, because this is how you get firing squads.

Revolution is a successful coup, and a coup is organised by elites against other elites.
posted by k3ninho at 3:12 PM on November 28, 2023

