“mere ripples on the surface of the great sea of life”
October 24, 2021 10:06 AM

The point is that longtermism might be one of the most influential ideologies that few people outside of elite universities and Silicon Valley have ever heard about. I believe this needs to change because, as a former longtermist who published an entire book four years ago in defence of the general idea, I have come to see this worldview as quite possibly the most dangerous secular belief system in the world today.
Against longtermism by Phil Torres, an essay about the dangers of a philosophical movement that prioritizes all future potential humans over actual living ones.
posted by Kattullus (56 comments total) 31 users marked this as a favorite
 
Forgot to mention that it was via MeFi’s own cstross.
posted by Kattullus at 10:11 AM on October 24, 2021


Basically anything Nick Bostrom is advocating is dangerous. Fascinating, but dangerous. (Kidding, but not.)
posted by PhineasGage at 10:28 AM on October 24, 2021 [4 favorites]


Longtermism is a non-theistic religion masquerading as a philosophy. The argument that there is such a thing as destiny and "lost potential" which allows you to disregard existential threats either in the light of something like culling the weak or some wacky neo-Social Darwinist idea of strengthening humanity via trials for the sake of quadrillions of human descendants in the deep future becomes a convenient scaffold on which to throw one's predispositions, which all seem to conveniently end up pointing toward the idea that some people are worth saving and others aren't. And it all slots suspiciously nicely into an eco-fascist gestalt. Very suspiciously.

It's at minimum superstitious hubris and at maximum straight-up millenarian cultist religious thinking to posit that humanity has a "destiny" or even should have one, and that, by virtue of your cleverness as substitute for divine revelation, you've figured out what it is.
posted by tclark at 10:31 AM on October 24, 2021 [27 favorites]


Or, what Charlie Stross said much more succinctly.
posted by tclark at 10:33 AM on October 24, 2021 [4 favorites]


So convenient a thing to be a reasonable creature, since it enables one to find or make a reason for every thing one has a mind to do.

- Benjamin Franklin

posted by atoxyl at 10:45 AM on October 24, 2021 [13 favorites]


The fact that hackernews-type bros are really into it should be enough to make people wary.
posted by simmering octagon at 10:46 AM on October 24, 2021 [20 favorites]


The idea has some legitimacy, but the abuse of it is unfortunately all too real, and not *just* from TxH movement types.

Listen to assholes like Manchin invoke "our grandchildren" and all the breeders (no offense intended) who invoke their "children" against us godless atheist childless people... "Why should YOU have a say about the future, you don't have a child" As if I can't have concern about the future of the planet because I'm a goddamned human with love for humanity and can see past my own fucking little gene pool?

And Manchin can go choke on a pipe. He doesn't give a shit about his "grandchild." He gives a shit right now about how much money gets in his bank account. His caring is only about how much money that child will have in a land of fire and death.

But I think long-termism shouldn't be thrown out with the bathwater (or the baby, I don't remember what's what in the metaphor anymore).

The enemies of the human race are absolutely focused on short-termism: the corporate profit model driven by each new quarter, never looking ahead except to the fiscal year. Technology has made the imperative ever shorter. Who can forecast a decade from now, the faster this shit moves? 20 years ago we had flip phones, the iPod didn't exist yet, and the RIAA was suing Napster. Bitcoin was a twinkle in Satoshi Nakamoto's dick-for-brain.

On the one hand, this would be a good reminder that maybe longtermism is a folly: if the ones making the future with all their will are unable to predict it, what good are we?

I guess I'm saying the same thing in different terms and just complaining about semantics. A long-term focus (one that imagines a post-human world) does us no good in the here and now, just as a short-term focus does us no good.

Perhaps the scale of time should be matched to the scale of population. Immediatism can work locally. As the aggregate social collective increases in size, the scale of understanding can perhaps grow. The problem, of course, is holding either hostage.

Perhaps the problem is ignoring the key issue: sustainability. And I don't mean that as green-speak for corporate incentives to cut pollution. I mean, if you kill us all near term, what's the point of the long term? (Which is what long-termers go for, or seem to imply, but miss when they don't explicitly incorporate it.)
"
Meh, I guess, as usual, I should actually read the article. But usually I like Phil's writing so I have a feeling I'm going to agree with the general thrust.
posted by symbioid at 10:47 AM on October 24, 2021 [7 favorites]


never trust men named Nick!!

seriously tho, I have not yet read it, but this is really fascinating. I had not heard this term before, but the concept, this idea that the future is worth sacrificing the present for (esp if you are not personally making the sacrifice!!), is deeply toxic. I've been sort of groping in this direction in some of my thoughts. looking forward to reading this essay for some more structured insights. (Stross quote is 1000%)
posted by supermedusa at 10:48 AM on October 24, 2021 [4 favorites]


This article makes a lot more sense once you realize it's not talking about long-term thinking as an average person would conceptualize it, and is in fact engaging in a much more specific critique of a minority (albeit influential) view within the rationalism/Silicon Valley bubble.
I broadly accept and agree with the thrust of the article's argument, but I was not convinced of the truth of the assertion that "It is difficult to overstate how influential longtermism has become." Outside the bubble of people the article engages with, nobody cares. I agree with cstross's tweet but also billionaires are always going to find excuses to ignore the common welfare. I don't see panicking about some obscure (outside their community) philosophers as the most urgent thing on my climate change/global development agenda.
posted by Wretch729 at 10:48 AM on October 24, 2021 [11 favorites]


As a utilitarian (but very much not a total utilitarian as described in the piece) I sort of support the ideals of longtermism in theory, but everything they try to do in practice seems like a complete waste of time and money that would otherwise go towards actually useful altruistic projects. Because they're trying to extrapolate so far ahead in the future, they have to rely on very uncertain assumptions about how human society will evolve. And then they use those far future extrapolations as evidence for their assumptions, and become way too certain about how things will go. This is the same flaw in reasoning that has created predictions of "the singularity will come within the next 30 years" for more than 30 years. And it's basically just religious apocalypticism which has been around as long as humanity itself.

We know almost nothing about what humanity will be like in 10,000 years or the chances of us surviving that long. So let's fix the very obvious problems that exist now (and for the next 100 years or so) and let our theoretical superintelligent future selves deal with those theoretical future problems.
posted by JZig at 10:51 AM on October 24, 2021 [9 favorites]


Essentially - we can create a general rule:
If you read about it in Wired Magazine, abolish it from your consciousness. And make sure it doesn't spread.
posted by symbioid at 10:56 AM on October 24, 2021 [6 favorites]


I don't see panicking about some obscure (outside their community) philosophers as the most urgent thing on my climate change/global development agenda.

The Silicon Valley bubble created and implemented the gig economy and they still think it's the greatest thing on earth. They have the resources to do an enormous amount of damage. Dismissing them is naive at best and dangerous at worst.
posted by simmering octagon at 11:01 AM on October 24, 2021 [8 favorites]


JZig, I'm pretty utilitarian as well, and I'm not sure that what's being described in the article as longtermism is so much utilitarianism as hubris. The idea that we can somehow plan for a long-term future is absurd on its face--look back just 30 years to see how much we haven't predicted about today.

The only realistic approach I can think of for long-term planning for humanity is to make sure future options stay open, which is pretty much what our current culture does not value, nor do long-termers. The former is precipitating a crisis through short-term thinking, while the latter seems to be trying to help a crisis along so they can rebuild in their own stupid vision of human Godhood.
posted by Ickster at 11:01 AM on October 24, 2021 [9 favorites]


Outside the bubble of people the article engages with, nobody cares

I agree that this is true, but the article's assertion of importance is based on the idea that people like Peter Thiel are legitimately influential outside of their personal spheres. The people who are influenced most by the ideas of longtermism are wealthy and idealistic entrepreneurs, who, because of how capitalism works right now, can be quite impactful on the rest of society. More specifically, these ideas are kind of taking over the Effective Altruist movement in general. The intellectual energy and money that is being piped into longtermism right now is the same energy and money that could be really effective at actually dealing with global warming. Yes, there are lots of tech bros and oil barons who are completely self-centered, but some of them do actually want to help humanity and are being led astray by Bostrom and others.
posted by JZig at 11:06 AM on October 24, 2021 [6 favorites]


This form of thought is a close parallel to Nietzsche’s primary attack on Christianity. Christianity evolved to ignore the human condition in the here and now in favor of a promised paradise in the future. There is no need to correct or fix conditions in the world today because everything will be better (for us Christians) in the future. Nietzsche phrased this as a denial of life, here in the real world, for an imagined paradise somewhere and sometime in the future. He saw this both as an elevation of what he characterized as “slave morality,” in which the misery and subservience of existence are good because we will be rewarded later, and as a simple method of social control, distracting people from present problems with promises of something better. This type of thought is nothing new. It’s been around for thousands of years, usually promoted by those in charge of and benefiting from the existence of this “slave” class. And no, Nietzsche wasn’t promoting these ideas. He was attacking them through most of his writings.
posted by njohnson23 at 11:11 AM on October 24, 2021 [16 favorites]


There was a recent Ezra Klein podcast with Holden Karnofsky - one of the GiveWell co-founders - which covered a whoooole lot of ground, but included longtermism as a subject.
HK: "I do agree that there can be this vibe coming out of when you read stuff in the effective altruist circles that kind of feels like it’s doing this. It kind of feels like it’s trying to be as weird as possible. It’s being completely hardcore, uncompromising, wanting to use one consistent ethical framework wherever the heck it takes you. That’s not really something I believe in. It’s not something that Open Philanthropy or most of the people that I interact with as effective altruists tend to believe in.

"And so, what I believe in doing and what I like to do is to really deeply understand theoretical frameworks that can offer insight, that can open my mind, that I think give me the best shot I’m ever going to have at being ahead of the curve on ethics, at being someone whose decisions look good in hindsight instead of just following the norms of my time, which might look horrible and monstrous in hindsight. But I have limits to everything. Most of the people I know have limits to everything, and I do think that is how effective altruists usually behave in practice and certainly how I think they should."
There's clearly a more-correct point of moderation between the hardcore Reddit interpretation of longtermism and the corporate maximize-shareholder-quarterly-profits-at-all-costs approach. The Unhinged Redditor approach does a disservice to the basic idea that we (as a society) really should value future generations... Though ironically+unfortunately, we (as a society) often fail to value people living today.

I do find the idea of mitigating moral monstrosity interesting. Take as a given that future generations will look back and think that we are, on average, monsters - what are the things they're looking at? It's easy to come up with obvious candidates (climate change!) but it's worth digging a bit deeper and questioning a bit harder.

Here's an example: I was recently listening to another podcast that goes into the colonization movement within anti-slavery. It was a large enough movement to send over 15k people to West Africa, who eventually founded Liberia. And the motivating philosophy was that integration was impossible, and that America would remain a white nation after slavery was abolished. Anti-slavery was a good thing, of course, but the idea that Blacks should be displaced to West Africa instead of recognized and accepted as equal members of American society was monstrous, and arguably directly fed into the nationwide segregationist system that came after abolition.

So that's a good stance (anti-slavery) paired with a popular and monstrous solution. I think it's worth it to look hard at the way we live our lives and look for these hidden (to us) points of moral monstrosity, even while admitting that we might be wrong, and acting on the observations in a moderated way.
posted by kaibutsu at 11:15 AM on October 24, 2021 [4 favorites]


Despite their overblown fear of AI, these are exactly the kind of nerds who would end up building a paperclip maximizer basilisk, because their entire philosophy is paperclip maximization under a different name.
posted by Pyry at 12:28 PM on October 24, 2021 [7 favorites]


I feel like their argument for humanity's destiny and warnings about existential risk ignore that the most likely destiny for humanity is its search for ways to wipe out all life on this planet.
posted by perhapses at 12:28 PM on October 24, 2021


The intellectual energy and money that is being piped into longtermism right now is the same energy and money that could be really effective at actually dealing with global warming.

I think there's a too-cool-for-climate-change problem. These people see dealing with climate change as some dumb thing that government and everyone else is doing, while they (visionaries, free thinking disrupters) know that the threat of AI or not mounting space colonization missions fast enough is actually much more important because [contrarian hand-waving]. It's not intellectual, it's about feeling like you're smart — smarter than everyone else, which is basically what Silicon Valley is all about. Somehow we need to make longtermism, effective altruism and so on seem boring, stodgy and uncool.
posted by ssg at 12:29 PM on October 24, 2021 [25 favorites]


I think there's a too-cool-for-climate-change problem.

Definitely. I'm not 100% sure about this but I suspect this is a side effect of the activism about it being very vocal but not particularly effective: It seems like "everyone is already complaining about climate change" so individual people with money think they shouldn't bother. This definitely comes up in the "rationalist" community, where the argument 5 years ago was that longterm topics like existential risk were being dangerously ignored compared to obvious issues like climate change. But now the AI risk people have proper conferences, press articles, and actual funding but still complain about it being ignored, while the discussion about climate change has kind of petered out. There are a lot of ways that people can spend money effectively to help make positive social changes.
posted by JZig at 12:45 PM on October 24, 2021 [1 favorite]


Metafilter's previous discussion on longtermism from August should be linked here for convenience
posted by demonic winged headgear at 1:09 PM on October 24, 2021 [7 favorites]


It's tempting to say that the biggest problem humanity faces is the existence of billionaires, not climate change or some other issue. So tempting, that I just said it and will say it again and again.
posted by tommasz at 1:10 PM on October 24, 2021 [15 favorites]


alternately, the biggest problem billionaires face is the existence of humanity. what's required, bluntly, is robots
posted by philip-random at 1:18 PM on October 24, 2021 [3 favorites]


A potential counter-argument to seeing climate change and other things as "lesser" catastrophes - something that kills large parts of humanity also kills the potential of those individuals - and perhaps one of those people would be the one to find a solution to "further down the road" existential risks.
posted by ymgve at 4:01 PM on October 24, 2021


This seems a lot like manifest destiny.
posted by Chrysopoeia at 4:10 PM on October 24, 2021 [3 favorites]


This longtermism thing seems so transparently stupid that it seems like a prank except for the names and dollars involved. What kind of mental gymnastics are required to aver, with a straight face, that essentially the sole metric of utility is how many human-like beings exist? I propose, with at least equal authority and attention to detail, that diminishing marginal utility applies to the cosmic utility of each additional human-like being who comes into existence; longtermism is therefore disproven, and the donation checks for my many Institutes should please be made out to cash.

Like, just say that you don't want to feel bad about being shitty rich people and save us all the angst.
posted by sinfony at 4:38 PM on October 24, 2021 [4 favorites]


Glad to see at least one of them escaped the cult. Hope more of them do.
posted by clawsoon at 5:05 PM on October 24, 2021


Dude... couldn't we achieve even more total utility in the universe with 10^100 happy ants than we could with 10^50 happy people? Maybe we do need a Derek Zoolander center for ants...
posted by clawsoon at 5:14 PM on October 24, 2021 [2 favorites]


If they were really focused on the long term, wouldn't it make sense to try to make the present as good as possible?
posted by Saxon Kane at 5:46 PM on October 24, 2021


Late at night, I had been listening to the YouTube videos of Isaac Arthur to help me drop off to sleep. They're fascinating and soothing at the same time -- all about the possibilities in space, together with O'Neill cylinders, arcologies, whether we should settle the Moon or Mars first, that kind of thing. I enjoyed it -- still do -- but in the night I would quietly think: c'mon, man, we're never gonna do this shit. Any of it. We're never gonna get it together. And I couldn't relax after that.

If I had SV money and an SV brain, I could afford not to have to face these things. I could relax and imagine a future where humanity ranks on the Kardashev scale. Maybe I'd have to retreat to a bunker in the meantime, but I could handle that, and it wouldn't bother me so much. But I can't, and it does.
posted by Countess Elena at 8:43 PM on October 24, 2021 [4 favorites]


According to my calculations, if humanity were to fall back to a pre-technology state without creating superhuman descendants, there is a 1% chance that dolphins would evolve a society containing 10^59 sentient happy genetically engineered dolphin-human hybrids, which obviously count as humans. Therefore every action these longtermists take towards their human-driven future is actually reducing the expected value of humanity in a way that's equivalent to killing billions of people per day.

Monsters.
posted by bashing rocks together at 9:14 PM on October 24, 2021 [2 favorites]


These guys are the sorts of wankers who only wank off in natural bodies of water because even a 10^-50 odds of impregnating a passerby integrated out to infinity means it would be a genocide-level crime to wank at home.
posted by chortly at 10:02 PM on October 24, 2021 [3 favorites]


demonic winged headgear: Metafilter's previous discussion on longtermism from August should be linked here for convenience

Thanks for linking to that previous post!

One really interesting thing that Torres says in the Current Affairs essay is: “If we shouldn’t discriminate against people based on their spatial distance from us, we shouldn’t discriminate against them based on their temporal distance, either.”

I think that right there is the logical fallacy that lies at the root of so much of longtermist thinking. It’s a kind of fatalism, as it treats the future as being indistinguishable from the past. That looking backwards along the axis of time is the same as looking forwards.

But it’s not: what has happened has happened. And what didn’t happen didn’t happen. The past isn’t probabilistic, unlike the future. The past is the realm of the actual, while the future is the realm of the potential.

This makes it easier for me to see it as a religious movement. In some theologies, god is atemporal, and therefore, to the divine observer, there is no difference between the past and the future. All of time is actual, not potential.

Funnily enough, this is the exact same line of reasoning that makes some Christians give just as much moral weight to unborn fetuses as their mothers. The potential human being has equal status to the actual human being. And we know what kind of horrors can follow actions based on that kind of thinking.

After making that link in my head, I’m suddenly considerably more worried about the longtermist ideology, as I now have a model for how it can lead to direct harm. I could definitely see how laws could be used to enact longtermism on actual, living human beings.
posted by Kattullus at 2:27 AM on October 25, 2021 [7 favorites]


Long-termism gives equal weight to imaginary people (those in the future) as it does to real people.
posted by PhineasGage at 6:04 AM on October 25, 2021 [4 favorites]


PhineasGage: Long-termism gives equal weight to imaginary people (those in the future) as it does to real people.

Seems like it's giving extra weight to a specific kind of imaginary future person, too: The sort of Ubermensch who can carve out Lebensraum in space for a Thousand-Year Trillion-Year Reich.
posted by clawsoon at 12:30 PM on October 25, 2021 [1 favorite]


Bostrom himself argued that we should seriously consider establishing a global, invasive surveillance system that monitors every person on the planet in realtime, to amplify the ‘capacities for preventive policing’ (eg, to prevent omnicidal terrorist attacks)

Hi, I'm Troy McClure. Ever wanted to know about the rationale of genocide? Ever wonder if we have too many people?
posted by clavdivs at 2:33 PM on October 25, 2021 [2 favorites]


I fail to see how driving Gawker into bankruptcy advances the longtermist vision, but what do I know.
posted by RobotVoodooPower at 2:47 PM on October 25, 2021 [2 favorites]


Bostrom himself argued that we should seriously consider establishing a global, invasive surveillance system that monitors every person on the planet in realtime, to amplify the ‘capacities for preventive policing’ (eg, to prevent omnicidal terrorist attacks)

Surely the only people actually capable of an omnicidal terrorist attack would be people in possession of realtime knowledge about where everyone on earth is at a given time...
posted by clawsoon at 3:53 PM on October 25, 2021 [3 favorites]


"I, I've been watching you
I think I wanna know ya (know ya)
I said I, I am dangerous..."

-Morris Day and The Time.
posted by clavdivs at 5:16 PM on October 25, 2021


Metafilter's previous discussion on longtermism from August

Which was also about a Torres article. Opposing longtermism seems to be a project of his, and a reversal of his earlier and favorable work along those lines.


I'm doing some early research into this now myself. Like symbioid, I hesitate to lose both baby and bathwater. Our era's obsession with a three-month horizon is clearly bonkers and the general idea of thinking long term is badly needed. That actually leads us to climate change work, among other things, as we think ahead generations and centuries.

I am curious about Torres' rejection of space exploration, which he delivers very quickly and without nearly as much argument as he raises against other aspects of longtermism. Like Countess Elena I sometimes fear that the human race will fail to get seriously off planet.

Or to put it another way, can longtermism be fixed to get rid of these zillionaires' problems and the embrace of present-day cruelty?
posted by doctornemo at 7:10 PM on October 25, 2021 [1 favorite]


Like Countess Elena I sometimes fear that the human race will fail to get seriously off planet.

I have some fundamentalist Christian friends, and their hope of heaven strikes me as similar to the idea that we'll ever seriously colonize any other body than earth. Astronomical observations have pretty much destroyed the idea that there's a heaven "up there", and at the same time they've pretty much destroyed the idea that any of the places which are up there will be places for us to live. No place that's close is reasonably livable, and no place that's livable is reasonably close.

If we manage to do good by the planet and people we've got I think it'll count as a major achievement.
posted by clawsoon at 7:29 PM on October 25, 2021 [5 favorites]


The most persuasive comment I have heard that we shouldn't look to space travel as humanity's salvation is "It'd be a lot easier to terraform Earth and make it liveable once again than to try to terraform Mars."
posted by PhineasGage at 8:01 PM on October 25, 2021 [6 favorites]


Salvation is a tricky idea here.

There is the religious sense, as your friends hold, Clawsoon. That's mostly an analogy, although a part of the American space program was very Christian. David Noble argued that one reason for that was where key installations were built: in the Bible Belt, notably Alabama and Texas. We could think of the famous Apollo mission Bible reading as a small example. We could also add the religious analogy interpretation of posthumanism as a form of the Christian Rapture. I'd go further and add the more speculative, religion-philosophy-who knows what hybrid of Russian Cosmism.

That's all very different from a more secular salvation, going into space to protect humanity from physical suffering and destruction. I think that has two distinct versions. One is the use of space to improve human life, which has been amply demonstrated by weather satellites, spinoffs, etc. This is clearly a case where space directly helps us grapple with climate change. The other is migration off-planet to establish humanity elsewhere, in case Earth gets hit by the existential threats the longtermists rightly worry about. The first is a mild form of salvation, and possibly one most people would accept, even though space is rarely popular. The second is vastly more ambitious, and gets a lot of criticism now, primarily directed at Elon Musk, its most vivid exponent.

Personally, I don't think salvation is the right framework. It's too religious for me as an unbeliever. I do like the secular aspects, but they don't need the ecclesiastical framing. Moreover, salvation doesn't really describe what the longtermists have in mind. That's a short term piece of their idea; the long term is so much more positive, a sense of flourishing.
posted by doctornemo at 5:40 AM on October 26, 2021 [1 favorite]


If we manage to do good by the planet and people we've got I think it'll count as a major achievement.

I hear that, clawsoon. Some days my outlook turns very dark, partly because my wife works in public health in the US, and partly because my work requires considering bad futures very seriously. In the book I just turned in, one chapter concluded with the extermination of the human race.

I've heard related views from some other folks. Naomi Klein, for example, is very openly anti-space. She sees the famous view of the Earth from off-planet as a dangerous one, giving people a view that ignores the richness of each point on the world, causing us to ignore ecological realities. Donna Haraway strikes me as similar in her urging for humanity to reduce itself, shrinking its population and footprint on the Earth without leaving it. If I'm reading them correctly, they see your goal, clawsoon, as not only vital but all-consuming of our attention. For them, we can't waste time haring off into the Oort Cloud.

On the other hand, on other days, I see the opposite view. Doing two things is something humanity can accomplish: migrating into space while undoing industrialism's damage to the Earth. There's been a lot of thinking and planning along these lines, like the Tofflers' idea of moving industry off-planet. We're a huge and capacious species, capable of accomplishing wonders. We could aim for decarbonization and O'Neill colonies together.
posted by doctornemo at 5:47 AM on October 26, 2021 [2 favorites]


"It'd be a lot easier to terraform Earth and make it liveable once again than to try to terraform Mars."

I can see that idea, PhineasGage, especially as we've already been engaged in terraforming Earth since 1800 or so. (Especially if by "liveable" we mean "more liveable," since Earth is obviously liveable now.)

Yet we can still try to do both. The strongest reason not to isn't difficulty but an argument from scarcity, that our resources are too scant to do both and we must prioritize.
posted by doctornemo at 5:53 AM on October 26, 2021


Ah, so the venture capitalists have now adopted Stalin's "a single death is a tragedy, a million deaths are a statistic."
posted by Schmucko at 8:34 AM on October 26, 2021


Oh, yes. I wasn't advocating, just sharing that viewpoint I have heard (maybe even here on MeFi). I do think, though, that on a pure survival basis there is no way that we could make any place off planet fully liveable for a meaningful number of humans in a time period short enough to make that a 'solution' to the situation here on earth right now.
posted by PhineasGage at 8:35 AM on October 26, 2021 [2 favorites]


I think we need to consider whether humanity SHOULD continue to shape the inhabitants of the future according to their values if they (we) botch climate change. It's something we've known is coming for decades, not some last-minute asteroid. To avert it we only need to tweak our institutions to be a little less selfish in specific ways. Those who survive the deaths of billions would probably be those most responsible for the machinery of their death. I'd rather this species not become space-faring in case it displaces more benign indigenous species. I'm not interested in the infinite bliss of architects of a genocidal system.
posted by Schmucko at 8:38 AM on October 26, 2021 [2 favorites]


I'm doing some early research into this now myself. Like symbioid, I hesitate to lose both baby and bathwater. Our era's obsession with a three-month horizon is clearly bonkers and the general idea of thinking long term is badly needed. That actually leads us to climate change work, among other things, as we think ahead generations and centuries.

I think this mischaracterizes most people's objections to longtermism. In the linked article and the previously, "long term" is defined as "thousands, millions, billions, and even trillions of years" into the future. I'd wager that everyone commenting here agrees that we have an ethical responsibility to future generations, and for that reason we should be good stewards of the environment. So that baby at least is not being thrown out. I think you're probably referring in part to the incentives for elected governments to prioritize current voters over the future, but even there it's not obvious to me that getting longtermism into policy making would be an improvement over the coalition of groups trying to enact environmental protection laws right now.

My opinion after reading these articles is that it sounds like longtermism is on shaky ground philosophically, and at best leads to the same conclusions that we come to in other ways (e.g., we should care about the environment). At worst it acts as a salve for billionaires and techno-libertarians who want to spend their money on AI research but also want to be patted on the back and told that spending money on AI research is better than giving it to the homeless from an ethical standpoint.
posted by jomato at 9:11 AM on October 26, 2021 [7 favorites]


I did a deep dive into this after the August post. I read about longtermism, existential risk, and effective altruism. There's also a connection via Nick Bostrom to the wacky world of simulation arguments. I listened to podcasts. I watched "A Glitch in the Matrix", which was just godawful, but that's another story. I came away thinking that these people are trying to end-run around the ages-old "meaning of life" question by applying their academic brains. Some interesting questions are raised, but nothing's answered, and the questions have been asked in other forms for thousands of years.

For longtermism, it's important to find out what someone means by that. The term was coined in 2017 in the effective altruism (EA) community and didn't necessarily mean "millions of years from now", but I think that's what is meant when we have these threads. I think that "we should think about the world our children will inherit" is a pretty clear and convincing line of thought. The problem is that as you move outward in time (or space, frankly), it feels like reductio ad absurdum. "Eat your peas, Jimmy (or maybe fewer peas)! Think of Calphalon Bogstandardus IV living on Alpha Centauri 10^8 years from now." No offense, but I don't care about Calph at all.

Some of the obvious problems with very-long-termism:

* Assumes that the grand destiny of humans is to fill up the universe with homo sapiens, and that this is objectively "best" for ... idk the universe?
* Assumes that our judgements and preferences now have some relation to those of our descendants living far in the future.
* Often includes the idea of *even more* pseudo-people living in computer simulations. Which, see A Glitch In The Matrix, is pretty dumb. Also assumes needs and wants of those "people".
* This and effective altruism (and sim theory) are busy trying to quantify all this. Unclear what the justification for that is or whether quantified altruism is compatible with how humans make or should make decisions.
* Doesn't jibe qualitatively with the prioritization that most people make at their best.
* Turtles all the way forward: every generation is expected to prioritize the future over the present, therefore no one ever experiences a maximally enjoyable "now".
* Heat death of the universe. Whoops! This all ends at some point regardless (probably).
* As a purpose in life, it's not much different from what bacteria and frogs and wildebeests do. Survive to procreate and die. Really it's just "make it so that a maximum number of future people can also survive to procreate and die". Maybe live longer? Maybe forever in a sim? Is that good? Who knows? Does it provide "meaning"? Doubtful.

Etc. etc. It just doesn't add anything useful or believable to the human decision-making matrix.

I can't resist a parting shot at the matrix stuff: One of the most common claims you see in articles about this is basically: "The odds that we live in the base reality of this untestable hypothesis are (insert math)." LOL.
posted by freecellwizard at 10:00 AM on October 26, 2021 [9 favorites]


I can understand having some doubts about the merits of human survival; OTOH, to be able to reach that position myself, I would have to be able to pass my human judgment from a non- or super-human perspective, and I'm not quite sure how I would go about doing that.

Anyway, assuming (as I suppose I must) that human survival is a desirable goal, if we were actually going to be terraforming/geoengineering Earth, then I'd say that makes a very strong case for doing some other worlds first. Not even for humans to live there, necessarily, but just because the odds of getting such a complex engineering task right the first time are incalculably small -- and if you're doing your initial experiment on the only planet you have to live on, you're probably not going to survive long enough to learn anything useful. (I mean, arguably we are running such an experiment right now, which seems likely to have the outcome you would expect -- but we probably need to concentrate on not doing that anymore.)

Which is probably just another way of saying that neither terraforming Earth nor terraforming Mars seems especially plausible. To be able to manage such a massive task as terraforming a planet, we would need a level of collective with-it-ness that would also allow us to solve our immediate problems on Earth without resorting to geoengineering.

Back on the main topic, these longtermists seem like a tedious bunch of cranks who have followed the usual trick of cloaking their regressive goals with progressive terminology. Funny how advocating for the interests of the unborn always ends up supporting existing hierarchies... The amount of money going into their nonsense is disturbing, but I can only hope that it mostly ends up being laundered into something more socially useful, such as cocaine for the office party.
posted by Not A Thing at 10:00 AM on October 26, 2021 [3 favorites]


I think this mischaracterizes most people's objections to longtermism.
Yeah, I wasn't trying to characterize the critiques there. I was trying to get at what I see as positive features of the idea, and to see how that might be useful without disaster.
posted by doctornemo at 10:16 AM on October 26, 2021 [1 favorite]


freecellwizard: Often includes the idea of *even more* pseudo-people living in computer simulations. Which, see A Glitch In The Matrix, is pretty dumb. Also assumes needs and wants of those "people".

If pseudo-people are okay, wouldn't it make sense to expand the possibilities for which utility-maximizing organism we target? MVSH, minimum viable sentient happiness, to maximize total sentient happiness? Shouldn't we be telling the billionaires to spend their money on finding out which species achieves sentient happiness with minimal resources?
posted by clawsoon at 10:26 AM on October 26, 2021


This speaks to the dangers of a culture that's into optimization and maximization of "utility", undervaluing the humanities and social life, only able to value MORE. Some have suggested that creativity can thrive on limitations. Mozart's music, for example, came from a culture with all sorts of conventions and restrictions: the tonal system, sonata form, etc. A lot of what we enjoy comes from rootedness in organic matter and Earth. There's a level of almost insanity in projecting that all our actions aim at the creation of unlimited simulated beings in a simulated environment. This seems an exercise from deficient people who can't really imagine that the life of regular people on Earth is enough and think that by just building MORE into life it could be justified.
posted by Schmucko at 11:46 AM on October 26, 2021 [3 favorites]


If pseudo-people are okay, wouldn't it make sense to expand the possibilities for which utility-maximizing organism we target? MVSH, minimum viable sentient happiness, to maximize total sentient happiness? Shouldn't we be telling the billionaires to spend their money on finding out which species achieves sentient happiness with minimal resources?

Yeah you'd think so! Some of the Effective Altruism people like Peter Singer also have a focus on reducing animal suffering, so mostly vegan, against animal experiments, etc. And I would imagine other planets' life too. In the old thread someone asked something like "What if another sentient species exists in another system, and they have the same idea PLUS a higher ability to appreciate improved welfare? Shouldn't we defer to them?"

That's why I find all this so thorny. There are good ideas at the root of the EA (effective altruism) and longtermism discourse. But the famous people latching onto the more extreme versions are a problem. For instance, I love space exploration and think we should keep going (maybe with 0.1% of our resources), but if we cannot find a way to make Earth sustainable and find a way of all getting along, it's pointless to expand our failures elsewhere.
posted by freecellwizard at 12:25 PM on October 26, 2021 [3 favorites]


I have a useful heuristic for when someone is full of it in this vein. If someone is particularly interested in talking about how rational they are and contrasting their rationality to the rest of the world's purported irrationality, they are probably feeding you some bullshit. People who actually have a convincing argument simply make it and address others' objections as well as they can; people who have to dress up their argument in noise about how rational they are and pre-emptively dismiss everyone else as irrational are probably trying to sell you something stupid.
posted by ssg at 1:41 PM on October 26, 2021 [7 favorites]




This thread has been archived and is closed to new comments