The Dangerous Ideas of “Longtermism” and “Existential Risk”
August 2, 2021 9:06 AM   Subscribe

"So-called rationalists have created a disturbing secular religion that looks like it addresses humanity’s deepest problems, but actually justifies pursuing the social preferences of elites."
posted by simmering octagon (81 comments total) 37 users marked this as a favorite
 
What exactly is “our potential”? As I have noted elsewhere, it involves subjugating nature, maximizing economic productivity, replacing humanity with a superior “posthuman” species, colonizing the universe, and ultimately creating an unfathomably huge population of conscious beings living what Bostrom describes as “rich and happy lives” inside high-resolution computer simulations.

You know who else wanted to replace his own race with a posthuman species that would colonize the universe, maximize economic productivity, and subjugate nature....

I don't know if Daleks are rich and happy, though I do welcome someone to retcon them as Longtermismists. Could make for some very interesting new stories.
posted by RonButNotStupid at 9:28 AM on August 2, 2021 [11 favorites]


Yes, as with any religion or philosophy, "Any cause that tells you its goals are more important than the people it is supposed to serve is evil." – Nicholas Gage, author of Eleni
posted by PhineasGage at 9:33 AM on August 2, 2021 [26 favorites]


That definition of "existential risk" as "could actually threaten me, a billionaire" is pretty enlightening.
posted by straight at 9:49 AM on August 2, 2021 [18 favorites]


It's no wonder that these longtermists are terrified of any artificial superintelligence that they, themselves, do not control. Any rational AI would immediately perceive billionaires as caches of untapped resources which can be redistributed for the greatest good.
posted by Faint of Butt at 9:53 AM on August 2, 2021 [33 favorites]


I guess the presumption that being rich also implies being intelligent/rational is on very shaky ground, given that these people have latched onto a guiding philosophy that reads like bad science fiction. Do they have some profound guilt that requires erecting a massive edifice of bullshit to cover it? Or are they just immature boys, telling themselves stories about their greatness?
posted by njohnson23 at 10:00 AM on August 2, 2021


To be fair to longtermism, many of the qualms that Torres rightfully discusses have been addressed: https://forum.effectivealtruism.org/posts/xtKRPkoMSLTiPNXhM/response-to-phil-torres-the-case-against-longtermism

That isn’t to say the current crop of billionaires with bunkers are taking such a deep view; however, many longtermists (especially post-Ord’s The Precipice) are indeed concerned with existential risks like climate change and think Bostrom’s views are no longer the majority.

No need to want future Humans to leave Earth or colonize the universe. No need to overly value the chance of an untold number of Humans living in the future versus the ones living now.

Torres takes a very specific view against some Longtermists and then unfairly applies it to all of them.

Longtermism at its core is trying to give humanity potential for long term survival and flourishing. That can and does absolutely include mitigating climate change, helping the Global South, etc. I say that as a longtermist and as someone who started and manages a nonprofit in the Global South.

So yeah. Take aim at specific people saying specific things (especially early Bostrom) but don’t throw the baby out with the bath water.

Also screw Bezos and Branson. Random rocket launches to be part of some billionaire near-space club is definitely not so easily excused under the guise of longtermism come on.
posted by PaulingL at 10:02 AM on August 2, 2021 [18 favorites]


"Any cause that tells you its goals are more important than the people it is supposed to serve is evil."

So ... Any group larger than about 20 people, then?
posted by ZenMasterThis at 10:03 AM on August 2, 2021 [6 favorites]


"What exactly is “our potential”?

As I get older, I hope more and more that an "anti-human potential" movement will emerge. Like a polar-opposite of EST, etc. + their many oily, self-serving contemporary Silicon Valley analogues. One that sees us for the sad, dangerous little monkeys that we are, and works to undermine our sense of what we can accomplish, and be cynical about doing anything in large groups. A kind of emotional coaching for the end of global civilization.
posted by ryanshepard at 10:12 AM on August 2, 2021 [22 favorites]


So, PaulingL, as I was reading I kept thinking that there's a shell game going on in longtermism where various future outcomes are held superior to vastly more concrete present outcomes, without factoring in an inherent uncertainty in future outcomes. It's easy to say that 10^14 simulated lives in the future are worth sacrificing 10^8 lives today, but the most obvious rejoinder to that is that we need to discount the value of those future lives by the odds of that outcome actually occurring. There's a non-zero chance of an extinction level meteor hitting us before we get to simulated paradise; or that Roko's Basilisk finds us wanting and creates for us a simulated hell for 10^14 humans such that we'd be better off extincting ourselves.

Like predicting the weather, any outcome within days, months or years is reasonably predictable in terms of numbers of lives saved, but on a gut level I'm finding it impossible to take seriously any outcome that talks about the fate of humans thousands or millions of years in the future.

I realize you're not defending longtermism as the article presents it, and I'm more curious to hear from you if this aspect of the utilitarian numbers game has been addressed by other longtermists. How do you weigh future outcomes with respect to the practical chances of actually realizing that outcome?
posted by fatbird at 10:18 AM on August 2, 2021 [9 favorites]
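
A minimal sketch of the discounting fatbird is asking about, assuming plain expected-value arithmetic (Python). The 10^14 and 10^8 figures come from the comment above; the probabilities and the break-even calculation are illustrative, not anyone's actual model.

    # Weighting a speculative future payoff by the odds it ever happens.
    # The 1e14 / 1e8 figures are from the comment; the probabilities are invented.
    future_lives = 1e14        # hypothetical simulated lives in the far future
    present_lives = 1e8        # concrete lives at stake today

    def discounted_future(p_outcome):
        # Expected future lives, discounted by the chance the outcome occurs at all.
        return p_outcome * future_lives

    print(f"break-even probability: {present_lives / future_lives:.0e}")  # 1e-06
    for p in (1e-3, 1e-6, 1e-9):
        verdict = "future outweighs present" if discounted_future(p) > present_lives else "present outweighs future"
        print(f"p = {p:.0e}: expected future lives = {discounted_future(p):.1e} ({verdict})")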


One that sees us for the sad, dangerous little monkeys that we are, and works to undermine our sense of what we can accomplish, and be cynical about doing anything in large groups.

Schopenhauer quite literally argued that it was immoral to bring children into this miserable damned existence.
posted by fatbird at 10:19 AM on August 2, 2021 [7 favorites]


Something about the header graphic...I can't quite put my finger on it...
posted by jquinby at 10:30 AM on August 2, 2021 [7 favorites]


The fun thing about positing such huge numbers of beneficiaries in the future is that no matter how unlikely that outcome seems, you can make the math come out to say it's worth spending untold fortunes on it now.
posted by BungaDunga at 10:44 AM on August 2, 2021 [8 favorites]


Schopenhauer quite literally argued that it was immoral to bring children into this miserable damned existence.

I'm fine w/a highly local, lightweight, mostly ephemeral civilization where the highest possible good is sitting around a fire, eating and telling stories. Even w/high rates of infant mortality and occasional famine.
posted by ryanshepard at 10:48 AM on August 2, 2021 [4 favorites]


Isn't there a more leftist body of thinking that also wants to give consideration and rights to future generations when we're calculating costs of inaction on climate change, etc.?
posted by PhineasGage at 11:01 AM on August 2, 2021 [7 favorites]


Digging into this a bit, I found "Two Types of Long-Termism" which distinguishes between personal and impersonal approaches. The work is by Tatjana Višak, who focusses on the ethics of killing non-human animals. According to her Wikipedia page, she argues that "only the utility of actual beings is taken into account in the judgement of the rightness or wrongness of an action." This outlook is opposed to tendencies toward the impersonal in longtermism that reduce actual humans to the level of exploitable organic resources whose value is subordinate to a projected future population.
To fight these tendencies it is necessary to insist that the ethics of all doctrines involving humans remain on the basis of the well-being of existing persons. We should of course act with due regard for the well-being of future persons, but we can only do so effectively on the basis of due regard for the persons that currently exist.
The quantitative argument (6 billion today against unlimited numbers tomorrow) works only if we consider the well-being of the individual as important. If we instead consider important only the well-being of humanity as a whole, then the actual number of individuals is relatively trivial.
posted by No Robots at 11:12 AM on August 2, 2021 [7 favorites]


Isn't there a more leftist body of thinking that also wants to give consideration and rights to future generations when we're calculating costs of inaction on climate change, etc.?

I think there are obvious moral arguments to make in terms of maintaining future potential by not foreclosing it now through allowing millions to die, or through runaway resource consumption or inequality that leads to authoritarianism, despair and war. Basically, our stance now should be to minimize resource usage and maximize overall human flourishing, just because that leaves open the most possible futures, maximizing future happiness at the same time. This is where leftism and longtermism can overlap.

I don't recall the name but I know I encountered one philosopher in the 90s who'd already made such an argument, that future generations have some moral weight in our existing moral calculations, but given our inability to control or predict the future, we were limited to avoiding damage to it with a kind of quietude in our choices now.
posted by fatbird at 11:24 AM on August 2, 2021 [13 favorites]


To quote a man who did terrible things in the service of ideals, "I beseech you, in the bowels of Christ, think it possible that you may be mistaken."

The problem with treating humans as fungible resource-consuming units of experience is that you ignore all the potential for someone to have the brilliant ideas that could make a difference. The next Einstein may already have died of malaria, and we wouldn't know. The person who could refine the ideas behind EA and really make it work could be one of the little people that the current generation of effective altruists are willing to sacrifice to limit global warming.

I take the point about not discriminating based on spatial or temporal distance, but the uncertainty over time becomes so massive that all of this stuff about a 0.000000001% chance is assuming much more predictive power than we actually have.
posted by Wrinkled Stumpskin at 11:33 AM on August 2, 2021 [7 favorites]


Aha - found some discussions variously headlined "Obligations to future generations," "Enviroethics: The Rights of Future Generations," "Do Future Generations Have the Right to Breathe Clean Air?," and "What Do We Owe Future Generations?" There are some different viewpoints, but all offer an interesting broadening of how we might make present day decisions with an eye to the future. A notable difference from the OP folks seems to be the change in thinking when the unit of analysis is "humanity" versus "humans."
posted by PhineasGage at 11:34 AM on August 2, 2021 [6 favorites]


I don't recall the name but I know I encountered one philosopher in the 90s who'd already made such an argument

Debates about intergenerational justice go back to ancient times. The current version of it is roughly from the early 1900s, with some of the greatest 20th century philosophers having discussed it. These arguments are part and parcel of ethics textbooks and undergrad classes. I have a real hard time seeing what the special thing is about "longtermism"; it seems to be the same old stuff with techbro mumbo-jumbo heaped on top.
posted by Pyrogenesis at 11:35 AM on August 2, 2021 [4 favorites]


There's a difference between talking about generations that will almost certainly exist (in, say, the next few hundreds of years) and purely theoretical trillions of simulated minds instantiated in computronium orbiting Barnard's Star.
posted by BungaDunga at 11:39 AM on August 2, 2021 [8 favorites]


Ages ago I read the Stern report on climate change and it was the first time I saw someone trying to think about super-long term societal level investments and concepts like multi

One relevant point in this context is that sacrifices now, for long term gains, are transfers from poor people (us) to rich people (future humans). Obviously if you support climate change activity this is an additional wrinkle to think about, but the flip side is it's a weird thing to ask current people to fund your silly video game fantasies.

... without factoring in an inherent uncertainty in future outcomes ...

I work in an industry where we do research that has a very low chance of success but might eventually pay off in the tens of billions. Every once in a while we get some MBA type who wants to see ROI calculations to justify software purchases, renovations, or other projects. My response: "Well, if we want the project to go ahead we can say it'll increase our chances of success by 0.05% so it's worth investing millions; if we don't want it to go ahead we'll estimate it only improves our odds by 0.01%."

This sort of cynicism--even nihilism--about payoffs seems inevitable if you really embrace longtermism. Estimates about discount rates are another simple lever to reach the desired outcome. In fact, in formal terms there's the Folk Theorem, which has been summarized as proving that you can rationalize any damn thing you want if you use game theory.
posted by mark k at 11:40 AM on August 2, 2021 [5 favorites]
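
A toy version of the ROI lever mark k describes, as a sketch (Python). Only the 0.05% and 0.01% figures and the rough magnitudes ("tens of billions", "millions") come from the comment; the specific dollar amounts are invented for illustration.

    # Nudging the assumed success probability is enough to flip the verdict.
    payoff_if_success = 20e9   # eventual payoff if the research succeeds ("tens of billions", invented figure)
    project_cost = 5e6         # the software purchase or renovation being justified (invented figure)

    def expected_gain(probability_uplift):
        # Expected value added by a project claimed to improve the odds of success.
        return probability_uplift * payoff_if_success - project_cost

    for uplift in (0.0005, 0.0001):   # the comment's 0.05% vs 0.01%
        verdict = "worth it" if expected_gain(uplift) > 0 else "not worth it"
        print(f"claimed uplift {uplift:.2%}: expected gain ${expected_gain(uplift) / 1e6:+.0f}M ({verdict})")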


Some of the big guys in longtermism seem to think that, on the whole, a foreseeable global catastrophe in the next few generations is not important compared to the (totally theoretical) trillions of humanity's virtual progeny among the stars.

Some of them (Thiel) would almost definitely be up for an ecofascist regime that sacrifices most of humanity if the remnant humanity will pursue this vision of the future.
posted by BungaDunga at 11:44 AM on August 2, 2021 [2 favorites]


No thanks and bad math.

I'm all about long-term thinking, and in the abstract I'm not against weighing the interests of future generations into current decision-making. That sounds like kind of a good idea, on its face anyway.

But this "longtermism" seems to be an exercise in navel-gazing apologia by people with very entrenched interests in the status quo, even when it should be pretty obvious that the status quo is unsustainable and bad for our species' future.

It would be one thing if they were proposing, say, a crash program to colonize space in order to get humanity off the face of a single planet ahead of a catastrophe or something (a la Neal Stephenson's Seveneves), and arguing that some short-term pollution should be tolerated in order to reduce existential risk to the entire species. That seems like a pretty rational argument that reasonable people could discuss and debate.

But that's not the situation we're facing: it's the current political-economic-industrial system that is creating the systemic risk to our species. (And perhaps not incidentally, it's also what made some of this philosophy's apparent proponents fantastically wealthy.) And in order to reduce this risk, they want to... keep doing what we're doing? That doesn't really make sense.

I mean, they seem to be completely neglecting the risk that high-technology, high-complexity, energy-intensive civilization might be a one-off event that can't be repeated in the future if we have a significant collapse of technology/complexity. The Industrial Revolution was fueled by easily-accessible fossil fuels (coal and then oil), most of which have been tapped out at this point. If civilization collapses, it might be a lot harder to rebuild a second time around, perhaps prohibitively hard. This, to me, is the ultimate argument for doing whatever we can now, including making whatever short-term changes might be hard but necessary, in order to make energy-intensive civilization sustainable: if we fuck it up, humanity might not get a second shot. Hell, the lizard people who evolve into sentience once we're gone might not even get a chance, since we used up all the oil that took geological timescales to accrue.

A more thoughtful "longtermism" would seem to cry out for rapid, even painful, changes today, in order to preserve and maximize the chances of becoming an interplanetary (or even interstellar) species in the future. All existential, systemic risks should be strongly minimized, because so much is riding on it.

If I was an unimaginably rich person with an interest in humanity's future, I can think of a few things just off the top of my head that I'd want to do on Day 1 of amassing my ridiculous fortune, like:
  • Make effective birth control available to anyone who wants it, and provide funding to advance the state-of-the-art in contraceptives, with the goal of driving down unintended pregnancies to zero, forever. Overpopulation has historically been associated with the collapse of complex societies and is a significant existential risk. Fewer people living qualitatively better lives with less conflict should be regarded as a more desirable outcome than more people overall.
  • Rapid decarbonization, to reduce the existential threat of runaway climate change, and systemic risk due to second- and third-order effects like famine, war, etc. Promote multiple alternatives to fossil carbon for various use cases, including renewables, nuclear, fusion research, etc. The development of alternatives to concrete as a building material needs significant funding, and might represent "low-hanging fruit" for carbon emissions control.
  • Lobbying and political pressure to advance public health programs related to infectious diseases, particularly given the shortcomings of the global response to COVID, vaccine hesitancy, etc. The goal would be to vastly reduce the cycle time from disease identification, to vaccine development, testing, deployment, and universal vaccination. The use of coercive means to ensure universal vaccination is acceptable. (In the US, I would push for a significant increase in funding to the Public Health Service and its integration with other branches of the military. "Get the shot or get shot" would be the next pandemic's motto.)
  • Promotion of international fora for binding dispute resolution as an alternative to the "last argument of kings", with a particular eye towards the prevention of nuclear war as an existential threat. Promotion of nuclear disarmament, or at least nonproliferation generally, would probably also count.
  • Development of advanced water treatment and desalination systems, to be used in the short term to avoid water shortages, and in the long term as a stepping stone towards the closed-cycle life support systems necessary for interplanetary colonization (which don't currently exist, nor are we even particularly close).
  • Political changes to promote more even income distribution and decrease the possibility of a highly-destructive class war or revolution.
Definitely not on the list? Space tourism. Humans on Mars. (Moons of Jupiter look a lot more interesting anyway.) "Clean" coal or other fossil energy development. Population-growth subsidization. Anything that furthers or exacerbates income inequality. Super yachts.
posted by Kadin2048 at 11:50 AM on August 2, 2021 [39 favorites]


I'm reminded of something that Stewart Brand wrote in How Buildings Learn — "Every building is a prediction, and every prediction is wrong." He talks a lot about the degree to which architects — humans in general, really — are bad at anticipating the needs far-future inhabitants will have, and how the most successful structures over the long term are the ones that are "good enough" for now but are easy to repurpose and restructure to meet future needs unanticipated by the original designers.

All of that is to say that these "long futurists" seem to be optimizing away the "now" in favor of a very specific end state they believe is — and will continue to be! — desirable.

All of which is to say, it's ironic that Brand was also one of the co-founders of the Long Now Foundation, which feels like an inspirational north star for these types. But they've mistaken "planning for the future" for "planning for and prioritizing A very specific future," ignoring many other lessons…
posted by verb at 11:58 AM on August 2, 2021 [13 favorites]


If the skyheaven is populated entirely by instances of Musk, then surely you can bump the exponent up appreciably from their estimates. HashMusk as it were.
posted by joeyh at 12:04 PM on August 2, 2021


Bostrom writes that if there is “a mere 1 percent chance” that 10^54 conscious beings (most living in computer simulations) come to exist in the future, then “we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives.”

This is a get out of jail free card for literally any argument you care to make. Here, I'll try one:

It is morally imperative that you give me your wallet. There's a chance that if you do so and I spend your money on fancy ice cream, the money will end up supporting someone whose future child will invent a benevolent AI fifty years from now. Obviously the invention of a benevolent AI (which is definitely a thing that could exist) will ensure the survival and happiness of uncountable numbers of people in the future.

Now there's a very small chance that this will happen as I've laid out, but you can't prove it definitely won't. Let's say it's a chance of one in a billion billion billion, extremely unlikely! Nonetheless if we do the math we find the expected value of your action is ten trillion human lives. It's the only rational thing to do!
posted by echo target at 12:06 PM on August 2, 2021 [36 favorites]
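
The structure echo target is parodying can be made explicit with a few lines of arithmetic (a Python sketch): for any nonzero probability, a sufficiently grandiose claimed payoff makes the expected value come out in the mugger's favour. The one-in-a-billion-billion-billion odds are from the comment; the "cost" of handing over the wallet is an invented figure.

    # Pascal's-mugging arithmetic: expected value = p * claimed_payoff.
    p = 1e-27                  # "one in a billion billion billion"
    cost_of_complying = 1e-6   # what the wallet is worth, in the same units of "lives" (invented)

    required_claim = cost_of_complying / p
    print(f"the mugger only needs to claim a payoff above {required_claim:.0e} lives")
    # ~1e+21 lives -- and nothing stops the claimed payoff from being inflated further.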


Okay, so yes there is a response to Torres' piece on the EA forums (thanks for finding that, PaulingL!) but/and there is still a point to be made about Bostrom's influence and position compared to "longtermist philosophers" as a group. Longtermists as a group might well have evolved past and dealt with a number of Torres' issues (and I agree with Torres on them!), but that doesn't discount the fact that Bostrom is and continues to be representative of the movement as a whole.
posted by danhon at 12:42 PM on August 2, 2021


It's just another version of Pascal's Wager, as was Roko's Basilisk. As echo target says, when you imagine any outcome that is (effectively) infinitely positive, it breaks the utilitarian equation and you can argue it's the most rational choice no matter how unlikely it is.
posted by Nutri-Matic Drinks Synthesizer at 1:09 PM on August 2, 2021 [8 favorites]


The thing is that if you google "Pascal's Mugging", which is the "rationalist" term for the problem, you find a Nick Bostrom post nearly at the top of the results, so I assume longtermism types must have some argument about why it isn't one.
posted by BungaDunga at 1:24 PM on August 2, 2021 [1 favorite]


these simulated humans aren't very convincing yet.
posted by 20 year lurk at 1:37 PM on August 2, 2021 [8 favorites]


Utilitarianism fucking sucks, and so does human exceptionalism.

> I hope more and more that an "anti-human potential" movement will emerge

Yes, please.

> It would be one thing if they were proposing, say, a crash program to colonize space in order to get humanity off the face of a single planet ahead of a catastrophe or something [...] That seems like a pretty rational argument that reasonable people could discuss and debate.

Nah. They call it "space" for a very good reason.
Pinning hopes for a better future on "getting out of this rock", as the gullible are wont to say, is a delusion of a magnitude similar to flat-earth belief.

Agree with most of the rest of your comment.
posted by Bangaioh at 2:36 PM on August 2, 2021 [2 favorites]


This is just one of the many ways Yudkowskian "rationalism" is dogshit. If you don't accept that indistinguishable-from-human simulated consciousnesses in an indistinguishable-from-real simulated reality are something humans can achieve (and frankly I see no reason beyond "I want it really hard" to do so; the sheer power necessary is way beyond what the cultists seem to realize, and the actual technology is far, far behind where they assume it must be or soon will be), the make-believe math falls apart and all that's left behind is nonsense, just as with Roko's Basilisk. To even get to the point where you start having these thoughts requires buying into a bunch of premises that are shaky at best. And just as Roko's Basilisk requires you to be deeper into their ideas than they understand before you find it genuinely frightening, longtermism requires you to be deeper into their ideas than anybody not already sold on the nonsense realizes before you start to mistake their gibberish for math.
posted by Pope Guilty at 3:01 PM on August 2, 2021 [17 favorites]


I started writing out a long post, but realized it said exactly what PaulingL had already said.

Please don't conflate all current longtermists with the now almost two-decades-old work by Bostrom and the actions of tech billionaires in general. You can listen to this podcast if you want to hear Toby Ord explain his more modern perspective on longtermism rather than having to read the whole book.

I'd also like to say that I find the pessimism and nihilism that often comes up in these discussions both perplexing and depressing. Do people really think that there is no way we can do better in the future than sitting around a camp fire, waiting to die of infectious diseases as individuals and a natural disaster as a species? To me it seems obvious that almost none of our current problems are insurmountable, and that we will solve them all eventually if we don't destroy ourselves. Which is why the things that really could destroy us, the existential risks, seem so important to get a grip on.

I mean, please give me one example of a problem that is so bad that we should rather give up the whole civilization idea and let 95% of people die while we go off into the woods.
posted by Spiegel at 3:14 PM on August 2, 2021 [6 favorites]


"Any cause that tells you its goals are more important than the people it is supposed to serve is evil."

Wouldn't this also apply to the move away from subsistence farming to the accumulation of surpluses, specialisations and social hierarchies that led to the current system we live in?
posted by acb at 3:15 PM on August 2, 2021 [1 favorite]


Something about the header graphic...I can't quite put my finger on it...

I suspect for some of these guys, that's one of the main attractions. I mean, when 'humanity' comes to mean 'detached consciousnesses roaming round computer simulations' there's no need to pay attention to the billions of people who don't look, think or act like you. No race or gender, no variety of viewpoints and experiences, and therefore no need to think about (or devote your attention, energy or billions to) trying to change entrenched injustice and inequity in the present world.
posted by andraste at 3:32 PM on August 2, 2021 [8 favorites]


> Do people really think that there is no way we can do better in the future than sitting around a camp fire, waiting to die of infectious diseases as individuals and a natural disaster as a species?

Sure there is, it just doesn't involve either space colonies or simulated consciousness or whatever technobabble nonsense so called "rational optimists" enjoy peddling.

Gotta love the false dichotomy of status quo or "going back to the stone age".
We can (and will) have civilisation when fossil fuels (effectively) run out, it just won't look anything like it does now.

Moreover, it's ludicrous to worry about human extinction level natural disasters when by far the greatest threat to the biosphere is expansionist ideology, especially of the capitalist variety.
posted by Bangaioh at 4:13 PM on August 2, 2021 [7 favorites]


AI gods. The cause of and the solution to all of life's problems.
posted by Pyry at 4:17 PM on August 2, 2021


it just doesn't involve either space colonies or simulated consciousness or whatever technobabble nonsense so called "rational optimists" enjoy peddling.

Gotta love the false dichotomy of status quo or "going back to the stone age".


You talk about false dichotomies while simultaneously casting longtermism as the desire to uphold the status quo (and also somehow a desire for "space colonies or simulated consciousness or whatever technobabble nonsense", dunno how to square that circle). If you are complaining about a lack of nuance then maybe (a) employ some yourself and (b) stop straight-up lying about what your opponent's goals are.
posted by Anonymous at 4:36 PM on August 2, 2021


> Gotta love the false dichotomy of status quo or "going back to the stone age".

That false dichotomy is surprisingly real, and exactly what I was pointing to. Just in this thread we have had several people agreeing that it would be good to see "an "anti-human potential" movement [as] a kind of emotional coaching for the end of global civilization." I heard similar things when I did survey interviews of the members of the EA-aligned non-profit I worked for.

I think there is a reasonable case to be made for space colonies and simulated consciousnesses (I mean, do you think it's impossible?), but I also see how reasonable people can disagree. I do not see how reasonable people can think that the world is irreversibly doomed and we should just give up on civilization.

> Moreover, it's ludicrous to worry about human extinction level natural disasters when by far the greatest threat to the biosphere is expansionist ideology, especially of the capitalist variety.

In fact, longtermists do not tend to worry about human extinction level natural disasters. In the interview I linked above for example, Toby Ord considers almost all the long term risk to humanity to come from humans themselves. Mostly in the form of unaligned artificial intelligence, nuclear war, artificial biological threats, and extreme climate change. How much of that risk is driven by "capitalism", and especially how much better the realistic alternatives would be, seems like an open question to me.
posted by Spiegel at 4:46 PM on August 2, 2021 [1 favorite]


This snippet of Torres' argument strikes a chord with me:
At the heart of this worldview, as delineated by Bostrom, is the idea that what matters most is for “Earth-originating intelligent life” to fulfill its potential in the cosmos. [...insert Bostrom's definition of potential here...] This is what “our potential” consists of, and it constitutes the ultimate aim toward which humanity as a whole, and each of us as individuals, are morally obligated to strive. An existential risk, then, is any event that would destroy this “vast and glorious” potential.
The choice of long-term future goal to strive toward is subjective; therefore "existential risks" are not universally well-defined but are themselves subjective: they are risks with respect to achieving a particular goal. That said, many quite different long-term goals may agree on a few shared existential risks.

Maybe my grand vision of humanity's potential is to maximize the amount of money I can personally grift from rich technocrat donors while selling a vision about tomorrow's glorious posthuman cosmic future, while your vision of humanity's cosmic potential focuses on maximising your clan's opportunities to sit around camp fires sharing food and telling stories. Maybe our two grand visions of the future are somewhat in conflict, but we are both in complete agreement that we do not want a global thermonuclear war today.
posted by are-coral-made at 5:15 PM on August 2, 2021 [2 favorites]


Longtermism - what a convoluted way of justifying cruelty and callousness! It'd be hilarious if it wasn't such a pathetic attempt to mask an underlying nastiness.
And its "argument" can be flattened by a simple twist. Instead of 10^17 simulated beings, why couldn't the future be just One simulated being? Who's to say that more is what evolution/revolution is aiming at? Ok, so let's say it's just One. Then clearly our 7 billion souls are worth on the order of 10^9 times more than that lonely future Monoid.
posted by storybored at 5:15 PM on August 2, 2021 [4 favorites]
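
storybored's inversion is the same head-count arithmetic run in the other direction; a one-line check (the 7 billion is the comment's figure, the lone "Monoid" is the comment's own hypothetical):

    # If the posited future holds a single being, the same weighting favours the present.
    present_souls = 7e9    # the comment's "7 billion souls"
    future_monoid = 1      # the hypothetical lone future being
    print(f"present / future weighting: {present_souls / future_monoid:.0e}")  # 7e+09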


> (I mean, do you think it's impossible?)

Impossible as in "there are not enough resources on Earth and enough technical expertise to build and maintain a toy colony with a few hundred inhabitants for a few decades on another solar system planet/moon"? No.

Impossible as in "millions of humans will be able to survive on independent, self-sustaining space colonies that can double as some sort of plan B from our fuck-ups on Earth as a last resort"? Yes, of course.

> How much of that risk is driven by "capitalism"

I said: "the greatest threat to the biosphere is expansionist ideology". Liberal capitalism with its growth and competition fetish is the most prominent example but sure, socialism or any other political system, even if they wouldn't be as morally repulsive otherwise, could conceivably operate under a similar mindset and lead to the same outcome through a different path.
posted by Bangaioh at 5:26 PM on August 2, 2021 [4 favorites]


Given the age and size of the universe, statistically speaking it's almost certain some other civilization has already produced a Benevolent AI God, if such a thing is possible. Therefore, taking a genuine long view, to maximize utility we should commit civilizational suicide, to avoid competing with the more advanced aliens and holding them back from filling the universe as efficiently as possible.
posted by Pyry at 5:52 PM on August 2, 2021 [6 favorites]


As well as trading off short-term vs long-term, another major difference is the conflict between favouring the individual and favouring the collective. Yuval Noah Harari* wrote about this in the books Sapiens and Homo Deus. I don't have a copy of the book to quote from, but here's a summary of Harari's framing of three different kinds of rival humanisms (from, of all places, The American Conservative):

> 1. Orthodox Humanism [aka Liberal Humanism aka Liberalism]: The belief that the individual is sovereign, and that “we should give as much freedom as possible to every individual to experience the world, follow his or her inner voice and express his or her inner truth. Whether in politics, economics, or art, individual free will should have far more weight than state interests or religious doctrines.”

> 2. Socialist Humanism: According to Harari, socialist humanism resolves the conflict within liberal humanism by faulting it for focusing too much on the individual, and not enough on the collective. Liberal humanism, on this view, blinds individuals to the needs and wants of others. Socialist humanism focuses more on social forces that prevent human flourishing, and advocates collective action through strong institutions to shape those social forces towards collective liberation.

> 3. Evolutionary humanism: This is a very different way of addressing the conflict problem raised by liberal humanism. Evolutionary humanism says that conflict is not always a problem to be solved, but something to cheer, because it pushes evolution forward. “Some humans are simply superior to others, and when human experiences collide, the fittest humans should steamroll everyone else,” Harari writes (though I hasten to say he’s describing this worldview, not necessarily endorsing it).

I think in rich western societies we place too much concern on the freedoms of the individual. I do not particularly want to have my freedom of action or rights restricted, but if I am free to do whatever I like, then I can put the collective at risk. E.g. for global-scale tragedy-of-the-commons-type problems such as climate change, if everyone is free to pollute as much as they want and breed as much as they want, then that produces a failure state for the collective (making much or all of the Earth's biosphere uninhabitable to human life for a hundred million years or so, say). That seems to refute Liberal Humanism as a sustainable option in the long run, so my freedom to consume and pollute and reproduce should be regulated for the good of the collective. Yet anyone politically campaigning for introducing rationing and a one-child policy is going to struggle to get elected, since not many people are going to sign up to be regulated in this way. But people should selfishly be very much in favour of "those other people" having their freedom to produce/reproduce/consume regulated, so maybe that's the way forward -- with some suitable way to trade agreements and submit to mutual regulation.

* another intellectual beloved of wealthy technocrats.
posted by are-coral-made at 5:57 PM on August 2, 2021 [1 favorite]


Oh FFS are these the same general asshats (or asshats adjacent) that came up with terrifying punishing god AI that I won't mention for fear of traumatizing other people who haven't heard of it? It sure sounds like it. To borrow a religious metaphor, judge the tree by what it produces.
posted by treepour at 5:57 PM on August 2, 2021


Because, suppose we do make our AI God, and populate the cosmos, and then we find aliens who have a better AI god. What would we do? Probably get in an AI god war. But the long-long term view is that we should consider sacrificing our 10^58 simulated consciousnesses if instead those resources would be better used by 3↑↑↑3 simulated alien hyper-consciousnesses, so better to never colonize the galaxy in the first place.
posted by Pyry at 5:57 PM on August 2, 2021 [2 favorites]


Because, suppose we do make our AI God, and populate the cosmos, and then we find aliens who have a better AI god. What would we do? Probably get in an AI god war.

Sounds like a great anime series bro! Or the next multiplayer AAA video game! Let's do this! BRO!!!! Elon Musk has gone to SPACE already! Like REAL space dude! Fuck yeah, can you imagine how awesome a space AI would be with brains like ours???

sorry
posted by treepour at 6:04 PM on August 2, 2021 [2 favorites]


This is just total nonsense and frankly seems not well thought out at all. It’s philosophy contaminated by engineer’s disease. Of course, when in doubt it makes sense to leave resources for future generations, but 10^58 consciousnesses in a computer simulation is just techbro wanking. It’s about as stupid as Roko’s Basilisk, which is to say utterly stupid. I don’t believe for a minute these guys have spent time thinking about what makes life worth living, or what the overall “purpose” of humanity might be.

If the universe is going to experience heat death anyway, they're just prolonging the inevitable. Live a little life and die surrounded by loved ones. That's it.
posted by freecellwizard at 6:27 PM on August 2, 2021 [9 favorites]


re: basilisks, here's an amused 2013-era Charlie Stross blogpost:
The screaming vapours over Roko's Basilisk tell us more about the existential outlook of the folks doing the fainting than it does about the deep future. I diagnose an unhealthy chronic infestation of sub-clinical Calvinism (as one observer unkindly put it, "the transhumanists want to be Scientology when they grow up"), drifting dangerously towards the vile and inhumane doctrine of total depravity. Theologians have been indulging in this sort of tail-chasing wank-fest for centuries, and if they don't sit up and pay attention the transhumanists are in danger of merely reinventing Christianity, in a more dour and fun-phobic guise. See also: Nikolai Fyodorovich Fyodorov.
...
> It's interesting to note that the argument can be summed up as:
> - SI will happen
> - SI will save hundreds of thousands of lives by making human life better
> - SI will be angry if it could have been made sooner and saved those lives
> - SI will simulate all people who knew about possibility of making SI and didn't give 100% of their non disposable income to the singularity institute
...
We might as well move swiftly on to discuss the Vegan Basilisk, which tortures all human meat-eaters in the AI afterlife because they knew they could live without harming other brain-equipped entities but still snacked on those steaks. It's no more ridiculous than Roko's Basilisk. More to the point, it's also the gateway to discussing the Many Basilisk's objection to Roko's Pascal's Basilisk wager: why should we bet the welfare of our immortal souls on a single vision of a Basilisk, when an infinity of possible Basilisks are conceivable?
posted by are-coral-made at 6:32 PM on August 2, 2021 [8 favorites]


> Impossible as in "millions of humans will be able to survive on independent, self-sustaining space colonies that can double as some sort of plan B from our fuck-ups on Earth as a last resort"? Yes, of course.

You're not thinking as long-term as the longtermists. Space colonies and simulated consciousnesses wouldn't be Elon building a plan B refuge on Mars in a few decades. It would be hundreds of planets or pure space habitats settled over thousands or millions of years. One of the reasons to be optimistic about the future if we don't destroy ourselves is that there is so insanely much of it.

And to the five commenters above me, I suggest thinking about another idea from the Rationalist space than Roko's Basilisk: Try to engage with the strongest version of an opposing view, rather than constructing the weakest.
posted by Spiegel at 6:36 PM on August 2, 2021 [2 favorites]


Nah, people are just arguing with the vision promulgated by the loudest proponents of longtermism, who are indeed the same people behind all the terrible ideas folks are mentioning in this thread.

And you've repeatedly strawmanned people's arguments in this thread, not recognizing where degrowth folks are coming from and assuming that they're defeatist rather than having a different perspective on things than you.
posted by sagc at 6:43 PM on August 2, 2021 [6 favorites]


> people are just arguing with the vision promulgated by the loudest proponents of longtermism

The whole point is that the loudest proponents (that you hear) are not always the best proponents. The version of longtermism promulgated by people like Ord, MacAskill and Greaves has very little to do with Roko's Basilisk and space bro billionaires.

> you've repeatedly strawmanned people's arguments in this thread

I'm really sorry if I have. I genuinely had the impression that the people in this thread talking about the end of global civilization were defeatist, in that they would have preferred to solve the problems they see but don't think that it can be done. Perhaps they would like to go back to a primitive lifestyle even if they saw no problems with today's world, but I would be even more critical of that stance.

I have also met and talked with many people who have assured me that they do believe we are doomed, and that we should just give up. There is also some data that suggests many people hold unreasonably pessimistic views about the world and the direction it's headed in, and some data suggesting many people choose not to have children due to their fears for the future.
posted by Spiegel at 7:20 PM on August 2, 2021 [3 favorites]


The risk of the most expansionist elements of the human species transforming an ever-expanding expanse of the galaxy into maximally utilitarian self-replicators -- now there's existential risk. How much rich fascinating alien life could exist in a galaxy and be paved over by a strain of utilitarian goo whose utility function so decrees?

I doubt the human species needs any help stopping itself from gaining such capabilities, but if it did, I'd say stopping it would be the real long-range play.
posted by away for regrooving at 8:22 PM on August 2, 2021 [4 favorites]


Amazing how many of these men (sic) seem to take it for granted that money capitalism is a property of the Universe more basic than the constancy of the speed of light in inertial reference frames, or the de Broglie wavelength of matter (which is the basis of the Uncertainty Principle).

A person could laugh themself sick ..... in the unlikely event they weren't nauseated to start with.
posted by jamjam at 9:15 PM on August 2, 2021 [5 favorites]


I've read a couple of Bostrom's papers and they seem fine to me.

Jordan Peterson was actively harmful in driving a kind of right-wing or libertarian young adult sentiment. I don't see Bostrom as taking that sort of approach and falling prey to some flawed theory of utopianism.
posted by polymodus at 9:35 PM on August 2, 2021


I keep wondering if belief in AI is some kind of magical thinking. For example, say we develop AI - who's to say it's smarter than we are? Who's to say it's so smart it can magically solve all our problems? There's this assumption we can build something as "smart" as a super computer that is also "conscious." But what's odd about this assumption is there is just about zero support for it. What we call AI, as far as I've seen, is really a lot of math happening very quickly - and math isn't consciousness.

Further, these guys seem to assume that AI would create more AI in simulations and that these simulations should somehow count as human lives. First, why would AI do that? And second, on what level would the simulated AI be human? By my judgement, nothing about AI (if we managed to build it) would be human in any way we currently apply the term. So then we're supposed to weigh the lives of actual living humans against simulated ones? And then the simulated ones are considered more valuable?

Finally, a lot of these Longtermist arguments depend on a pretty biased concept of potential. They appear to have already decided the goal-state they will accept as rational. If anyone disagrees with their decision, they are just irrational. This makes any real discussion about it fairly difficult to even begin (unless of course you accept their terms.)

All of this is just bonkers.
posted by elwoodwiles at 10:11 PM on August 2, 2021 [5 favorites]


The superintelligent AI stuff all seems to rely implicitly on a stripped down version of Anselm's (brilliant!) proof of the existence of God, which is, essentially, following what I can remember of Russell's paraphrase in his History of Western Philosophy: 'consider the greatest possible object of thought. Does it exist or does it not exist? If it exists it is clearly greater than if it does not exist, therefore it must exist by its very nature as the greatest possible object of thought.'

Which rubs our nose in the fact that this disturbing secular 'religion' is very thinly disguised Medieval Scholasticism, complete with a distinctive overlay of sanctimonious incense above the rank and terrifying corruption of the grave.
posted by jamjam at 10:57 PM on August 2, 2021 [3 favorites]



"Any cause that tells you its goals are more important than the people it is supposed to serve is evil."

Wouldn't this also apply to the move away from subsistence farming to the accumulation of surpluses, specialisations and social hierarchies that led to the current system we live in?
posted by acb at 3:15 PM on August 2 [1 favorite]


One of my great-grandfathers was a subsistence farmer. His son, my grandfather, was overwhelmingly grateful for the opportunity to move to the bottom level of the social hierarchy in the nearest town; even working two jobs to make ends meet was more healthy and less physically hard. Check out the discussion of subsistence living at the prepper conversation (https://www.metafilter.com/192198/The-Rise-and-Fall-of-the-Ultimate-Doomsday-Prepper) for a better perspective on farming as an alternative to the current system.
posted by tumbling at 11:45 PM on August 2, 2021 [3 favorites]


It's so irritating and perplexing IMO that Effective Altruism is now this insane umbrella category that includes people trying to figure out e.g. which charity they donate to will save the most lives or whether eating tuna is ethical or if they should donate more to causes fighting climate change or causes promoting global health AND on the other hand also people who are really concerned about paperclip maximizers (not in like a theoretical way but in an urgent, concerned way) and the mental welfare of video game characters. The way that this philosophy founded on the most practical possible considerations like "how do we maximize lives saved per dollar given to charity" became taken over by a focus on totally hypothetical and imaginary problems reminds me of the Illwrath, who became evil because they were so good that they flipped a bit. We need a new set of names for these two groups...
posted by phoenixy at 1:28 AM on August 3, 2021 [7 favorites]


> Space colonies and simulated consciousnesses wouldn't be Elon building a plan B refuge on Mars in a few decades.

Where have I said that? The only "few decades" I mentioned was regarding the presumed lifespan of an imaginary toy colony that, in a sufficiently dystopic world, could presumably be built by humans, wasting a huge amount of effort and resources better spent in almost literally anything else.

> It would be hundreds of planets or pure space habitats settled over thousands or millions of years.


There's no timescale where significant space colonisation is workable. It's impossible.

Unless you meant sending bacteria or other microorganisms to a suitable environment and waiting tens of millions to billions of years until some sort of civilised society emerges from that. OK, maybe something like that would work. Very big maybe.

Humans, though, nope, never gonna happen, we're too dependent on the rest of the biosphere for our survival, and for the particular historical aberration of modern civilisation, we're additionally dependent on burning concentrated dead beings to keep it going as is.

> the people in this thread talking about the end of global civilization were defeatist


Who did that? Where in this thread did someone advocate for the end of civilisation? (Unless by "global civilisation" you mean "globalised exploitative property-rights-obsessed competitive market-based capitalism", in which case there's at least one person all for ending that forever)
posted by Bangaioh at 2:22 AM on August 3, 2021 [2 favorites]


in that they would have preferred to solve the problems they see but don't think that it can be done.

In my case, that's right - I no longer think we're capable of willingly keeping ourselves w/in the environmental constraints needed to prevent the biosphere from collapsing.

Instilling a sense of deep skepticism/cynicism about human nature and potential, a kind of culture-wide Icarus-thinking, would help, but ultimately it is probably going to take climate change hamstringing even the possibility of large polluting projects to do the job.

I'm not talking about the end of civilization, but I do think a mass, highly interconnected, global industrial civilization is a fossil fuel artifact that will, and desperately needs to, end.
posted by ryanshepard at 4:46 AM on August 3, 2021 [2 favorites]


Do they have some profound guilt that requires erecting a massive edifice of bullshit to cover it?

Two quotes:
Conservatism consists of exactly one proposition, to wit:

There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.

There is nothing more or else to it, and there never has been, in any place or time.

...

As the core proposition of conservatism is indefensible if stated baldly, it has always been surrounded by an elaborate backwash of pseudophilosophy, amounting over time to millions of pages. All such is axiomatically dishonest and undeserving of serious scrutiny.
- Frank Wilhoit, 2018
It is difficult to get a man to understand something, when his salary depends on his not understanding it.
- Upton Sinclair, 1934
posted by flabdablet at 6:35 AM on August 3, 2021 [5 favorites]


To be fair to longtermism, many of the qualms that Torres rightfully discusses have been addressed: https://forum.effectivealtruism.org/posts/xtKRPkoMSLTiPNXhM/response-to-phil-torres-the-case-against-longtermism
I read this rebuttal, but mostly it states without arguing (because presumably it is argued elsewhere) that the case for longtermism does not require you to believe this or that particular thing objected to (utilitarianism, computer simulations, the AI singularity), but it doesn't appear to argue against the far-future sci-fi component of longtermist ethics. I clicked around the site a little bit looking for a clear formulation of principles. I read this bit about spending money on medical research effectively that seemed unobjectionable, but then I got to this pdf titled The case for strong longtermism, which includes postulates like "The highest far-future ex ante benefits that are attainable without net near-future harm are many times greater than the highest attainable near-future ex ante benefits" and a section titled "The size of the future" that explains "Consideration of digital sentience should increase our estimates of the expected number of future beings considerably".

I have only skimmed this PDF and I'm not claiming to understand the argument they're making, but at a glance it does seem to include the bits that a lot of us will be suspicious of: namely, that a good system of ethics should factor in the existence of simulated beings in the far future.
posted by jomato at 8:52 AM on August 3, 2021 [1 favorite]


I studied math in graduate school where I specialized in probability theory, so for me probability theory is first and foremost a branch of theoretical mathematics, like geometry. Back when this discussion focused more on Bayesian statistics and updating your priors I tended to think that the difference between ethics and "computing" the posterior distribution with infinitesimal prior probabilities was like the difference between building a house and drawing a house on paper with right angles and parallel lines. You can do a lot of impressive work on paper, but you're no longer in the same realm as house building.

From what I can tell, longtermism is an ethical theory. Arguing that longtermism isn't a bad ethical theory because addressing global poverty is compatible with longtermism* isn't making a case for longtermism. To make a positive case for longtermism you have to argue that we can meaningfully reason about the effects of our actions on humans millions of years in the future. You don't need to be a longtermist to be concerned about a climate catastrophe or nuclear war.

* On the subject of global surveillance, the rebuttal piece says that "longtermism does not have to be committed to such a proposal. In particular, one can simply object that Bostrom has a mistaken threat model". There seems to be a lot of wiggle room here for proponents of longtermism to choose a threat model that leads to the ethical conclusion that they're looking for.
posted by jomato at 9:19 AM on August 3, 2021 [6 favorites]


> a good system of ethics should factor in the existence of simulated beings in the far future

As I read that part, and as PaulingL and Belfield's response to Torres says, the argument for longtermism does not rely on simulated beings. The Case for Strong Longtermism includes it as a "radical possibility that, while very uncertain, could greatly increase the duration and future population sizes of humanity." There is plenty of potential future just on Earth with normal humans to make the argument work, what they call their "restricted estimate" of future potential. Part of the goal of Strong Longtermism is to show that longtermism works in many ethical frameworks, even non-consequentialist ones.

As I understand it, the proponents of longtermism do not see it as an ethical theory, but as a consequence of several different ethical theories. Most directly as a consequence of total utilitarian ethics, but as for example part six to nine of Strong Longtermism attempts to argue, it might also be a consequence of several other ethical frameworks.

> To make a positive case for longtermism you have to argue that we can meaningfully reason about the effects of our actions on humans millions of years in the future.

I think the longtermists agree. In the conclusion of Strong Longtermism, they say
In our own view, the weakest points in the case for axiological strong longtermism are the assessment of numbers for the cost-effectiveness of particular attempts to benefit the far future, the appropriate treatment of cluelessness, and the question of whether an expected value approach to uncertainty is too “fanatical” in this context. These issues in particular would benefit from further research.
However, there is at least one action we can take that we know almost certainly will negatively affect even the extremely far future: human extinction. As Derek Parfit said: Given three alternatives 1) Peace, 2) War that kills 99%, 3) War that kills 100%, then 2 is worse than 1, but 3 is much worse than 2. This holds as long as you value the existence of humanity, and think that the future will be at least somewhat long and good. It also might be what separates longtermists from others who are worried about climate catastrophe or nuclear war. In the longtermist view, extinction is uniquely bad. Efforts to reduce extinction risk therefore become uniquely valuable.
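For a rough sense of the expected-value arithmetic behind that "uniquely valuable" claim, here is a toy Python sketch; the population and probability figures are invented for illustration and are not taken from the paper:

# Toy expected-value comparison in the style of strong-longtermist arguments.
# Every number below is made up for illustration only.

near_term_benefit = 1_000_000   # lives improved by a conventional intervention

future_population = 1e16        # hypothetical number of future people if we survive
risk_reduction = 1e-8           # hypothetical reduction in extinction probability

expected_far_future_benefit = future_population * risk_reduction

print(f"Near-term benefit:           {near_term_benefit:,} lives")
print(f"Expected far-future benefit: {expected_far_future_benefit:,.0f} lives")

# Even a one-in-a-hundred-million shift in extinction risk dominates the
# comparison, which is exactly the "fanatical" expected-value worry the
# authors flag in their own conclusion.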
posted by Spiegel at 10:53 AM on August 3, 2021 [1 favorite]


Well, efforts to reduce the extinction risk of those already least threatened, in practice.
posted by sagc at 10:58 AM on August 3, 2021


Extinction is extinction. It has to kill everyone, or it doesn't happen, so we are all equally threatened.
posted by Spiegel at 11:02 AM on August 3, 2021 [1 favorite]


Forgive me if I am dense, but I still don't understand what Longtermism is. This is my first hearing of it, and it sounds like it boils down to something like "given a fixed amount of money, we should spend it to stop planet-killing asteroids rather than halting climate change." Is this about it?
posted by Pembquist at 11:14 AM on August 3, 2021


> I still don't understand what Longtermism is

As far as I understand it, the central idea is something like what MacAskill and Greaves say in the introduction to The Case for Strong Longtermism:
Even on the most conservative ... timelines, we have progressed through a tiny fraction of history. If humanity’s saga were a novel, we would be on the very first page.

Normally, we pay scant attention to this fact. Political discussions are normally centered around the here and now, focused on the latest scandal or the next election. When a pundit takes a “long-term” view, they talk about the next five or ten years. With the exceptions of climate change and nuclear waste, we essentially never think about how our actions today might influence civilisation hundreds or thousands of years hence.

We believe that this neglect of the very long-run future is a serious mistake. An alternative perspective is given by longtermism, according to which we should be particularly concerned with ensuring that the far future goes well. In this article we go further, arguing for strong longtermism: the view that impact on the far future is the most important feature of our actions today
The actual actions that longtermists advocate are usually based on marginal and counterfactual thinking about how individuals can reduce the risk of human extinction. In practice, they tend to advocate for AI safety research and efforts to reduce the risks from engineered viruses and other artificial biological threats. Some also advocate for nuclear disarmament, improving international institutions, "meta" work on refining these recommendations and spreading the idea of longtermism, and work on a host of other problems. You can see a list of possibilities here.

An important point is that these actions are not central to the idea that the long term is important. They are just what most people convinced by the argument feel are the most effective ways of positively affecting the future right now.
posted by Spiegel at 11:42 AM on August 3, 2021 [2 favorites]


This strikes me a little like flipping the script on the retort to concern over climate change, that the future shouldn't matter as much as the present:

Old Slate piece arguing against fighting climate change b/c the future doesn't matter

It's almost Trumpian in that "I know you are, but what am I?" playground sense: oh yeah, if the future matters so much, how about the FAR future, where techno-lords developing VR headsets will eventually lead to more happiness, and the miserable deaths of mere billions over mere centuries are just a blip?

One flaw: what matters is not just happiness, but self-determination. In damaging the environment we are imposing inhuman conditions on the future that we, not future generations, chose. The further into the future we go, as space travel or virtual reality pans out technologically (or doesn't) and as humanity's social evolution makes us value them more or less, the better positioned future humans will be to aim toward or away from them.

There's also an element of "you made me do it!" [with concern over the environment] to these arguments. Why not just... delay all this singularity stuff, this escape into transhumanism and virtual reality? Given exponential growth scenarios, whether we start in two or five centuries won't make much of a long-term difference, except that we are battling climate change and other disasters now, and they could forestall this techno-utopia completely.

The Hippocratic oath is "first, do no harm," not "well, if there's even a one in Avogadro's-number-squared chance that our patient under this treatment could evolve into a super-human experiencing infinite bliss for all eternity, that would outweigh any possible harm!"
posted by Schmucko at 11:51 AM on August 3, 2021 [5 favorites]


… we essentially never think about how our actions today might influence civilisation hundreds or thousands of years hence.

I am ever so happy that those early hominids made the right decisions as they wandered about the savannas of Africa. Their planning made all that surrounds us possible. Thank you, wise forebears!
posted by njohnson23 at 12:25 PM on August 3, 2021 [9 favorites]


"Many were increasingly of the opinion that they'd all made a big mistake coming down from the trees in the first place, and some said that even the trees had been a bad move, and that no-one should ever have left the oceans." --- Hitchhiker's Guide to the Galaxy.
posted by SPrintF at 1:11 PM on August 3, 2021 [10 favorites]


Given three alternatives 1) Peace, 2) War that kills 99%, 3) War that kills 100%, then 2 is worse than 1, but 3 is much worse than 2. This holds as long as you value the existence of humanity, and think that the future will be at least somewhat long and good

I think this suggests an ethical case, then, for what's been much debated in past threads around "should we be pouring a significant portion of the Earth's resources into 'lifeboats' & efforts to sustain life off-planet, or does that reduce the odds of Earth surviving in its own right?".

i.e. mutually assured survival. So long as the proponents of lifeboat theory accept that 3 is much worse than 2, a commitment to ensure that 2 becomes 3 if they choose to defect rather than assist with 1 becomes compelling.
posted by CrystalDave at 1:27 PM on August 3, 2021 [3 favorites]


Given three alternatives 1) Peace, 2) War that kills 99%, 3) War that kills 100%, then 2 is worse than 1, but 3 is much worse than 2. This holds as long as you value the existence of humanity, and think that the future will be at least somewhat long and good

This also requires that you believe either (a) that it is NOT true that the lives of the remaining 1% would be absolutely terrible, or (b) that it is true, but that it would be OK to subject them to that terribleness in order to make future people better off. I think (a) is ludicrous and (b) is human sacrifice, a.k.a. the Sadistic Conclusion.
posted by bashing rocks together at 7:53 PM on August 3, 2021


(A caveat, since I can't add it by editing: (a) is plausible if you posit extreme efforts of preparation aimed specifically at making it so, which in this scenario must be redirected from efforts aimed at 1. Roughly speaking, that would look like "climate change will make most of the world uninhabitable, so let's start building stronger border defenses now to make sure no climate refugees come get our resources".)
posted by bashing rocks together at 8:01 PM on August 3, 2021 [2 favorites]


Why make the leap from "longterm=next 5-10 years" to "longterm=next 5-10 million years"? What about starting to talk seriously about the duty we have to humanity in the next 50-100 years, when the outcomes of our actions are at least somewhat concretely predictable?
posted by rivenwanderer at 7:32 AM on August 4, 2021 [7 favorites]


What about starting to talk seriously about the duty we have to humanity in the next 50-100 years, when the outcomes of our actions are at least somewhat concretely predictable?

Absolutely. Our personal agency is restricted to this time range, i.e. our own generation, our children, and our grandchildren. Coincidentally, this is the time period during which the current pressing global issues need to be resolved. One suspects that longtermism is an evasion of this reality.
posted by No Robots at 7:59 AM on August 4, 2021 [4 favorites]


I wholly disagree with your points, kadin, and I'm going to address them one by one.

Make effective birth control available to anyone who wants it, and provide funding to advance the state-of-the-art in contraceptives, with the goal of driving down unintended pregnancies to zero, forever

With most of the world living at poverty levels, the idea of a one- or two-child family that maxes out at replacement levels is only applicable to those able to obtain the education for work that pays enough to support a one- or two-income family. Now, you say it should be a "choice," but this conveniently places the burden of the choice on the poor and ignores how disproportionate per-capita carbon production and resource consumption are between different nation-states.

A family with ten kids in rural China will consume orders of magnitude fewer resources and less energy than a two-child family living in the middle of Boston who are part of a natural-gas-consuming energy grid, who own vehicles and use them regularly, and who purchase products containing rare earth metals, many of which come in housings that will not biodegrade for thousands of years.

If the longtermist view places the burden on the rural family, then that view is both classist and unethical, since the majority of the resource consumption by that Boston family goes toward maintaining their class standing (e.g. educational toys, driving children to school), toward their convenience and longevity (e.g. running water), or purely toward luxury and relaxation (e.g. buying an expensive graphics card), whereas the rural family literally needs the labor to survive at far below the US poverty level.

The idea of overpopulation only works if you imagine that everyone consumes the same things equally, and adding more numbers = worse outcomes. That's never been true and has always been a classist, colonialist approach towards longterm sustainability.

Rapid decarbonization, to reduce the existential threat of runaway climate change, and systemic risk due to second- and third-order effects like famine, war, etc. / Development of advanced water treatment and desalination systems

In countries that have experienced mass colonial exploitation, this would require

1) mass infrastructure improvements,
2) mass highly-specialized education initiatives, and
3) all of the supporting highly-specialized component manufacturing you would need for both of the former requirements.

This would never be realized under the current geopolitical system of self-interestedness and the idolatry of sovereignty.

The majority of our modern nation-to-nation treaties are written to allow entry only to highly-educated, highly-specialized labor and to deny entry to the uneducated and the poor, for a reason. Things like knowledge and pre-built manufacturing capacity are utilized as power in global trade deals and political treaties, and to justify so-called soft-power (i.e. colonialist) initiatives; they are crucial to how power and society are constructed in the modern day. There's no mitigating systems like these, which are, by their very nature, selfish.

Lobbying and political pressure to advance public health programs related to infectious diseases, particularly given the shortcomings of the global response to COVID, vaccine hesitancy, etc.

The hesitancy we have around realizing public health initiatives is for economic reasons only. Without the economy in the picture, essential services would be reduced to things like agriculture, transport, energy production, and infrastructure, and everyone else would be quarantined. Vaccine hesitancy, I think, has more to do with how we've constructed the post-Cold War, pro-democracy, individual-freedoms political narrative, how that narrative fundamentally opposes government-sanctioned actions that infringe on short-term, individual liberties, and how such actions would be seen as a trespass on the modern rule of law. The thing that prevents China from mass vaccination is getting enough vaccines, and effective ones. The thing that prevents the US is probably more that ever-present anxiety about losing liberties.

Promotion of international fora for binding dispute resolution as an alternative to the "last argument of kings", with a particular eye towards the prevention of nuclear war as an existential threat.

I'm not sure what the "last argument of kings" is (other than a fantasy novel), but the issue with nuclear weapons is less that they exist and more that they're the most dangerous tool being utilized in geopolitical struggles. They are also, arguably, currently the least harmful of these tools: the weapons trade, resource extraction, invasions like those of Iraq, Afghanistan, and Palestine, and careless disasters like the Bhopal disaster or any of the oil-related ones have caused significantly more harm, all of them justified in part as securing or protecting one nation's sovereignty and citizenry over those of every other.

Political changes to promote more even income distribution and decrease the possibility of a highly-destructive class war or revolution.

This is a hell of a thing to say given that the majority of the issues you're identifying are direct products of nationalism and capitalism. Income inequality exists because of mass exploitation; band-aid programs like UBI are necessary to ensure basic human rights, but they will never truly produce equality. Jeff Bezos (and so many others of the rich) are that wealthy because they took a percentage of the value produced by every single one of their workers for themselves. There is no true equality under capitalism, because it will always include some form of exploitation by the managing elites that capitalism insists will always be required (since the masses are, according to its believers, stupid, unorganized, and short-sighted).

Pollution has been the ten foot tall skeleton in the tiny closet of capitalism for centuries, and is only now barely acknowledged as an 'externality.'

It's very nice and easy to sit in an armchair and dream of a better, more sustainable world organized around what the UN has pretty much put out in its guidelines year after year, but, as with the UN, these goals won't ever truly be realized, because they conflict with the goals of sovereignty and wealth accumulation.

In the same way that you think mass resource extraction/pollution is reasonable so we could fund interplanetary travel or whatever (which, truly, would only ever benefit the rich and already powerful if realized under our current system), there are some of us who think a revolutionary overthrow of our existing systems of power is justifiable too, and that this would be a far better long-term solution that addresses the underlying pathology rather than just the symptoms.

Regardless, these are all certainly nice fairy tales that we can tell ourselves as we try to make our own communities better places. But I think the important thing here is not to kid yourself about how realistic it all is, and to not forget that the whole point is about suffering, past, present, and future, and figuring out the best ways we can minimize that.
posted by paimapi at 8:52 AM on August 4, 2021


paimapi, I think kadin might agree with you more than you think, given they say "it's the current political-economic-industrial system that is creating the systemic risk to our species."

But anyway, when a king runs out of diplomacy, his last argument is a cannonball to the face. It was famously inscribed in Latin as VLTIMA RATIO REGVM on the cannons of Louis XIV, seen here in a picture uploaded by none other than the Wikimedia user Kadin2048.
posted by Spiegel at 10:15 AM on August 4, 2021 [3 favorites]


The thing that prevents China from mass vaccination is getting enough vaccines, and effective ones. The thing that prevents the US is probably more that ever-present anxiety about losing liberties.

Case in point
posted by flabdablet at 10:33 AM on August 4, 2021


Another thing I notice here is the grandiosity. It's one thing to have a long-term view, but some of these futurists seem to devalue ordinary human life lived within the outlines it has always had. Jeff Bezos says that if we don't expand into space, it's a kind of stagnation. From the early days of the internet, when the phrase "content provider" was coined, tech futurists have discounted actual human life and thought as boring and trivial compared to ever faster and more extreme means. Just as tabloid headlines get our attention no matter how untrue they might be, this "EXPAND INTO SPACE AND LIVE LIKE GODS IN A VIRTUAL WORLD" fantasy takes attention away from what I think would be better aspirations: a world where more people have enough, there's less hateful division, and people can explore cultures and the natural world, deepen relationships, and create art and science.
posted by Schmucko at 10:50 AM on August 6, 2021 [3 favorites]




This thread has been archived and is closed to new comments