Trying to reduce the odds of a catastrophe by .0001%
May 18, 2015 1:12 PM

The academic study of existential risk is being taken seriously. The University of Cambridge has the CSER, with its incredibly distinguished list of members, some of whom you can see speak about the risks inherent in scientific progress, or you can read the summary of why we need to work together to stop doomsday by Prof. Martin Rees. Oxford has the Future of Humanity Institute, headed by Nick Bostrom (fascinating profile of him), which has produced this taxonomy of threats, and has argued that far too little emphasis is placed on the issue. In the US, work is done in think tanks like the Global Catastrophic Risk Institute and the Machine Intelligence Research Institute, which is focused on trying to tame AI, and predict when it will arrive (pdf). Though climate change gets a nod, the main concerns appear to be largely AI (which they are really worried about), nuclear war (chance of happening: between 7% and .0001% a year), and threats from technological innovation like biotech or nanotech (pdf).
posted by blahblahblah (56 comments total) 29 users marked this as a favorite
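The "between 7% and .0001% a year" range is easier to appreciate once it's compounded over a human-relevant horizon. A minimal sketch, assuming a constant, independent annual probability and using only the two endpoint figures quoted above (not anyone's actual estimate):

```python
# Sketch: how a fixed annual probability of catastrophe compounds over a century.
# The 7% and 0.0001% figures are just the endpoints quoted in the post.

def cumulative_risk(annual_prob: float, years: int) -> float:
    """Probability of at least one event across `years` independent years."""
    return 1.0 - (1.0 - annual_prob) ** years

for p in (0.07, 0.000001):  # 7% and 0.0001%, written as fractions
    print(f"annual risk {p:.6%} -> risk over 100 years {cumulative_risk(p, 100):.4%}")
```

At 7% a year, a nuclear war sometime this century comes out close to certain (about 99.9%); at .0001% a year it stays around one in ten thousand, which is why the choice of estimate does most of the work in any argument about where the prevention money should go.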
 
It's funny. Because of Eliezer Yudkowsky, I have to fight my tendency to ignore anything a person writes about existential risk. I'm reading through the CSER stuff (MIRI is tainted by Yudkowsky), which is interesting, although they do give credit to the idea of grey goo (uncontrolled replication), which has been more or less debunked for quite some time.

Given that the greatest existential threat seems to be nuclear war, I would think that the largest amounts of money to prevent human extinction would go toward making sure the governments that control the nuclear weapons are not run by people who would use them, and eventually pushing these people toward a massive, if not complete, disarmament. (Enough, say, that no nation could actually blanket the major population centers of the world with nuclear devices.) Not musing over AI or nanotech.
posted by Hactar at 1:36 PM on May 18, 2015 [1 favorite]


I've heard that anecdote about the early atomic bomb developers wondering if they could trigger fusion of atmospheric nitrogen (essentially turning the entire atmosphere into an uncontrolled fusion reaction), although I've heard it attributed to Teller, not Oppenheimer.

Amusing perspective from Wikipedia: Teller also raised the speculative possibility that an atomic bomb might "ignite" the atmosphere because of a hypothetical fusion reaction of nitrogen nuclei.[note 1] Bethe calculated that it could not happen,[27] and a report co-authored by Teller showed that "no self-propagating chain of nuclear reactions is likely to be started."[28] In Serber's account, Oppenheimer mentioned it to Arthur Compton, who "didn't have enough sense to shut up about it. It somehow got into a document that went to Washington" and was "never laid to rest".[note 2]
posted by Existential Dread at 1:41 PM on May 18, 2015 [2 favorites]


Though climate change gets a nod

FFS. Climate change is the catastrophe that is actually happening to us. Like, it's starting already. Give me super intelligent AI and nuclear power-- to help stop climate change.
posted by gwint at 1:43 PM on May 18, 2015 [39 favorites]


Somewhat similar, previously: Better charity through research.
posted by andoatnp at 1:49 PM on May 18, 2015


To combat global warming with a "catastrophic" event, we'd just need one largish volcano (like the size of Mount Vesuvius but just a little bigger).

Boom! Instant miniature ice age lasting for 10 years! Problem solved
posted by surazal at 1:50 PM on May 18, 2015 [1 favorite]


(Enough say, that no nation could actually blanket the major population centers of the world with nuclear devices.)
I thought there was only one nation with that capability left (or did Russia get back enough of the weaponry that had been deployed in former Soviet republics?). And that nation has only the Best Intentions, if you believe Sam Harris (and if you do, you're as delusional as most Religious people).

Climate change is the catastrophe that is actually happening to us.
I'm sorry to say that it's probably irreversible by now (maybe not physically but certainly politically), but it's a long, slow process (that I personally won't get to experience the worst of... I was SO born at the right time). And the 10-20% of the current world population that are surviving 100-150 years from now will all shrug it off as one of those 'modern annoyances'.

To combat global warming with a "catastrophic" event, we'd just need one largish volcano
...or maybe a little Nuclear Winter. Everything is connected. (I'm going to go into a corner and cry for humanity now)
posted by oneswellfoop at 1:54 PM on May 18, 2015


Direct destruction of population centres by nuclear weapons would kill billions, but not end humanity. Destruction of global supply lines and power sources would kill humanity by starvation a few months later, modulo perhaps a few million people, widely dispersed enough that they couldn't sustain technological continuity. Maybe twenty or thirty generations later we'd be back on our feet, but it wouldn't be "us" any more, it'd be something we'd find as alien as Ancient Egypt.
posted by topynate at 1:59 PM on May 18, 2015 [2 favorites]


something we'd find as alien as Ancient Egypt.
Well, if the "us" after all that find the "us" now as alien as Ancient Egypt, there'd be a good chance they'd be as much better than us as we are than Ancient Egypt, right?
that was not an easy thought to make into a sentence
posted by oneswellfoop at 2:04 PM on May 18, 2015 [1 favorite]


oneswellfoop: " there'd be a good chance they'd be as much better than us as we are than Ancient Egypt, right?"

You mean better at committing global suicide? Let's hope not.
posted by signal at 2:13 PM on May 18, 2015 [1 favorite]


The most recent studies I've seen conclude that as few as 100 Hiroshima-yield bombs over cities would cause a severe ice age and ozone depletion.

I know that far fewer would turn my country into an unrecognizable loony bin.
posted by RobotVoodooPower at 2:19 PM on May 18, 2015 [1 favorite]


Well, if the "us" after all that find the "us" now as alien as Ancient Egypt, there'd be a good chance they'd be as much better than us as we are than Ancient Egypt, right?

Not really. We've plucked most of the Earth's low-hanging resources. A post-apocalyptic industrial revolution, if it can happen at all, will look very different because all the easy-to-dig iron ore and coal is gone.
posted by BrashTech at 2:20 PM on May 18, 2015 [3 favorites]


Not to initiate a derail, but I'm curious why is [Eliezer Yudkowsky] in particular so in-credible.

Rational Wiki makes no real pretense at neutrality here, but their articles Eliezer Yudkowsky and Roko's basilisk are not a bad place to get some flavor.
posted by spaceman_spiff at 2:28 PM on May 18, 2015 [2 favorites]


Not really. We've plucked most of the Earth's low-hanging resources. A post-apocalyptic industrial revolution, if it can happen at all, will look very different because all the easy-to-dig iron ore and coal is gone.

There has been some interesting discussion of that (see also, The Knowledge). However, in the short term there would actually be a lot more available refined metals...
posted by blahblahblah at 2:31 PM on May 18, 2015 [4 favorites]


Boom! Instant miniature ice age lasting for 10 years! Problem solved

It's OK to love Mad Max: Fury Road. Just don't LOVE Mad Max: Fury Road.
posted by echocollate at 2:36 PM on May 18, 2015 [2 favorites]


Not to initiate a derail, but I'm curious why is [Eliezer Yudkowsky] in particular so in-credible.

Perhaps Hactar just has a deep moral discomfort with book-length Harry Potter fanfiction.

(I am now driving myself nuts trying to work out what RationalWiki means by "an overweening interest in cryonics". Makes him sound like a too-old Hannah Montana fan... but for frozen heads.)

However, in the short term there would actually be a lot more available refined metals...

Including uranium and plutonium. Yes, I am seriously proposing the possibility of post-apocalyptic nuclear steampunk.
posted by topynate at 2:45 PM on May 18, 2015


BrashTech is, if anything, an optimist. The odds are we will never be able to manage another technological civilization any time in the next several tens or hundreds of thousands of years. Metals are currently in forms that are difficult to forge and work, and the reserves of high-temperature fuels are depleted. And that's leaving aside the expertise problem.

Honestly, we'll be lucky if we manage to go from hunter-gatherers to agriculture again before some global change renders us extinct.
posted by happyroach at 2:57 PM on May 18, 2015 [3 favorites]


Destruction of global supply lines and power sources would kill humanity by starvation a few months later, modulo perhaps a few million people, widely dispersed enough that they couldn't sustain technological continuity. Maybe twenty or thirty generations later we'd be back on our feet, but it wouldn't be "us" any more, it'd be something we'd find as alien as Ancient Egypt.

My solution? A few dozen heavily subsidized Engineering and Agriculture colleges with unexpectedly talented faculty and well-stocked libraries, located well away from any large city but near important natural resources.

Basically, communities designed to independently improvise themselves into a stable technological state in the event of a catastrophe, rather than just collapsing into barbarism.

Metals are currently in forms that are difficult to forge and work

I'd say they're actually more easily available. The cities and landfills would become mines.
posted by cosmic.osmo at 3:05 PM on May 18, 2015 [3 favorites]


I'm not saying we wouldn't get our hair mussed.....
posted by gimonca at 3:12 PM on May 18, 2015 [6 favorites]


Nuclear war and climate change are much less interesting than AI because the remedies you come up with have a chance of being tested.
posted by benzenedream at 3:12 PM on May 18, 2015 [4 favorites]


I have a hard time getting too riled up about the AI concern. Perhaps it's because I don't think that sentient or self-aware AI is even possible in theory, on a metaphysical level (despite the eventual sophistication of mimicking self-awareness), so that eliminates a number of rogue RoboCop concerns and questions about teaching machines values. When technology gets really, really sophisticated, it actually has a tendency to make things safer along the way (self-driving cars will eventually make this apparent), or we build in redundancies to keep automobiles from "deciding" to kill people intentionally as the lesser of two evils, for example. We won't be ignoring these necessary built-in concerns and constraints as things get more technical. The difficulty will be in keeping up with the sophistication of our own makings, to keep the mechanics safe and consistent with some forward thinking, not that machines could "take us over" in any real sense. So I have a hard time taking seriously the Rise of the Machines concerns. I'll go on record saying that I think it'll never happen, we'll never have to worry about it except in our speculative writing, and we won't ever lose much sleep over it. However, as someone who loves science fiction, I love dreaming about these types of worlds and will probably be first in line to see the new Terminator movie. I like that machines can be imagined to be this self-aware in some possible world (if not ours), even though I don't think it's genuinely a live enough concern to warrant much ink being spilt.
posted by SpacemanStix at 3:19 PM on May 18, 2015 [2 favorites]


There would, however, be vastly more information available on things like the shape of a plow, the concept of an overshot water mill, how to make a sailboat, the arch, the concept of zero, contagion, the scientific method, religion that does not require you to burn or bury 50% of your society's output, etc. I doubt the concept of written language could be lost with so much of it all around.

Once you're up to the Middle Ages, you can make charcoal, and with charcoal and water power you can get into the early Industrial Revolution. With charcoal you can make glass, and with glass you can make a solar heater. With water or wind power you can make an electrical generator.

One reason there is so much science fiction along the lines of A Connecticut Yankee in King Arthur's Court is that modern knowledge + medieval or even ancient Roman technology quickly vaults society into the early Industrial Revolution. One reason the ending to Battlestar Galactica sucked so much is that a couple dozen decently armed spacefaring people cannot meet up with hunter-gatherers 100,000 years ago without vaulting the hunter-gatherers into some kind of steampunk in two decades.

Any existential crisis, I think, either kills off humanity entirely, or we're back on our feet in a hundred years.

People are really, really smart. Not wise, maybe, but really, really smart.
posted by musofire at 3:21 PM on May 18, 2015 [2 favorites]


To combat global warming with a "catastrophic" event, we'd just need one largish volcano (like the size of Mount Vesuvius but just a little bigger).

Well, there's the Yellowstone Caldera, but I'm kind of hoping that stays quiet until I'm already dead.
posted by uosuaq at 3:23 PM on May 18, 2015


My solution? A few dozen heavily subsidized Engineering and Agriculture colleges with unexpectedly talented faculty and well-stocked libraries, located well away from any large city but near important natural resources.

That's not really going to be enough. We really need about a million people to sustain a technological civilization. Your university might be better off carving a bunch of instructional stelae, in the hope that somebody later on might be able, and interested enough, to read them.


There would however be vastly more information available

I would not be so sanguine about the survival of knowledge. With a 98-99% die-off, a lot of that "common knowledge" will be lost, either because it's not useful at the time, isn't passed on, or because people are isolated from connecting with others who have the necessary knowledge.

As for using charcoal to power an industrial civilization? That's unlikely. Charcoal requires a lot of forest land for the energy it gives; England basically switched to coal because the existing forests weren't enough to power industry. Though deforesting a continent in order to power machinery, and triggering both an economic and an ecological collapse, would be an ironic scenario.

Basically, I'm telling you that this is it as far as our chances go. If we lose our technological society, don't count on a bunch of preppers or vigorously alive technophiles to pull us up by the bootstraps.
posted by happyroach at 3:35 PM on May 18, 2015 [6 favorites]


Donald Knuth said in some interview that basically he thinks some people naturally have a style of thinking that lends itself to computer science. These people still existed in the past, just they were born too early to actually work on computers.

This is probably true for a lot of different things, and most of these "existential risk" people seem like theologians who were born too late for theology. They say evil AI and nanobots and genetic catastrophes are very likely, and their arguments are superficially plausible, but I can't escape the feeling that there's some kind of intellectual sleight-of-hand, that they're pulling a con.

A lot of theological arguments have the same flavor; I can't convincingly refute Anselm's ontological argument, or whatever theodicy you want to come up with, but come on.

Plus it just seems like the same attitude to the world -- a sense of awe, the smallness of humanity, and a focus on salvation or damnation in the end times.
posted by vogon_poet at 4:18 PM on May 18, 2015 [21 favorites]


vogon_poet, omg, yes! Roko's basilisk feels so clearly to me to be an essentially religious idea... it's somewhat incomprehensible to me that people seem to treat it like it's not.
posted by overglow at 4:26 PM on May 18, 2015 [2 favorites]


There's a reason Kurzweil's Singularity is referred to as the Nerd Rapture.
posted by benzenedream at 4:53 PM on May 18, 2015


This is probably true for a lot of different things, and most of these "existential risk" people seem like theologians who were born too late for theology.

I dunno, for me they just seem like an acute case of Engineer's Disease filtered through an unexamined, areligious form of Calvinism. The fact these douchescapades get so much money and airtime pisses me off.
posted by smoke at 6:12 PM on May 18, 2015 [2 favorites]


I have a hard time getting too riled up about the AI concern.

Being frightened of AI is roughly like worrying that if your grocery list gets long enough and complicated enough, eventually it will go do the shopping itself.

People who do not have careers in IT: "omg AI is coming and it's going to destroy us all omg omg".

People with careers in IT: "Why doesn't the printer work? Why won't Linux talk to the projector this morning?"
posted by mhoye at 6:42 PM on May 18, 2015 [15 favorites]


People who do not have careers in IT: "omg AI is coming and it's going to destroy us all omg omg"

I feel like people who make a career out of these concerns are often taking advantage of people who watch too many movies. Or they watch too many movies themselves with just a smattering of philosophy-of-mind thrown in to make dangerous speculations sound interesting.
posted by SpacemanStix at 7:02 PM on May 18, 2015 [1 favorite]


Okay but there are some "legitimate" academics with backgrounds in philosophy worrying about this stuff. It bothers me because it's obviously implausible, but I can't refute their arguments for why it's actually inevitable.
posted by vogon_poet at 7:08 PM on May 18, 2015


They say evil AI and nanobots and genetic catastrophes are very likely

It doesn't have to be very likely at all to be worth worrying about. A humanity-ending asteroid strike in the next hundred years is very, very unlikely, but worth worrying about, because the damage would be very bad. At the least, if you can avert the world-ending event, you will also be able to avert the much more likely Very Bad Event (e.g. Tunguska 2.0, This Time Over Beijing).

Not that I take the worries of world-ending strong AI very seriously either...
posted by BungaDunga at 7:23 PM on May 18, 2015
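BungaDunga's point is just expected value: a minuscule probability multiplied by a civilization-sized loss can rival a far likelier but smaller disaster. A toy comparison, with every number below hypothetical and chosen only for illustration:

```python
# Toy expected-value comparison. All probabilities and casualty figures here
# are hypothetical placeholders, not estimates from any of the linked groups.

def expected_deaths_per_year(annual_prob: float, deaths_if_it_happens: float) -> float:
    return annual_prob * deaths_if_it_happens

scenarios = {
    "humanity-ending asteroid strike": (1e-8, 7.0e9),  # vanishingly rare, kills everyone
    "Tunguska 2.0 over a major city": (1e-4, 1.0e6),   # far likelier, 'merely' terrible
}

for name, (p, deaths) in scenarios.items():
    print(f"{name}: ~{expected_deaths_per_year(p, deaths):.0f} expected deaths/year")
```

On made-up numbers like these the two rows land in the same ballpark, which is the whole argument: rarity alone doesn't make a risk ignorable, though it also doesn't tell you which risk is actually tractable.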


happyroach, if we mostly die off, continent sized forests will automagically grow given a couple of hundred years at most. Most of northern Arkansas was clearcut less than a century ago and now it is once again mostly forest, and the only thing that happened there was depopulation due to increasing migration to large cities.

Yeah, 20th century technology is hard without vast amounts of coal and oil, but wood renews itself in surprisingly short order. And that said, there is actually a lot of easily accessible coal left in the US. We stopped digging up and burning much of the easily accessible lignite and bituminous as technology made it more cost effective (and more necessary) to mine anthracite.

There are vast reserves of all sorts of relatively easily accessible minerals in the middle part of the country where they mostly quit mining in the middle 20th century. That is not so much the case with oil and natural gas, although the latter could be easily collectible from our mountains of trash if future civilizations knew it was there.

It would have to be a different progression of technology in the sense that the amounts available are lower, so they would have to move on more quickly, but there is, at least for the moment, enough to get there.
posted by wierdo at 7:35 PM on May 18, 2015 [1 favorite]


My money's on autonomous weapons. Way before we ever get to fashion some conscious machine in our own image we'll just do something stupid in the course of trying to make war inexpensive and automatic.
posted by XMLicious at 9:36 PM on May 18, 2015


From a purely selfish perspective, I'm relieved by the return of an imminent doomsday scenario.

Growing up during the Cold War, I figured I'd be a drifting bit of radioactive ash long before I had to make any important decisions about my life's trajectory. So I didn't.

Now, nearing 50, my failure to plan is catching up with me.

Thank goodness an extinction event looms so I can dive back into my comfortably nihilistic lifestyle.

*drinks kerosene, lights cigarette*
posted by BitterOldPunk at 4:09 AM on May 19, 2015 [5 favorites]


AI won't be a threat in the sense that it will become malicious and engineer our destruction, but an AI system could be quite dangerous if it's given too much responsibility over critical infrastructure and then just doesn't behave the way we expect it to.

Any useful AI is probably going to be a neural net of some sort. Someone who doesn't have a strong understanding of a particular AI but who has the administrative authority to replace humans with it could easily slot an AI into a system whose scope is greater than, or different from, what it was originally trained for.

Imagine some fool fresh from a management seminar, heart aglow with the thought of saving a quarter million on labor costs, plugging an AI that was originally developed for air traffic control into a medical device network or a mine ventilation system. And then skimping on retraining costs.

An AI could fuck a lot of things up just by doing its honest best.
posted by clarknova at 4:11 AM on May 19, 2015 [5 favorites]
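clarknova's scenario is essentially a model being run outside the regime it was fit on. A tiny illustration (entirely hypothetical, and nothing like a real control system): a curve fit on a narrow operating range extrapolates confidently, and wrongly, outside it.

```python
# Hypothetical illustration of out-of-scope deployment: a model fit on one
# operating regime gives confident garbage outside it, with no error raised.
import numpy as np

rng = np.random.default_rng(0)

# "Training" regime: the system only ever saw inputs between 0 and 1,
# where the true relationship happens to look roughly linear.
x_train = rng.uniform(0.0, 1.0, 200)
y_train = np.sin(2 * np.pi * 0.15 * x_train)

coeffs = np.polyfit(x_train, y_train, deg=1)   # the stand-in "AI"

# Deployment regime the cost-cutting manager plugs it into: inputs up to 5.
x_deploy = np.linspace(0.0, 5.0, 6)
y_true = np.sin(2 * np.pi * 0.15 * x_deploy)
y_pred = np.polyval(coeffs, x_deploy)

for x, t, p in zip(x_deploy, y_true, y_pred):
    print(f"input {x:.1f}: truth {t:+.2f}, model {p:+.2f}")
# Inside [0, 1] the two columns agree; beyond it they diverge badly, and the
# model gives no indication that it has left the range it was trained on.
```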


Yes, but there is no indication of the development of autonomous AI (that would be able to do its "honest best" absent its expected inputs). I mean, expert systems have been practically abandoned since the AI Winter. These days you've got... I mean, I'm not trying to take the piss here, but the industry leader is called Drools. Which makes me feel kind of bad for it, even though it is so far from any popular conception of what it means to be AI that it's ludicrous.
posted by sonic meat machine at 4:57 AM on May 19, 2015 [1 favorite]


Wow, I didn't realize how crank-ish Yudkowsky actually is. He puts up a good front, but it's more of a front than I thought.

Independently of this tangential matter, I absolutely think that this whole "existential risk" stuff is bullshit. We have clear, tangible problems: climate change, ocean acidification, our agriculture depending on fossil fuels that will get much more expensive to extract. And the old standbys of war, hatred, and corruption.

Why are these people inventing esoteric end-of-the-world scenarios? Two reasons: As reasonably well-off people in first-world countries, the real big problems will affect them last, and least. And the real problems have no easy solutions. They - we are complicit in the systems of exploitation and oppression that cause them.

So instead of confronting ourselves about the evil we let happen, let's make up some stuff about AIs and do a bunch of stuff to counter this imaginary threat, and then pat ourselves on the back about having saved the world.
posted by Zarkonnen at 7:36 AM on May 19, 2015 [2 favorites]


Oh, on a similar note, and by the same set of people: The Fable of the Dragon-Tyrant, a cutesy attempt to get people to fund magical life-extension technologies. And my little reply-fable on the topic, which also applies pretty well to the "existential threat" stuff:

One day, an anti-dragonist on a speaking tour visited a town. When he arrived, most of the town's inns were already full, and he had to make do with a small room in a small inn in a run-down part of town.

The next morning, he stood outside the inn on his soap box and told people about how the dragon could be defeated. A small crowd gathered around him. When he had finished speaking, a woman asked: "My children are hungry. My husband went off to war against the tigers and never came back. How does killing the dragon help them?"

"Well, they too will one day be fed to the dragon!"

"But they are hungry now. My baby is very weak. She cries all the time. Even if she doesn't die, she's going to grow up stunted."

"I'm sure you can find a way. Anyway, I'm here to talk about the dragon, it's..."

Another interrupted him: "My son was killed by the king's men three weeks ago. They laughed as they cut him down. No one will hear my case."

"Well, I'm sure they had a good reason. Your son was probably a criminal."

Another said: "My family beats me because I don't want to marry the man they chose for me. Right now, I wouldn't mind being eaten."

"Listen. I'm not interested in the problems of you little people. They're not my problems, and anyway, you're probably lying, or exaggerating, or just not trying hard enough. But I'm scared of the dragon, because the dragon's going to eat everyone, including me. So we should concentrate on that, don't you agree?"

And the people rolled their eyes and walked away.
posted by Zarkonnen at 7:40 AM on May 19, 2015 [3 favorites]


Predicting AI:
With the data, we further test two folk theorems: firstly that predictors always predict the arrival of AI just before their own deaths, and secondly that AI is always 15 to 25 years into the future. We find evidence for the second thesis but not for the first. This enabled us to show that there seems to be no such thing as an “AI expert” for timeline predictions: no category of predictors stands out from the crowd.
AMAZEBALLS
posted by flabdablet at 8:26 AM on May 19, 2015 [3 favorites]


20th century technology is hard without vast amounts of coal and oil, but wood renews itself in surprisingly short order.

I don't know, this seems comparable to the fantasy that if civilization collapses people will just oil up their rifles and start living on squirrels and deer. Yes, that could happen, but what's more likely is that people in cities of any size will exhaust existing supplies of wood/animals within days or weeks at the most, and things will go downhill from there. [waves pitchfork]

In Jared Diamond's Collapse, he talks about how the Shogunate in 16th C Japan (Pop. ~20 million, roughly the current population of NY State) imposed draconian restrictions (including the death penalty, iirc) on cutting trees for charcoal because people were denuding the country.

Given the social chaos that's likely to ensue with any serious damage to civilization, I'd be more worried about how we could prevent the complete devastation of the world's forests (which aren't doing so well now) before things stabilized.
posted by sneebler at 9:18 AM on May 19, 2015


I'm going to make some points about AI, since I haven't really read up on the risks of nanotech or biotech catastrophes (at least recently). For what it's worth, I've also read everything Eliezer Yudkowsky has published, except for the Harry Potter fanfic.

First: when I talk about AI, I'm talking about truly general AI, not merely very advanced expert systems. Rather, I mean AI that's at human level. Put differently, if Turing-test-capable AI is weak AI, I'm talking about the strong version.

Second: a lot of arguments (including some in this thread) end up reducing to "something complicated like AI couldn't possibly come from something simple, like, say, a big network of computers." But there's another, really obvious example of intelligence emerging from vastly simpler systems: you. (That is, unless you're a creationist.) Now, what if the emergent behavior is being driven forward and guided by another intelligent agent?

Third: a lot of people talking about AI end up making a category error. AI risk seems to often be treated as something coming from an epistemologically unsound field of human knowledge, like it's a prediction of homeopathy. I'd say the problem is more that it's not grounded in any existing field of human knowledge, since it hasn't been invented yet. This makes it hard to evaluate any claims about its risk. But at least one example from a field of human knowledge that is more epistemologically sound ought to give pause: quantum mechanics. Even experts still think that shit's a bit crazy. What if the AI risk is real, but it's logically non-obvious, just like quantum mechanics? To pre-empt an objection, "it hasn't been invented yet" could be used as an argument that worrying about AI risk is worrying about nothing. Well, no. Humans have a long history of turning ideas into tools, and they seem to be successful a) when there's significant benefit b) when competition or politics or military force provide sufficiently little interference. To address another objection, "AI is not possible because of my metaphysical beliefs" doesn't mean it's not possible. It means it's not possible for you. So feel free to watch the debate from the sidelines :o)

Fourth: on Eliezer Yudkowsky (and I don't think this is a derail, since he's a founder of MIRI and a leading voice in the debate about AI risk). I understand the criticism directed at him. My take is a bit more subtle than some expressed here - I do think he's overly obsessed with his own intelligence. In one story he told (I can't remember where) he asked several people which other living human being has a level of raw brainpower comparable to his (hoping I think for no answers, and being disappointed when someone mentioned John Conway). So the temptation is great to dismiss him as a nut, or worse, to label him a nut then dismiss his ideas without due consideration of their merits. And I think, for worse, that happens. But in my experience, if you factor out the ego from his writing, he can sometimes make a large amount of sense. And I and several people I consider orders of magnitude smarter than myself seem to manage to take him seriously.

blahblahblah - thanks for the links, much interesting reading ahead!
posted by iffthen at 9:53 AM on May 19, 2015 [1 favorite]


An AI could fuck a lot of things up just by doing its honest best.

Yes. I think that perspective should be considered more widely...
posted by iffthen at 10:05 AM on May 19, 2015


> they do give credit to the idea of grey goo (uncontrolled replication), which has been more or less debunked for quite some time.

How can you "debunk" a possible hazard from a technology that doesn't even exist yet?

Overall, it seems simply prudent to investigate the risks from upcoming technology in advance, if only to design it to prevent such risks. But I honestly doubt we'll really get there before we're actually grownups. I think we're going to take a serious hit in the next two generations from climate change, and we'll be spending increasingly large amounts of our technology on that and not on fripperies like AI, and the challenge of coming together to deal with the planetary upheaval will either destroy us or make us wiser inhabitants of the planet...
posted by lupus_yonderboy at 11:37 AM on May 19, 2015


How can you "debunk" a possible hazard from a technology that doesn't even exist yet?

Starting with a careful review of what microbiology can tell us about the adaptability, environmental requirements and energy utilization mechanisms of existing microorganisms is a pretty good way to regain the required sense of perspective.

One of the first things that seems likely to happen to grey goo is that something else would evolve a way to eat it.
posted by flabdablet at 11:48 AM on May 19, 2015 [2 favorites]


But there's another, really obvious of example of intelligence emerging from vastly simpler systems: you. (That is, unless you're a creationist.)

I don't want to get into a big discussion about dualism etc., but this is probably where one would need to come to terms with possible self-awareness for material objects. I think if one were a materialist, one would have to be open to the possibility of "thinking matter" in the future. If one thinks that minds might be brain + something else (and there are non-creationist philosophers who grapple with this based on some of the limitations of materialist notions of the brain), one might come to the conclusion that it's metaphysically impossible for machines to have self-awareness, for arguably good reasons, rather than simply epistemically mysterious at the moment.

However, even if one assumes that other sophisticated material objects could be like ourselves, and points to the fact that we developed self-awareness from strictly material properties, you still have to go, huh, that's pretty statistically weird, which doesn't prima facie come close to making me start worrying yet. Whatever the proposed route from materialistic descriptions of properties that mimic self-awareness to self-awareness itself, I don't think one can argue that we know it's simply a matter of making calculations go really, really fast with more sophistication. Somehow getting from an outside, third-person perspective to an inside, first-person perspective of self-awareness requires some major voodoo (in the best scientific way possible, of course), and however fearful we might be of that kind of thing happening, we have no idea if we are even close enough for it to be a possibility, as we can't yet formally describe (I don't think) a mechanism that would bridge that gap. It could be something different than simply being a really complicated computer that somehow creates an emergent property of self-awareness. How does one get from sophistication to self-awareness? It's still a pretty crazy open-ended question in philosophy of mind and neuropsychology that people pick at, but they are often hard-pressed to establish more than that it's pretty mysterious, with a belief that there's an answer in there somewhere that somehow links our brain states to (first-person) mental ones. Until that mechanism is discovered, I think it's a false analogy to suggest it has a strong possibility of happening elsewhere, simply because a system is complicated.

So again, I guess I'm not concerned about much more than the possibility of human error and bad decision making once things get super sophisticated (which should probably always be a concern, since technology seems to go more quickly than we can have discussions about it). But even then, I'm pretty cool thinking that we should not be careless, that we should have a number of major redundancies in place in case of errors, and that we should probably take decision making about things that really matter out of the places where it can actually do catastrophic damage. So maybe we should be more concerned about people not being careful than about AI doing anything to us, which has always been a pretty live concern as life gets more complicated.
posted by SpacemanStix at 1:04 PM on May 19, 2015


How can you "debunk" a possible hazard from a technology that doesn't even exist yet?

If a technology requires a violation of the laws of thermodynamics to work, then I consider it self-debunked.
posted by happyroach at 1:14 PM on May 19, 2015 [3 favorites]


My solution? A few dozen heavily subsidized Engineering and Agriculture colleges with unexpectedly talented faculty and well-stocked libraries, located well away from any large city but near important natural resources.
That's not really going to be enough. We really need about a million people to sustain a technological civilization. Your university might be better off...

I agree, it wouldn't be enough to sustain a technological society at our level (or even say a 1950s level), but that's not the point. The point is to allow for some small fraction of our massively interdependent global civilization to gracefully degrade into something simpler and less interdependent, but short of a dark age, so things can bounce back more quickly.

I imagine that making that transition quickly would require a lot of frantic well-timed effort by knowledgeable and skilled people. Too late, and the knowledge goes away and the old infrastructure decays to unusability, and you have to start from 100 steps behind rather than 10.

Put a smallish number of the right people next to a hydroelectric dam during a collapse, and maybe they can keep it running themselves even if they couldn't build a new one. Others could improvise electric trucks and set up small-scale factories to manufacture spare parts using the power. Pretty soon you're on your way to rebuilding rather than just scavenging.
posted by cosmic.osmo at 1:22 PM on May 19, 2015


And I and several people I consider orders of magnitude smarter than myself seem to manage to take him seriously.
iffthen

A cult that attracts smart people (or people who think they're smart) is still a cult. I think Yudkowsky's bullshit is especially attractive to people who have a...healthy regard...for their own intelligence, since it has a comforting reassurance that those who join up with Yudkowsky are logical supermen above the masses, who are all irrational or can't/don't understand.

So the temptation is great to dismiss him as a nut, or worse, to label him a nut then dismiss his ideas without due consideration of their merits.

As I understand it, he never finished high school and has no formal education or training in AI research, has no body of actual AI coding or research beyond self-published non-peer reviewed papers that are not cited by anyone else in actual AI research, and is employed as "researcher" at an institution he founded (which publishes said papers). His primary claim to fame appears to be founding two popular blogs where he writes "fables" and is fawned over by his adoring followers, and writing a Harry Potter fan fiction.

If you looked at that description of any other person you would dismiss them as a nut, like the people with no physics background or research who are convinced they've disproved the Theory of Relativity or discovered perpetual motion and try to engage actual scientists. Can you give some specific examples or work of Yudkowsky's that makes him worthwhile at all?
posted by Sangermaine at 2:02 PM on May 19, 2015 [4 favorites]


The thing about odds is that they can't be reduced. They are what they are.
posted by rankfreudlite at 2:32 PM on May 19, 2015


My solution? A few dozen heavily subsidized Engineering and Agriculture colleges with unexpectedly talented faculty and well-stocked libraries, located well away from any large city but near important natural resources.

And call it Foundation.
posted by JackFlash at 3:10 PM on May 19, 2015 [1 favorite]


As I understand it, he never finished high school and has no formal education or training in AI research, has no body of actual AI coding or research beyond self-published non-peer reviewed papers that are not cited by anyone else in actual AI research, and is employed as "researcher" at an institution he founded (which publishes said papers). His primary claim to fame appears to be founding two popular blogs where he writes "fables" and is fawned over by his adoring followers, and writing a Harry Potter fan fiction.

He seems to be friends with a few people whom I find easier to take seriously. He gets occasional cites from philosophy-sort-of-people and the speculative fringe of actual AI. But yeah, his actual life's work - I've seen him sheepishly admit he has a much harder time writing academic papers than Harry Potter fan fiction, although this seems only to detract marginally from his estimation of his own genius - seems to be a book about how to ensure that entirely hypothetical superintelligent AI doesn't destroy or enslave humanity. He seems to actually have contempt for most people doing practical work in AI-as-we-know-it. Presumably he'd say they're in an intellectual cul-de-sac, focused on the wrong issues. But it's just - even as someone with an undergrad CS degree I can see he gets stuff like CS theory wrong not infrequently. What is supposed to convince me this guy is an expert?

Sometimes I enjoy Dale Carrico's "Amor Mundi" blog, which is an entertainingly vicious ongoing critique of singularitarians, transhumanists, etc. I don't find his philosophical argument on why computers will never "think" all that interesting, though, because I care a lot more about what they might do.

AI won't be a threat in the sense that it will become malicious and engineer our destruction, but an AI system could be quite dangerous if it's given too much responsibility over critical infrastructure and then just doesn't behave the way we expect it to.

I definitely think this is a more realistic concern, or an unexpected interaction of systems, or something like a more narrow (doesn't matter if it's actually "General Anthropic AI" or whatever) version of the "paperclip maximizer" scenario.
posted by atoxyl at 3:13 PM on May 19, 2015 [1 favorite]


If you looked at that description of any other person you would dismiss them as a nut, like the people with no physics background or research who are convinced they've disproved the Theory of Relativity or discovered perpetual motion and try to engage actual scientists. Can you give some specific examples or work of Yudkowsky's that makes him worthwhile at all?

fwiw,[1,2,3] he's worked with john baez :P

maybe in the 'league' of, say, ray kurzweil, jaron lanier, stephen wolfram, scott aaronson and/or robin hanson?

but i liked CRS' takedown! "watching self-proclaimed 'rationalists' cowering in superstitious fear..."

And call it Foundation.

or a concent?
posted by kliuless at 3:33 PM on May 19, 2015


I'm sorry to say that it's probably irreversible by now (maybe not physically but certainly politically)...

maybe not politically?
The arguments that convinced a libertarian to support aggressive action on climate
So you’re right, a beast was unleashed. A lot of it comes from talk radio, a lot of it comes from Fox, and that’s where a lot of the conservative base gets their information.

The discussion about how public policy changes — there’s no real orthodox view on that. There are lots of different theories. None of them are proven.

But when it comes to public opinion, there are academic orthodoxies. They survive the test. And those orthodoxies suggest that public opinion follows elite opinion, pure and simple. If elite opinion changes, public opinion changes — it can change fast and it can change dramatically.

Beliefs about climate change — whether it’s happening, whether we should do anything — have been pretty stable. What moves around is right-wing opinion. And it tends to follow the leader. So when Newt Gingrich and John McCain were talking about the need to do something about climate change, what do you know? Republican support for doing something about climate change, including conservative support, shot up...
posted by kliuless at 5:27 PM on May 19, 2015


A cult that attracts smart people (or people who think they're smart) is still a cult. I think Yudkowsky's bullshit is especially attractive to people who have a...healthy regard...for their own intelligence, since it has a comforting reassurance that those who join up with Yudkowsky are logical supermen above the masses, who are all irrational or can't/don't understand.

I think I agree with you that the LessWrong crowd is a bit of a cult, but disagree that the people it attracts are all as egotistical as Yudkowsky. Reading LW gives me the impression there are lots of people there who have relatively low egos but happen to buy into Yudkowsky's "bullshit." For what it's worth, I'm not an LW contributor, but I do personally tend toward intellectual arrogance.

Can you give some specific examples or work of Yudkowsky's that makes him worthwhile at all?

Perhaps, but it's subjectively worthwhile in the sense that I've looked at it and thought "despite some of Yudkowsky's other writings this seems fairly legit," and I am not an AI researcher. If you have the patience perhaps read and judge for yourself:

Intelligence Explosion Microeconomics (you might disagree with the premise, holding that an "intelligence explosion" is unlikely or impossible, but I don't find too much wrong with where he goes from there)

Complex Value Systems are Required to Realize Valuable Futures (I think given the type of AI Yudkowsky discusses, this is an accurate description of the core problem. But not everyone agrees Yudkowsky's AI is possible.)

There's also the collaboration linked by kliuless - John Baez is definitely not a crackpot.

If you don't agree with his take on AI, you won't find his writing useful at all. I agree with him to the extent of saying his type of AI is possible, which is the only reason I'll pay attention to what he writes.
posted by iffthen at 5:21 AM on May 20, 2015


SpacemanStix: Until that mechanism is discovered, I think it's a false analogy to suggest it has a strong possibility of happening elsewhere, simply because a system is complicated.

I think you're probably correct here, and I overstated the case. (I accept your argument, in other words. Thanks!)
posted by iffthen at 5:31 AM on May 20, 2015 [1 favorite]


good news! "We, the citizens of the world, may be starting to burn less carbon - not more... It seems the big difference is China. They say the Chinese made more electricity from renewable sources, such as hydropower, solar and wind, and burned less coal."

altho...
Greenland icemelt study suggests The Day After Tomorrow has some basis in reality.*

and otoh...
Fossil Fuel subsidies now $5.3 Trillion / yr, says IMF. Most of it in the form of unpriced pollution.*

also btw for those interested in (global) catastrophe response thrillers -- and posthumanity -- check out neal stephenson's newest novel seveneves :P

speaking of which...
Bootstrapping technological complexity
We are awash in some pretty complex technologies, and most of us don't really know how they work. But even more strange than that, we couldn't even get to this current point in complexity all at once. It's required a lot of feedback...

This reminds me a bit of Lewis Dartnell's recent article for Aeon about whether it is possible to reboot modern civilization without using fossil fuels. One of the main points he makes is that there has been a spiral of ever-more sophisticated technologies that have led us to where we are now. To jump to our current place, all at once, seems to be nearly impossible...

A microchip is probably orders of magnitude more complicated than the mechanism within a mechanical watch. But this complexity is hidden. Not only have we bootstrapped complexity, but at every stage, we have shielded it from the users...

This bootstrapping has made us not only more distant from recreating these technologies, but also made us less aware of their many details. I certainly don't have a panacea for this (I examined this in this essay) but the first step is simply an awareness of what has happened. When we take microchips—or really any complex technology—for granted, as monolithic commodities, we lose some of the wonder we should have for them. As well as a recognition of how incredible it is that we have gotten to this level of complexity at all.
that is all!

cheers :D
posted by kliuless at 8:33 AM on May 20, 2015 [1 favorite]

