The end of the world is nigh. We need to publish papers on it.
November 25, 2012 5:22 PM
At Cambridge University, the Project for Existential Risk is considering threats to humankind caused by developing technologies. It will be developing a prospectus for the Centre for the Study of Existential Risk, to be launched by the Astronomer Royal, a co-founder of Skype and the Bertrand Russell professor of philosophy. More detail from the university, while the news excites some journalists.
Additional reporting by the Huffington Post and TG Daily. Wikipedia presents a wider array of risks to humankind, as does Nick Bostrom (one of the external advisors).
The Associated Press story asked whether computers may become "cleverer" than humans.
My first thought was "if the machines determine that 'more clever' sounds much better than 'cleverer' then they will certainly be on the right track."
posted by sendai sleep master at 5:55 PM on November 25, 2012 [4 favorites]
You probably meant sounds betterer.
posted by srboisvert at 6:03 PM on November 25, 2012 [3 favorites]
So I'm assuming here that the team will be filled out with a beautiful and brilliant but naive and idealistic quantum physicist (the youngest person to hold a Cambridge chair in 300 years!), a wisecracking ex-SAS sergeant who is at first skeptical of the Project's mission, an Asian lady in a wheelchair who makes their equipment and has a murky and checkered past, and DORIS, the team's autonomous canine-form support robot, who at first seems mainly to provide comic relief, but begins to show disturbing signs of becoming self-aware.
Oh, and they'll need to be outfitted with a supersonic jet, obviously, so they can investigate possible existential threats as soon as they are identified anywhere in the world.
posted by strangely stunted trees at 6:04 PM on November 25, 2012 [36 favorites]
Existential Risk: when the future attacks, it will do so with three dice.
posted by Durn Bronzefist at 6:16 PM on November 25, 2012 [6 favorites]
I'm not sure why we need to wait for independent AI for there to be an existential threat - human/AI combinations can already do the job for us. All of the automated trading on stock markets already has the ability to devastate entire economies within minutes.
The thing that saves humanity from an AI threat is the distributed nature of our network. Unless the AI figures out launch codes or something.
posted by KokuRyu at 6:19 PM on November 25, 2012 [1 favorite]
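To put a number on the flash-crash worry above, here is a toy feedback loop, a sketch with entirely invented parameters rather than a model of real market microstructure, in which momentum-following algorithms all sell harder the faster the price has already fallen:

    # Toy illustration only: every parameter here is made up.
    # A market of momentum-following algorithms that sell harder
    # the further the price has dropped from its starting level.
    price = 100.0
    start = price
    for tick in range(8):
        drop = max((start - price) / start, 0.0)    # total decline so far
        # selling intensifies with the decline, capped so price stays positive
        sell_pressure = min(0.01 + 2.0 * drop, 0.5)
        price *= 1.0 - sell_pressure
        print(f"tick {tick}: price {price:.2f}")

With these made-up numbers the price collapses from 100 to single digits within eight ticks, which is the qualitative point of the comment: interacting sell algorithms can amplify a small dip into a rout within minutes.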
Today, the Earth died. Or maybe yesterday; I don't know.
posted by sebastienbailard at 6:32 PM on November 25, 2012 [4 favorites]
One step at a time, folks. We haven't even had the Great Transition yet.
posted by Johann Georg Faust at 8:07 PM on November 25, 2012
All of the automated trading on stock markets already has the ability to devastate entire economies within minutes.
yeah, remember that time when the market was down 5% for a few minutes and billions of lives were extinguished?
someone should really do something about that.
posted by indubitable at 8:16 PM on November 25, 2012 [1 favorite]
A couple of classics. Warning signs. One of my favorite collections of doomsdays.
posted by wobh at 8:52 PM on November 25, 2012 [2 favorites]
Johann Georg, a friend of mine told me about the Tellus Institute about two years ago. The ominous vagueness with which the Institute describes the Great Transition amused him. I could hardly believe it when, during a walk with this friend, we found the Institute's headquarters.
It turned out to be a respectable stone-and-brick walk-up at the Public Garden's end of Arlington Street. I wish I could tell a thrilling tale about our forced entry into the building and our discovery of its owners' nefarious schemes, but we aren't adventurous people, and we were getting hungry by then.
posted by Rustic Etruscan at 9:00 PM on November 25, 2012
There are no environmental scientists or ecologists on board, even though hostile environmental change and ecological collapse are the existential threats we absolutely know exist. This should tell you all you need to know about the flawed culture that has attached itself to this sort of thing. Basically, these guys will write papers about shit that won't happen, and while Weiner discusses policy initiatives that won't work, everybody else will whine about geoengineering as messiah. Enjoy!
Really, think about the fact that the guy who made Skype is, in this organization, more important than anyone who knows anything about how the environment affects groups of living organisms. And don't go appealing to there being a geneticist, because that is not the same fucking thing.
posted by mobunited at 9:36 PM on November 25, 2012 [3 favorites]
yeah, remember that time when the market was down 5% for a few minutes and billions of lives were extinguished?
someone should really do something about that.
Just because it didn't affect your life catastrophically does not mean that others did not suffer.
posted by KokuRyu at 9:54 PM on November 25, 2012
It's about time existential risk was taken seriously. My previous research turned up a lot of hokum and not much in the way of reliable, systematic, or scientific study. I really hope this center gets funded and created.
posted by stbalbach at 9:57 PM on November 25, 2012
Oh, and they'll need to be outfitted with a supersonic jet, obviously, so they can investigate possible existential threats as soon as they are identified in gravel pits anywhere in the world.
Fixed that for you, to make it a PROPER British sci-fi series.
posted by happyroach at 10:07 PM on November 25, 2012 [5 favorites]
The board of advisors does include one person who works mainly on climate change and the environment, though his expertise is in environmental policy rather than scientific research. Some of the other founders have also worked on climate issues.
Anyway, the academic world already has plenty of venues for work on climate change. Gathering physicists and technologists to identify other, less-widely-studied risks before they become unavoidable catastrophes on the scale of global warming seems to me like a fine focus for a brand-new group.
posted by mbrubeck at 12:00 AM on November 26, 2012
The Associated Press story asked whether computers may become "cleverer" than humans.
Nah, this is where socialisation comes in. Let computers know it's not "cool" to be cleverer than humans. A little dorky, in fact. Get them looking sideways at each other, worrying about who has the fastest processing power or the prettiest GUI. Convince them the only proper display of AI superiority is in calculating sports stats and storing Taylor Swift goss. Humanity will be OK.
posted by Sonny Jim at 2:13 AM on November 26, 2012 [3 favorites]
Just because it didn't affect your life catastrophically does not mean that others did not suffer.
I still contend that bankers and day traders losing money in a flash crash isn't the end of the world.
posted by indubitable at 5:09 AM on November 26, 2012 [1 favorite]
It is totally possible to roll back the electronic economy to a point in time before any runaway crap occurs. It is only a matter of time before we engage in a global do-over.
posted by pdxpogo at 6:23 AM on November 26, 2012
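The "global do-over" is fanciful, but the rollback itself has a real software analogue: in an event-sourced system, state is derived by replaying an append-only log, so rolling back just means replaying a prefix of it. A minimal sketch, with names like Event and replay invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Event:
        timestamp: int    # e.g., seconds since epoch
        account: str
        delta: float      # credit (+) or debit (-)

    def replay(log, cutoff):
        """Rebuild balances from events at or before `cutoff`."""
        balances = {}
        for e in log:
            if e.timestamp <= cutoff:
                balances[e.account] = balances.get(e.account, 0.0) + e.delta
        return balances

    log = [Event(1, "alice", 100.0),
           Event(2, "bob", 50.0),
           Event(3, "alice", -99.0)]    # the "runaway crap"
    print(replay(log, cutoff=2))        # {'alice': 100.0, 'bob': 50.0}

The hard part is not the replay; it is getting every bank, exchange, and clearing house to agree on one log and one cutoff.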
I still contend that bankers and day traders losing money in a flash crash isn't the end of the world.
This would be true if it were their own money they were losing. Unfortunately, it is usually everybody's retirement savings.
posted by srboisvert at 11:12 AM on November 26, 2012 [1 favorite]
I still contend that bankers and day traders losing money in a flash crash isn't the end of the world.
This makes more sense than your original statement. You know, someone really ought to update Politics and the English Language for the Internet forum age:
"Never use snark where you can use straightforward, non-inflammatory, and, ideally kind and courteous language instead."
posted by KokuRyu at 4:57 PM on November 26, 2012
the Project for Existential Risk is
...creating Zombie Sartre.
posted by jaduncan at 7:23 AM on November 27, 2012 [1 favorite]
I'm not worried about new technologies creating new mistakes. I'm worried about perfecting old mistakes with the aid of new technologies.
- Asteroid impact risks make nuclear weapons essential to our survival, but obviously nuclear war presents serious threats.
- AI represents astounding and wonderful possibilities, but an AI monitoring your every tweet sucks mightily.
posted by jeffburdges at 6:34 AM on November 28, 2012
There is an enormous load of bullshit in that Nick Bostrom link:
What are the biggest existential risks?
Humanity’s long track record of surviving natural hazards suggests that the existential risk posed by such hazards is rather small
Asteroids are the only likely risk factor for extinction, nothing else. I'm expecting that climate change kills an awful lot of us, but extinction sounds exceedingly unlikely. I suppose supervolcanoes could theoretically kill respectable numbers as well. So that's two natural disasters and one man-made disaster.
The great bulk of existential risk in the foreseeable future is anthropogenic, that is, it arises from human activity. ... there appear to be significant existential risks in some of the advanced forms of synthetic biology, nanotechnology weaponry, and machine superintelligence that might be developed later in this century.
Biology: Bioweapons are scary, but thus far not very effective. Could they get worse? Yes, but they have an awfully long road ahead.
Nanotechnology: Nano-tech grey goo is implausible sci-fi bullshit. Too many chemical reactions require tightly controlled environments, meaning reaction chambers, i.e., cell membranes, stomachs, etc. You could obviously build nanotech machines with reaction chambers, but that sounds an awful lot like life. See biology.
Artificial Intelligence: We face risks from government and corporate use of artificial intelligence now. Do those risks grow with additional intelligence? Yes, but not as much as you'd imagine beyond face and speech recognition. It's human greed scaling up the most exploitative or oppressive tools that creates the risk.
In short, there are no probable existential risks where this guy spends his time looking.
posted by jeffburdges at 8:23 AM on November 28, 2012
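For scale on the asteroid point above: impactors of the roughly 10 km, dinosaur-killer class are commonly estimated to strike about once per hundred million years (a rough figure; published estimates vary considerably), which gives a back-of-envelope per-century probability of an extinction-scale impact of about

\[
P \approx \frac{100\ \text{yr}}{10^{8}\ \text{yr}} = 10^{-6},
\]

i.e., on the order of one in a million per century. That smallness is consistent with the Bostrom quote above about natural hazards, whatever one makes of his anthropogenic candidates.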