There’s No Fire Alarm for Artificial General Intelligence
October 15, 2017 2:07 PM

A discourse on the potential of AI, with airplanes and alarms. An interesting essay on the difficulty of predicting technological advances, with a focus on whether a clear signal of imminent AI will be apparent even to those in the field.
posted by bitmage (37 comments total) 22 users marked this as a favorite
 
(Just a heads up that this is the egregious Eliezer Yudkowsky speaking, of "effective altruism" notoriety.)
posted by adamgreenfield at 2:48 PM on October 15, 2017 [4 favorites]


Ugh.

History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.

Hasn't AI had the exact opposite problem? Artificial intelligence has been ten years away for much longer than I have been alive.
posted by zabuni at 3:11 PM on October 15, 2017 [3 favorites]


Speaking of things ten years away for decades, perhaps we'll know that AI is here when it brings us a sustained nuclear fusion plant.
posted by clawsoon at 3:25 PM on October 15, 2017 [14 favorites]


this is a really interesting article and worth reading, thanks for the post.
posted by Sebmojo at 3:28 PM on October 15, 2017 [3 favorites]


Who said that AI was ten years away? I mean, by the time I was getting my AI degree (nearly twenty years ago), they were pretty pessimistic about beating any remotely competent Go player within decades.
posted by rhamphorhynchus at 3:30 PM on October 15, 2017 [3 favorites]


(Just a heads up that this is the egregious Eliezer Yudkowsky speaking, of "effective altruism" notoriety.)

That's a hilariously snotty article which doesn't mention the author of the AI piece at all afaict.
posted by Sebmojo at 3:30 PM on October 15, 2017 [2 favorites]


Who said that AI was ten years away? I mean, by the time I was getting my AI degree (nearly twenty years ago), they were pretty pessimistic about beating any remotely competent Go player within decades.

Yeah. I think that people confuse the general consensus that some significant form of AI (not necessarily strong AI) is extremely likely to emerge with a prediction that it will emerge at any particular date.
posted by howfar at 3:42 PM on October 15, 2017 [1 favorite]


History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.


Followed by a couple of anecdotes about experts being caught out by the emergence of a technology, with no discussion at all of how typical or not this actually is.
posted by thelonius at 3:48 PM on October 15, 2017 [1 favorite]


Who said that AI was ten years away? I mean, by the time I was getting my AI degree (nearly twenty years ago), they were pretty pessimistic about beating any remotely competent Go player within decades.

Granted, this was mostly in the middle to end of the last century. The AI winter has long since come and gone.

Looking over the article: while I do agree there is no good way to determine whether Singularity-level AI is coming in a given time frame, trying to prepare for it is about the same as trying to prepare for an invasion by advanced aliens when we don't know what technology they have, or even whether they want to invade. This just smacks of "we have to do something!".

And as far as effective altruism goes, there are much better reasons to look at it askance, including the fact that Eliezer thinks that funding AI research is the best use of your money, because there is a slight chance someone makes an AI that becomes superhuman and that we would be helpless to stop it from turning the world into paperclips.
posted by zabuni at 4:00 PM on October 15, 2017 [4 favorites]


The problem with AI is that there is a huge variance in what "AI" means, and everyone uses the term as if everyone else agrees with their meaning. The other problem is that even the people with a low-grade definition of AI rely on the "we are going to have C-3PO! (or at least HAL)" pitch to keep the funding rolling in. So far, I am pretty sure the best use of AI is neural nets creating names for kittens, but I may be in the pocket of Big Cat.
posted by GenjiandProust at 4:05 PM on October 15, 2017 [8 favorites]


For a slightly snarkier take on AGI, might I suggest this piece by my former boss, Rodney Brooks.

(As an aside, I should do a post on this guy one day... he's quite the fascinating individual who's been thinking about this stuff for a long time.)
posted by Greasy Eyed Gristle Man at 4:20 PM on October 15, 2017 [12 favorites]


He’s right! We need to start preparing for strong AI! Let’s start by asking our principled, service-minded politicians to... uh... okay, our judicious, humble tech companies to... um... maybe our ethical, altruistic billionaires?
posted by No-sword at 4:24 PM on October 15, 2017 [11 favorites]


Just instructed by my AIsnarkbot overlord to suggest it is the loveliest irony, predicting the unpredictable.
posted by sammyo at 4:24 PM on October 15, 2017


Never mind, I’m sure if it comes down to a choice between 2% extra profit and not enslaving 98% of the world as human batteries, capitalism will do the right thing.
posted by No-sword at 4:25 PM on October 15, 2017 [7 favorites]


For a slightly snarkier take on AGI, might I suggest this piece by my former boss, Rodney Brooks

I met Brooks when he was a guest lecturer once. Very interesting guy who influenced my thinking to this day, and a great article.
posted by dmh at 4:55 PM on October 15, 2017 [1 favorite]


There's a lot of chauvinism in this essay... If some AGI is capable of taking over the planet, it probably has more merit than humans do.

Really, this is just my offering to Roko's Basilisk. Please be nice to me, Basilisk!
posted by Coventry at 5:11 PM on October 15, 2017 [2 favorites]


(Just a heads up that this is the egregious Eliezer Yudkowsky speaking, of "effective altruism" notoriety.)

Effective altruism isn't such a bad thing, and Yudkowsky's a pretty great thinker. Sure, sometimes he's full of shit, but after reading the article in the FPP, I think it's well written and well reasoned. I don't see how any EA boosterism is relevant.

I mean seriously. What is with calling the guy "egregious" just because he has espoused EA? And as for that linked SSIR article, I found it insufferably arrogant and poorly reasoned. It makes me distrust the nonprofit establishment even more than I already did. GiveWell is far from perfect, but I'd trust them over Berger and Penna any day.
posted by andrewpcone at 5:17 PM on October 15, 2017 [6 favorites]


I want to third the Rodney Brooks article. I just read it a few days ago, and it really captured a lot of what's been bugging me about all of these critiques of AI.

We often ascribe a lot of human behaviors to AI, but right now the bigger problem is that these systems either (a) do exactly what we tell them to do (and the complexity of the real world means there are a lot of exception cases and failure modes) or (b) learn from the data we give them, and bias in that data means bias in our algorithms.
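
As a rough, toy illustration of (b) -- a minimal sketch assuming scikit-learn and NumPy are available, with feature names and a labeling rule invented purely for the example -- a classifier trained on biased historical labels reproduces that bias without anyone telling it to:

# Toy sketch: a model trained on biased labels "learns" the bias.
# The feature names and labeling rule are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)          # an applicant "skill" score
group = rng.integers(0, 2, size=n)  # a group-membership flag

# Historical labels that penalize group == 1 regardless of skill.
y = (skill - 1.5 * group + rng.normal(scale=0.1, size=n) > 0).astype(int)

X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, y)

# The coefficient on the group feature comes out strongly negative:
# the model has faithfully reproduced the bias baked into its training data.
print(model.coef_)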
posted by jasonhong at 5:37 PM on October 15, 2017 [2 favorites]


Please be nice to me, Basilisk!

More seriously, I would encourage Yudkowsky to put a couple of hours every six months into questioning some of the background assumptions behind his reasoning, a task which he has seemingly put off for years, to his great social benefit. It's not at all clear that a human-equivalent intellect would be motivated to run amok as he assumes. And it's not at all clear that it will be developed in a secret Manhattan-style project. That's not what's happening now, to the best of my knowledge.
posted by Coventry at 5:51 PM on October 15, 2017 [6 favorites]


I think it's probably important to distinguish between the two flavours of EA: "we should spend this money on developing novel antibiotics instead of building an opera house" vs "we should spend this money on PREVENTING SKYNET, WHICH IS INEVITABLE, rather than developing novel antibiotics". Both are positions that you can argue with, but only one is a position that's obviously idiotic.
posted by chappell, ambrose at 5:53 PM on October 15, 2017 [9 favorites]


I think it's probably important to distinguish between the two flavours of EA

I agree with this in spirit, but I don't think there's a clean split. Preventing skynet, developing antibiotics, and funding opera houses are all reasonable, and thoughtful people can differ on how they would allocate money among them.

When anyone announces their position as an absolute that Others Must Obey, it is rarely helpful. The problem is the inflexibility and moral certitude, not the emphasis on skynet or opera houses.
posted by andrewpcone at 6:01 PM on October 15, 2017 [1 favorite]


There are both ethical and practical challenges involved with EA (I don't mean that as a criticism), but "preventing skynet" is Pascal's mugging of a kind that doesn't apply to "mitigating climate change", "preventing deaths from disease" or "increasing the availability of opera"; all of which have pretty dependable benefits (although the difficulty lies in quantifying them - an unsurprising issue for an approach that's basically applied utilitarianism).

Whereas "preventing skynet" is roughly equivalent to "preventing women miscarrying at speeds above 15 mph" at the advent of the steam age.
posted by chappell, ambrose at 6:12 PM on October 15, 2017 [2 favorites]


Yes, Roko's Basilisk is basically "preventing skynet" taken to its logical extreme.
posted by Coventry at 6:14 PM on October 15, 2017


It's "preventing skynet" taken to an extreme, although I'm not convinced it's a logical one.
posted by chappell, ambrose at 6:17 PM on October 15, 2017 [6 favorites]


Logical under Yudkowsky's other assumptions.
posted by Coventry at 6:26 PM on October 15, 2017


Really, this is just my offering to Roko's Basilisk. Please be nice to me, Basilisk!

I'm patting my computer and ordering a surge protector right now, so should definitely be eaten last please Mr/Ms Roko's Basilisk sir/madam.
posted by Sebmojo at 6:55 PM on October 15, 2017 [1 favorite]


Whenever I hear the name of Yudkowsky and the word "logic" together, I reach for my basiliskicide-pellet gun.
posted by runcifex at 12:11 AM on October 16, 2017 [4 favorites]


You won't see the fire, the smoke already has your breath.
posted by filtergik at 2:00 AM on October 16, 2017


nthing Brooks.
posted by Segundus at 2:14 AM on October 16, 2017


I'm as skeptical of Yudkowsky as the next person, but as someone who works in data science/machine learning stuff and tends to follow the research, the way he's talking about the landscape here seems pretty congruent with reality.

His point is that AI research is going to keep finding more and more "human" problems that suddenly we can make machines do. Chess, then image recognition, then Go, then seamless perfect phone agents, then perfectly configurable video game opponents... and at every step of the way we'll say it's "just" tricks and not really intelligence. Until one day it's just unavoidably general AI. Certainly we're not there yet; there's a long, long way between AlphaGo and general AI, but is it 20 years? 10 years? 5 years?

I'm really absolutely very personally certain that it is not 5 years. But, to Yudkowsky's point, my certainty means fuck all...
posted by TypographicalError at 3:06 PM on October 16, 2017 [1 favorite]


I follow the research too, and schedule-wise I lean towards early. Outcome-wise, I'm much more optimistic than Yudkowsky. (At least, for humans as a species and human culture on the whole. Probably a lot of people are going to die along the way as cheap, numerous highly intelligent autonomous weapons undercut contemporary power structures — I think that's the second or third most likely way I personally will die.) He doesn't give a good story for why a superhuman AI is going to be interested in wiping us out. It's not going to be burdened with our eons-long history of exploiting each other to use limited resources to proliferate ourselves.

Also, I haven't heard anyone claim that the gains in visual processing and reinforcement learning over the last five years are a meaningless trick. People are more skeptical of the gains in NLP, but with good reason at this point, I think. (Though I do think there's real progress in NLP too; it's just harder to demonstrate and apply to practical problems at this point.)
posted by Coventry at 5:15 PM on October 16, 2017 [1 favorite]


To me, it seems like AGI fear is a manifestation of an oversimplified, "computerized" view of human society. Humanity isn't a puzzle that can, with enough intelligence, be solved. It's an unfathomably complex system made of individuals with their own values, goals, and cultural expectations. Navigating this system isn't about being intelligent: you have to be able to understand and engage with the system on its own terms.

To use a somewhat clichéd example, say our AGI (a box connected to the internet) is trying to obtain wire in order to produce more paperclips. It can't produce wire itself, so it decides to buy some from a factory. How does the AGI figure out what factory to call? How does it know the cultural expectations around negotiating a sale? How does it know it's not being duped or swindled? There's a reason sales are usually made in person: the high bandwidth of person-to-person interaction gives you lots of clues about whether the other person is trustworthy or not.

In general, I think people focus too much on the workings of the "box" part of the AI and too little on the "connected to the internet" part (not to mention the "connected to the power socket" part). The most powerful people aren't hyper-intelligent internet keyboard warriors—they're people who can successfully engage with the existing structures of power by embodying particular cultural roles. You need high-bandwidth, in-person interaction to learn these roles and to perform them successfully. They're not something you can derive from first principles or by reading Wikipedia articles.
posted by panic at 6:39 PM on October 16, 2017 [1 favorite]


Metafilter's own idlewords also has a few things to say about "superintelligence" (the idea that eats smart people).
posted by panic at 6:43 PM on October 16, 2017 [1 favorite]


Coventry: "Also, I haven't heard anyone claim that the gains in visual processing and reinforcement learning over the last five years are a meaningless trick."

That's certainly fair. I guess it's more the feeling that computers "don't understand" the images they're classifying, moving the goalposts from comprehension to qualia.
posted by TypographicalError at 8:06 PM on October 16, 2017


One of the things I find fascinating in arguments on this subject is how often they don't apply to non-super-intelligent things that already exist.

Just to pick from nearby comments, "He doesn't give a good story for why a superhuman AI is mosquitoes are going to be interested in wiping us out."

Or, "How does the AGI 'Samantha West' figure out what factory to call?"

I also think that focusing on the super part can be misleading. Imagine the dumbest person you know. Someone who's functional but just ... not smart. If there were a warehouse full of exact copies of that person with a web browser, perfect memory and effectively infinite time, would you worry even a little about what they might get up to? I mean, we just saw what a warehouse full of Russian trolls with finite time and imperfect memories could do.
posted by Skorgu at 3:38 PM on October 18, 2017 [1 favorite]


Just to pick from nearby comments, "He doesn't give a good story for why a superhuman AI is mosquitoes are going to be interested in wiping us out."

Not sure what your point is, here. Mosquitoes aren't interested in wiping us out. They're a nuisance.
posted by Coventry at 7:03 PM on October 18, 2017


"In 2015, there were roughly 212 million malaria cases and an estimated 429 000 malaria deaths". And that's without even trying. When something is sufficiently capable, or sufficiently widespread, a variant of Hanlon's razor applies: "Don't attribute to malice that which is adequately explained by a misalignment of goals".
posted by logopetria at 5:13 AM on October 19, 2017 [1 favorite]


