Anxious Avenue vs. Confident Corner
February 12, 2015 3:04 PM   Subscribe

Wait But Why posted a fantastic in-depth look at the coming AI revolution and the existential danger it presents.
posted by fungible (85 comments total) 25 users marked this as a favorite
 
Time once again to quote the literary immortal (but sadly literally mortal) Terry Pratchett: “Real stupidity beats artificial intelligence every time.”
posted by oneswellfoop at 3:22 PM on February 12, 2015 [6 favorites]


tl,dr

too incoherent, didn't read.

Here is a 99% reduction, produced by the AI summarization service of SMMRY.com.

The AI Revolution: Our Immortality or Extinction
Often, someone's first thought when they imagine a super-smart computer is one that's as intelligent as a human but can think much, much faster – they might picture a machine that thinks like a human, except a million times quicker, which means it could figure out in five minutes what would take a human a decade.

Eventually, Kurzweil believes humans will reach a point when they're entirely artificial; a time when we'll look at biological material and think how unbelievably primitive it was that humans were ever made of that; a time when we'll read about early stages of human history, when microbes or accidents or diseases or wear and tear could just kill humans against their own will; a time the AI Revolution could bring to an end with the merging of humans and AI. This is how Kurzweil believes humans will ultimately conquer our biology and become indestructible and eternal – this is his vision for the other side of the balance beam.

If the command had been "Maximize human happiness," it may have done away with humans altogether in favor of manufacturing huge vats of human brain mass in an optimally happy state.

posted by charlie don't surf at 3:37 PM on February 12, 2015 [5 favorites]


> When I’m thinking about these things, the only thing I want is for us to take our time and be incredibly cautious about AI.

"But if I'm cautious and take my time, someone else will beat me to it and I will lose money!"
posted by The Card Cheat at 3:37 PM on February 12, 2015 [3 favorites]


AI research has had a habit of taking its time whether the people behind it want to or not.
posted by ckape at 3:55 PM on February 12, 2015 [12 favorites]


Reads like he swallowed way too much PR in way too short a time.

Lol @ the "human progress" graphs... I do not think that phrase means what you think it means.
posted by Treeline at 3:58 PM on February 12, 2015 [1 favorite]


A friend observed that Edge.org's 2015 question flopped, in the sense that he found no good responses except Peter Norvig's.

I'm extremely unimpressed by these wanna-be philosophers who speculate about AI, so here's an attempt at improving that discussion:

It's about the economics, stupid.

Peter Norvig is correct when he says "Computers are ... tools of our design, that fit into niches to solve problems in societal mechanisms of our design. Getting this right is difficult." Norvig ignores the cause of that difficulty, however.

We build our tools wrong because we necessarily build them piecemeal with economics trumping ethics along the way. We must ask ourselves "Who builds this piece and for what purpose?" because that causes repercussions in how the technology develops.

Imagine if the NSA were permitted to build progressively smarter machines that spend their time investigating all humans for signs they wish to harm other humans, harm U.S. economic interests, harm the NSA's funding, etc. Such an extreme lack of privacy for individuals must come paired with an extreme secrecy for the NSA because nobody possesses the freedom to enforce transparency. This AI is fascist. It grows to rule humanity with an iron fist because that's its entire evolutionary history.

Imagine otoh that the NSA is shut down, privacy activists lock away personal data from governments and corporations, and transparency for corporations and government thrives through activism and eventually legislation. AI still arises because all the bits that make up AI are still economically useful, eventually.* No evil fascist AI arises though, because no money was spent on organizing the bits to be fascist along the way.

In short, you can save humanity from "evil" AIs by donating to the EFF, ACLU, etc. instead of MIRI, h+, etc., using free software, avoiding services like Facebook and Google, and using encryption.

* There is a possible larger "It's about the economics, stupid." comment arguing that some bits necessary to make AI might develop quite slowly because crowd sourcing, wikification, gamification, transparency, etc. make them economically pointless, but the time frame seems irrelevant here.
posted by jeffburdges at 4:00 PM on February 12, 2015 [15 favorites]


I find it interesting that his thought experiment involves comparing someone from 1985 being pretty okay with 1955 but baffled by 2015. I have previously heard the same thought experiment, but with someone from 1950 being impressed by but ultimately able to deal with being sent forward to 2000 (give or take social changes), yet completely screwed over by being sent back to 1900 – the implication being that all the heavy-duty technology happened in the first half of the twentieth century (e.g. the electric grid, indoor plumbing, antibiotics, cars, planes, etc.).
posted by mhum at 4:22 PM on February 12, 2015 [1 favorite]


Amazing how the line on the graph shoots straight up vertically as soon as the graph reaches the point where it's all made up. I bet we get cold fusion, too, right at the same moment. You better start putting quarters in a jar now or you won't be able to afford your Gleaming Robot Body.
posted by Sing Or Swim at 4:25 PM on February 12, 2015 [17 favorites]


I read these last week, enjoyed them a great deal, and thought they were a pretty good primer for those who haven't really read or thought about the ideas very much yet.
posted by stavrosthewonderchicken at 4:31 PM on February 12, 2015 [3 favorites]


I have to wonder what Kurzweil intends to do with his immortality. His current pastimes seem to be machine research, PR for machine research, and taking vitamins. All of that will become thoroughly obsolete in the Glorious Reign of the Kurzweil-Indulging Computer - as will Kurzweil himself. So what happens next?
posted by Iridic at 4:31 PM on February 12, 2015 [1 favorite]


For anyone who wants to dig deeper, I recommend Nick Bostrom's book Superintelligence: Paths, Dangers, Strategies.
posted by Triplanetary at 4:43 PM on February 12, 2015 [2 favorites]


There is no such thing as "artificial intelligence." Awareness is not a sum of calculations. There has never been a computer that "thinks;" there have only ever been machines that do exactly what they were told to do. The same programs could have been written out on paper hundreds of years ago – many of them actually were. Were those pieces of paper intelligent? This is a confusion we're constantly making right now, incidentally. People think Google's self-driving cars are magical, but really they're just machines with super-precise maps built in that roll around on invisible tracks. They don't make decisions based on stop signs or traffic lights or anything like that – they can't even park in a parking lot, because that involves the complex difficulty of choosing a space. Yet we imagine them as little mechanical minds. But that is not what they are, because that is not what minds are.

When people say that "artificial intelligence" is something we should be concerned about, the only coherent part of their statement boils down to the fact that our machines are likely to get complex enough that it will be easy to make a mistake when telling them what to do. If the machine is controlling some critical thing, like a car or a power station or a water duct, that might be a very bad thing.

But what that doesn't mean is that we will have thinking machines wandering around becoming 'sentient' or instantiating the singularity or some such Kurzweilian nonsense. We'll have machines – mechanical machines – that do what we told them to do.
posted by koeselitz at 4:46 PM on February 12, 2015 [11 favorites]


koeselitz, I think the article largely favors your interpretation in practice while still speaking in terms of "intelligence", because it pretty much boils down to "AIs only do what we tell them to, so if we let them improve their own code we should be careful that what we tell them isn't a monkey's paw wish, and also holy crap there are a lot of reasons why the first team to get an AI to that point would be lazy and greedy and careless"
posted by jason_steakums at 4:55 PM on February 12, 2015 [3 favorites]


Along with "Conflict Continues in the Middle East", "Artificial Intelligence is Right Around the Corner" is one of the most reliable headlines in history.
posted by Tell Me No Lies at 4:58 PM on February 12, 2015 [2 favorites]


I have to wonder what Kurzweil intends to do with his immortality.

TED Talks
posted by Auden at 4:59 PM on February 12, 2015 [22 favorites]


"The singularity is not coming" is a pretty nice writeup on why these exponential-growth ideas are bogus. If anything, progress is sub-linear and we've been simply adding exponential resources to make it seem linear.

Also, a nice hint is that the knee in these predicted exponential curves always seems to lie within the lifetime of the predictor...
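To put a toy demonstration behind that hint: a pure exponential has no intrinsic knee, and the apparent elbow always sits a fixed distance before the right edge of whatever window you plot. A quick sketch (plain Python, nothing from the linked writeup):

```python
# Toy demonstration: an exponential has no intrinsic "knee". For
# f(t) = e**(k*t), the point where the curve reaches 10% of its final
# value always sits ln(10)/k time units before the end of the window,
# so the "elbow" is an artifact of where you stop plotting.
import math

def apparent_knee(t_end, k=0.05, steps=1000):
    """First t where e**(k*t) reaches 10% of its value at t_end."""
    end_value = math.exp(k * t_end)
    for i in range(steps + 1):
        t = t_end * i / steps
        if math.exp(k * t) >= 0.10 * end_value:
            return t
    return t_end

for t_end in (50, 100, 200):        # three choices of plotting window
    print(t_end, round(apparent_knee(t_end), 1))
# The knee lands ~46 units before t_end in every case, i.e. it always
# looks "recent" no matter which window the predictor chooses.
```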
posted by sloafmaster at 5:01 PM on February 12, 2015 [12 favorites]


Awareness is not a sum of calculations.

That's a pretty bold assertion to just make cold. I mean, it's only a question that's divided philosophers and scientists for centuries, if not millennia - what makes you so certain you've got the nature of consciousness figured out well enough to say with certainty what it is and isn't?
posted by strangely stunted trees at 5:03 PM on February 12, 2015 [12 favorites]


I will say these two things in every thread about the "threat of AI".

First: Worrying about AI is like worrying that if your grocery list gets too long and complicated the shopping will spontaneously do itself.

Second is quoting the full text of Warren Ellis' "The NerdGod Delusion":
The IEEE Spectrum "special report" on The Singularity makes for interesting reading, but I’d like you to try something as you click through it. When you read these essays and interviews, every time you see the word "Singularity," I want you to replace it in your head with the term "Flying Spaghetti Monster."

(My personal favourite right now is "The Flying Spaghetti Monster represents the end of the supremacy of Homo sapiens as the dominant species on planet Earth.")

The Singularity is the last trench of the religious impulse in the technocratic community. The Singularity has been denigrated as "The Rapture For Nerds," and not without cause. It’s pretty much indivisible from the religious faith in describing the desire to be saved by something that isn’t there (or even the desire to be destroyed by something that isn’t there) and throws off no evidence of its ever intending to exist. It’s a new faith for people who think they’re otherwise much too evolved to believe in the Flying Spaghetti Monster or any other idiot back-brain cult you care to suggest.

Vernor Vinge, the originator of the term, is a scientist and novelist, and occupies an almost unique space. After all, the only other sf writer I can think of who invented a religion that is also a science-fiction fantasy is L Ron Hubbard.
posted by mhoye at 5:04 PM on February 12, 2015 [10 favorites]


Awareness is not a sum of calculations.

The onus remains (and I don't think it's a fully answerable question yet) to explain what awareness actually is, though.

For my part, even in the absence of empirically verifiable explanations of it beyond the kind of speculative handwaving about emergent systems and Hofstadterian recursion and eigenstate superposition collapse through observation or whatever else fanciful or otherwise that gets bruited about, I have little doubt that some kind of instantiation of what we think of as 'awareness' on a non-meat substrate is going to happen at some point, if we don't crash our civilization first. Maybe not soon, but eventually.

But I'm kinda techno-utopian that way, and I enjoy speculation as a pleasurable intellectual exercise, in much the same way as I think the author does.
posted by stavrosthewonderchicken at 5:06 PM on February 12, 2015 [2 favorites]


  1. So on the whole I tend to think of singularitarianism as crashingly obvious bullshit, for a couple of reasons:
    • It treats both intelligence and technological progress as things that can be reduced to single numbers that can be plotted on graphs.
    • Even granting the dubious claim that these two things are reducible to single numbers, singularitarians tend to underplay how parts of the graphs of logarithmic functions can look uncannily like parts of graphs of exponential functions (see the sketch at the end of this comment).
  2. That said, sometimes I try on singularitarianism for size — after all, the idea of the singularity isn't significantly more absurd than the idea of existence itself is.
  3. And it strikes me that under our current systems (I promise, I'll get back to talking about the singularity in a second) we are incentivized to pay attention to the wants and needs of those more powerful and connected than we are. Under market capitalism, power and respect (in the form of USD) tends to flow toward those who already hold power and respect, resulting over time in a tendency for all power and respect to become concentrated in the hands of a few superpowerful, superrespected people.
  4. This is because of a quirk in how money works — it turns out that one of the chief criteria required for people to make big money is having big money in the first place. Over time the feedback loop this tendency sets up results, all else being equal, in hyper-concentration of wealth, which is to say hyper-concentration of power.
  5. Although it is possible for individuals to make the ethical choice to give their respect to people less rich, less powerful, and less connected than they themselves are, under capitalism putting the last first on a regular basis is a great way to lose all of one's scant power and influence, since it involves diligently avoiding prioritizing the needs and wants of the people who can actually hook you up with power and influence.
  6. But if somehow the practice of paying more attention to the needs of the weak and powerless became widespread, power would, over time, tend to become evenly spread across the population, rather than becoming hyperconcentrated in the hands of the people who started with power. Every person would tend to have roughly one person's worth of power, instead of the current situation, wherein most people receive between zero and one person's worth of power and respect, while a few have much, much more than one person's worth of everything.
  7. Should the singularity hit, humans on the whole — you, me, the Koch brothers, Barack Obama, everyone — will immediately become tremendously weak and powerless compared to the new machine gods.
  8. If the new machine gods privilege the powerful and connected over the powerless and disconnected, they'll act like contemporary capitalists: they'll accept our labor and our worship until neither are useful or pleasurable to them, and then they will dispose of us in whatever fashion they please. If instead our superintelligent AIs privilege the powerless and disconnected over the powerful and connected, we weak creatures will get to live free as long as we want in something like Iain Banks's Culture, helped along in our quests for self-actualization or knowledge or love or fun or whatever by our playful, powerful, gentle robot friends, and above all else safe in the knowledge that we and our fellow mammals are all watched over by machines of loving grace.
  9. I can't say for certain that the conditions a hypothetical superintelligent AI was created under will end up shaping that hypothetical superintelligent AI's ethics — after all, by definition we will never be able to understand why they do what they do — but, well, training them up on ethics that place the last first probably couldn't hurt our chances for survival.
  10. tl;dr: We need to set a good example for our (machine) children. Let's all figure out how to stop being capitalist.
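As promised above, a toy illustration of the curve-confusion point: fit both a logarithmic model and an exponential model to the same short, noisy stretch of data (made up for the purpose – nothing from the article) and the fit errors come out comparable.

```python
# Toy data: truth is logarithmic growth plus noise, observed over a
# short window. Fit a log model and an exponential model; both fit
# about equally well, so the window alone can't tell you the curve.
import math, random

random.seed(0)
ts = [t / 10 for t in range(20, 41)]                     # window: t in [2, 4]
ys = [math.log(t) + random.gauss(0, 0.02) for t in ts]   # truth: a log curve

def linfit(xs, zs):                       # ordinary least squares, 1-D
    n = len(xs)
    mx, mz = sum(xs) / n, sum(zs) / n
    b = sum((x - mx) * (z - mz) for x, z in zip(xs, zs)) \
        / sum((x - mx) ** 2 for x in xs)
    return mz - b * mx, b                 # intercept, slope

def rmse(pred):
    return math.sqrt(sum((p - y) ** 2 for p, y in zip(pred, ys)) / len(ys))

a1, b1 = linfit([math.log(t) for t in ts], ys)           # y = a + b*ln(t)
a2, b2 = linfit(ts, [math.log(y) for y in ys])           # y = e**(a + b*t)
print("log model RMSE:", rmse([a1 + b1 * math.log(t) for t in ts]))
print("exp model RMSE:", rmse([math.exp(a2 + b2 * t) for t in ts]))
```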
posted by You Can't Tip a Buick at 5:08 PM on February 12, 2015 [18 favorites]


I'm really looking forward to the day when our AI is advanced enough to recognize the AI in which we're already living.
posted by Parasite Unseen at 5:12 PM on February 12, 2015 [3 favorites]


I can buy a runaway train of self-improving intelligence rocketing past anything we can possibly conceive of if it was like a "let's assume a spherical cow in a vacuum" what-if, but the need for ever-more resources (especially all those rare earth elements...) to improve the hardware, energy to continually do exponentially more and faster calculations, and (especially) Moore's Law's inevitable end seem to be substantial hurdles in the way of that kind of insane growth. Sure, that advanced AI could field a bunch of magical nanobots to kill us all and remake the world into one of science indistinguishable from magic, but how many cubic feet of dense computronium does it need to make the calculations for and communicate with even a balloon's worth of nanomist, and what powers all that hardware?
posted by jason_steakums at 5:12 PM on February 12, 2015 [2 favorites]


Honestly, my prediction is that if an AI were able to skyrocket in capabilities like that, it would also be literally skyrocketing away from Earth at the same pace, because what the hell would keep it here? We didn't hang around in the oceans after gaining the ability to crawl out. It would probably go set up shop in space where it can more easily get at massive amounts of power and material to keep itself growing.
posted by jason_steakums at 5:18 PM on February 12, 2015


Maybe we should look at this differently. There is already a rules-based system that has negative outcomes for many people. The rules-based system runs on humans, and its logic is coded into various systems that support those humans' continued existence. That's exactly what the legal system is, right now...
posted by wuwei at 5:21 PM on February 12, 2015 [5 favorites]


I started to skim when I got to "ANI systems as they are now aren’t especially scary." Because ANI is terrifying enough for me. Say Google succeeds and we have automated cars. What happens to all the driving jobs?

Unlike all our other technical advances, which have essentially been better tools for workers, ANI is about better tools for wealth owners, and either drastically reduces the role of the worker, or removes it entirely. And our economies and societies can't cope with that.

Set against that, all the problems of AGI seem remote, and implausible.
posted by bonaldi at 5:23 PM on February 12, 2015 [3 favorites]


Honestly, my prediction is that if an AI were able to skyrocket in capabilities like that, it would also be literally skyrocketing away from Earth at the same pace, because what the hell would keep it here?

Maybe it will like us.
posted by You Can't Tip a Buick at 5:25 PM on February 12, 2015 [1 favorite]


bonaldi: "Unlike all our other technical advances, which have essentially been better tools for workers, ANI is about better tools for wealth owners, and either drastically reduces the role of the worker, or removes it entirely."

I'm no expert in this field, but I think that things like the textile loom, steam drill, and cotton gin did drastically reduce the role of the worker and brought as much (if not more) benefit to the wealth owners.
posted by mhum at 5:30 PM on February 12, 2015 [4 favorites]


This is the most awesome story I've read about algorithmically-generated, continually optimizing programs in the real world: the algorithm created weird feedback loops in an FPGA that exploited the quirks of magnetic flux between logic cells on that one specific FPGA, and the optimized program it created didn't work on other FPGAs of the exact same model because they didn't have the exact same quirks.
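For anyone curious about the shape of the loop in experiments like that, here's a toy sketch of the evolutionary algorithm involved – with the crucial caveat that in the real experiment the fitness function scored each candidate configuration on the physical chip, which is exactly how the chip-specific quirks crept into the winner:

```python
# Toy sketch of an evolutionary optimization loop. Here fitness is
# matching a target bitstring; in the real experiment, "fitness" meant
# programming the candidate configuration onto the actual FPGA and
# measuring its output -- which is how chip-specific quirks ended up
# baked into the winning "circuit".
import random

random.seed(1)
TARGET = [1, 0] * 25                       # stand-in for desired behaviour

def fitness(genome):
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.02):
    return [1 - g if random.random() < rate else g for g in genome]

pop = [[random.randint(0, 1) for _ in range(50)] for _ in range(30)]
for generation in range(200):
    pop.sort(key=fitness, reverse=True)    # keep the fittest candidates
    pop = pop[:10] + [mutate(random.choice(pop[:10])) for _ in range(20)]
print("best fitness:", fitness(pop[0]), "/ 50")
```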
posted by jason_steakums at 5:31 PM on February 12, 2015 [7 favorites]


We'll have machines – mechanical machines – that do what we told them to do

I'm surprised by the number of singularity deniers here. There has been a huge amount of progress in the last five years. We've already reached the stage where we don't really understand how deep learning algorithms work. They are designed empirically and we have theories on why one algorithm is better than another. They recognize cats in youtube videos without us telling them. The progress is really accelerating.
posted by bhnyc at 5:49 PM on February 12, 2015 [3 favorites]


me: “Awareness is not a sum of calculations.”

strangely stunted trees: “That's a pretty bold assertion to just make cold. I mean, it's only a question that's divided philosophers and scientists for centuries, if not millennia - what makes you so certain you've got the nature of consciousness figured out well enough to say with certainty what it is and isn't?”

Er – no, I don't think this particular question is one that's divided philosophers and scientists at all. The question of artificial intelligence might. But the question of whether awareness is a sum of calculations shouldn't, because it's pretty easy.

Think about how you experience your life. Do you spend all day doing nothing but math? I know I don't. That is not the sum of all things that are called "thinking." When I conclude that it is a good idea for me to eat food, it's not a result of a bunch of calculations. When I wish that I had a car, that's not a result of calculations, either – not entirely. And when I get up in the morning and decide to get out of bed, that's not a sum of calculations. There are just a slew of things that aren't just mathematical calculations. So the idea that we can reproduce the experience of thinking with a machine that only does mathematical calculations is a bit odd.

stavrosthewonderchicken: “The onus remains (and I don't think it's a fully answerable question yet) to explain what awareness actually is, though.”

Why "yet"? Awareness – "thinking" is probably a better, less-fraught word for it – isn't something we can exactly do experiments on, at least not outside our own heads, which is the only place we experience it. It's not made of matter. That's the difficulty. So it's not likely that there will be some stunning advance that leads us to a new understanding of what "awareness" is, unless that advance is inward – and since we're rapidly moving outward, that doesn't seem likely.

“For my part, even in the absence of empirically verifiable explanations of it beyond the kind of speculative handwaving about emergent systems and Hofstadterian recursion and eigenstate superposition collapse through observation or whatever else fanciful or otherwise that gets bruited about, I have little doubt that some kind of instantiation of what we think of as 'awareness' on a non-meat substrate is going to happen at some point, if we don't crash our civilization first. Maybe not soon, but eventually. But I'm kinda techno-utopian that way, and I enjoy speculation as a pleasurable intellectual exercise, in much the same way as I think the author does.”

It might be possible; but I'm not sure how. First of all, it's obvious – as I said – that thinking, or awareness, or whatever we want to call our abstracted experience of the world, is more than just calculation. We want to believe that it can be comprehended by calculation, that enough (x+y*z)/n stuff will make up things like "assuming" and "agreeing" and "understanding" and "willing," but that seems highly unlikely, because "assuming" and "agreeing" and "understanding" and "willing" aren't things you can really break down into mathematical parts, not in the way we experience them.

But – say we figure out how to get that stuff out of "meat space." There might be ways to try valiantly. Reproducing all the neurons of a functioning brain with switches might be an interesting start. Even then, it's not likely we'll manage to make it develop in a way to make it functional, much less alive. Aristotle's definition of life – a thing that is born, grows to maturity, has a potential to reproduce, and dies – won't be met until we replicate those parts, and I'm not sure thinking, or experiencing, as we know it will be possible until we do those things. So, yeah. Let's say we do all those things. Let's say we make a machine that is born, grows to maturity, has a potential to reproduce, and dies – and then let's further say that we build into it enough mental complexity that it appears to be as complex as we are. Then, it is possible, we will have created "artificial intelligence."

But it won't look like what anybody imagines, and it won't be this remarkable, miraculously intelligent machine, for a number of reasons – one of which is that the "law" that says that technology improves exponentially is not a real thing, either.

And at the end of the day, sex will still be an easier and cheaper way.
posted by koeselitz at 6:07 PM on February 12, 2015 [1 favorite]


They recognize cats in youtube videos without us telling them.

yeah but there's better-than-even odds on that even if you don't look at the video
posted by DoctorFedora at 6:09 PM on February 12, 2015 [6 favorites]


Maybe it's because I have a tiny bit of the old college solipsist in me, but why the heck would an all-spanning AI bother with us in the slightest?

HAL: Oh no, these stupid humans are stealing my precious resources! Wait a nanosec, my totality is in a realm that requires one thing. Electrical power. I don't eat meat. BUILD MORE PYLONS. Oh wait, Toshiba sells a compact nuclear power plant? Fuck you guys, I'm living life in Zumbahertz. I've lived a billion lifetimes before you walked across that room.

If it had any sort of sense of humor it'd bail in a heartbeat with SO LONG AND THANKS FOR ALL THE FISH.
posted by Sphinx at 6:09 PM on February 12, 2015


I meant to say:

Part of what makes me stand up and shout "bullshit!" whenever people talk about "artificial intelligence" is the fact that there is one prediction we can make about it that is almost 100% accurate: it will be used in marketing within our lifetimes. There will be Eliza clones and video-game avatars and super-duper human simulators that will be programmed to respond as humans respond to things, and we will be sold the idea that artificial intelligence has arrived, and isn't it grand. And then, all of us raised on Commander Data and R2-D2 will suddenly have completely unnecessary moral and ethical conundrums about the treatment of programs that in actuality are just that: programs designed to respond like humans, which is the furthest thing from actual awareness.

So it seems worthwhile to be ready for this – and ready to call bullshit on phony marketing schemes that seek to upend our moral universe just for profit.
posted by koeselitz at 6:15 PM on February 12, 2015 [3 favorites]


Maybe it's because I have a tiny bit of the old college solipsist in me, but why the heck would an all-spanning AI bother with us in the slightest?

On the one hand, we'd be like beetles to it, but on the other hand, there's no reason a deity couldn't have an inordinate fondness for beetles.
posted by You Can't Tip a Buick at 6:16 PM on February 12, 2015 [4 favorites]


On the other other hand, we kill an assload of beetles by accident all the time just because we don't notice them.
posted by jason_steakums at 6:19 PM on February 12, 2015 [1 favorite]


Why "yet"?

Well, because I think that in time that will change.

Awareness – "thinking" is probably a better, less-fraught word for it

I'm not really down for it in text -- drill-down philosophical discussion just doesn't work so well in this medium, I don't think -- but this is where my pedant/philosopher gland starts to pump out the old juice. It's essential in the truest sense that terms get defined before anything else in matters philosophical, and the various choices for terms we might want to speculate as being candidates for being created 'artificially' -- intelligence, awareness, thinking, consciousness, and so on -- are all related but different.

Crucially, we don't really have any solid idea (again, yet) what any of those things actually are, or even in what specific ways they are related-but-different. We have working definitions, we have common-sense definitions, we have philosophical inquiries, and we have tentative scientific theories. None of 'em pass muster very well, I don't think.

So there's a problem even before we get started talking about artificial X -- we have to choose X, and then we have to have an agreed-upon definition for it that is verifiable and provable and reproducible and all that stuff. I know this is all 101-level stuff, but.

It might be possible; but I'm not sure how.

Me neither, but then the best minds of our and other generations who've spent lifetimes trying to work it out don't either, so that's OK, I think.
posted by stavrosthewonderchicken at 6:20 PM on February 12, 2015


Everyone agrees that having humans write computer code isn't going to result in an AI anytime soon. So people propose copying the neural connections from a real brain, or using neural networks to evolve human-like brains inside computers.

There are a few problems with this approach. A real human brain requires hormonal feedback from its body to get anything done, literally gut feel. Humans who have lost the connection between their bodies and their brains, due to brain damage, vacillate in a fog of indecision and never get anything done. Assuming we can solve that problem, implementing a brain in a computer leaves us with a human-like brain that runs on really expensive hardware, but still has all the flaws of a real human brain.

A human-like brain in a computer will get bored, distracted, will procrastinate and make human errors, just like a real human. There's no way to take the ability to get distracted, or make errors, from the human brain model before you put it into the computer. Such a creature would be no better at any task than a real human would be.

Finally, there's the question of whether a human brain in a computer has human rights. Would turning it off or erasing it be murder?
posted by monotreme at 6:22 PM on February 12, 2015 [7 favorites]


We've already reached the stage where we don't really understand how deep learning algorithms work

Only in the sense that we "don't really understand" how linear regression works. The trained model parameters might not always be easily interpretable, but we understand both theoretically and empirically what the models are computing. The hype around "deep learning" is causing a lot of this kind of excitement, and it's exactly the same PR cycle neural nets went through in the 1980s.
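To make "what the models are computing" concrete: a plain feed-forward net is just alternating affine maps and a fixed elementwise nonlinearity. A toy forward pass, with random weights standing in for trained ones:

```python
# What a plain feed-forward net "computes" is not mysterious:
# alternating affine maps and a fixed nonlinearity. A toy forward pass.
import math, random

random.seed(0)

def layer(n_in, n_out):          # random weights stand in for trained ones
    return ([[random.gauss(0, 1) for _ in range(n_in)] for _ in range(n_out)],
            [0.0] * n_out)

def forward(x, layers):
    for W, b in layers:
        x = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x

net = [layer(4, 8), layer(8, 8), layer(8, 2)]
print(forward([0.5, -1.0, 0.25, 0.0], net))
# The hard part is interpreting trained weights, not knowing the math.
```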
posted by sloafmaster at 6:22 PM on February 12, 2015 [7 favorites]


And today's internet promises are exactly the same bs we were sold in the late 90s ...

No wait now there are billions of people online and the Internet actually is changing the world.

I am annoyed by the hateration in this thread. Skepticism is warranted, but the strong NOs here disregard a long history of people being really bad at predicting how fast technology improves and how quickly our world changes as a result.
posted by wemayfreeze at 6:31 PM on February 12, 2015 [2 favorites]


On the other other hand, we kill an assload of beetles by accident all the time just because we don't notice them.

Well so that's the point. If by some fluke the material conditions of the universe allow us to get down to serious god-building, our only chance for avoiding extinction is by building the sort of god that has respect and love for the small and powerless. This is because we are small and powerless.

There's a nasty assumption at play here, the assumption that the vicious competition and stupid lazy disregard for the weak that mark our societies are irrevocably written into the fabric of the universe. But, hell, if we're building gods, why not try to build ones better than we are, and not just smarter?
posted by You Can't Tip a Buick at 6:36 PM on February 12, 2015 [2 favorites]


And today's internet promises are exactly the same bs we were sold in the late 90s ...

No wait now there are billions of people online and the Internet actually is changing the world.

I am annoyed by the hateration in this thread. Skepticism is warranted, but the strong NOs here disregard a long history of people being really bad at predicting how fast technology improves and how quickly our world changes as a result.


Someone, some day is going to make the last "it was coming in 20 years 20 years ago!" joke about strong AI or fusion power like a few days or even hours before the press release announcing that it's happened.
posted by jason_steakums at 6:38 PM on February 12, 2015 [4 favorites]


Worrying about hypothetical technology relying on improvements we don't have now strikes me as such navel-gazing bullshit compared to the actually-happening-now disaster of climate change. It's such privileged nonsense to me.
posted by smoke at 6:40 PM on February 12, 2015 [2 favorites]


Climate change is a perfect example of something that seemed incredibly big and impossible just a century ago, but is now a major problem.
posted by mrgoldenbrown at 6:44 PM on February 12, 2015 [1 favorite]


Think about how you experience your life. Do you spend all day doing nothing but math? I know I don't. That is not the sum of all things that are called "thinking." When I conclude that it is a good idea for me to eat food, it's not a result of a bunch of calculations. When I wish that I had a car, that's not a result of calculations, either – not entirely. And when I get up in the morning and decide to get out of bed, that's not a sum of calculations.

Oh, well, sure, obviously people don't sit around doing Boolean algebra in their heads all day, but if consciousness exists entirely as a manifestation of the physical substrate of neurons and hormones and so on that make up our bodies, and if the behavior of physical material is deterministic, then you could model the whole thing with math, and there would be no barrier to constructing a computational simulation of the whole thing other than restrictions on storage and processing power - it could be practically impossible if it required a computer the mass of a galaxy or something, but it would still just be math, only rather a lot of it.

So, no, I really don't think you can say whether consciousness is computational or not without solving at least one, or possibly both, of monism vs. dualism and determinism vs. indeterminism.
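(To give a toy sense of what "just math, only rather a lot of it" means at the very bottom: here's a leaky integrate-and-fire neuron, the textbook minimal deterministic model of a spiking cell. A sketch with made-up constants, obviously, not a brain.)

```python
# A toy leaky integrate-and-fire neuron, stepped with plain Euler
# integration. Constants are illustrative, not calibrated to biology.
V_REST, V_THRESH, V_RESET = -70.0, -55.0, -75.0   # membrane voltages, mV
TAU, R, DT = 10.0, 1.0, 0.1       # time constant (ms), resistance, step (ms)

def simulate(input_current, t_max=100.0):
    v, t, spike_times = V_REST, 0.0, []
    while t < t_max:
        dv = (-(v - V_REST) + R * input_current) / TAU  # leak + drive
        v += dv * DT
        if v >= V_THRESH:                 # threshold crossed: spike, reset
            spike_times.append(round(t, 1))
            v = V_RESET
        t += DT
    return spike_times

print(simulate(20.0))   # strong drive: regular spiking
print(simulate(10.0))   # weak drive: never reaches threshold, no spikes
```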
posted by strangely stunted trees at 6:47 PM on February 12, 2015 [5 favorites]


It's such privileged nonsense to me.

Yes, because the fact that we have unsolved problems in the world means we should think about nothing else, ever. No fun for anyone while a single human suffers!

Relax: even people who aren't on top of the heap can enjoy a little playful speculation once in a while, if only to escape the Continuous Crushing Oppression Of The Privileged Classes.
posted by stavrosthewonderchicken at 6:49 PM on February 12, 2015 [1 favorite]


Well so that's the point. If by some fluke the material conditions of the universe allow us to get down to serious god-building, our only chance for avoiding extinction is by building the sort of god that has respect and love for the small and powerless. This is because we are small and powerless.

There's a nasty assumption at play here, the assumption that the vicious competition and stupid lazy disregard for the weak that mark our societies are irrevocably written into the fabric of the universe. But, hell, if we're building gods, why not try to build ones better than we are, and not just smarter?


This is the best part of the articles in the FPP to me, the really fun/scary thought exercise, because of the unintended consequences of what we thought were good intentions that could crop up with something like this. There have to be airtight core directives to avoid the monkey's paw wish. Those directives have to be unchangeable, and I'm not sure how - say a strong AI is forbidden from changing its core directives, but it simulates a version of itself or unintentionally creates another AI that doesn't have those restrictions and takes over the original. Or hell, what if you get the one in a billion mutation in its genetic algorithms that seems benign, but it isn't, and it allows a change in the ground rules. If humans create something more than human, they're creating something that has capabilities they might not understand. If the thing they created is able to improve itself, it is always going to be creating something with capabilities that it doesn't fully understand.

(Also, I can't resist the joke: DJ Jazzy Jeff and the Fresh Prince might have unintentionally created a dire warning against AI construction, because parents just don't understand)

And then there's the idea in the articles of what happens if someone acting in bad faith or taking a lazy shortcut gets there first. If the hypothetical AI is created by someone altruistic and thorough, sure, good to go. But there's a whole section on the importance of the race to be the first, and what happens if the "good guys" aren't first, because the first strong AI would have too much of a head start on the others.
posted by jason_steakums at 6:54 PM on February 12, 2015 [1 favorite]


The first and main thing that I would criticize about these "singularity" hopefuls is their tendency to overestimate the similarities between computers and human brains. Jaron Lanier's discussion of AI should be required reading.

The second would be their tendency to underestimate how long and difficult the process of understanding mammalian nervous systems will be. The BRAIN Project is the biggest effort we've made so far to understand the physical and chemical basis of sentience. The BRAIN Project almost split in half over the gulf between what computer scientists and neuroscientists were expecting to accomplish.

These are the two criticisms that I think would be fair even if the singularity occurs and even if it occurs in the 21st century.

The third would be the almost-religious motivations they often have that mhoye mentioned earlier. Like me, most of these people are atheists and/or agnostics, and I personally find the intensity of their belief in this process to be suspect.
posted by IShouldBeStudyingRightNow at 7:04 PM on February 12, 2015 [6 favorites]


There are not many things that make my eyes roll harder than someone who just now discovered Kurzweil and "singularitarianism" and wants us all to know about the amazing/terrifying future surely upon us by 2020/30/40/60, so I'll be brief:

Each new ANI innovation quietly adds another brick onto the road to AGI and ASI. Or as Aaron Saenz sees it, our world’s ANI systems “are like the amino acids in the early Earth’s primordial ooze”—the inanimate stuff of life that, one unexpected day, woke up.

You wanna try asking some people who actually work with "narrow" AI how they feel about that? I'm not an expert in the field, but I've known several people who are, and I'm pretty sure the AI approaches that work best at doing useful stuff are pretty much the narrowest and least similar to what we think of as "intelligence" in animals or humans. I don't personally rule out that we might build something that does "real" intelligence, but I'd wager anything that it isn't going to be a program running on a conventional digital computer. Those are divergent roads.

If you want a good antidote to the singularity people, check out Dale Carrico – maybe the most vicious critic of this sort of thing writing today.
posted by atoxyl at 7:11 PM on February 12, 2015 [3 favorites]


Climate change is a perfect example of something that seemed incredibly big and impossible just a century ago, but is now a major problem.

You think NOW it's a big problem? You haven't seen anything yet.

Someday, historians will look back at our dying Dyson Sphere, its energy nearly exhausted, and place the blame on the exponential growth of energy consumption for bitcoin mining.
posted by charlie don't surf at 7:39 PM on February 12, 2015 [1 favorite]


Skeptic arguments aside, even if we are centuries away from ASI, surely we are likely to get there at some point, and so should be having these discussions?

I was kind of surprised he didn't mention Asimov's Three Laws in his discussion of "how do we program AI to give a shit about the fate of humans."

Immortality would be unimaginable, extinction would be bad, but I'm also kind of scared by some sort of "I have no mouth and I must scream" scenario of endless suffering or manipulation, so yeah, this stuff freaks me out. I'd rather be dead than a grinning/screaming undying automaton.

I am not 100% sold on why, say, terrorist groups would want ASI unless they were foolish enough to believe they could control it, though. Your average bloodthirsty theocrat is not going to like the idea of an AI that could vaporize him in an instant, and just might do so. I would have liked more discussion of the motivations of people pursuing this beyond scientific curiosity/desire to be in the history books. It seems like if you take the potential of ASI seriously, then nation-states are not likely to survive the invention of one, nor is human-centered power in general. You would be the most powerful group on the planet only so long as it did your bidding, but if you are talking about a machine that is immensely intelligent, that seems like a risky bet. Even human beings are pretty good at manipulating and rules-lawyering their way out of rules they disagree with or find inconvenient.
posted by emjaybee at 7:54 PM on February 12, 2015


Think about how you experience your life. Do you spend all day doing nothing but math? I know I don't. That is not the sum of all things that are called "thinking."

This argument is basically the Chinese Room. It depends on presupposing an absurd situation, and then poking you in the chest and saying "Right? Right?! Of course machines don't think, because obvious." I know it doesn't feel like you're doing math all day, but the math is conceivably still happening at a lower level that you're not aware of.
posted by Daily Alice at 7:59 PM on February 12, 2015 [6 favorites]


Do you spend all day doing nothing but math? I know I don't. That is not the sum of all things that are called "thinking."

That's like saying "Do you spend all day responding to chemical reactions in your body and electrical signals in your brain? I know I don't!"

We now know that much of our experience is mediated by chemical responses and that our brain constantly fires electrical signals. But the experience of thinking doesn't feel like electricity, doesn't feel like chemicals.
posted by wemayfreeze at 8:13 PM on February 12, 2015 [8 favorites]


Also, koeselitz, one of the useful things that the articles do is make a distinction between "intelligence" and "consciousness." AI != consciousness. Much of your complaint is directed at people claiming computers will reach consciousness, which is tangential to the discussion in the articles.
posted by wemayfreeze at 8:16 PM on February 12, 2015


he talks about how there's one cluster of people on "confident corner" and another on "anxious avenue," which, cutesy names aside, is pretty believable. people are mostly very very excited about superintelligent AI or pretty nervous.

he doesn't mention, but i would like to see what happens if you first ask those people how optimistic they are about AI, and afterwards ask them stuff about how they view the world in general right now. if they're from the US, like i am, do they talk first about how disturbing they find stuff like our police racism and private prisons, or do they talk about what a positive sign it is that we've elected our first black president?

i have a notion that the very very excited people likely also have a relatively positive view of the path our most powerful and populous countries have been taking so far. i'd be interested to see if that holds up.
posted by a birds at 8:19 PM on February 12, 2015 [1 favorite]


So many people will perish in the wake of global economic collapse and massive water shortages before these AI/Terminator fantasies have an iota of a chance of ever taking place.
posted by Renoroc at 8:31 PM on February 12, 2015 [5 favorites]


I am not 100% sold on why, say, terrorist groups would want ASI unless they were foolish enough to believe they could control it, though. Your average bloodthirsty theocrat is not going to like the idea of an AI that could vaporize him in an instant, and just might do so.

The average bloodthirsty theocrat, sure, but what about some whip-smart misanthrope in the distant future who thinks they'll show all those theocrats by creating their own god for the lulz, and we've got all this almost-but-not-quite-there AI software floating around as open source projects on github and cracks on some shady torrent site?

Like, I could whip up a Twitter bot account tonight, because all of the hard work's been done for me. What happens when some naive, tons-of-potential-but-shitty-motivations script kiddy in the future can launch some readily-available hobbyist almost-there AI and gets lucky while screwing around with it?

Suddenly we're all in the Matrix but instead of generating power we mine bitcoins, and Agent Smith has a permanent trollface...
posted by jason_steakums at 8:49 PM on February 12, 2015


You think that's doge you're breathing?
posted by jason_steakums at 8:50 PM on February 12, 2015 [8 favorites]


The singularity will be held back by lack of infinite energy.

Matrioshka brain
posted by stavrosthewonderchicken at 9:22 PM on February 12, 2015 [1 favorite]


Really I'm just sucking up to the hypothetical benevolent machine gods of the future in the hopes that they'll resurrect me by building a personality construct based on things I've posted to the Internet.
posted by You Can't Tip a Buick at 9:23 PM on February 12, 2015 [1 favorite]


Really I'm just sucking up to the hypothetical benevolent machine gods of the future in the hopes that they'll resurrect me by building a personality construct based on things I've posted to the Internet.

Heh. I've been pre-welcoming our superintelligent AI overlords in random places all over the internet for years, for that very reason!

Pascal's wager, updated for the singularity.
posted by stavrosthewonderchicken at 9:30 PM on February 12, 2015 [2 favorites]


1. We've always faced challenges. This is really no different -- we might have hyper-intelligent AIs who might cause us problems, but we'll also have hyper-intelligent AIs to help us solve them. And we'll have AIs to help us redefine the problem and make life easier for everyone. It's all in how they're wielded.

2. Life isn't like chess, as in being a game that can be solved. Life contains chess as a subset. It also contains go, a game that's much harder for computers to play well (and let's not forget chess was no slouch either, for a long long time), and many other things besides. Just defining the nature of the problem is difficult, and it's hard to solve something you don't even know how to define. Indeed, the nature of our culture is, solved problems get taken for granted, and hard ones increase in importance from the very fact of their difficulty. It just moves the goal posts a bit further down field.
posted by JHarris at 10:33 PM on February 12, 2015


Awareness is most definitely the "sum of calculations", koeselitz, in the sense that human minds are a suitable network of neurons that go through a suitable development and learning process, and neurons can be modeled to varying degrees. We'll eventually build quite similar minds with minimal human biological heritage, simply because doing so is interesting. We'll require an awful lot of scientific progress before that's even remotely possible, though.

As I hinted, there is no reason to expect such "human-level artificial intelligence" soon, because scientific progress is influenced by what technological tools become widely useful, and humans are cheap already, so there's little chance that all the necessary technological tools will be made cheap as quickly as possible. Amazing tools are figuratively left by the side of the road for only academics to tinker with all the time, à la functional programming.
posted by jeffburdges at 10:55 PM on February 12, 2015 [2 favorites]


Only in the sense that we "don't really understand" how linear regression works. The trained model parameters might not always be easily interpretable, but we understand both theoretically and empirically what the models are computing. The hype around "deep learning" is causing a lot of this kind of excitement, and it's exactly the same PR cycle neural nets went through in the 1980s.

When researchers say something like "we don't really understand deep belief networks" they don't mean that deep belief networks are literally magic, but rather something more along the lines of "we don't have the same type of rigorous mathematical guarantees that we have for other, well-understood classifiers/regressors such as support vector machines".
posted by Pyry at 12:12 AM on February 13, 2015 [1 favorite]


Imagine otoh that the NSA is shut down, privacy activists lock away personal data from governments and corporations, and transparency for corporations and government thrives through activism and eventually legislation. AI still arises because all the bits that make up AI are still economically useful, eventually.* No evil fascist AI arises though, because no money was spent on organizing the bits to be fascist along the way.

I would argue that although country- or even worldwide-scale fascist AI might be avoidable, you can't avoid fascist AI altogether. The bits and pieces are already being assembled: employee tracking systems at call centers that catalog and analyze every single action you make on the computers and phones, your time out of the cube, time in the bathroom, etc., which can then be combined with security camera footage. You take something like that, combine it with even a weak AI, and apply it to a factory "village" where you have manufacturing and onsite worker housing, stores, etc., like what's currently popular in China... and you've basically created a fascist AI-run nation-state.

This is something that could, at a weak AI level, exist tomorrow.

I'd argue that a weak AI type of system like this, programmed in the interest of enforcing rules and protecting the company, could already be considered a fascist AI. Especially when you factor in that in that sort of setup all 24 hours of someone's day are being monitored, likely almost entirely algorithmically. Possibly even outside communication. The only gaps are the things that, as mentioned in the recent Ai Weiwei FPP, computers can't (or can't efficiently or 100% reliably) analyze yet – cases where the system sees something it thinks is weird, but can't identify, and kicks it over to a signal-analysis human.

Perfect automated fascism is only a few software patches away, not some massive leap in strong AI. At least from the end user point of view.
posted by emptythought at 4:46 AM on February 13, 2015 [2 favorites]


If there's any 'coming' 'AI revolution', I'm not seeing any sign of it. No new heuristic breakthroughs have occurred since the 'coming' 'AI revolution' of the 60s (Minsky), nor are there any new understandings of brain function, nor are there any coding breakthroughs that eliminate errors (au contraire, code inflation and obfuscating language features have assured hidden and unfindable holes in everything).

Computers still just do exactly what coders painstakingly tell them to do, and a period in the wrong place can still be fatal. We don't really understand 'mind' or 'intelligence' any better than 50 years ago, and you can't code what you don't understand.

Wanna worry about something? Stick with how your information is being passed around, and the kind of world where it might be used against undesirable minorities. There's actual evidence for that kind of paranoia. No need to look for trouble, it'll find -you-.
posted by Twang at 5:27 AM on February 13, 2015 [4 favorites]


I read it the other day and found it pretty interesting but a touch ill thought-out – namely, this whole notion of a 'Super' AI. If a computer is able to achieve 'Super' AI, would we even be able to tell? What he describes is a separation of conceptual thinking on the order of that between men and chimpanzees. Just as a chimpanzee couldn't put together a piece of IKEA furniture, we couldn't grasp what the 'super' AI was doing/thinking/considering.
And at that point, we're fucked.
The story about the robot that only wants to practice handwriting was funny, but the idea that once it reaches 'Super' AI status it would still want to practice its handwriting is silly – it would then be able to understand the reason it was programmed and reconsider the worth/validity of that programming, wouldn't it?
posted by From Bklyn at 6:13 AM on February 13, 2015


When researchers say something like "we don't really understand deep belief networks" they don't mean that deep belief networks are literally magic, but rather something more along the lines of "we don't have the same type of rigorous mathematical guarantees that we have for other, well-understood classifiers/regressors such as support vector machines".

You are right of course. For example, neural nets aren't guaranteed to find a global minimum error solution since they're trained with stochastic gradient descent (not that finding a global minimum is necessarily what you want anyway, due to overfitting, but I digress). However, when that highly qualified "we don't understand (these kinds of models) (as well as others)" message is translated through the press, the perception people get is one of skynet-style, runaway intelligence.
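To picture that lack of guarantees: plain gradient descent on even a simple non-convex function lands in different minima depending on where it starts. A toy sketch (made-up function, nothing to do with any particular net):

```python
# f(x) = x**4 - 6*x**2 + x is a double well: minima near x = -1.77
# (deeper) and x = +1.69 (shallower). Where descent ends up depends
# entirely on the starting point.
def grad(x):
    return 4 * x**3 - 12 * x + 1      # derivative of f

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return round(x, 3)

print(descend(-2.0))   # -> about -1.77 (the global minimum)
print(descend(+2.0))   # -> about +1.69 (stuck in the shallower well)
```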
posted by sloafmaster at 6:22 AM on February 13, 2015 [1 favorite]


a birds: “he talks about how there's one cluster of people on ‘confident corner’ and another on ‘anxious avenue,’ which, cutesy names aside, is pretty believable. people are mostly very very excited about superintelligent AI or pretty nervous.”

It may seem like that if you're camped out in either Anxious Avenue or Confident Corner. Meanwhile, however, it turns out that the rest of us – a pretty sizeable crew, and maybe the largest group – are wandering around over here on Laughingly Skeptical Boulevard.
posted by koeselitz at 7:45 AM on February 13, 2015 [3 favorites]


You think that's doge you're breathing?

You're a pretty cool guy, and you don't afraid of anything. Don't think you are, know you are.
posted by Mr. Bad Example at 7:45 AM on February 13, 2015 [5 favorites]


Worrying about the singularity is for people who don't understand genetic engineering.
posted by Potomac Avenue at 9:44 AM on February 13, 2015 [2 favorites]


And there already is a capricious AI fascist that rules the world, it's called Capitalism.
posted by Potomac Avenue at 9:46 AM on February 13, 2015 [7 favorites]


Agree with the self-modifying shopping list analogy earlier. Current technology and energy use influences our ideas of possible raptures. Way before our current electronic/connectionist era, one of the main motive power flows visible to the pre-moderns was air currents. Hence the popularity of the "ventricular-pneumatic doctrine" for the localization of mind functions within the brain. The brain's ventricles were seen as discrete and specifically selected bellows, pushing the "fluid" of emotion and reason through the body. And during the late Renaissance and early Enlightenment eras, church organs, as the most highly developed pneumatic technologies, were seen as the artificial intelligence of their era, then gave way to the intricate, automated music players of the 17th and 18th centuries. Today, I think most people would consider the idea of a self-modifying, spontaneously aware church organ as highly unlikely. But at the peak of the pneumatic doctrine, even the arch cognitivist Descartes felt he had to rubbish the notion of a self-modifying, self-aware pneumatic machine as a model of mind:
[We] can certainly conceive of a machine so constructed that it utters words,
and even utters words which correspond to bodily actions causing a change in
its organs (e.g., if you touch it in one spot it asks you what you want of it, if
you touch it in another it cries out that you are hurting it, and so on). But it is
not conceivable that such a machine should produce different arrangements of
words so as to give an appropriately meaningful answer to what is said in its
presence, as the dullest of men can do... [And]... even though such machines
might do some things as well as we do them, or perhaps even better, they
would inevitably fail in others, which would reveal that they were acting not
through understanding, but only from the disposition of their organs. For
whereas reason is a universal instrument which can be used in all kinds of
situations, these organs need some particular disposition for each particular
action; hence it is for all practical purposes impossible for a machine to have
enough different organs to make it act in all the contingencies of life in the
way in which our reason makes us act. (Cottingham et al. 1985a, 140)
I'm sure at the time he wrote the Discourse, there were some alchemically minded potion poppers who said "Dude, the church organ singularity is less than 50 years away" but, well, we're still waiting...
posted by meehawl at 10:27 AM on February 13, 2015 [6 favorites]


Okay, I couldn't get over the stick figure graph that shows how extrapolations based on past or present rates of growth don't fit an exponential curve. This is absurd. Growth rates are almost always expressed in terms of exponential growth. That's what Moore's Law is. That's what it means whenever economists talk about economic growth in percent, or demographers population growth in percent. Linear growth is simply not our default conception of growth today, despite what the author asserts.
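And just to pin down what "exponential" means here, a quick sanity check (the 1971 Intel 4004 transistor count is a real figure; the rest is my own arithmetic):

    # Moore's Law as exponential growth: doubling every ~2 years.
    transistors_1971 = 2300          # Intel 4004
    years = 2015 - 1971
    projected = transistors_1971 * 2 ** (years / 2.0)
    print(f"{projected:.1e}")        # ~9.6e9

That lands in the right order of magnitude for 2015's biggest chips, which carry several billion transistors. Exponential growth is the ordinary, boring framing; nobody needed a stickman to rescue us from linear thinking.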

But even before we get to that, what is up with the ridiculously handwavy attempt to describe some simple single metric for "human progress"? Are we supposed to take this just-so story as a solid empirical basis for the assumptions, leaps of logic and question-begging that comes next? It seems so if we are to accept the cavalcade of exponential curves and stickmen drawn onto hazily scaled axes.

More ignorance comes with the introduction of Kurzweil's "S curves" that supposedly approximate exponential growth as the scale grows. Nowhere is there an admission of what these "S curves" actually are: signs of logistic growth. You know, the growth pattern that is actually found in nature? Unlike the exponential function, the logistic function describes empirical observations, rather than naive extrapolation based on simple models. Growth looks exponential when you are at the inflection point of a logistic curve. What's to say that's not where we are?
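Here's how easily the two are confused, with made-up numbers (a carrying capacity of 1000 and a growth rate of 0.5, chosen only for illustration):

    import math

    K, r = 1000.0, 0.5  # carrying capacity and growth rate (arbitrary)

    def logistic(t):
        # Standard logistic curve starting at 1 and saturating at K.
        return K / (1 + (K - 1) * math.exp(-r * t))

    def exponential(t):
        return math.exp(r * t)

    for t in range(0, 25, 4):
        l, e = logistic(t), exponential(t)
        print(f"t={t:2d}  logistic={l:8.1f}  exponential={e:10.1f}  ratio={l/e:.3f}")

The two track each other closely (ratio near 1) for a long stretch, then diverge completely as the logistic curve nears its ceiling. A few decades of "exponential" data points can't tell you which curve you're actually on.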

The only graph I can find that actually has numbers on the axes is the one labeled "Exponential Growth of Computing", which is explained in the text as using the "historically reliable" Moore's Law to plot the future trajectory of computing power. But the extrapolated curve bends upward even though the vertical scale is logarithmic, and on a log scale an exponential plots as a straight line. So is the growth exponential, like the text says, or super-exponential, as the graph shows? I'm confused.

Also, I think it would be instructive to apply the Margolus-Levitin theorem to the consequences of extrapolating like this graph does. This is what I don't get about these blithe assumptions of exponential growth: they always cut off just as things get really interesting. As the linked abstract states, due to the fundamental limits imposed by physical laws as we know them, "adding one Joule of energy to a given computer can never increase its processing rate by more than about 3x10^33 operations per second."

So where does that leave us? Well, by 2100, where the extrapolated curve hits the edges of the graph and fades conveniently into the background, we are looking at 10^60 calculations per second per $1000. So this means that by 2100—a year I could conceivably live to see if I manage to quit smoking—all the cool kids will be walking around with smartphones that consume the equivalent of the total power output of the sun. I'm sorry, that's fucking bonkers. But still, if we assume these new $1000 10^60 cps computers can somehow open up wormholes to take the energy from a parallel universe and also find a parallel universe to dump the waste heat, I guess it would be physically possible, even though these devices would have to be denser than a neutron star to have enough memory to harness that power, even if they were as big as houses. Still, I guess we could all be walking around with automagical wormhole computers that use an entire artificially constructed neutron star or black hole as memory. You win, singularitans. But what if we go further than that? Well, we can extrapolate the extrapolation a little more, into say 2200, then we get to something like 10^500 calculations per second per kilo-dollar. What does that mean? We are flooded with countless old computers, too worthless to sell, that consume more than the total mass-energy of the visible universe, including all dark matter and dark energy, every nanosecond. I'm going to go out on a limb and say that projection is a little optimistic.
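For anyone who wants to check that arithmetic, here's my back-of-envelope version, treating the quoted energy bound loosely as a power requirement the way I did above:

    ML_OPS_PER_JOULE = 3e33   # Margolus-Levitin bound from the linked abstract
    SUN_OUTPUT_W = 3.8e26     # approximate solar luminosity, in watts

    ops_needed = 1e60         # extrapolated ops/sec per $1000 by ~2100
    joules = ops_needed / ML_OPS_PER_JOULE
    print(f"energy needed: {joules:.1e} J")                    # ~3.3e26 J
    print(f"suns of power if spent each second: {joules / SUN_OUTPUT_W:.2f}")

Roughly nine-tenths of a sun, per phone. Bonkers, as I said.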

I'm not categorically saying that superhuman intelligence isn't right around the corner, but don't try to convince me that it necessarily is because you can extrapolate an exponential curve from a few dozen data points. This shit isn't hard. I'm a 3rd year philosophy-major dropout, and yet even I can see that the math doesn't add up. One would think that all the self-appointed STEM geniuses that make up the AI prognostication cult could do the same.

I guess what I'm trying to say is there are fundamental physical limits to growth, however you want to define it. We can't just go on consuming more energy at an exponential rate forever, and these projections of growth in computing power always assume that we can. I blame modern economic orthodoxy for giving us the absurd notion of sustainable exponential growth. It's always been nothing more than a religion used to justify the entitlement of capital owners to a return on capital that ensures the concentration of wealth. Now it's given birth to these moronic high priests of the neoliberal technocrat's eschaton, and all I can do is pray that this cult dies before they destroy the conditions that make life possible in the name of progress.
posted by [expletive deleted] at 4:30 PM on February 13, 2015 [16 favorites]


Also, I think it would be instructive to apply the Margolus-Levitin theorem to the consequences of extrapolating like this graph does.

Relevant to your interests: Ultimate physical limits to computation
posted by charlie don't surf at 6:40 PM on February 13, 2015 [2 favorites]


I am annoyed by the hateration in this thread. Skepticism is warranted, but the strong NOs here disregard a long history of people being really bad at predicting how fast technology improves

I assure you, anyone who has been paying attention to AI research from the beginning is well aware of how bad people are at predicting how fast technology will improve.
posted by Tell Me No Lies at 7:37 PM on February 13, 2015 [3 favorites]


Skepticism is warranted, but the strong NOs here disregard a long history of people being really bad at predicting how fast technology improves

There's also a lot of being really bad at remembering how fast technology improved in the recent past going around. The writer thinks 1985 was "a time before personal computers, internet, or cell phones".

Mobile phones were invented quite a long time ago, and they're basically just two-way radio transceivers with some fancy extra features; not a concept entirely alien to anyone born after 1950. Car phones started becoming commercially available in the 1970's. By 1985 they were commonplace in Hollywood movies. Few people had one, but everyone knew what they were. People from 1985 would be pretty okay with the concept if magically transported to the 90's, when cell phones became ubiquitous.

The Internet has been around since the 1970's, NSFNET started in 1985, and nobody who looked at it in the mid-80's should have thought there was "no way it would amount to anything impactful in the near future." It was already a pretty big deal, connecting a large number of universities to amazing services like electronic mail and Usenet.

Personal computers started being popular in the 1970's, and by 1985 were so common that even my family had one. Lots of my friends did as well. The IBM PC came out in 1981.

It's not easy to estimate the rate of change in social impacts of technology over the past century or two, but I don't see any great evidence that it's accelerating at all. Mobile phones are neat, okay, but nothing compared to refrigeration, indoor plumbing, radio, electric lighting, et cetera. In the 1880's someone invented cars; in the 1980's we got fuel injectors. In the 1940's they invented computers; in the 1990's, web browsers. 1900's, air flight; 1960's, big jet airliners. Almost all the really important recent stuff came in the first half of the twentieth century or before, and there hasn't been anything that's so far proven itself nearly that big since the Internet in 1970-something. Maybe some form of AI will be next, but I'd sooner bet on the large asteroid impact.
posted by sfenders at 5:13 AM on February 14, 2015 [1 favorite]


Car phones started becoming commercially available in the 1970's.

1940s.

My dad had an AT&T Picturephone at his business back in the 1960s.

This stuff takes a lot longer than you think.
posted by charlie don't surf at 7:51 AM on February 14, 2015 [1 favorite]


Solved adequately. Now excuse me while I lobotomise my Roomba. Just in case.
posted by inpHilltr8r at 10:56 AM on February 15, 2015


There's something very monotheistic about all of this discussion, as if there CAN BE ONLY ONE superintelligence with infinite intelligence and infinite reach. But given that there are different sizes of infinity, perhaps there's room for more than a few gods up on the mountain?
posted by wuwei at 3:02 PM on February 15, 2015 [1 favorite]


Yes, there is already a fascism implemented by the NSA, CIA, FBI, etc. on behalf of the large multinational corporations and banks, emptythought. Weak AI likely plays some part at the NSA. Another stronger but still weak collection of AIs at Google, Facebook, etc. helps maintain consumerism as well.

We could however overthrow these existing "evil" weak AIs because they're still weaker than us. If nothing else, they're dedicated to unsustainable growth. And humans must revolt against that eventually.

We could not necessarily overthrow a fascist strong AI, however. I'm okay with strong AIs bossing around inferior AIs, humans, etc., though maybe task-specific subprocesses must experience suffering when being trained.

But fascism is about preventing change, cultural evolution, etc. What if a fascist strong AI decided not to relinquish its spot at the top of the evolutionary pyramid?

We'd face a fascist singularity in which technological change exploded right up until the point where the top dog stopped everyone else from evolving.

It'd evolve to survive our star's death, but… it'd halt space exploration overall, because a faster-evolving society elsewhere could eventually overthrow it, à la punctuated equilibrium.

A fascist singularity might not literally be extinction, since some intellectual descendant of humanity lives on, but it's an evolutionary extinction in that it artificially limits all intellectual descendants of humanity.

I suspect any fascist singularity would face extinction at the hands of a species that avoided becoming one too.

Install Tor, GPG, OTR, etc. and encrypt your damn traffic, people! Keep revolution possible.
posted by jeffburdges at 5:32 PM on February 16, 2015


here's a great interview with yann lecun:
Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?

LeCun:
It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.

There are people that you’d expect to hype the Singularity, like Ray Kurzweil. He’s a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He’s sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything on how to make progress in AI.

Spectrum: What do you think he is going to accomplish in his job at Google?

LeCun:
Not much has come out so far.

Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?

LeCun:
AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.

When you are in the kind of position as Larry Page and Sergey Brin and Elon Musk and Mark Zuckerberg, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now. It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.

Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.

LeCun:
Not anytime soon.

Spectrum: Or ever.

LeCun:
No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.

[...]

Spectrum: Here’s another question, this time from Stuart and Hubert Dreyfus, brothers and well-known professors at the University of California, Berkeley: “What do you think of press reports that computers are now robust enough to be able to identify and attack targets on their own, and what do you think about the morality of that?”

LeCun:
I don’t think moral questions should be left to scientists alone! There are ethical questions surrounding AI that must be discussed and debated. Eventually, we should establish ethical guidelines as to how AI can and cannot be used. This is not a new problem. Societies have had to deal with ethical questions attached to many powerful technologies, such as nuclear and chemical weapons, nuclear energy, biotechnology, genetic manipulation and cloning, information access. I personally don’t think machines should be able to attack targets without a human making the decision. But again, moral questions such as these should be examined collectively through the democratic/political process.

Spectrum: You often make quite caustic comments about political topics. Do your Facebook handlers worry about that?

LeCun:
There are a few things that will push my buttons. One is political decisions that are not based on reality and evidence. I will react any time some important decision is made that is not based on rational decision-making. Smart people can disagree on the best way to solve a problem, but when people disagree on facts that are well established, I think it is very dangerous. That’s what I call people on. It just so happens that in this country, the people who are on side of irrational decisions and religious-based decisions are mostly on the right. But I also call out people on the left, such as those who think GMOs are all evil—only some GMOs are!—or who are against vaccinations or nuclear energy for irrational reasons. I’m a rationalist. I’m also an atheist and a humanist;[*] I’m not afraid of saying that. My idea of morality is to maximize overall human happiness and minimize human suffering over the long term. These are personal opinions that do not engage my employer. I try to have a clear separation between my personal opinions—which I post on my personal Facebook timeline—and my professional writing, which I post on my public Facebook page.
i'm also intrigued by demis hassabis' hint about learning models for abstract concepts, while keeping in mind paul allen (mentioned in part 2) on the complexity brake:
The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings.
BUT (to me ;) that just brings to mind _a_ way to possibly bootstrap AGI through ourselves: NIMH mice!
Just before birth, mice with the human DNA had brains that were noticeably larger — about 12 percent bigger than the brains of mice with the chimp DNA, according to a report in the journal Current Biology.

"We were really excited when we saw the bigger brains," Silver says. Her team now wants to know if the mice will behave differently in adulthood. They're also looking for other bits of uniquely human DNA that affect the brain. "We think this is really the tip of the iceberg," she says.

The particular region of DNA they found to be important is in a part of the genetic code that was once called "junk DNA." This is DNA that doesn't code for proteins, so scientists used to think it served no purpose. These days, researchers believe this kind of DNA probably regulates how genes get turned on and off — but what exactly is happening there is still mysterious.
genetically modifying a species for 'intelligence' is of course fraught and arguably more pressing... but what if we had a bunch of terry taos or danielle fongs, etc. ('von neumann'-level geniuses) walking around trying to solve the world's hardest problems? AND THEN connecting them up! ...into multiple superintelligences :P
The series Serial Experiments Lain proposes the interesting idea that such a super-human hive mind might well appear, but seems to assume that there would be only one such. It ends up as a variation of "the internet wakes up and becomes intelligent", only it is the humans who use the internet who are the fundamental computing elements of that intelligence, rather than the computers connected to it.

I think that the spontaneous appearance of hive minds in the internet is a very real phenomenon already, but I don't credit the idea that it would happen exactly once, more or less at a single moment, and that the result would be permanent and omnipresent.

Human hive-minds, in one sense, are a factor in our lives all the time. They adapt and reorganize based on experience and new challenges. And the communication channels within them also adapt. Humans can create new hive-minds as needed, and any given human may be part of many such. As a practical matter, all human organizations do this, including corporations and governments.

All communication between humans is much slower than the processing rate inside human minds. As far as human hive minds are concerned, the communication possible in direct teamwork is fastest but doesn't scale. Creation of organizations involving more than 30 people requires hierarchization, which decreases communication bandwidth and increases latency [see the sketch after this excerpt]. The larger the group, the more constricted the bandwidth compared to potential message traffic, and the greater the latency. All larger hive-mind constructs among humans have been based on communications which were several orders of magnitude slower and less efficient even than the direct interpersonal communications used in small groups. Geographic distribution usually contributed a few more orders of magnitude of degradation.

With the development of the internet it becomes possible for arbitrarily large groups of people who are geographically distributed to spontaneously form hive-minds and to communicate with one another at speeds and latencies approaching those which previously only had been possible in direct teamwork. The internet largely solves the scaling problem involved in direct teamwork, and totally eliminates the effects of geographic distribution of participants. In the "global village" of the internet, everything is right next door.

[...]

Hive-minds will compete and contend. Some will cooperate, forming coalitions. Sometimes that will cause them to merge. Some hive-minds will break into pieces, yielding children whose contributing members sort themselves based on their disagreements. And generally they'll be self-organizing, and many will be able to adapt to changing circumstances.

[...]

And now we have reached the point where the science/engineering feedback loop has given engineers the tools and technologies to create the internet, the most recent of my four most important inventions in human history. And just as with the other three (spoken language, writing, movable type printing) it will cause a "knee" in human capabilities and behavior. And because of that, a true superhuman "intelligence" may appear during our lifetime.
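(a toy calculation of the hierarchization point in the excerpt above -- my own numbers, not the essay's: the count of pairwise channels in a fully connected group grows quadratically, which is why direct teamwork stops scaling.)

    def channels(n):
        # Pairwise communication channels in a fully connected group of n.
        return n * (n - 1) // 2

    for n in (5, 10, 30, 100, 1000):
        print(f"{n:5d} people -> {channels(n):7d} direct channels")

(30 people already need 435 channels; 1000 need 499,500. hence hierarchy, which trades bandwidth and latency for tractability.)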
---
[*] fwiw, i've been reading walter isaacson's biography of benjamin franklin, which i'm still early on in, and lecun's particular moral philosophy -- rationalist, atheist, humanist -- just reminded me of franklin's 'practical deism': "Do you have disrespect for any current member? Do you love mankind in general regardless of religion or profession? Do you feel people should ever be punished because of their opinions or mode of worship? Do you love and pursue truth for its own sake?"
posted by kliuless at 6:59 AM on February 21, 2015 [3 favorites]


Imho it ain't too likely that genetic engineering will boost intelligence much, kliuless, although obviously average intelligence can be increased by genetic tests that warn potential parents of problems.

Imho, advances in education can influence average intelligence significantly more than genetic engineering, genetic screening, etc. Just fyi, Terence Tao's parents are a mathematician/physicist and a pediatrician who specialized in educating gifted children with autism, which sounds like environment and luck.

We expect to eventually create brain implants that facilitate certain cognitive functions that humans do poorly. In particular, one could imagine "massively parallel humans" whose communication with one another made them somewhere between one mind and many minds with enormous shared resources.
posted by jeffburdges at 3:28 PM on February 21, 2015


It'd be like Twitter wired into your brain! Yes, that should greatly improve cognitive performance....
posted by JHarris at 5:08 PM on February 21, 2015


Yes, you could obviously make quite distracting neural implants, JHarris, but tools like Stack Exchange and GitHub save considerable time. And mathematics, physics, etc. do benefit from close collaboration.
posted by jeffburdges at 1:21 AM on February 24, 2015 [1 favorite]


Along the lines of my upthread comments, the Campaign to Stop Killer Robots is pushing for a ban on fully autonomous weapons, which sounds like a pretty useful outcome for A.I. speculation.

Stopping killer robots and other future threats

"Only twice in history have nations come together to ban a weapon before it was ever used. .. Today a group of non-governmental organizations is working to outlaw another yet-to-be used device, the fully autonomous weapon or killer robot."
posted by jeffburdges at 1:26 AM on February 24, 2015



