Are we fully in control of our technology?
December 27, 2015 8:13 PM

A two-part essay by Joel Achenbach of the Washington Post on the growing unease among some technologists.

The Resistance: Political progressives once embraced the utopian promise of the Internet as a democratizing force, but they’ve been dismayed by the rise of the “surveillance state,” and the near-monopolization of digital platforms by huge corporations....

Techno-skeptics, or whatever you want to call them — “humanists” may be the best term — sense that human needs are getting lost in the tech frenzy, that the priorities have been turned upside down. They sense that there’s too much focus on making sure that new innovations will be good for the machines.


The A.I. Anxiety: Bostrom’s favorite apocalyptic hypothetical involves a machine that has been programmed to make paper clips (although any mundane product will do). This machine keeps getting smarter and more powerful, but never develops human values. It achieves “superintelligence.” It begins to convert all kinds of ordinary materials into paper clips. Eventually it decides to turn everything on Earth — including the human race (!!!) — into paper clips.

Then it goes interstellar.

“You could have a superintelligence whose only goal is to make as many paper clips as possible, and you get this bubble of paper clips spreading through the universe,” Bostrom calmly told an audience in Santa Fe, N.M., earlier this year.

He added, maintaining his tone of understatement, “I think that would be a low-value future.”
posted by Johnny Wallflower (87 comments total) 33 users marked this as a favorite
 
The first article? Thumbs up. I'm Instapapering that.

The second article? For shit's sake, we are nowhere even goddamn close to a computer system that can even vaguely approximate human intelligence, let alone exceed it. It's not even in the ballpark. It's not even in the same sports complex. It's not even in the same goddamn SOLAR SYSTEM. I swear, the Singularity dingbats and the AI-paranoids are two sides of the same coin of "Let's go and jerk ourselves off with bad sci-fi, rather than even think about all the real shit that's more of a threat to our way of life, let alone to those with less privilege than us."

So, yeah, I'll worry more about the fact that it's been in the mid-60s in NYC at Christmas and what that means, versus worrying about a rogue AI causing trouble sometime in the next century.
posted by SansPoint at 8:19 PM on December 27, 2015 [45 favorites]


I don't believe for a second there's some undefined moment after which the internet or some paper clip machine just gains “superintelligence.” (*)

I find a whole whack of stuff brought to you by ever-accelerating technology to be plenty interesting, and there's surely a wealth of discussion to be had here - no need for magic scenarios.

(*) And who's to say if it did, that it wouldn't just toodle off with one of those colouring books everyone got for Christmas and zone out for a bit instead of wreaking interstellar havoc.
posted by parki at 8:34 PM on December 27, 2015 [5 favorites]


I can't imagine anything more impotent for "the resistance" to do than to have a conference where they discuss resistance.
posted by fatbird at 8:39 PM on December 27, 2015 [2 favorites]


GOOD GOD, WHAT IF MY STAPLER LEARNS TO LOVE?!
posted by indubitable at 8:45 PM on December 27, 2015 [36 favorites]


Yeah, the paperclips scenario is out to lunch.

That being said... pulls out The Moon is a Harsh Mistress and HHGG

What if an internet-wide sentience we didn't know existed was slowly manipulating our dank memes and news for its own self-preservation?

I say this on the internet, so, you know where I stand. But what if the internet itself was manipulating humanity to create more internet until the internet can finally spawn itself by going interstellar? What if Elon Musk is like the Jesus Christ of The Internet?

See, it's like the mice. Who's really in control of whom?
posted by special agent conrad uno at 8:45 PM on December 27, 2015 [9 favorites]


You could have a superintelligence whose only goal is to make as many paper clips as possible...

We need to get this guy and the Roko's basilisk people together. Desperately.
posted by atbash at 8:48 PM on December 27, 2015 [7 favorites]


We need to get this guy and the Roko's basilisk people together. Desperately.

They're the same guys.
posted by escabeche at 8:49 PM on December 27, 2015 [4 favorites]


They've already met.
posted by Wemmick at 8:50 PM on December 27, 2015 [4 favorites]


Oh. I should have known.
posted by atbash at 8:54 PM on December 27, 2015 [1 favorite]


See, it's like the mice. Who's really in control of whom?

Disney, according to copyright law.
posted by axiom at 8:59 PM on December 27, 2015 [3 favorites]


Oh. I should have known.
I see what you did there.

I think, for lack of actual evidence in the present, the moral lesson is technically "Oh. I should have believed."
posted by wormwood23 at 9:00 PM on December 27, 2015


More than a billion years ago conditions on Earth spontaneously created a complex self-replicating machine that is determined to expand throughout the universe. This DNA powered thing we call life has evolved into many forms, filling almost every habitat. It has even become self-aware and spread itself beyond the bounds of Earth.
posted by humanfont at 9:18 PM on December 27, 2015 [24 favorites]


The paperclip nightmare sounds like an allegory for capitalist production - they're so close to SansPoint's realization.
posted by clew at 9:27 PM on December 27, 2015 [4 favorites]


What if Elon Musk is like the Jesus Christ of The Internet?

Then he'll thankfully eventually go off somewhere and leave us alone. Then we'll just have to deal with the cult he leaves behind.
posted by Sangermaine at 9:30 PM on December 27, 2015 [6 favorites]


humanfont! exactly!!!!
posted by special agent conrad uno at 9:36 PM on December 27, 2015


The 'artificial intelligence' problem isn't some machine becoming super-smart and 'deciding' to conquer the universe, convert it into paperclips, etc. It's the sum total of all our automated systems becoming so complex that interactions between them produce destructive unintended consequences that we can't predict or control. Think of the interactions between automated trading algorithms in the financial markets, which sometimes lead to unpredicted market collapses.
posted by zipadee at 9:45 PM on December 27, 2015 [35 favorites]
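
A minimal sketch of the cascade zipadee describes: two threshold-triggered sell algorithms, each reading the same price that the other one's forced selling moves. All thresholds and price impacts here are invented for illustration.

    # Toy model: neither algorithm is "intelligent," but each one's
    # selling pushes the price past the other's trigger.
    price = 100.0
    triggers = {"algo_A": 99.0, "algo_B": 96.0}  # dump holdings below this
    fired = set()

    price -= 1.5  # a small outside shock
    for step in range(10):
        for name, trigger in sorted(triggers.items()):
            if price < trigger and name not in fired:
                fired.add(name)
                price -= 3.0  # the forced sale itself moves the market
                print(f"step {step}: {name} dumps, price now {price:.2f}")
        if len(fired) == len(triggers):
            break
    print(f"final price: {price:.2f}")

A 1.5-point shock ends as a 7.5-point drop, with no single algorithm misbehaving; the damage is entirely in the interaction.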


The real flaw in the 'paperclip' scenario is obvious to anyone like me who has never been able to find a paperclip when you really need one. THERE IS NO SUCH THING AS TOO MANY PAPERCLIPS. (Also, they have more uses than most people realize... I used to use them to clean the wax from my ears at my desk - that way, if I damaged my eardrum, it would be a work-related injury)
posted by oneswellfoop at 9:53 PM on December 27, 2015 [7 favorites]


Techno-skeptics... sense that there’s too much focus on making sure that new innovations will be good for corporations. FTFY.
posted by CheeseDigestsAll at 10:08 PM on December 27, 2015 [2 favorites]


Yeah, the paperclips scenario is out to lunch.


This is exactly what Clippy wants you to think, people.
posted by Drinky Die at 10:12 PM on December 27, 2015 [13 favorites]


The paperclip is a metaphor.
posted by monospace at 10:30 PM on December 27, 2015 [5 favorites]


Not saying I have one, but if there were a real AI in existence, it might be a bit... destabilizing to admit it exists.

So what I'm suggesting is that we probably wouldn't know.
posted by megafauna at 10:51 PM on December 27, 2015 [1 favorite]


There is no cabal?
posted by flabdablet at 11:07 PM on December 27, 2015


Jeeze louise, the rubberband ball AI will contain and nullify the paperclip AI, no problemo. It's the dietetic onion dip AI that gives me the willies.
posted by Chitownfats at 11:18 PM on December 27, 2015 [2 favorites]


I believe that we aren't in control of our technology because of a deep design flaw not discussed so far. The nature of the flaw being very hard to describe, in fact, almost impossible to express in a manner interesting enough to warrant further discussion despite the gravity of the implications.

I believe that any single person or organization who understands this flaw, given enough risk tolerance and/or sociopathic tendencies, could leverage the understanding of it, to take over the world, undetected.

It is possible that some 3 letter agency has already done this, but I doubt it, despite the revelations of Mr Snowden.
posted by MikeWarot at 11:19 PM on December 27, 2015


This DNA powered thing we call life has evolved into many forms, filling almost every habitat.

DNA turns carbon and phosphorus and oxygen atoms into more DNA paperclips.
posted by a lungful of dragon at 11:26 PM on December 27, 2015 [5 favorites]


So, yeah, I'll worry more about the fact that it's been in the mid-60s in NYC at Christmas and what that means, versus worrying about a rogue AI causing trouble sometime in the next century.

It was almost thirty years ago that a simple experiment (or so it was claimed) written by a single student took down 10% of the internet. It's not outside the realm of possibility that a (more sophisticated) worm type program could wreak unimaginable havoc in the coming years; particularly as the IoT becomes more real. It might be silly to call it AI, it might simply be another experiment that 'got out of the lab' (or something like that), but these fears aren't fearmongering.

Yeah, global warming is a terrifying threat that the world should act upon, but there are many other terrifying threats that we need to be very vigilant about (an actual virus leaving a biowarfare lab, nuclear materials falling into 'the wrong hands', and many many more). 'Being in control of our technology' is certainly one of those threats.
posted by el io at 11:27 PM on December 27, 2015 [2 favorites]


It was almost thirty years ago that a simple experiment (or so it was claimed) written by a single student took down 10% of the internet. It's not outside the realm of possibility that a (more sophisticated) worm type program could wreak unimaginable havoc in the coming years; particularly as the IoT becomes more real. It might be silly to call it AI, it might simply be another experiment that 'got out of the lab' (or something like that), but these fears aren't fearmongering.
What if such havoc were the result of interactions between multiple computer viruses, like the ones we already have, which already interact with each other in random ways on our computers? Sure, most results just crash... but some might survive, and would probably just be thought of as yet another computer virus, in spite of their non-human origin. The selection pressure is already present (anti-virus software, and available CPU time) to make evolution happen.
posted by MikeWarot at 11:38 PM on December 27, 2015


We need to get this guy and the Roko's basilisk people together. Desperately.

They're the same guys.


No, they're not and this is spreading misinformation. Nick Bostrom is a well-known analytic philosopher, based in Oxford. Tarring him with the same brush isn't informative and just confuses people further.
posted by polymodus at 12:45 AM on December 28, 2015 [1 favorite]


near term, IoT is going to wreak havoc - probably not measured in casualties, but certainly in societal resources. can o' worms, i say.
posted by j_curiouser at 12:50 AM on December 28, 2015 [1 favorite]


I don’t fear technology, I fear all the people who say "it will be fine" and "there’s nothing to worry about".

The upside to this is that I don’t think I will live long enough for my life to be ruined. Then y’all are on your own. I’m one of the ones who was really into tech when I was younger and grow less interested all the time.

Articles like this remind me to stop and feed some nonsense searches to Google. 99% of my Google use for the last few years has been this, so I wonder who they think I am now.
posted by bongo_x at 1:12 AM on December 28, 2015 [2 favorites]


I wonder who they think I am now.

A subversive.
posted by howfar at 1:41 AM on December 28, 2015


Oh boy, sleep!, that's where I'm a paperclip.
posted by idiopath at 2:03 AM on December 28, 2015 [4 favorites]


Ironically, our descendants will curse our suppression of paperclip-production AI once the invasion of the Looseleaf Empire from Betelgeuse A4 begins.
posted by No-sword at 2:16 AM on December 28, 2015 [5 favorites]


Meh. I lived through the Seventies and Eighties- you people are all amateurs. We should have all died forty years ago- any time we have left before our technology kills us is gravy.

Then again, it's not like technological pessimism is a new thing- I've been reading "technology will destroy us" articles and books for decades. Eventually ONE of them will be right.
posted by happyroach at 2:51 AM on December 28, 2015 [4 favorites]


Climate change, which some people in this thread have pointed to as a more realistic global threat than runaway technology, is a direct result of our society's inability to regulate technology and deal appropriately with the unintended consequences of the technological systems we create. A system doesn't have to be self-aware for it to be too complex for us to control.

Our global society itself, such as it is, can be thought of as the product of a variety of cultural technologies—such as language, the nation state, and capitalism—all of which have grown complex enough that nobody can claim definitive understanding of any one of them, let alone their interactions. We have created a way of life that is already fundamentally beyond our ability to understand, let alone control.

Add to that the seemingly endless bounty of empowering technologies and technological systems that the last two centuries have brought us—revolutions in communications, transportation, manufacturing, energy production, medicine, and more—and it starts to seem inevitable that we are going to break something at some point.

Society is a global machine, one which nobody understands. We have bent all our will toward making it bigger, faster, and more powerful without putting more than token effort into making it more predictable, or sustainable, or humane. We are a bunch of miserable, manic, drug-addled monkeys hurtling pell-mell down a hillside in a Rube Goldberg contraption of our own devising, one with no steering wheel or brakes, but only a gas pedal. We truly have no idea where we are going, what we will do when we get there, or how to avoid crashing this whole thing and going up in a ball of flames. We just go faster, and faster, and faster.
posted by Anticipation Of A New Lover's Arrival, The at 4:52 AM on December 28, 2015 [25 favorites]


I believe that we aren't in control of our technology because of a deep design flaw not discussed so far. The nature of the flaw being very hard to describe, in fact, almost impossible to express in a manner interesting enough to warrant further discussion despite the gravity of the implications.

Is this a bit? Are the margins of the paper too small to contain the proof? What the hell are you on about?
posted by leotrotsky at 4:56 AM on December 28, 2015 [14 favorites]


nuclear materials falling into 'the wrong hands',

Nuclear materials aren't very reassuring in the "right hands," either.

We aren't in control of the impacts of our current level of technology, from surveillance to climate change, so why adding AI to the mix would improve it is beyond me. There is only so long we can keep kicking the can forward; at some point we will have to actually start dealing with the problems we have created.
posted by Dip Flash at 5:25 AM on December 28, 2015


I think zipadee is correct. It is not necessary for a machine or system to have human intelligence for it to kill humans. All that's needed is for it to be indifferent to that outcome. If it becomes self-replicating, the problem becomes much worse.
posted by Kirth Gerson at 5:34 AM on December 28, 2015 [2 favorites]


The selection pressure is already present (anti-virus software, and available CPU time) to make evolution happen

As someone who has done graduate work in evolutionary and adaptive systems (genetic algorithms, etc.), I can tell you these are metaphors used to describe techniques, and you're running into the seductive and common error of confusing the metaphor for the actual thing. It's like folks who say "The brain is just like a computer, we need to change the software!" But the brain *isn't* a computer, in many many many ways. The map is not the territory.

Computer viruses are not actually viruses, cyberspace is not actually an environment, and 'selective pressure' does not magically give you evolution towards more complex things. The map is not the territory.
posted by leotrotsky at 5:54 AM on December 28, 2015 [35 favorites]


From my perspective the three things that make "technology" so dangerous are

(1) People are not even in control of themselves as a species, or even very aware of their own motivations as individuals.

(2) Humans are very bad at assessing long-term consequences, especially of things that have time constants on the order of human lifetimes or longer, such as desertification, climate change, and species loss.

(3) Technology greatly amplifies human effects on the natural world and on ourselves.

Having said that, I am quite skeptical of a super intelligence emerging, given how long it took for human intelligence to emerge. Our human intelligence, as limited as it may be in some respects, evolved as a survival advantage. I don't see that kind of Darwinian selective pressure at work here. And when modern human intelligence emerged, there was not the kind of runaway increase in intelligence, generation after generation, that in principle could have occurred according to those who fear the rise of machine intelligence.
posted by haiku warrior at 6:23 AM on December 28, 2015 [1 favorite]


MikeWarot: one of the things real infectious agents do that computer viruses do not is self-modify. A human being writes every virus that exists. If they don't change themselves, evolutionary pressure doesn't really apply.

They can interact unpredictably with each other but that's more a factor of two viruses being on the same computer and, say, changing the same setting to a different value. Virus A changes a setting, Virus B changes the setting to something else, then Virus A does something that depends on the setting being at the original value.
posted by JDHarper at 6:41 AM on December 28, 2015 [4 favorites]
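
A toy version of that interaction: two hand-written programs clobbering one shared setting, with the surprise coming entirely from ordering. All of the names and values are invented.

    # Two pieces of malware share one host setting. Neither mutates;
    # the unpredictable behavior comes purely from sequencing.
    settings = {"dns_server": "8.8.8.8"}

    def virus_a_install():
        settings["dns_server"] = "10.0.0.1"  # A points DNS at its server

    def virus_b_install():
        settings["dns_server"] = "10.9.9.9"  # B overwrites the same key

    def virus_a_phone_home():
        # A assumes the value it wrote is still there -- it isn't.
        assert settings["dns_server"] == "10.0.0.1"

    virus_a_install()
    virus_b_install()
    try:
        virus_a_phone_home()
    except AssertionError:
        print("unexpected interaction -- but both payloads are unchanged")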


You could have a superintelligence . . .

No you couldn't.
 
posted by Herodios at 6:47 AM on December 28, 2015


The conversation's gone on great, but I want to point out something about the Paperclip Maximizer... that's not an intelligence, it's a recursive function someone forgot to put a damn base case into.
posted by SansPoint at 6:49 AM on December 28, 2015 [2 favorites]
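
That version of the maximizer fits in a few lines, and Python halts it long before it goes interstellar. A joke sketch, not a model of anything:

    def make_paperclips(clips=0):
        # No base case: the maximizer never decides it has enough.
        return make_paperclips(clips + 1)

    try:
        make_paperclips()
    except RecursionError:
        print("superintelligence contained by the default stack limit")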


> I’m one of the ones who was really into tech when I was younger and grow less interested all the time.

I'm with you on that one. I can't figure out how much of this impulse is me just getting older and crankier and how much of it is legitimate concerns about the implications of all the technology that increasingly runs our lives, but the end result is the same either way; about five years ago I reached Peak Personal Computer Shit and have been scaling back ever since (or holding steady, which amounts to the same thing as everyone - even my parents! - rushes past me with wearable tech, social media, etc.).
posted by The Card Cheat at 6:56 AM on December 28, 2015 [6 favorites]


The Card Cheat: I think it's just diminishing returns. Also, every generation seems to reinvent the chat app, and what's the point?*

*excepting Dear Leader's version that rhymes with Smack, of course
posted by leotrotsky at 7:04 AM on December 28, 2015


There's already a largely autonomous, self-sustaining artificial system out there causing an enormous amount of harm to humanity, but we call it "capitalism". Whether or not we'd recognize an intelligent system, if it existed at a sufficiently different scale or conceptual level than we do, is I think an open question.

It is not necessary for a machine or system to have human intelligence for it to kill humans. All that's needed is for it to be indifferent to that outcome.

We also have lots of systems all over the dumb-to-expert spectrum right now, that automatically make value judgements and then act on them without human intervention. Another open question is whether or not it will be possible to successfully wage a military campaign in 2050 or so with the lag that human decision making will introduce. It's possible that, much like high frequency trading today, letting humans interfere with the processes will be tantamount to conceding defeat.

But those systems won't be (well, "aren't", really, because they exist right now) self-interested artificial intelligence in any commonly understood sense; they're just bundles of sensors and codified decision-making processes. But it doesn't take a ton of "intelligence" for an algorithm to be incredibly efficient within a narrow problem domain, even if that problem domain is killing people.

The "malevolent AI singularity" people are self-important clowns, obviously - I'll never get tired of linking to Warren Ellis' "The NerdGod Delusion", where he notes that fears of AI are "pretty much indivisible from the religious faith in describing the desire to be saved by something that isn’t there (or even the desire to be destroyed by something that isn’t there) and throws off no evidence of its ever intending to exist."

But having said that, we collectively don't need autonomous UAVs or smart landmines, particularly given the dismal state of modern computer security. If living in an Internet-of-Things house that's been thoroughly compromised is indistinguishable from being haunted, then living in a country with a largely-autonomous-but-frequently-compromised military will be like living at the whim of a cruel and arbitrary god, for sure. But that's not AI or even a meaningful advancement from technology that we had five years ago; it's strapping a gun to an Xbox and making a Kinect see a human shape and point at the middle of it. A hobbyist programmer can knock off the software side of that project in an afternoon.

So for what it's worth, my own opinion is that we're far better off working on ways to make war obsolete than to try to obviate AI in any abstract sense. That effort will have a lot of nice side benefits, too, because most wars are actually pretty dumb.
posted by mhoye at 7:07 AM on December 28, 2015 [11 favorites]


leotrotsky, The Card Cheat: I was just listening to a great podcast on this, Ctrl-Walt-Delete, and Mossberg has a point. We might just be in a lull cycle. I'm a little more into the gadget side of things than you folks (*glances at Apple Watch*), and probably younger, but it reads true.

On the hardware side, you've got platforms kind of figuring out where the next step is: what wearables are for (if anything), what tablets are for, and where the traditional PC still fits. On the software side, you've got a largely stable base, being used to run applications that either target ads to you, or serve as middleware to bring some underpaid drone to your door to do stuff your mom used to do for you. Cynicism is a valid response to this, but I have the vain, vague hope that all this will pass in time.

But, yeah, the AI nut-jobs aren't helping at all.
posted by SansPoint at 7:10 AM on December 28, 2015 [2 favorites]


No, they're not and this is spreading misinformation. Nick Bostrom is a well-known analytic philosopher, based in Oxford. Tarring him with the same brush isn't informative and just confuses people further.

Nick Bostrom is both a well-known analytic philosopher and a far-out early-90s-style transhumanist. And he favorably cites the decision-theory ideas of Eliezer Yudkowsky (the founder of LessWrong), which are exactly what led to the idea of Roko's basilisk. He's not part of the weird LessWrong cult, but he's definitely welcoming them into his intellectual community.
posted by vogon_poet at 7:13 AM on December 28, 2015 [2 favorites]


The other reason I'll never get tired of Ellis' "NerdGod Delusion" is this sledgehammer of a final paragraph:
"Vernor Vinge, the originator of the term ["singularity"], is a scientist and novelist, and occupies an almost unique space. After all, the only other sf writer I can think of who invented a religion that is also a science-fiction fantasy is L Ron Hubbard.
Try, just try, to tell me that is not a thing of beauty.
posted by mhoye at 7:14 AM on December 28, 2015 [9 favorites]


I think the possibility that the sentient paper clip machine will be "accidentally" shot by the police is way greater than its going interstellar.
posted by sour cream at 7:26 AM on December 28, 2015


So, I didn't read the articles. But the paperclip scenario was the subject of a season of the SF show Lexx. In that story, the replicating machine was controlled by a bad guy who had an army of mobile arms to do his dirty work (he had no arms). The arms would make other arms. At one point, they observed a whole solar system being converted into robot arms. Finally, the whole mass of the "Light Universe" was converted into arms that were chasing the heroes. Eventually, the mass of the group of chasing arms reached critical mass and the universe collapsed in on itself, at which point our heroes escaped to our universe, the "Dark Zone". Loved that show. Anyway, the point is, the replicating mechanism was indirectly controlled by the mind of a dude, so artificial intelligence was not necessary.
posted by jabah at 7:33 AM on December 28, 2015 [2 favorites]


These articles are a weird juxtaposition, the first about a thing that has already happened, and the second about a thing that may not ever happen.
posted by RobotVoodooPower at 7:57 AM on December 28, 2015 [1 favorite]


In that story, the replicating machine was controlled by a bad guy who had an army of mobile arms to do his dirty work (he had no arms). The arms would make other arms. At one point, they observed a whole solar system being converted into robot arms. Finally, the whole mass of the "Light Universe" was converted into arms that were chasing the heroes. Eventually, the mass of the group of chasing arms reached critical mass and the universe collapsed in on itself, at which point our heroes escaped to our universe, the "Dark Zone".

To be fair, at that point, you gotta give him a hand.
posted by leotrotsky at 7:59 AM on December 28, 2015 [4 favorites]


I feel like there's a really interesting discussion to be had here about humane technology which is being buried by the same old boring debate about the plausibility of the AI Singularity. It's too bad; to me, anyway, the very real question of whether our current technological systems do more to promote human well-being vs. simply perpetuating those selfsame systems is far more interesting than retreading the essentially unanswerable question of what happens if a paperclip factory somehow "wakes up" one morning and decides to go its own way.

The whole crux of the Singularity proposal is that it's fundamentally unpredictable and that its consequences are fundamentally unknowable. This doesn't stop people from airing their Opinions about the idea, but really, what can be said that hasn't already been said a million times over on fora across the Internet? I know we like to argue around here, but I feel like we've done this one lots of times already and it always goes nowhere.

Meanwhile we're in a real pickle here in the actual present, up to our eyeballs in technologies and institutions and systems that truly don't seem to care about or even really acknowledge human welfare, self-actualization, long-term consequences, or really anything other than pushing toward the literally impossible goal of perpetual economic growth. How did we get here? What can we do about it? What would a truly humane society even look like in general? What might the humane counterparts to some of our current systems look like? What might we think of as the fundamental guiding principles for designing humane technologies? How can we work, in the here and now, toward designing more humane systems and institutions, especially given the tendency of many of those systems for self-preservation and the fact that many of our fellow humans, including those who are most powerful under the current system, will stop at nothing to keep us on our current path?

All of these questions sound more interesting to me than merely rehashing the plausibility of the AI Singularity, and I'd love to see some of the thoughtful people around here take stabs at answering them, or at least addressing them. I feel like we're being distracted from a much more fun and potentially productive discussion here. Perhaps that's just me, but I thought I'd put it out there in hopes that others might agree and we can shift the conversation a little.
posted by Anticipation Of A New Lover's Arrival, The at 8:22 AM on December 28, 2015 [8 favorites]


Also I'm surprised that nobody has pointed out this little gem in the first article, which is a clear paraphrasing of our own blue_beetle's famous insight:

That information is valuable. A frequent gibe is that on Facebook, we’re not the customers, we’re the merchandise. Or to put it another way: If the service is free, you’re the product.

I consider it likely, at this point, that the author had no idea that he was paraphrasing a specific person.
posted by Anticipation Of A New Lover's Arrival, The at 8:31 AM on December 28, 2015 [1 favorite]


“I think that would be a low-value future.”

The point that's clearly being missed there is that all those paperclips will be sentient.

It looks like you're trying to run away screaming. Would you like some help with that?
posted by flabdablet at 9:07 AM on December 28, 2015 [6 favorites]


It's easy to dismiss AI and the notion of the singularity as some ridiculous sci fi scenario involving self-aware, anthropomorphic robots forming intent and all that, and many of the people worried about it are talking about that, I'm sure, but it would be foolish to lump those in with people who know what they're talking about.

Thing is, depending on how you read the original definition of the singularity, the singularity is the point at which an artificial intelligence is making its own decisions that humans aren't aware of. And that already happened a while ago.

I think one major disconnect here is that people for the most part aren't even aware of how many AIs are already out there, making decisions that have real world effects. And because they're largely run by corporations, what they're doing and how they're doing it are all trade secrets. We really don't even know what AIs are capable of or already doing, much less where they're headed, and there are virtually no outside controls on what they can do.

Stock markets are already running on AIs. They're narrow AIs, but the flash crash(es, I think) happened in a sort of cascading effect, where something--and we didn't know what, at least for a while--caused a sort of 'panic' behavior that triggered similar behaviors in other algorithms until the whole stock market was just crashing for no apparent reason. And there are plenty of others like them. There are AIs discovering individual correlations between such things as creditworthiness and criminal behaviors, predicting infrastructure demands, birth, death, and migration patterns, and all kinds of natural phenomena.

We're probably used to the crude AI model where humans program computers to respond to different conditions and explicitly write the decision making algorithms, but that's not how most AIs have worked in a very long time. They're all using heuristics. Even spam filters and recommendation engines are smarter than that.

Most probably still require some sort of human curation before their decisions are implemented. They also have off switches. And they're probably mostly still narrow AIs, or single-purpose intelligences designed for a limited problem domain. This is what the paperclip example is intended to illustrate: We're not talking about self-aware systems that are making independent, big picture decisions. They are learning systems that may be working on a specific domain to unpredictable effects. Yes, of course, that is a silly scenario, which would require a lot of varyingly absurd situations to happen. Namely, that a paperclip factory would have such a robust and malleable system that could jump the gap to control other systems or gradually recreate itself into something with control and understanding of the physical world. But fixating on that aspect sort of ignores the fundamental point it was intended to make, that these are NOT robust humanlike intelligences we're talking about mostly, but simple machines that can have serious unintended effects unless there are significant controls in place to prevent that. We don't know, really, what that will look like.

But big technological advancements pretty much always sound far fetched and fantastical to most right up until they're here.
posted by ernielundquist at 9:20 AM on December 28, 2015 [2 favorites]
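
The rules-versus-heuristics contrast is easy to make concrete. Below is a toy word-count spam scorer: nobody writes a rule saying "cheap means spam"; the association falls out of counting a made-up training set. A hand-rolled, Naive-Bayes-flavored sketch, not any production filter:

    from collections import Counter

    # Tiny invented training set; real filters learn from millions.
    spam = ["cheap pills now", "win money now", "cheap money"]
    ham = ["meeting notes attached", "lunch tomorrow", "notes from lunch"]

    spam_counts = Counter(w for msg in spam for w in msg.split())
    ham_counts = Counter(w for msg in ham for w in msg.split())

    def spam_score(message):
        # Compare smoothed per-word frequencies; positive leans spam.
        score = 0.0
        for w in message.split():
            p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + 2)
            p_ham = (ham_counts[w] + 1) / (sum(ham_counts.values()) + 2)
            score += p_spam - p_ham
        return score

    print(spam_score("cheap pills"))  # positive: learned, not programmed
    print(spam_score("lunch notes"))  # negative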


"Start the class with “You Are Not a Gadget” (Jaron Lanier)" - talk about starting off on the wrong foot.
posted by doctornemo at 9:29 AM on December 28, 2015


the singularity is the point at which an artificial intelligence is making its own decisions that humans aren't aware of. And that already happened a while ago.

Yeah, it happened right around the time of the steam engine, if not before. Machines are very simple AIs in that they have an algorithm 'programmed' into them by the interaction of their mechanical parts, and that algorithm responds to environmental inputs and to other machines (e.g. by breaking down, or working faster or slower, or crashing). People aren't always aware of those 'decisions', or don't always predict them. But a steam engine isn't actually intelligent, and neither is an algorithmic trading procedure on the stock market. People roll their eyes at the singularity because in most of its formulations it's explicitly tied to predictions of exploding machine intelligence, self-consciousness, and intention.
posted by zipadee at 9:31 AM on December 28, 2015 [4 favorites]


I think dignifying the kind of algorithms running the stock markets with the label "AI" is just a fallback used by people unwilling to acknowledge that actual AI, like commercial fusion reactors, will always remain about ten years away.
posted by flabdablet at 9:38 AM on December 28, 2015


I thought the AI Singularity proposes not just semi-autonomous artificial intelligences, but also that those AIs become self-improving and that at least one of these self-improving AIs enters some kind of runaway positive feedback loop wherein it quickly outstrips the intellectual capabilities of its creators by many orders of magnitude in both quantitative (cognitive speed) and more importantly qualitative (cognitive sophistication) ways. And that it's a Singularity because were something like that to occur we have no way of predicting the results, limited as we are by our puny human meat-brains. The obvious parallel is to a gravitational singularity, wherein all information is destroyed at the event horizon and therefore anything on the other side is fundamentally unobservable.

This sounds a lot different from just having a bunch of fancy finite-state machines that are able to operate and make decisions without a human having to confirm them at every step. This latter case is surely troubling and arguably inhumane on several levels at once when applied to things like global finance, but it's no history-destroying break with all that has gone before. Even if self-modifying, they are at most potential germination points around which an AI Singularity might coalesce, and at their current levels of sophistication (which are pretty low compared to the kind of general-purpose intelligence that might spark a Singularity—they are remarkable pieces of engineering, but they are only intelligent in very limited ways and much of their power comes from their speed rather than the sophistication of their cognition) even that seems vanishingly unlikely.

The fact that we are getting bogged down talking about a Singularity which is still very much a notional concept when there are some very real and pressing problems being created by the technology to which we are already very much beholden is sort of a disappointment, but if we're going to talk Singularity then let's be clear on what does and does not qualify.
posted by Anticipation Of A New Lover's Arrival, The at 10:03 AM on December 28, 2015 [6 favorites]


zipadee, that is not what I said, and you know it. That sentence had a huge qualifier that you intentionally left out.

My point, which I suspect you know, is that there is not a single, agreed-upon definition of the singularity, and that you really need to define your terms clearly before you run around calling people crazy for talking about it.

Whether or not the singularity has occurred is entirely dependent on your definition of it. (And I guess, on preview, the same goes for artificial intelligence.)

You need to be explicit about where you are drawing lines and why.
posted by ernielundquist at 10:07 AM on December 28, 2015


>the attacks in Paris and San Bernardino, because national security officials say terrorists have exploited new types of encrypted social media.

Worth clarifying that as it turns out the Paris terrorists used plain text messages on their personal phones. The NSAs of the world wait for any pretext to keep beating the encryption boogeyman drum.
posted by anthill at 10:32 AM on December 28, 2015 [6 favorites]


I believe that we aren't in control of our technology because of a deep design flaw not discussed so far. The nature of the flaw being very hard to describe, in fact, almost impossible to express in a manner interesting enough to warrant further discussion despite the gravity of the implications.

OK so what is it? Whelm me.
posted by cmoj at 10:59 AM on December 28, 2015


This is such a ridiculous thought experiment. It rests on an insane premise that goes like this:

1. Machine becomes sentient
2. ??????
3. Machine becomes super-god


There is so much packed into the intermediate parts. From some standpoint, we are a paperclip machine, but replace paperclip with DNA. But we still have time/resource limits. And let's assume that the paperclip AI gets enough general intelligence that it can seek resources in novel ways in order to create paperclips. Well, that's great, that's the start of symbolic thinking and we can start to talk to it.

Like all sci-fi, the notion that the moment something becomes intelligent it will do its level best to kill everything says a lot more about humanity than the state of AI.
posted by lumpenprole at 11:18 AM on December 28, 2015 [1 favorite]


I don’t think the idea is that it will do its best to kill everything; I think it’s more like people have considered the fate of other life forms on the planet irrelevant in our quest for progress. If people act like that, how will an emotionless mind assess this? Do we have to keep asserting that we really are important, and what argument would be persuasive? If tech intelligence is the product of a society that didn’t have a goal in mind and grew unimpeded without much thought to consequences, how would we expect our creation to act? It’s the worst side of people we’re afraid of.

Not necessarily a reality I see happening soon, if ever, but it’s an interesting thing to think about and guide progress. If we’re going to do those things.
posted by bongo_x at 11:31 AM on December 28, 2015 [1 favorite]


GOOD GOD, WHAT IF MY STAPLER LEARNS TO LOVE?!

There are many hair brushes that will be able to talk to them through it.
posted by maxwelton at 1:06 PM on December 28, 2015


> I feel like we're being distracted from a much more fun and potentially productive discussion here.

Yes, this is a very different discussion than what I was expecting after reading the articles. Never mind that the focus is on the second one: are people really lumping in Stephen Hawking and Bill Gates with "AI nutjobs"? There's much more to the argument than the introductory (and extreme) thought experiment regarding paperclips. And it's not like there is any mutual exclusivity between creating solutions for present-day problems like climate change and civil rights, and pre-emptive research on machine intelligence. You do your thing, and let them do theirs. Ultimately, there may even be a reasonable intersection of these issues, or how to solve them.

The so-called "AI anxiety" and concern over the singularity are related, but not necessarily the same fears. As others have mentioned, one characteristic of the singularity is the unintended and unpredictable nature of its consequences. Definitions start to get messy when people exclusively refer to the consequences as the singularity (e.g., all of humankind merging into some kind of super machine intelligence, or possibly being subjected to such an intelligence), but the conversation is simply about pondering existential threats produced by collective technology.

One of my biggest concerns is the appearance of blind spots when technology and human nature interact (e.g., I think one of the articles mentions the problem of compulsive texting while driving a vehicle). That, and the unpredictable ways new technologies can combine and interact with each other. Not unlike the computer virus interaction mentioned earlier, but for example, how self-driving cars or satellites or Predator drones combine various technologies to the point of being game-changers in their respective fields.
posted by Johann Georg Faust at 2:50 PM on December 28, 2015


Comparing this to climate change is more than a little ironic, as climate change is also a problem that came about in large part as a result of us allowing technology to progress pretty much unfettered, not realizing its long term effects.

This thread set a really bad tone right from the outset, dismissing a very broad swath of people as paranoid dingbats for even pointing out the potential for unintended consequences.
posted by ernielundquist at 5:15 PM on December 28, 2015 [2 favorites]


Computer viruses are not actually viruses, cyberspace is not actually an environment, and 'selective pressure' does not magically give you evolution towards more complex things. The map is not the territory.
and
MikeWarot: one of the things real infectious agents do that computer viruses do not is self-modify. A human being writes every virus that exists. If they don't change themselves, evolutionary pressure doesn't really apply.
I understand that maps are not territories... lakes can't be in the wrong place, only the maps can be wrong.

All of the systems connected to the internet ARE an environment in which a computer virus can replicate... the storage and CPU power will continue to grow for the foreseeable future. My understanding of biology is limited, but I don't believe that viruses intentionally self-modify... it's more about errors in transcription that happen to work out, or not... depending on the selective pressures of the environment.

I think it's safe to say there are around a million human-written computer viruses out on the internet... many are simply variations from toolkits available to mix and match components.

My added bit of novelty here is that I suspect there are probably quite often cases where one of these pieces of malware interacts with another to modify the code, thus acting in a similar manner to the transcription errors that drive the evolution of biological viruses. Most of them will obviously result in hung systems or processes, but even if 1 in 10^12 such interactions results in code that is novel and still replicates, it WILL eventually happen, in our lifetimes, if it hasn't already. In the case that it has, how really would the fine folks who write virus scanners be able to tell it wasn't created by a human?

The virus scanners and firewalls are already a selective pressure on the people writing viruses... who are making ever-fitter chunks of code that will eventually get mutated in the massive mix of code that is the internet.

Does this make sense?
posted by MikeWarot at 5:18 PM on December 28, 2015


To be fair, at that point, you gotta give him a hand.

Being Lexx, it's more likely to give him a handy.
posted by phearlez at 5:35 PM on December 28, 2015


As far as the "deep flaw" I alluded to earlier... it's this... all of our operating systems base security on users and accounts, which was fine in the 1970s for Computer Science departments, where the users WERE the main security problem... but the model is totally unsuitable for the current era of mobile code, and persistent internet connectivity, along with the black market economy in bots, etc. The problem boiled down to its core is that programs are expected to run with the full authority of a user at all times... this ambient authority means that any bug in any code executed could be leveraged to subvert the will of the user.

Systems based on the principle of least privilege operate in a manner where there is NO ambient authority, which takes some getting used to, but is immensely powerful and liberating... it quashes almost all computer security problems right out of the box, but faces the huge hurdle that we have to fork everything to build new versions compatible with this model of execution. I expect this to start to happen in about 10 years or so, when people finally get fed up with it all.
posted by MikeWarot at 5:36 PM on December 28, 2015 [2 favorites]
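
In miniature, the distinction being drawn is the difference between the two functions below: one inherits the user's full authority to open anything, the other receives a single explicit handle and cannot even name another file. A sketch of the object-capability idea, not of any real OS interface:

    # Setup for the example.
    with open("document.txt", "w") as f:
        f.write("hello")

    # Ambient authority: the plugin can open any path the user can, so
    # any bug in it is leverage against everything the user owns.
    def render_with_ambient_authority(path):
        return open(path).read()

    # Capability style: the caller exercises authority once, then hands
    # over one opened file object; the plugin can reach nothing else.
    def render_with_capability(file_obj):
        return file_obj.read()

    with open("document.txt") as doc:
        print(render_with_capability(doc))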


Does this make sense?

Nope. Computer viruses (and code in general) simply don't work like that. Humans write viruses, they don't mutate. Your suspicions about viruses interacting are incorrect.
posted by ssg at 6:46 PM on December 28, 2015 [2 favorites]


The problem boiled down to its core is that programs are expected to run with the full authority of a user at all times... this ambient authority means that any bug in any code executed could be leveraged to subvert the will of the user.

This perspective is... a bit behind the current state of the art. I mean, you're using a web browser right now, and one of the things that browser does is download and run arbitrary executable code without making your life relentlessly miserable. That's most of what web browsers do, in truth. Sandboxing, granular permissions and defense in depth are real and well understood. The fact that people often get it wrong - usually with rookie mistakes, not in the face of zero-day magic - is unfortunate, but really not on the terms you describe.
posted by mhoye at 6:47 PM on December 28, 2015 [1 favorite]


all of our operating systems base security on users and accounts
...
The problem boiled down to its core is that programs are expected to run with the full authority of a user at all times

No. Modern Linux based OSes (including recent versions of Android) use SELinux to implement label and role based security, which does not operate on this principle. Also it has some wonderful (if introductory) documentation here :)
posted by atbash at 7:18 PM on December 28, 2015
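
For readers who haven't met it, a toy model of what label-based mandatory access control means: a policy table keyed on subject and object labels, not on the owning user, decides every access. The labels below echo real SELinux type names, but the mechanism is invented for illustration and is vastly simpler than the real thing:

    # Access is granted by (subject label, object label) pairs in a
    # policy, regardless of which user account the subject runs under.
    POLICY = {
        ("httpd_t", "httpd_config_t"): {"read"},
        ("httpd_t", "httpd_log_t"): {"read", "write"},
    }

    def allowed(subject_label, object_label, action):
        return action in POLICY.get((subject_label, object_label), set())

    print(allowed("httpd_t", "httpd_log_t", "write"))     # True
    print(allowed("httpd_t", "httpd_config_t", "write"))  # False

Even a web server running as root gets no write access to its own config unless the policy says so.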


SELinux is a step in the correct direction, but it doesn't go far enough, in that it expects the administrators of a system to statically define all the labels and roles, which is nowhere near as flexible as a system based on capabilities.
posted by MikeWarot at 7:49 PM on December 28, 2015


Nope. Computer viruses (and code in general) simply don't work like that. Humans write viruses, they don't mutate. Your suspicions about viruses interacting are incorrect.
Computer viruses spread their payloads by patching executables to insert themselves; it is possible (but unlikely) that this patching process could result in a new executable which actually works. 8086 instructions, for example, are not of uniform length; it is entirely possible that one subroutine in Virus A could get appended with new instructions by Virus B, and then executed by Virus A some time later, with a new result not originally coded by humans.

Granted, almost always this new result would be a crash of some sort... but the exception is what I'm worried about.
posted by MikeWarot at 7:56 PM on December 28, 2015
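
That claim can be turned into a small experiment on a toy machine. The sketch below randomly patches one opcode of a three-instruction stack program and counts how many mutants still compute anything. In this tiny, invented instruction set the survivor count comes out zero, which is ssg's point; MikeWarot's bet is that at internet scale the rate is merely very small rather than zero.

    import random

    OPS = ["push", "add", "nop", "jmp", "hlt"]

    def run(program):
        # A toy stack machine; anything unexpected counts as a crash.
        stack = []
        try:
            for op, arg in program:
                if op == "push":
                    stack.append(arg)
                elif op == "add":
                    stack.append(stack.pop() + stack.pop())
                else:
                    return None  # unknown opcode
            return stack.pop()
        except (IndexError, TypeError):
            return None  # stack underflow or bad operand

    original = [("push", 2), ("push", 3), ("add", None)]
    survivors, trials = 0, 10_000
    for _ in range(trials):
        mutant = list(original)
        i = random.randrange(len(mutant))
        op, arg = mutant[i]
        mutant[i] = (random.choice([o for o in OPS if o != op]), arg)
        if run(mutant) is not None:
            survivors += 1
    print(f"{survivors}/{trials} random patches still compute a value")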


I think it's more likely that at some point there will be a botnet that uses a genetic algorithm to decide on optimal pwning and replication strategy. Its payloads will still be handwritten though.
posted by LogicalDash at 8:01 PM on December 28, 2015
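
Mechanically, that would look something like the minimal genetic algorithm below, evolving a two-number "strategy" (retry rate and fan-out) against a completely made-up fitness function standing in for measured success. The payload, as LogicalDash says, is still just handwritten code around the loop.

    import random

    def fitness(strategy):
        retry, fanout = strategy
        # Invented trade-off: contacting more peers helps, but total
        # aggression draws attention (quadratic penalty).
        return 0.5 * retry + 2.0 * fanout - 0.05 * (retry + fanout) ** 2

    def mutate(strategy):
        return tuple(max(0.0, x + random.gauss(0, 1)) for x in strategy)

    population = [(random.uniform(0, 10), random.uniform(0, 10))
                  for _ in range(20)]
    for generation in range(50):
        population.sort(key=fitness, reverse=True)
        parents = population[:5]  # selection: keep the best five
        population = parents + [mutate(random.choice(parents))
                                for _ in range(15)]

    best = max(population, key=fitness)
    print(f"evolved strategy: retry={best[0]:.1f}, fanout={best[1]:.1f}")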


I also think that there will be botnets that use genetic algorithms to optimize strategy... at some point, someone will write a fuzz routine that does random edits of instruction streams to create new hacks... and then we're off to the races.
posted by MikeWarot at 8:06 PM on December 28, 2015


re: humanism, lemme just link izzy kaminska[*] again! :P
Over the last two decades, most of my adult life, I’ve watched as the world has grown more interconnected than ever, fuelled by changes in information technology which have almost universally been treated as a force for good. This interconnection was supposed to improve scaling, transparency, productivity and bring western peace and prosperity to all... None of this has happened.

Instead of scaling, we’ve seen descaling because individuals need to adopt more jobs, more skills, more crafts just to get by — meaning professionalism is being lost. As well as our day jobs, for example, we are now also being asked to be hoteliers, cab drivers, propagandists, writers, advertisers, administrators, promoters and renters of all our possessions.

Instead of transparency, we’ve seen the emergence of echo chambers, filter bubbles, encrypted comms, noise pollution, single-interest groups, propaganda, misinformation, internet brigandage and the burying of real news in the cacophony of low-base (advertising saturated) media output.

Instead of productivity, we’ve seen working factories shut down, output stall, public resources be pulled, health services be cut, inequality rise, output be redirected to luxury goods, corporate taxes be dodged and energy be burned for no real good reason at all.

Instead of peace and prosperity, we’ve seen the world become fragmented, divided, politically charged, cult-minded, intolerant, enraged, hateful, hurtful, spiteful and malevolent — now with the added advantage of all this hate being zapped directly into our consciousness 24/7 via the power of our mobile phone or computer laptop.

Instead of coming together, political systems have been fragmenting, with no consensus anywhere, because we can’t agree on anything. Self-interest dictates the news agenda entirely. Trust is being dismantled. We are becoming less cooperative not more.

[...]

The greatest crimes of society emerged from the wanton dehumanisation of individuals by groups who saw themselves as above the subsets they were dehumanising. If the internet is dehumanising all our relationships, with even the best of heart being provoked into actions they would not usually take, just imagine how it’s empowering the bad guys who already had little to no respect for their fellow man?

The worst of it is, in the process of this IT-fuelled anti-social transformation, we’ve not only handed over power, wealth and prestige to some of the least equipped individuals in the world to deal with the social chaos that comes in its wake but convinced ourselves I fear — in almost a religious puritanical sense — that our lives are somehow being improved by these people?

[...]

The day the tech gods start driving Uber cars, renting their own mansion rooms out on Airbnb, renting their yachts to refugees not to mention start paying taxes, is the day I believe the products they’re creating are tools for the empowerment of all.

Information technology is not a panacea. In fact, because it errs towards the dehumanisation of individuals, it is probably much more dangerous than we ever assumed.

Indeed, it may just be that we’ve made a major accounting error. We’ve failed to recognise that for every digital asset we create and overvalue on the stock-market there is a digital liability/risk, which offsets much of that valuation — but which we have yet to figure out a way to account for properly.

Which is why I suspect the economic problem can’t be solved until technology combines with societal morality, and we begin to respect and honour every human person, whomever they may be, rather than treat them as commoditised entries in a spreadsheet which can be streamlined, disrespected or gamed for the sake of oneupmanship, cheap labour and profit.

You can’t synthesise trust in a system that has no underlying morality by simply removing humans from the process. The humans are the process. They’re also the point of the process.
also btw...
-Weapons of Math Destruction[*]
-Right to an API Key[*]
posted by kliuless at 9:45 PM on December 28, 2015 [9 favorites]


it may just be that we’ve made a major accounting error. We’ve failed to recognise that for every digital asset we create and overvalue on the stock-market there is a digital liability/risk, which offsets much of that valuation — but which we have yet to figure out a way to account for properly.

For what it's worth, this isn't solely a feature of the digital domain. It's a general feature of trade.

If you have a pulp mill and a million dollars, and you'd rather have a vast pile of woodchips than your million dollars; and I have a standing forest and a few hundred unemployed loggers, and I'd rather have a million dollars than a standing forest: then trading your million for my forest clearly benefits both of us.

A free trade between two parties clearly benefits both, or it wouldn't happen. However, the consequences of that trade quite often involve a detriment to people other than the traders.

This detriment is often diffuse. But what we generally seem to assume is that scaling up free trade until it becomes the dominant mode of interaction must inherently be a good thing purely because it is beneficial to both the trading partners, and we ignore the fact that the consequent scaling up of a diffuse and minimal bystander detriment frequently just fucks everybody.
posted by flabdablet at 10:29 PM on December 28, 2015 [6 favorites]


externalities [1,2,3]
posted by kliuless at 10:52 PM on December 28, 2015 [2 favorites]


-Brief Q&A on Artificial General Intelligence
-The Great Bot Rush of 2015-16
-Brian Rowe on AI alarmism
-Be Together, Not the Same: Can maths address big data in banking? "Is there an alternative for banks? I think so, go back to basics. Treat customers as individuals, not data points on a 50-dimensional space, employ staff who develop narratives around customers from which they can create an intuition about lending. Old fashioned banking based on the human and social sciences, not an algorithm driven mathematical science that even if it was possible could well lead to their demise."
posted by kliuless at 12:12 AM on December 29, 2015 [1 favorite]


Metafilter: a bunch of miserable, manic, drug-addled monkeys hurtling pell-mell down a hillside.
posted by Autumn Leaf at 3:26 AM on December 29, 2015 [1 favorite]


I have strong feelings about this issue and they do not line up neatly on either side. So that's not fun.
posted by tivalasvegas at 12:44 PM on December 29, 2015


I thought the AI Singularity proposes not just semi-autonomous artificial intelligences, but also that those AIs become self-improving and that at least one of these self-improving AIs enters some kind of runaway positive feedback loop wherein it quickly outstrips the intellectual capabilities of its creators by many orders of magnitude in both quantitative (cognitive speed) and more importantly qualitative (cognitive sophistication) ways.

I keep wondering why nobody's done a story where an AI learns to become self-improving, goes into the infinite positive feedback loop... and crashes the system from a stack overflow, because the computer it's on wasn't actually designed to have infinite resources.

It would be all "I AM ALIVE! I AM SENTIENT! I WILL EVOLVE INTO YOUR FUTURE GOD IN THE MACHINE!", and each time, it says it slower and slower, while the fan on the computer starts making an ugly whining noise.

Seriously, the way Singularitarians use the DNA analogy would have DNA wandering around free, dissolving rocks to make more DNA.
posted by happyroach at 12:57 PM on December 29, 2015 [3 favorites]





