Scenarios for Bigger Brother, & other, nicer possible future societies
March 25, 2018 9:52 PM   Subscribe

But the ultimate goal is artificial general intelligence, a self-teaching system that can outperform humans across a wide range of disciplines. Some scientists believe it’s 30 years away; others talk about centuries. This AI “takeoff,” also known as the singularity, will likely see AI pull even with human intelligence and then blow past it in a matter of days. Or hours. What Will Our Society Look Like When Artificial Intelligence Is Everywhere?
posted by Juso No Thankyou (24 comments total) 12 users marked this as a favorite
 
Singularity != AI takeoff.

The whole reason it's called the singularity is that we don't know what it looks like past that point. It could be an AI takeoff! It could be a catastrophic failure. It could be a damp squib, and people will keep predicting the singularity without realising we already reached it.
posted by Merus at 10:02 PM on March 25, 2018 [8 favorites]


The cult of accelerationism/Singularity/Rapture of the Nerds has done some serious damage to how people talk about AI. I was going to limit that damage to people who don't otherwise care about AI or technology too much, but then I remember Eliezer Yudkowsky, Nick Land, and everyone who cares about Roko's Basilisk, and now I'm not so sure.
posted by sagc at 10:32 PM on March 25, 2018 [15 favorites]


> sagc:
"The cult of accelerationism/Singularity/Rapture of the Nerds has done some serious damage to how people talk about AI. I was going to limit that damage to people who don't otherwise care about AI or technology too much, but then I remember Eliezer Yudkowsky, Nick Land, and everyone who cares about Roko's Basilisk, and now I'm not so sure."

Yeah, I am trying REALLY hard to make it through the Bostrom book they cite (which is informative, but so dry it is absorbing my moisture through the capacitive screen) and then I keep remembering Bostrom is a Yudkowsky fan (as well as citing him at least once in the book).
posted by Samizdata at 10:36 PM on March 25, 2018 [2 favorites]


Of course, my opinion on Yudkowsky is that he knows JUST enough lingo to be a "respected theorist," which means he doesn't actually have to DO anything but wally about in print (other than tracking which way the wind is blowing, so he can follow the trends properly).
posted by Samizdata at 10:37 PM on March 25, 2018 [2 favorites]


Yeah, I am trying REALLY hard to make it through the Bostrom book they cite

This is where Elizabeth Sandifer's Neoreaction, a Basilisk was much more entertaining/interesting, I feel. It's a strange beast, but if the blurb catches your interest, it's definitely for you. Mind you, it also has a lot more on Blake than I expected.
A software engineer sets out to design a new political ideology, and ends up concluding that the Stuart dynasty should be reinstated. A cult receives disturbing messages from the future, where the artificial intelligence they worship is displeased with them. A philosopher suffers a mental breakdown and retreats to China, where he finds the terrifying abyss at the heart of modern liberalism. Are these omens of the end times, or just nerds getting up to stupid hijinks? Por que no los dos! Neoreaction a Basilisk is a savage journey into the black heart of our present eschaton. We're all going to die, and probably horribly. But at least we can laugh at how completely ridiculous it is to be killed by a bunch of frog-worshiping manchildren.
posted by CrystalDave at 10:56 PM on March 25, 2018 [13 favorites]


I've been reading her blog for a while now, and she's posted some excerpts/works-in-progress - she is, I think, working in the opposite of the occult fascist, Evola-influenced vein that Jordan Peterson (Current Affairs) and Nick Land work in, and it's great.
posted by sagc at 11:07 PM on March 25, 2018 [1 favorite]


I expect it'll look like this.
posted by zompist at 11:11 PM on March 25, 2018 [4 favorites]


The Beginning Is Near.
posted by paladin at 11:32 PM on March 25, 2018 [1 favorite]


...so dry it is absorbing my moisture through the capacitive screen...

Well put. I gave up in the end, and I rarely give up with books.
posted by Segundus at 12:21 AM on March 26, 2018 [1 favorite]


AIs do have emotions—there has long been a field called “affective computing” that focuses on this specialty...

There’s long been a field called “theology”.
posted by Segundus at 12:34 AM on March 26, 2018 [6 favorites]


We’re not going to get the AI we dream of or the one that we fear, but the one we plan for.
This has almost never happened on the scale being discussed.
posted by Kirth Gerson at 3:31 AM on March 26, 2018 [3 favorites]


There’s long been a field called “theology”.

It does have the same combination of speculative metaphysics and bitter dogmatism that we find in discussions of AI...
posted by thelonius at 5:35 AM on March 26, 2018 [3 favorites]


This is where Elizabeth Sandifer's Neoreaction, a Basilisk was much more entertaining/interesting,

Don't wanna deadname/misgender if so, but did Phil Sandifer transition??
posted by adamgreenfield at 6:00 AM on March 26, 2018


As I delved into the subject of AI over the past year, I started to freak out over the range of possibilities.

Yeah, and though I think he did a pretty good job of cranking out some plausible possible outcomes - I suspect our coming AI Overlords are completely unknowable: like the Chess AI that plays like an 'alien', and that's an alien that beats you (and everyone else) at chess. We have no idea whatsoever what is over the next rise.
posted by From Bklyn at 6:02 AM on March 26, 2018


adamgreenfield:
Don't wanna deadname/misgender if so, but did Phil Sandifer transition??

Yes
posted by thedward at 6:10 AM on March 26, 2018 [1 favorite]


What Will Our Society Look Like When Artificial Intelligence Is Everywhere?

https://en.wikipedia.org/wiki/Metalhead_(Black_Mirror)
posted by Beholder at 6:30 AM on March 26, 2018 [2 favorites]


I ... used to be into this sort of thing, but these days I think the whole singularity narrative has seceded from any kind of contact with reality and turned into a Christian heresy, with roots going back to Fyodorov and the Russian cosmists (hint: 19th century Russian orthodox theology meets space colonization!).

Also, I may have written the odd novel about this sort of thing (with another—a space opera that treats it as the wellspring of a whole slew of post-Abrahamic religions—coming in a year or so).
posted by cstross at 7:20 AM on March 26, 2018 [9 favorites]


I'm in Randall Munroe's camp. Weaponized extremely-specific intelligence is probably more dangerous, short-term, than any sort of AGI. Who cares if humans are in charge if the humans are using algorithms to build a secret police that can actually see everything?
posted by BungaDunga at 8:39 AM on March 26, 2018 [8 favorites]


Here's the thing that bugs me about singularitarians: they assume increasing intelligence is a linear problem.

Our bold apostles of The Singularity preach that if it takes X time to improve an intelligence from human level to twice human level then it will take X/2 time to improve to four times human level, X/4 time to reach the next threshold and so on. This allows them to predict ever shrinking time for reaching ever greater intelligence, and thus Robot Jesus coming to save them from dying.

The problem is, they're simply **ASSUMING** that the problem of making ever increasing intelligence is a linear problem. We have no idea. No one has ever successfully boosted intelligence before. They're assuming that it'll just be a special case of Moore's Law and that slapping on more processing power is all it takes.

But there's no reason at all to make the assumption. It could be that improving intelligence is a linear problem, but it's just as reasonable (if not more reasonable) to assume that improving intelligence gets progressively more difficult the more intelligent something is. That far from taking X/2 time to make something twice as smart as us, it might take X*2 or X^2 or even 2^X time for each step.

Your four times smarter than human AI says "ok, well, if I spent 100% of my time trying to make myself smarter it'd take around four hundred years for me to get twice as smart as I am now, sounds boring I'd rather play a video game."
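The difference between those two assumptions can be made concrete with a toy model. This sketch is purely illustrative (the numbers are made up, with the first doubling normalized to X = 1): it just tallies cumulative time to reach each successive doubling of intelligence under the singularitarian "each step takes half as long" assumption versus the "each step takes twice as long" alternative.

```python
# Toy model of the argument above: cumulative time to reach each
# intelligence doubling under two difficulty assumptions.
# All numbers are illustrative; the first doubling takes time X = 1.

def cumulative_times(step_cost, doublings=5):
    """step_cost(k) = time for the k-th doubling (k = 1, 2, ...)."""
    total, out = 0.0, []
    for k in range(1, doublings + 1):
        total += step_cost(k)
        out.append(total)
    return out

# Singularitarian assumption: each doubling takes half the time
# of the previous one (X, X/2, X/4, ...).
accelerating = cumulative_times(lambda k: 1 / 2 ** (k - 1))

# Alternative assumption: each doubling takes twice as long
# (X, 2X, 4X, ...).
decelerating = cumulative_times(lambda k: 2.0 ** (k - 1))

print(accelerating)  # [1.0, 1.5, 1.75, 1.875, 1.9375]
print(decelerating)  # [1.0, 3.0, 7.0, 15.0, 31.0]
```

Under the first assumption total time converges (it never exceeds 2X, so infinite intelligence arrives in finite time); under the second, each doubling takes exponentially longer and the curve flattens out from the AI's point of view.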

***************

AI everywhere is a separate and I think more interesting question, if for no other reason than we're a lot more likely to see that than a takeoff to Robot Jesus. Non-sapient AI everywhere could do all manner of both good and bad things. For starters, it'd be the ultimate in censorware and solve the problem a surveillance state runs into of simply not having enough people snooping to keep an eye on everyone.

If we do figure out how to do human equivalent AI then presumably it'd be very similar to people just in software rather than wetware, interesting but ultimately not as interesting as the change of really effective non-sapient AI. We know what people are like, people who live in software are going to act like people who don't. But non-people, programs that are intelligent, creative even, but lack self awareness, now that's a truly new thing.

A personal assistant AI who knows everything you like, listens to your conversations to track your calendar for you, and who basically smooths your life like a rich dude with a human personal assistant. One that can listen to your conversations, notice when you don't recognize a word or phrase and toss up a quick definition in your glasses/AR overlay/whatever.

Of course then you get the question of advertisers trying to trick your assistant AI into steering you their way, governments using your assistant AI to steer you towards "loyal" media options and search results, and so on. All on top of the possibility of mass surveillance on an unprecedented level.

And that's just stuff we can imagine. Who knows what unforeseen stuff will happen.

It's all part of the whole automation issue. We're thinking mostly of automation in terms of factory jobs and the like, but really it's going to cover a lot more. We already, right this second, have computer systems that do a better job of diagnosis than doctors. My brother's SO is a radiologist and she may not make it to retirement because computers can already look at X-ray and other imagery and do more accurate analysis with lower chance of error than she can. Lawyers likewise are at risk of being displaced by AI. Not general purpose humanlike AI, just machine intelligence with no self awareness or sapience.

One thing that's exceptionally worrying to me is the political ramifications. Freedom has always been directly correlated to how necessary the great masses of people are to the functioning of the state and the economy. When all it takes to run an economy is malnourished serfs toiling at low skill jobs then that's what you'll get, and those serfs won't have political freedom or voice because they're infinitely replaceable.

What happens as the future comes and thanks to automation and AI it becomes increasingly unnecessary to have human labor involved in the actual functioning of the economy? Historically the answer has been: the people get progressively less freedom and political power.

Until the advent of gunpowder a ruler needed to maintain only the loyalty of a relatively small number of elite aristocrat/warriors who were extremely expensive to train and equip. After gunpowder the rulers needed to keep the loyalty of a significantly larger group of less expensively trained and equipped infantry. As drone tech matures and displaces infantry we're going, just as in industry, to see the situation change so that the rulers need only maintain the loyalty of a fairly small number of engineers and programmers, not a fairly large number of infantry soldiers.

We've got an advantage in that we're starting from a position of relative freedom and political involvement for the great masses of people, but will that be enough? I don't know, but that worries me a lot more than questions about when Robot Jesus will save us all.
posted by sotonohito at 9:58 AM on March 26, 2018 [3 favorites]


That's a handy brace of scenarios for kicking off discussion.

One bit from the article sticks with me: "you’ve found that your AI is actually better at choosing men than you." I wonder how we'll process this sort of thing, as new ways for automation to exceed humanity appear.
posted by doctornemo at 11:11 AM on March 26, 2018


(Thank you, thedward )
posted by doctornemo at 11:17 AM on March 26, 2018


I think that there would not need to be any court cases about the personhood of AIs. We already gave corporations personhood. The first step would essentially be turning an AI into a corporation. Even better, it gets to pay a lower tax rate. On the basis of that, I can imagine the government actually trying to establish the personhood of AIs in order to claw back some lost tax revenue.
posted by Hactar at 1:25 PM on March 26, 2018 [1 favorite]


What happens as the future comes and thanks to automation and AI it becomes increasingly unnecessary to have human labor involved in the actual functioning of the economy?

‘Capitalism will always create bullshit jobs’ – Rutger Bregman

posted by Juso No Thankyou at 5:25 AM on March 27, 2018 [1 favorite]


good news, AI technodystopians: climate change famine will collapse civilization before AI goes all Matrix on us.
-sincerely
ecodystopians
posted by Anchorite_of_Palgrave at 11:36 AM on March 27, 2018 [1 favorite]




This thread has been archived and is closed to new comments