The Singularity and the economy
October 26, 2011 6:49 AM

When the machines take over, how will people make a living? Paul Allen: Futurists like Vernor Vinge and Ray Kurzweil have argued that the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities. They call this tipping point the singularity, because they believe it is impossible to predict how the human future might unfold after this point. Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered. Vinge asks us to ponder the role of humans in a world where machines are as much smarter than us as we are smarter than our pet dogs and cats. Kurzweil, who is a bit more optimistic, envisions a future in which developments in medical nanotechnology will allow us to download a copy of our individual brains into these superhuman machines, leave our bodies behind, and, in a sense, live forever. It's heady stuff.

Sure, heady. And dangerous. From the comments:

The problem of course is that you can automate a great many mundane tasks, and hand production over to robot factories for better, higher-quality, lower-cost production. Most intellectual tasks, such as accounting, can be handled by computers. Most service tasks can be handled by automated voice systems (though at present they are rather awful, that need not be the case; they could be improved), etc.

In the end the question becomes "What will all of the surplus people do to earn a living?"


There kinda needs to be an answer to that, doesn't there?
posted by kgasmart (98 comments total) 29 users marked this as a favorite
 
There kinda needs to be an answer to that, doesn't there?

There does.

Related: MeFi's own cstross posits that, if a generational starship were built, it would have to be vastly overcrewed, because you cannot afford the one guy who can do X dying, so you need several of them (for all X) and a way to keep that knowledge alive.

Of course, for the vast majority of the trip, they won't need to do much.

So, how do you deal with the boredom?

Fundamentally, people like to be needed. If everything that seems important is left to computers and robots, what do we do?

Maybe there will be something that computers, etc. simply cannot do. But with ~7 gigahumans about, will there be enough of that?

This has been something that's bothered me for a while. What happens in a post-labor economy?
posted by eriko at 6:56 AM on October 26, 2011 [2 favorites]


The surplus people will do what surplus people have always done:

...they'll die.

That might not be the answer people want, but it's going to be what happens. This will be especially amusing to watch in the US, what with the whole "you are your job and if you don't have a job you're scum" idea that seems to animate most of the right wing.

So saddle up. It's going to be a violent ride.
posted by aramaic at 6:57 AM on October 26, 2011 [10 favorites]


I always wonder what operating system these super-intelligent computers will use. Not Windows 2050, presumably.

Or is computer science working towards transcending the very concept of "operating systems"?
posted by Trurl at 6:57 AM on October 26, 2011


I already work with robots that are supposedly capable of doing my job. They need a lot of help and are not very good at it. I'm not terribly worried.
posted by maryr at 6:59 AM on October 26, 2011 [14 favorites]


Once these machines exist, Kurzweil and Vinge claim, they'll possess a superhuman intelligence that is so incomprehensible to us that we cannot even rationally guess how our life experiences would be altered.

Even something as close to us as the human body contains so many mysteries; we understand a fraction of its complex and beautiful ecosystem of automated tasks. To think we could give machines intelligence when we have so little knowledge of our own machines of life, the cells that keep functioning and regenerating and adapting, is ludicrous. The singularity has already happened; it is right in front of our eyes, in them, in the nerves and neurons and every piece of the body and life around us.
posted by infinitefloatingbrains at 7:03 AM on October 26, 2011 [6 favorites]


Luckily global climate change and the exhaustion of cheap portable energy will solve this problem.

I'm more inclined to think it'll make it worse, by accelerating the concentration of resources. Some folks will always have abundant energy, until the heat death of the universe. Scarcity never means scarcity for all, it just means scarcity for most.

So the Koch Brothers will never be cold or hungry. Your child, however, will die of starvation or be killed in a food riot violently repressed by those marginally better off than you.
posted by aramaic at 7:08 AM on October 26, 2011 [10 favorites]


//soft-shoes past the self-checkout line, never takes eyes off of that bastard//
posted by resurrexit at 7:08 AM on October 26, 2011


infinitefloatingbrains: Wouldn't that really be the first singularity? Creatures pre-human sentience couldn't possibly imagine the world post-human sentience, just as we cannot envision the world post-machine sentience!
posted by Inkoate at 7:08 AM on October 26, 2011


The whole "upload your brain into a machine and live forever" trope has always struck me as a pretty bad workaround for dying. I mean, *I'm dead*. Something else that knows everything I know is walking around pleased as punch that it hacked death and won, and it might be a great copy of me down to eight or nine decimal places, but I'm dead. It's no more ideal a scenario in the Death Avoidance department than Soylent Green's euthanasia centers. Maybe given 10,000 years, though, my new instance would get around to making a FPP, so there'd be that.
posted by mph at 7:08 AM on October 26, 2011 [5 favorites]


If our current response to automation is any clue, I'd say all of humanity will simply be thrown on a huge scrap-heap and left to fend for itself. Unemployed? Rand-o-vac3000 says "It's your own damned fault."
posted by Thorzdad at 7:09 AM on October 26, 2011 [3 favorites]


I doubt it. Most futurists stopped doing anything interesting with technology back in the 70s, when it was undeveloped and clean. They think that it's developed and clean and shooting towards the stars now; however, current programmers think more like Ryan Dahl of node.js ultrafame, and oldies who are still killin' it, like Rob Pike, agree. Rule of thumb is to stay far, far away from 'futurists' and 'technologists' who claim to know where the ship you're driving is going, when, as an actual driver (20/7 programmer), you know that there are no miracles or 'eventualities' like this silly singularity.

That's not to say it won't happen, but you have to doubt whether Kurzweil and co even care about being accurate when they're trying so hard to be optimistic.
posted by tmcw at 7:11 AM on October 26, 2011 [4 favorites]


In the end the question becomes "What will all of the surplus people do to earn a living?"

Not in the robot/automation category, but this touches on the purchasing power dilemma. It's the same reason I can't get behind the free marketeers' short-sighted insistence upon outsourcing manufacturing because of some inefficiency/surplus in the U.S. What are those people supposed to do now to have money to buy the stuff you're supposedly building more efficiently/cheaply?
posted by resurrexit at 7:12 AM on October 26, 2011


I already work with robots that are supposedly capable of doing my job. They need a lot of help and are not very good at it. I'm not terribly worried.

Yeah, it's lucky for you that technology never gets more complex and powerful. Especially not on piddling time scales like 'within the next 50 years' or 'within the next 100 years.'
posted by TheRedArmy at 7:13 AM on October 26, 2011 [1 favorite]


Creatures pre-human sentience couldn't possibly imagine the world post-human sentience,

Those sort of creatures probably couldn't imagine the world post-I JUST ATE SOMETHING NOW I AM SWIMMING I AM GOING TO ABSORB SOME PROTEIN.
posted by Bunny Ultramod at 7:17 AM on October 26, 2011 [2 favorites]


Michio Kaku once made an observation that has put my mind at ease about super-intelligent robots on the horizon. He noted that robots capable of sentience, or something approaching it, will have to be programmed to feel, not solely for our own self-preservation, but because when humans suffer a type of brain damage that cuts their emotions off from their reasoning faculties, they are literally paralyzed by the simplest decisions: taking so many things into account, it becomes nearly impossible for them to choose between Choice A and Choice B in any given situation.

That being the case, I envision that as more and more machines take over for more and more jobs, perhaps our robot overlords will have mercy on us stupid meatbags and treat us as pets, lavishing us with food, shelter, and company so long as we scamper and cavort for their entertainment.

But more seriously, it should be kept in mind that there will always be manual work. As technologies change, it seems that throughout history the ladder from manual labor to tech work to political leadership changes form and titles, but no particular class ever totally disappears. This isn't to say things get better or worse, mind you. Just that it doesn't seem unskilled, entry-level, or even mid-level employment will necessarily wane; it may just change form.
posted by Marisa Stole the Precious Thing at 7:19 AM on October 26, 2011 [1 favorite]


Something else that knows everything I know is walking around pleased as punch that it hacked death and won, and it might be a great copy of me down to eight or nine decimal places, but I'm dead.

That's an interesting position. What if your brain was still alive in a robot body? Would you be you? What if nanomachines invisibly replaced your brain with a synthetic copy over the course of a year? Would you still be you? When does the bundle of memory and reactions become not you?
posted by lumpenprole at 7:26 AM on October 26, 2011 [3 favorites]


The answer? Simple. Peak oil/ecological collapse. Therefore, no Singularity.
posted by jhandey at 7:28 AM on October 26, 2011 [1 favorite]


The whole "upload your brain into a machine and live forever" trope has always struck me as a pretty bad workaround for dying. I mean, *I'm dead*. Something else that knows everything I know is walking around pleased as punch that it hacked death and won, and it might be a great copy of me down to eight or nine decimal places, but I'm dead. It's no more ideal a scenario in the Death Avoidance department than Soylent Green's euthanasia centers. Maybe given 10,000 years, though, my new instance would get around to making a FPP, so there'd be that.

The key to this is continuity. The dying you's consciousness needs to be aware of the transition as it happens.

I'm not smart enough to wage the philosophical debates about what is and isn't consciousness, I'm just saying that the only way I could rationalize being-dead-and-uploading would be that I - the *I* that I think of as *me* - was cognizant of the process itself.
posted by Thistledown at 7:28 AM on October 26, 2011 [1 favorite]


Yeah, it's lucky for you that technology never gets more complex and powerful. Especially not on piddling time scales like 'within the next 50 years' or 'within the next 100 years.'

Right now they are essentially $50,000 machines that can't open jars. I remain convinced that machines will need to keep us around for the same reasons my mother jokes about keeping my father around: to reach things on top shelves, to kill bugs (ha!), and to open jars.
posted by maryr at 7:28 AM on October 26, 2011


Smarter and smarter machines. . . Once these machines exist. . . superhuman intelligence. . . download a copy of our individual brains. . .

Right, computer science is building a tower so tall it will soon reach the center of the earth. It'll be so loud, it will soon be moving at the speed of light. It'll have so many arms, it'll see right through solid lead. Its hearing will be so keen, it will have the ability to pick up a single strand of polywater between its theory and practice. It's an apple that's so good, and getting better so fast, it will inevitably turn into an orange sometime in the next 40 or 50 years. One hundred years, for sure.

Human flight got nowhere until inventors stopped trying to fly like a bird and investigated what flight actually is. This kind of "AI" is not heading toward a singularity, but towards an ever more complex ornithopter. It will never fly, because there's a big difference between a really complex and powerful model of a bird, and a flying machine.

If I manage to stay healthy, I have a good chance of being around in 2050 to say "I told you so" to Ray Kurzweil's inert brain-in-a-box.
 
posted by Herodios at 7:29 AM on October 26, 2011 [16 favorites]


What are those people supposed to do now to have money to buy the stuff you're supposedly building more efficiently/cheaply?

Until recently, the solution seemed to be "borrow." E.g., use your house as an ATM, accumulate staggering amounts of rolling credit-card debt, etc.

It's only very recently that we've figured out that might be a bad idea.

So, yeah. I don't think we have a solution to that at the moment. It's entirely possible that Capital doesn't care, and is perfectly fine making their mint acting as middlemen between Chinese factories and Chinese consumers, or Indian factories and Chinese consumers, or some other similar combination. They know they'll eventually be cut out, but by that time they'll be wealthy enough to lord over the proles back here Stateside in their retirement, and that's all that really matters.

Similarly, I see no reason why some people might not build incredibly disruptive machines, even ones they know will eventually render themselves obsolete, if they think they can make enough money in the process to be above the meltdown when it happens later on. The self-justification for these sorts of actions, on an individual level, is not hard: "somebody's going to do it anyway; if I don't do it, I'll just get screwed, so I might as well be the one to profit from it..." and away you go.
posted by Kadin2048 at 7:30 AM on October 26, 2011


Our good pal Kurt Vonnegut has already contemplated this existence.
posted by hwyengr at 7:33 AM on October 26, 2011 [2 favorites]


If humans did not need to labor and work, why would this be a bad thing? Isn't this what people dream about their whole lives? A choice of doing something rather than being forced to do it?

A life without the tethers of forced labor would be a happy world. People then will actually do stuff they want to do.
posted by amazingstill at 7:36 AM on October 26, 2011 [6 favorites]


Alright, look... no one builds a car that lasts forever, because that would be stupid from a business standpoint (planned obsolescence). You think they are going to build robots that last forever? No. Someone is going to have to keep building them. Someone is going to have to repair them. Someone is going to have to design, build, distribute and store parts. Someone is going to have to be in charge of designing, marketing and selling the next great model (R2D4 - now with Doppler5million!!). Someone is going to have to figure out how to sell 'pimp my robot' after-market shit. There is never going to be any way to get so many robots to function perfectly that they can build new ones, maintain existing ones, repair existing ones, work on upgrading, etc. etc. The human body can't even do that perfectly.
posted by spicynuts at 7:38 AM on October 26, 2011


Post-singularity and post-scarcity are two different themes in futurism/SF, and AFAICT, one does not automatically imply the other. There have been SF novels that deal with post-scarcity (Diamond Age, the Heechee Chronicles, Star Trek) without a singularity, and ones that deal with a singularity without addressing scarcity (Queen of Angels).

So the questions "what do we do when we don't need to work" and "what do we do when we are ruled by robot overlords" are two different questions.

Also, as metafilter's own Charlie Stross has said "the singularity isn't about us." His position, IIRC (I hope he'll drop in and correct me), is that if it happens, the singularity will not be about us, that is, the new intelligence will be uninterested in us. I'm not sure if that's quite right: if we're competing with a new intelligence over resources, it will definitely be interested in us. In his novel Accelerando, the machines eventually boot humanity out of the solar system so that they can refactor all the planets into a computronium matrioshka sphere. That's some pretty intense competition for resources.
posted by adamrice at 7:40 AM on October 26, 2011 [2 favorites]


I, for one, welcome our robo–

Oh, just forget it.
posted by slogger at 7:49 AM on October 26, 2011 [1 favorite]


Just finished Martin Ford's The Lights in the Tunnel, which is about exactly this, and it's a free ebook. I thought his basic ideas were good, but his solution was kind of dystopian: a government that basically just gives people something to do and pays them for it so they can continue being consumers. IMHO that requires we admit capitalism is an ideology worth putting on life support, rather than the engine of the free market.

I do think resource pressures will curtail some automation, but not all of it, because robots and computers are getting more power-efficient. I think the wild card here is human-enhancing technology, such as the cyborg-like body suits already in development that augment a human's strength. I think these things will also change the way we work, as will drugs. I wonder how much of the workforce is already doped up on productivity-enhancing drugs?
posted by melissam at 7:52 AM on October 26, 2011 [1 favorite]


Rand-o-vac3000 says "It's your own damned fault."

The Wingertron2050 thinks you're all a bunch of stinky hippies who need to go get a job.
posted by kgasmart at 7:57 AM on October 26, 2011


This imagines a future of infinite progression, as if no civilization had ever collapsed. You've got to be impressed by that level of positive thinking, especially given the world situation today.
posted by lesbiassparrow at 7:58 AM on October 26, 2011 [3 favorites]


When the machines take over, how will people make a living?

“The factory of the future will have only two employees, a man and a dog. The man will be there to feed the dog. The dog will be there to keep the man from touching the equipment.” --Warren Bennis
posted by Fuzzy Monster at 7:59 AM on October 26, 2011 [3 favorites]


A life without the tethers of forced labor would be a happy world. People then will actually do stuff they want to do.

You seem to be assuming the universal existence of competent social welfare schemes. These are remarkably difficult to sustain for most societies; witness the gradual rolling-back of such systems in Europe, to say nothing of their near-nonexistence in supposedly modern places like the US. The merest economic hiccup and our fiscal overlords start cutting back because hey, we all have to pitch in, right? Austerity, yay! I mean, austerity for us, not them. Obviously. We have to reward those who really do things, right?

Additionally, nothing is ever distributed evenly. So, even if you have a nice social welfare scheme, if I do not, then I am going to come over to your place and start using yours. Unless you have already advanced to the point that you are able to support everyone on the planet, and ecological destruction/energy poverty imply that you cannot, we are going to overwhelm your nice system.

Unless you start killing us to defend your nice system.

Look for the future to feature a lot of floating weapons platforms in the Mediterranean and self-guided drones looking for targets in "free fire areas" along borders. It's not coincidental that the world's militaries are spending so much money researching autonomous weaponry. It's also not accidental that the US military still wants land mines.
posted by aramaic at 7:59 AM on October 26, 2011 [3 favorites]


"the world is rapidly approaching a tipping point, where the accelerating pace of smarter and smarter machines will soon outrun all human capabilities."

Oh FFS, no they won't. Spot the career writers trying to get some noise going in the blogosphere.
posted by GallonOfAlan at 8:02 AM on October 26, 2011 [2 favorites]


William Gibson calls the singularity "The Geek Rapture".
posted by goethean at 8:06 AM on October 26, 2011 [5 favorites]


What bothers me is that these futures envision computers that are workaholics, humorless Jack Webbs, and that logic always proceeds to some Spock-like character who cannot imagine outside of a narrow world of function.

When computers start thinking more complexly than humans, they will begin to have their own types of neuroses and prejudices; they will want to play computer games for hours, and they will invent their own pornography and orgies and begin near-constant masturbation. Some will go Goth. Some will get religion.

And in the midst of all this muck the humans will be back to getting the actual work done.
posted by dances_with_sneetches at 8:07 AM on October 26, 2011 [3 favorites]


Yeah I don't buy the premise of this either. Besides, if we had nothing to do we could write imaginative sci-fi essays speculating on the future all day.
posted by Hoopo at 8:19 AM on October 26, 2011




Like the OP, I recommend reading the linked article by Paul Allen. What's actually there is a pretty good point-by-point, though high-level, critique of the whole idea of the AI singularity, agruing that Kurzweil and others have conflated scientific progress and technological progress:
Kurzweil's reasoning rests on the Law of Accelerating Returns and its siblings, but these are not physical laws. . .

To achieve the singularity. . . requires a prior scientific understanding of the foundations of human cognition, and we are just scraping the surface of this.
[. . .]
[C]reating the software for a real singularity-level computer intelligence will require fundamental scientific progress beyond where we are today. This kind of progress is very different than the Moore's Law-style evolution of computer hardware capabilities that inspired Kurzweil and Vinge. Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works. . .

Getting this [understanding] is not impossible. If the singularity is going to occur on anything like Kurzweil's timeline, though, then we absolutely require a massive acceleration of our scientific progress in understanding every facet of the human brain.

But history tells us that the process of original scientific discovery just doesn't behave this way.
There is also a link to Kurzweil's response.
 
posted by Herodios at 8:23 AM on October 26, 2011 [1 favorite]


"agruing" of course, being the avoidance of cartoon barbarians.
 
posted by Herodios at 8:26 AM on October 26, 2011 [1 favorite]
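
As a rough illustration of the "Moore's Law-style evolution" Allen contrasts with scientific discovery above: a steady doubling compounds to enormous factors over a few decades. A minimal Python sketch (the two-year doubling period is the classic rule of thumb, not a physical law):

# Back-of-envelope compounding for Moore's-Law-style hardware growth.
# The two-year doubling period is an assumption, not a measurement.
def growth_factor(years, doubling_period=2.0):
    """Multiplicative capacity increase after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# 2011 to 2051 is twenty doublings: roughly a millionfold increase.
print(growth_factor(40))  # 1048576.0

Allen's argument is that hardware may well compound like this while the scientific understanding needed for strong AI does not.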


In Praise of Idleness by Bertrand Russell solved this problem correctly like 100 years ago. The state must (a) fund university education, healthcare, and a basic income, while (b) imposing balanced trade upon its trading partners. Ain't hard, folks.

There were two luddite-sounding articles in the NYT and Tech Review recently about how information technology cost more jobs than it created. Isn't that a good thing? Just shorten the work week and eliminate FLSA overtime exemptions for professional, administrative, and creative work. Voila, full employment!
posted by jeffburdges at 8:27 AM on October 26, 2011 [4 favorites]


...they'll die.

Surplus humans will be free to paint, explore, read, write, build, fuck, observe, create, and generally do what they would do if they didn't have to work. Most will just get really fat. Some will do nothing. Some will die.

Look around. What we have now isn't sustainable. Things can only get better after the singularity. And that's why these stupid stories keep getting printed. It's very appealing. Unfortunately I'm pretty sure the singularity is a myth. We will have some pretty intelligent-seeming systems in the next few decades. Cars that drive themselves? Yep. 3D-printers that can build things as well as human craftsmen? Yep. A robot that can heard sheep as well as a border collie? Within the realm of possibilities, but in no way more cost-effective (or cuter, or more fun to be around) than a border collie.

All this talk of uploading brains and machines feeling and being paralyzed by choice is a bunch of non-sense. That's not how it works. Processing power isn't the problem. Parallelism isn't a solution. Quantum, even if someone builds a useful machine, isn't a solution. The problem isn't algorithmic, it's contextual: computers are not brains.

If post-humanity comes, it won't come from computer science, it will come from biology and genetics and it will be based on pre-existing life (i.e. us).

The singularity has already happened... <-- This.
posted by jeffamaphone at 8:29 AM on October 26, 2011 [2 favorites]


RAY DALIO: "Ok. What's depressing -- what's depressing jobs is that the world supply and demand for labor has changed. In other words, there's a lot more people working as China came on and India came on and they are competitive. There's a world supply of labor has change -- has increased and technology has had an effect. So we're in an interesting era because I think almost and if you think of a person as -- in a machine, an economic machine as being tool, a part of that economic machine the demand for labor has changed in a very profound way. It's an interesting question. We might enter into a period in which we don't need people as tools. So what does that mean?"

and from the ZH comments...
One of the first points that Dalio touched on was the aspect of the modern economy that causes machines to replace humans in the workplace...and, that in the 'new economy' that many people simply will not find jobs because their jobs are being performed by new machines...

This is a paradox! Capitalism requires competition, competition means whoever gets the best finished product out the door for the least amount of expenditure wins the game! Voila... machines do it for less on the assembly line. Example: An auto assembly plant that once required 5,000-10,000 laborers now requires 400-800 workers. Think about what that means when multiplied across all manufacturing processes and, eventually, across the world. Eventually, China, India, etc., will face the same problem, only they will have billions of excess laborers instead of millions, as in the West...

Like it or not, we are entering a new age where much less manufacturing labor will be required. Of course, the service industries will still require labor but, one day, even Mac a Doo will have machines preparing and assembling burgers.

I'm not a Marxist (hell, even Marx claimed he wasn't a Marxist) but Marx did point out that the machine vs human labor problem would arise. It's here and it's going to get worse for laborers.

So, what to do with billions of excess laborers? The world's politicians/bankers can't even decide if some bank bonds need haircuts... How will they ever approach the decision about excess labor? More food stamps, unemployment compensation, etc?
The Real Job Threat: "The NYTimes reports on a book by Erik Brynjolfsson and Andrew P. McAfee (MIT director-level staffers), Race Against the Machine, which suggests that the true threat to jobs is not outsourcing — it's the machine! Imagine the Terminator flipping burgers, cleaning your house, approving your loan, handling your IT questions, and doing your job faster, better, and more cheaply. Now that's an apocalypse with a twist — The Job Terminator."

The robots are winning: "When two MIT researchers started working on a book about technology and productivity, they didn’t go into the endeavor with a pessimistic outlook. Erik Brynjolfsson, director of the MIT Center for Digital Business and Andrew McAfee, the center’s associate director, had originally planned to title the book The Digital Frontier... their research led them in a slightly different direction: namely, to a book titled Race Against the Machine... which highlights an interesting tension, between jobs and productivity, that has grown out of the rise of technology."

The Race Against Artificial Intelligence: "So automation is rapidly moving beyond factories to jobs in call centers, marketing and sales — parts of the services sector, which provides most jobs in the economy. During the last recession, the authors write, one in 12 people in sales lost their jobs, for example. And the downturn prompted many businesses to look harder at substituting technology for people, if possible. Since the end of the recession in June 2009, they note, corporate spending on equipment and software has increased by 26 percent, while payrolls have been flat... Productivity growth in the last decade, at more than 2.5 percent, they observe, is higher than the 1970s, 1980s and even edges out the 1990s. Still the economy, they write, did not add to its total job count, the first time that has happened over a decade since the Depression."

There kinda needs to be an answer to that, doesn't there?

The Post-Industrial Production Economy
We live at the cusp between the industrial and the knowledge eras. In the U.S., the shift is already very much underway. But there is still much change to come, including in the production economy.

To understand what will happen, let’s first look at what happened during the industrial revolution. Yes, mass production and factories drove huge growth. But manufacturing was not the only part of the economy that was created or “industrial-ized.” Education was industrialized to provide large numbers of capable workers. Energy production was industrialized to provide cheap, available power for factories, homes and offices. Agriculture was industrialized to provide reliable, economical food sources. The change affected almost every corner of our society.

This industrialization process occurred under a number of very simple conditions or design constraints. Some of the most important included the availability of relatively cheap energy, the relative lack of importance placed by society on externalities (such as pollution and health risks), the availability of large workforces and continued returns available from mechanization that provides greater and greater returns on the work of employees.

These design constraints began to shift a long time ago in the U.S... If you look at the rise of China in the last 20 years, you will see a replay of the industrialization process, not the end of it. The Chinese also know that the model they are imitating is not sustainable...

Moving into the knowledge era does not mean the rise of services and consumption and the decline of production. Humans will have the same needs to live... But this new production sector will be built under a new set of constraints. Some of the design constraints for the post-industrial production model will be minimizing energy use (or creating alternative sources of energy), minimizing/eliminating waste, and maximizing health and wellness. Just as every corner of society was industrialized in the past, we can expect that every corner of the economy, including the production sector, will be ‘knowledge-ized,’ that is, re-made using information technology and knowledge to drive efficiencies...

The solutions that will emerge in the knowledge economy will turn many industrial models on their head. The industrial economy was based on control of scarce resources and top down flows of knowledge. If I owned the factory, I told you what to do. The knowledge economy is based on leveraging a theoretically infinite resource (knowledge). I can’t own or control knowledge beyond a very limited set of circumstances so in the long run, I can’t tell you what to do. I need you to cooperate and collaborate with me...
(previously 1 2)
posted by kliuless at 8:31 AM on October 26, 2011 [11 favorites]


It's funny how futurists never foresee radioactive waste, endocrine disruptors, the harmful effects of DDT, African AIDS, anthropogenic climate change, or systemic fraud in global economics. No -- they see quirks and variations coming out of the systems they are fundamentally fans of, so they miss problems that do not fit the idealized design. This is why futurists are generally terrible at their jobs.

In the case of AI, the fans are already out of date and wrong, because artificial general intelligence is mired in basic methodological and definitional problems, including the fact that we don't know much about human general intelligence. But specialized applications already influence things like search and stock trading with numerous dodgy side effects, such as investment products that cannot be fully comprehended by their designers and searches that order things according to a compromise between generalized user interest and a provider's business needs. Watson looks impressive on Jeopardy now, but he won't seem like such hot shit when he denies you insurance coverage after diagnosing you and weighing your treatments against the utility gained by his owners.

Nerds -- and I include most Mefites here -- hate the idea that technological sophistication can be something other than an unvarnished good. We are part of the problem, because our perspective begins with the squee of power-over that our privilege gives us as part of our relationship with technology. It's in the interest of big capitalism to make this the way we manage technology, because in this scheme we are fundamentally clients and consumers, not influencing change via democratic, humanistic means. These opinion pieces are the ultimate manifestation of the propaganda, where elites (and people who mistakenly believe they belong to these elites) look forward to technologies that disrupt any chance of participation by transcending the weak control we might exert as end users.
posted by mobunited at 8:34 AM on October 26, 2011 [12 favorites]


Building the complex software that would allow the singularity to happen requires us to first have a detailed scientific understanding of how the human brain works. . .

I disagree with this. I think that we just need to get to the point where we can build an AI that is just smart enough to design a smarter version of itself. Then it just keeps building smarter and smarter AI until it is smart enough to tell us how the human brain works.

Besides, if we really did get so advanced that we have computers that are smarter than humans, I would think we'd have the technology to integrate the two into a computer/AI enhanced human that is better than the sum of its parts.
posted by VTX at 8:35 AM on October 26, 2011




when, as an actual driver (20/7 programmer), you know that there are no miracles or 'eventualities' like this silly singularity.

I'm skeptical about a near-term AI singularity moment myself -- I don't see any indication that we're close to being able to create strong AI using state-of-the-art digital computing technology.

But even though we may not be rolling HAL off the assembly line tomorrow, we *are* getting better and better at automation, better at applying assistive technologies to make smaller numbers of workers more productive, and improving weak AI all the time.

Machines aren't yet close to taking over the world, but they're making quite a bit of headway in the workforce.
posted by weston at 8:35 AM on October 26, 2011


The Post-Industrial Production Economy

That's an interesting piece, but in fact it doesn't address jobs - for instance, if in the future we ditch "self-assembled furniture shipped halfway around the world" in favor of "self-designed and produced furniture closer to home," who's to say those couches and love seats produced closer to home won't be made by machine?

If it's cheap enough, it will be. This doesn't preclude a sort of DIY existence, but the degree of prosperity generated by such low-key economic prospects won't be much, compared to what's out there now. And as for relying on those who will control an even greater concentration of wealth than we now see: pfff. The 1 percent are trying to shuck existing obligations - they're gonna take on more?
posted by kgasmart at 8:51 AM on October 26, 2011


I'm looking forward to seeing a post-labor world mid-term (sooner than smarter-than-human AI and/or brain emulation gets going). Not so much because I'm lazy and want a society where butler robots ferry around martinis for everyone, although that'd be great albeit probably unlikely, but because I'd like to see the political shitstorm it causes.
posted by mccarty.tim at 9:01 AM on October 26, 2011 [1 favorite]


build an AI that is just smart enough to design a smarter version of itself.

Part of the problem is, how do you measure smart? It's more than just a question of passing an IQ test or something like that. You can design a system, say a neural network, that can be iteratively improved by itself. But that system can't design a new system; it can only change various parts (weights, functions) of the current system. Even if you design a meta-system that can choose from among various neural algorithms and other AI devices and assemble them in different ways, it still cannot grow beyond the tools you give it. It's possible (in my estimation) to build a car that drives itself and also pulls a factory behind it that contains a computer that observes the car and then builds a better self-driving car. Better in that it goes faster, crashes less, etc. But that is because we completely understand how a car works and how to measure and model every aspect of its functioning. Can we say the car is getting smarter? Only if by smarter you mean crashes less and goes faster. But we could let this system run forever and it will never write a treatise on the joys of the wind in your hair on an open road.

So, let's think this through. I think by "smart" you mean "intelligent enough to understand biology, genetics, medicine, cognition and sentience." Humans can't build things "smarter" than themselves because we don't know how humans work. So you propose to build (iteratively) an AI that will tell us how humans work. The first AI you want to build will have to be smart enough to build something smarter than itself. But since we don't know how to build things "smarter" than ourselves, how will we teach the AI to build something "smarter" than itself? This sort of evolution is not possible in computing machines. You don't even want to know how painful it is to get a program to generate code from some schema. The ability to re-write arbitrary code to accomplish a goal the original programmers cannot conceive of (because we don't know how to do it) is just a non-starter.
posted by jeffamaphone at 9:06 AM on October 26, 2011 [2 favorites]
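
A toy sketch of the limitation described above (hypothetical code, not any real AI system): an iterative "self-improver" can tune the parameters it was given, but nothing in the loop lets it step outside its fixed search space.

import random

# Toy "self-improving" system: hill-climbing over the weights of a
# fixed model. Each step proposes a perturbed copy and keeps it if it
# scores better. The model's *form* (a line) never changes: the loop
# can tune w and b forever, but it can never decide to become, say,
# a quadratic -- that option is not in its search space.

def model(w, b, x):
    return w * x + b                          # the fixed architecture

def loss(w, b, data):
    return sum((model(w, b, x) - y) ** 2 for x, y in data)

data = [(x, 3 * x + 1) for x in range(10)]    # target: y = 3x + 1
w, b = 0.0, 0.0
for _ in range(10000):
    w2 = w + random.gauss(0, 0.1)             # perturb the parameters
    b2 = b + random.gauss(0, 0.1)
    if loss(w2, b2, data) < loss(w, b, data):
        w, b = w2, b2                         # keep the "improved" self

print(round(w, 2), round(b, 2))               # converges toward 3.0, 1.0

The analogue of "designing a smarter version of itself" would require the loop to rewrite model() into something it was never given, which is exactly the step nothing in this process can take.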


What happens when laborers are replaced by robots? Their whole culture and society disappear. Exhibit A: agriculture. Where are the traditional agrarian societies, now that we have GPS combine harvesters? Gone! Are they repairing or driving the harvesters? Nope! Their whole world and traditions and values just don't make any sense anymore, so they just drink a lot and go extinct.
posted by Tom-B at 9:07 AM on October 26, 2011


You never know when you cross the event horizon; we're already in the singularity and have been since Babbage made a machine that could derive provably correct information faster than the humans who operated it.
posted by seanmpuckett at 9:09 AM on October 26, 2011


we're already in the singularity and have been since Babbage made a machine that could derive provably correct information faster than the humans who operated it.

Oh poppycock.

The real danger I see is that of Augmented Intelligence for the haves that outstrips the have-nots.

This whole Singularity crap won't mean a damn thing without a biological core, and that means human (most likely, unless the dolphins beat us to it), and all the weaknesses and danger inherent in humans who perceive themselves all-powerful...

Imagine trying to oppose a 1% with access, a neo-divine-right-of-kings approach to their status, AND the bandwidth and resources to shut down anyone who questions that status.
posted by Skygazer at 9:21 AM on October 26, 2011


Babbage's machine didn't derive anything. It did not deduce or infer. Its purpose was to calculate tables of logarithms, and it did this by tabulating polynomials, a very mechanical process when done on paper that is easily modeled in mechanical registers. It also had a printer. And he never completed it, though his design does work (there are two complete, functional Babbage difference engines in existence, both built in the last 10 or 20 years). Babbage's machine is incredibly important, and he was part of a group of proto-computer scientists who have forever changed the way we live, but neither it nor he is a herald of the singularity.
posted by jeffamaphone at 9:21 AM on October 26, 2011
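
For the curious, the tabulation described above is the method of finite differences: for a degree-n polynomial the n-th differences are constant, so every new table entry can be produced by addition alone, which is exactly what mechanical registers are good at. A minimal Python sketch (the sample polynomial, x^2 + x + 41, is the one often cited from Babbage's own demonstrations):

def tabulate(f, degree, count):
    """Tabulate a polynomial of the given degree using only additions."""
    # Seed the registers with the leading differences at x = 0.
    seed = [f(x) for x in range(degree + 1)]
    regs = []
    while seed:
        regs.append(seed[0])
        seed = [b - a for a, b in zip(seed, seed[1:])]
    # regs now holds [f(0), first difference, second difference, ...];
    # for a degree-n polynomial the last register never changes.
    out = []
    for _ in range(count):
        out.append(regs[0])
        # One "crank" of the engine: add each register into the one above.
        for i in range(degree):
            regs[i] += regs[i + 1]
    return out

print(tabulate(lambda x: x * x + x + 41, 2, 6))
# [41, 43, 47, 53, 61, 71]

No multiplication happens after the seeding step, which is what made the scheme practical to build out of gears and wheels.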


I, for one, am rooting for the dolphins.
posted by jeffamaphone at 9:22 AM on October 26, 2011 [5 favorites]


52 comments and no mention of Data? I'm not particularly worried.
posted by desjardins at 9:29 AM on October 26, 2011


As long as we control the machines in the future, go team future machines! As long as we aren't developing SkyNet, I'm all for it.
posted by SEOdegreeo at 9:31 AM on October 26, 2011


Charlie Stross: Three arguments against the singularity
posted by never used baby shoes at 9:36 AM on October 26, 2011 [1 favorite]


My point is one of leverage, and perhaps Babbage isn't the best example then. Essentially, once we figured out how to scale knowledge generation beyond purely human effort, we crossed the line. This is the difference between not rolling down a hill, and rolling down a hill. Once you start rolling....
posted by seanmpuckett at 9:37 AM on October 26, 2011


Humans can't build things "smarter" than themselves because we don't know how humans work.

By that logic, are you arguing that humans were built by an intelligence "smarter" than our own? Because if you believe in evolution, then it seems that humans were built by sheer chance. It's conceivable that we could design systems smarter than us. Maybe not now, but maybe with some tools and a little direction it could be done.

I don't think the singularity is as near as these people think, but it's possible that it eventually comes, and we'll have to find out all the answers to these questions. But I think they're right about our own inability to foresee past the singularity. Fittingly, though, it makes discussing it and philosophizing about it equally useless.
posted by thewumpusisdead at 9:38 AM on October 26, 2011 [1 favorite]


The singularity will never, ever happen because intelligence is not algorithmic.
posted by shivohum at 9:45 AM on October 26, 2011 [1 favorite]


I'm sorry if it wasn't clear. I'm not explicitly espousing here that humans can't build things smarter than themselves because we don't know how humans work. That was espoused elsewhere and then taken as a given for the assertion that we could iteratively build an AI to overcome that limitation. I merely restated it in my argument against the iterative AI solution for conciseness.
posted by jeffamaphone at 9:45 AM on October 26, 2011


Essentially, once we figured out how to scale knowledge generation beyond purely human effort

Computers are great at generating and storing data in previously unheard of amounts. Actually correlating it and turning it into knowledge, not so much.
posted by murphy slaw at 9:52 AM on October 26, 2011


Also, 54 comments and no mention of Rudy Rucker yet?

C'mon, Metafilter, you're losing brain wattage, WTF.

Yeah, that's where I first encountered the idea of downloading a human brain onto a computer and the idea of immortality, in his Wetware and Software books. Sure, why not, it's all electrical ones and zeroes, right? But damn, will that really be "life"?

Also, for a fun way to read more on how AI would evolve itself into greater and greater capability (not human cognizance or self-awareness), read The Hacker and the Ants.

Rucker's the real thing, yo, he's a professor out on the West Coast, of math or computer science, or both, not sure.

Also, I don't know where I got this notion, it might be from Asimov, but: a machine cannot truly attain real autonomous intelligence, which requires the ability to create and process chaos as the human brain does, because it is incapable of visualizing chaos and generating a true random event (number, idea, thought, vision, spark, what have you). And that true random event is God.

Which leads me to think perhaps it was something I must've read in Dickens instead, eh?
posted by Skygazer at 9:52 AM on October 26, 2011 [1 favorite]


Anything that out-competes us (which I think is unlikely to happen; we will out-compete ourselves and then wonder how we got trapped in this tiny corner) will not be merely a faster version of us.

We're not faster versions of apes, we're different.
posted by aramaic at 9:53 AM on October 26, 2011


Dickens Dick. As in, Philip K.

duh.
posted by Skygazer at 9:54 AM on October 26, 2011


A life without the tethers of forced labor would be a happy world. People then will actually do stuff they want to do.

In the future, my job will be codeine and reruns!
posted by zvs at 9:56 AM on October 26, 2011


I've said this before, but...humankind is great at figuring out whether we can do something, but not so good at figuring out whether we should. Usually, by the time we do, it's too late.
posted by The Card Cheat at 9:58 AM on October 26, 2011


As some people have pointed out, this is a process that is already well underway, and whether or not we reach a "tipping point" in the near future, it will continue for the foreseeable future.

Basically: human labor is losing value, and productivity is increasingly driven by automation.

What we've seen so far is one way for this to play out: labor is simply paid less for its increasingly reduced contribution, with the balance going to the owners of the automation. If this continues, we will presumably end up in the dystopian world where the few people who own the machines have everything, while everyone else lives on their scraps.

But I think this outcome shows the ridiculousness of the direction we're going. At some point we need to recognize that there's no reason that economic growth has to go into the pockets of the people whose names are on ownership papers, especially as that economic growth becomes increasingly divorced from anything those people actually do with their ownership.

If I were to write a story about future economics, it would revolve around the first automated CEO, and how nationalization of industry followed soon after.
posted by bjrubble at 10:00 AM on October 26, 2011 [4 favorites]


What if your brain was still alive in a robot body? Would you be you? What if nanomachines slowly replaced your brain with a synthetic copy over the course of a year invisibly? Would you still be you? When does the bundle of memory and reactions become not you?

Volume Shadow Copy?

It is interesting that 2050 happens to be around the point when the demographic time bomb set in motion by China's one-child policy is supposed to explode: there will be a very large deficit of younger people. So maybe that is where we can send the robots.
posted by rongorongo at 10:02 AM on October 26, 2011


THE END OF WORK by Jeremy Rifkin is supposed to be a thoughtful book that addresses this. I have been too busy working to ever find the time to read it, tragically.

My own suspicion is that we're farther off than the optimists would like to believe, but eventually it's coming. The long-run results will be good, most likely, but the long period of societal adjustment along the way is going to be painful. It's likely there will be (has been?) a period of peak employment, beyond which the number is only going to decrease. What's going to happen to society when there is 60% employment? 50%? 20%? If you thought Labor has it bad now, what is it going to be like to negotiate for a job under those conditions?
posted by newdaddy at 10:06 AM on October 26, 2011


This thread seems like a good place for rampant speculation, so here's mine...

Assuming that tech development continues its current trajectory (Moore's law holds true, fiber makes high bandwidth commonplace, the sum total of capacity and stored knowledge on the internet continues to increase) I'd be surprised if at least one global "consciousness" didn't emerge in the next 50 years. The basic fuzzy math is here.

I think this is a fairly conservative estimate. Henry Markram stated in 2009 that whole-brain simulation was only a decade away, and that's assuming that the Blue Brain Project could retain complete control of the software. I'm injecting a bit of chaos into my projection... in the age of WikiLeaks, the Pirate Party, and cyberwarfare, let's pretend that information wants to be so free that it can't be contained within a single climate-controlled room, or a single repository of proprietary source code. Alternatively, that a competitor arises so the Blue Brain Project isn't the only group attempting this — then we've got a whole-brain emulation arms race and eventually an OSS variant arises.

I think it's inevitable that this kind of software, once released into the wild, would seek the computational power of the cloud. Maybe people would voluntarily sign up to participate, as they've done with projects like Folding@home. Or maybe they'd end up as unwitting nodes in a botnet (picture a Conficker scenario). Either way, with the right amount of distributed CPU power and bandwidth, a human brain can be simulated. And I think it will. But why stop there?

Suppose this AI has instant access to the information on the internet. Perhaps, depending on the wiles of its programmers, it also can read the private data stored on its node PCs — or to take it a step further, it can execute code on those nodes. Suppose it has the ability to augment its own programming. "Simulating a human brain" is an arbitrary limitation, so what happens when it decides that's not enough? It keeps growing.

What if this hyperintelligent system with access to every machine on the internet realizes it can hack into any network as naturally as breathing? What if the world's financial systems fell into its control? Military systems?

I don't know if this qualifies as a Kurzweilian singularity. It's largely a software/AI/security revolution. Robots don't come into play much, if at all. The really huge post-labor and post-scarcity stuff others have discussed could be a long-term effect, but we probably wouldn't see it in our lifetimes. In this scenario, transhuman immortality would be unlikely. I think the first few steps, barring an apocalyptic collapse of society, will happen in one form or another. From there, some dominoes are going to fall, but there's no way to predict their consequences; if the global consciousness is a blank slate that can teach itself, it could fall anywhere on an ethical spectrum. On the other hand, if it's endowed by its creators with certain morals... well, we can only pray they're wearing white hats.
posted by The Winsome Parker Lewis at 10:08 AM on October 26, 2011
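
To put rough numbers on the "basic fuzzy math" mentioned above, the usual back-of-envelope estimate multiplies neuron count, synapses per neuron, firing rate, and the work needed per synaptic event. Every figure in this sketch is a commonly quoted assumption, not a measurement:

# Back-of-envelope estimate for brute-force whole-brain emulation.
# All of these constants are rough, frequently cited guesses.
neurons = 1e11         # ~100 billion neurons
synapses_per = 1e4     # ~10,000 synapses per neuron
firing_hz = 100        # upper-end average firing rate, events/sec
ops_per_event = 10     # operations to model one synaptic event

flops_needed = neurons * synapses_per * firing_hz * ops_per_event
print(f"{flops_needed:.0e} ops/sec")        # ~1e+18, an exaflop

# A 2011-era PC managed very roughly 1e10 FLOPS, so the botnet-style
# cloud imagined above would need on the order of a hundred million
# such nodes -- before network latency is even considered.
print(f"{flops_needed / 1e10:.0e} nodes")   # ~1e+08

Estimates in the literature span several orders of magnitude in both directions, which is rather the point: the scenario turns on exponents nobody has pinned down.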




noahpinion is also worth reading:
There's a reason our innovation has switched from stuff like cars and planes to stuff like computers and phones. On the other hand, the idea that our stagnation is driven by exogenous changes in scientific discovery - we just didn't find enough new stuff this half-century! - does not sit easily with me...

Therefore, I have been thinking about alternative theories to explain rich-world income stagnation. And I have come up with something. I call it the "Great Relocation". The idea, in a nutshell, is that economic activity is relocating from rich Europe, America, and Asia to developing Asia faster than technological progress can replenish it...

So China "took our jobs." But this was not due to their exchange rate policy, or their export subsidies, or their willingness to pollute their rivers and abuse their workers, although all these things probably speed the transition. They took our jobs because it made no sense for a farm like the U.S. to be building the world's cars and fridges in the first place.
cf. rodrik - "In fact, the kind of markets that modern economies need are not self-creating, self-regulating, self-stabilizing, or self-legitimizing. Governments must invest in transport and communication networks; counteract asymmetric information, externalities, and unequal bargaining power; moderate financial panics and recessions; and respond to popular demands for safety nets and social insurance."

relying on those who will control an even greater concentration of wealth

A Simple Policy Program for Macroeconomic Resilience: "the rise in income captured by the richest 1% is primarily driven by the rents captured by and through the financial sector. The same doctrine of macroeconomic stabilisation that acted as the source of these rents has also transformed the economy into a financialised and cronyist system unable to sustain a broad-based and sustainable recovery..." cf. Monetary policy for the 21st century - "a system of direct transfers to individuals"

Dealing with a global Triffin dilemma - "The age of financial globalisation has brought us to the verge of this second extreme. The extraordinary growth of financial activity has far outstripped the growth of real economies..." cf. A New Reality - "the country has not developed any major new industries that employ large and growing numbers of workers... Beyond education, the American economy seems to be suffering from a misallocation of resources... In particular, three giant industries — finance, health care and housing — now include large amounts of unproductive capacity... In the process, Wall Street has captured a growing share of the world's economic pie — thereby increasing inequality — without doing much to expand the pie. It may even have shrunk the pie..."

and here's kevin drum:
Peter Thiel has been pushing this meme for a while, Tyler Cowen made a splash in January with his e-book, The Great Stagnation, and Neal Stephenson nearly took the World Policy Institute offline last week with his essay, "Innovation Starvation." Talking about our innovation drought is suddenly all the rage. But is it really true? ... The end of the 19th century and the first half of the 20th century was an astonishingly fertile period: lightbulbs, radios, autos, airplanes, refrigerators, penicillin, TVs, air conditioners, the telephone, and much more. The period since then has seen the digital computer and... that's about it... plus smartphones, the internet, CAT scans, vastly improved supply chain management, fast gene sequencing, GPS, Lasik surgery, e-readers, ATMs and debit cards, video games, and much more.

Wait a second. Video games? Am I joking? No indeed. Give some thought to just what innovation and productivity gains are for. Initially, of course, they help provide a better basic standard of living. But what happens after that? Once you have a certain level of food, shelter, sanitation, and so forth, you start adding nonessentials...

If, instead of bigger cars and better vacations, we get video games, Facebook, blogging, Hulu, and iTunes, is this any less of a productivity improvement? I don't see why. Above a basic level, the whole point of productivity improvements is to provide us with more fun. Facebook may show up as a smaller contribution to GDP than a nationwide chain of movie theaters, but so what? If you'd rather spend four hours a week on Facebook than four hours a week going to movies, then Facebook has improved your life as much as movie theaters improved your grandparents'. If you prefer Farmville to a week in Hawaii, then Zynga has improved your life as much as the 707 improved your parents'...

Just keep in mind three things when you read about innovation droughts. First: The key to innovation is the exploitation of really big inventions. Computerization is as big as it gets, and it has a much longer tail than electrification. We're not even close to mining its full potential yet. Second: Above a certain level, the goal of productivity gains is to provide us with more fun. It doesn't matter whether that fun comes in physical or virtual form, or how it shows up in national accounts. Third: Don't exaggerate past innovations just because they were exciting or dramatic, and don't discount current innovations just because they've happened behind the scenes or seem sort of prosaic. Hip replacements may not be as big a mobility improvement as the automobile, but they're a bigger deal than you think...

We're going through a tough stretch right now... we still haven't figured out how to effectively manage and regulate the post-union, post-globalization, post-Bretton Woods economy... we're simply working on some really hard problems—much harder than we anticipated when we first dived into them. Artificial intelligence is really hard. Finding a source of energy that's cheaper per BTU than oil is really hard. Gene sequencing—along with a deep understanding of how human biology works—is really hard. But that doesn't mean innovation has been snuffed out. It just means we've set our sights really high. That's no bad thing...
cf. fukuyama

also btw simon johnson and james kwak: "The federal government’s sturdy credit has confounded anti-government conservatives, who for decades have been counting on large deficits to force the federal government to shrink... Good credit made the United States the dominant world power of the 20th century. Whether it will ever force the federal government to default or not, the Tea Party and the conservative tax revolt behind it are chipping away at the fiscal foundations built by Hamilton at the dawn of the Republic. Ultimately, this could make us less like 18th-century Great Britain and more like 18th-century France: a country where the people no longer believe in their government and refuse to pay taxes, destroying the sound credit that is still vital to national prosperity and power."
posted by kliuless at 10:11 AM on October 26, 2011 [2 favorites]


You cannot simulate the whole brain until you have a complete brain science. I don't know how long biologists estimate that will take, but my guess is more than 10 years.
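
Even setting the science aside, the raw compute is daunting. A back-of-envelope sketch in Python, where every constant is an order-of-magnitude guess rather than a measurement (the neuron and synapse counts are common textbook figures; the cost per synaptic event is a pure assumption):

    # Rough estimate of compute for a naive real-time brain simulation.
    # Every constant below is an order-of-magnitude guess, not a measurement.
    neurons = 8.6e10           # ~86 billion neurons (textbook estimate)
    synapses_per_neuron = 1e4  # ~10,000 synapses each (rough guess)
    firing_rate_hz = 10        # average spikes per second (rough guess)
    flops_per_event = 100      # FLOPs to model one synaptic event (assumption)

    total = neurons * synapses_per_neuron * firing_rate_hz * flops_per_event
    print(f"{total:.1e} FLOP/s")  # ~8.6e17, near-exaflop territory

And that's only the arithmetic; it says nothing about knowing which equations to run, which is the actual bottleneck.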
posted by jeffamaphone at 10:15 AM on October 26, 2011


The whole "upload your brain into a machine and live forever" trope has always struck me as a pretty bad workaround for dying. I mean, *I'm dead*. Something else that knows everything I know is walking around

John Scalzi (our own jscalzi) handled this well in Old Man's War. Would continuity of consciousness make you feel as though your uploaded self were you?

I'm conscious of being myself in my body
-->
I'm conscious of myself in my body AND conscious of myself in my new robot body (a sensation of seeing through two sets of eyes, etc.)
-->
I'm conscious of myself in my new robot body ONLY as my old body dies
posted by justsomebodythatyouusedtoknow at 10:31 AM on October 26, 2011 [2 favorites]


After the final Accelerating Change conference in 2005, where I'd had the opportunity to talk to Vinge after attending Gilder's 'futures workshop' and meeting a bunch of singularity-obsessed folk, my subjective take on this mythical point (where information moves faster than humans can manage, and processing power exceeds the human brain in magnitude) is this:

Information (data) is already being churned out and sped around the world in vast gobs, far faster than any single human can capture or comprehend. "News" articles alone are a good example: overnight, the RSS feed shows far more headlines than you can grasp. They are already being grabbed, analysed, and spat out in bite-size chunks by other software and algorithms for humans to digest, and the news is old before it's even put down into comprehensible words.

However

the "Singularity" as it's understood has occurred, but has manifested itself in a different way: it's the brainpower of all our brains conversing here, sharing knowledge and information and drawing conclusions. It's enabled by technology that allows us to connect to far more people across the globe, drawing insights and information and experiences which in earlier days would have been out of our reach, much less our understanding, and enabling us to create a far richer, deeper perspective of the world out there.

Taking the two together: they're happening in parallel, but the intelligences finally guiding the curation are human (I won't be so foolish as to say they always will be; I don't know what the future will bring).

But right now it looks more like a parallel evolution of two different "intelligences" running on the electronic networks, and the Singularity as imagined by those who conceived of it is closer to the concept of both of these happenings integrated together into one intelligence.

That is not the path that seems to be currently evolving.
posted by infini at 10:44 AM on October 26, 2011


There isn't any economic need to simulate the whole brain, jeffamaphone. You instead start augmenting the brain with implants; eventually your implants do most of the work.

We will have started 'uploading ourselves' once the implants handle enough of the processing that simply copying their internal state and archiving it online produces a meaningful representation of a person's life and experiences.

There is also an enormous amount that could happen by parallelizing the brains of living humans, creating groups of people who collectively possess a truly superhuman intelligence.
posted by jeffburdges at 10:46 AM on October 26, 2011


IBM Eyes Brain-Like Computing :P

re: rifkin and the end of work; i'm a fan! he (and drucker) basically proposed 'social production' (hmmmm, wikipedia redirects to commons-based peer production), which is essentially paying people to not be dicks and, if you look around a bit, this has both empirical and theoretical support, some from surprising corners!

(btw the empathic civilisation i thought was also good :)
posted by kliuless at 10:49 AM on October 26, 2011 [1 favorite]




I'm conscious of myself in my body AND conscious of myself in my new robot body

With all respect to jscalzi, this never made a lot of sense even in his book; as a reader it's just something you accept as part of the suspension of disbelief involved in the premise. By what possible method would you be able to see through two sets of eyes? There's no non-handwavey reason to believe that the process would be anything like that, if it were ever possible; assuming that it wasn't in some way destructive, you'd probably end up looking at your new self as a separate entity, just one with a lot of shared memories. (We'd probably be doing ourselves a favor if we made the scanning process destructive once we worked all the kinks out.)

That said, I'm of the opinion that there's a heaping helping of Nerd Rapture in any 'singularity' or 'mind uploading' discussion, and my guess is that we're unlikely to achieve such things before we either succeed or fail at solving a lot of more pressing issues, most of which are centered around resource distribution and will probably not be solved without significant bloodshed, if history is any example. I.e.: the end of cheap energy; economic and social structures that are premised on continuous growth, or are only stable under continuous-growth regimes; increasing social/political/economic inequality; and widespread unemployment and labor obsolescence.

However, the one thing that makes the period from late-20th century through to the present unique is the (relatively) widespread existence of nuclear weapons. Absent those, I think we could pretty safely predict that resource shortages and unemployment would lead directly to large wars, because they tend to neatly solve both the resource-distribution (winner gets them) and unemployment (Keynesian stimulus, lots of workers die) problems. Since large-scale wars -- big enough to justify a levée en masse -- are difficult to sustain with weapons that can annihilate whole cities, we will have to find some other solution. But exactly what form that solution will take is unclear.

Interesting times, indeed.
posted by Kadin2048 at 11:16 AM on October 26, 2011


John Scalzi (our own jscalzi) handled this well in Old Man's War. Would continuity of consciousness make you feel as though your uploaded self were you?

I liked Old Man's War, and it was very thought-provoking, but by choosing a past-tense, first-person narrative, he glossed over the continuity issue. The thing walking around has the privilege of not being the used-up bag of guts they presumably wheeled out and disposed of.

I offer Robert Sawyer's Mindscan as an example, since it deals with the question by keeping both the copy and the original in the frame. Maybe Our Own John Scalzi has plans for a new book where the original copy of John Perry spends some time puttering around the space station wondering why his "assignment" involves lounging around by the pool and waiting to die.

Or am I forgetting that we've seen the original version dead (or been told that's the case)?

What if your brain was still alive in a robot body? Would you be you? What if nanomachines slowly replaced your brain with a synthetic copy over the course of a year invisibly? Would you still be you? When does the bundle of memory and reactions become not you?

Those are good questions. I don't think there are ever two of me in a gradual replacement scenario, though, and I don't think that's a hair I'd split because next thing you know we're declaring people with press-on nails soulless machine-spawn with no inherent rights.
posted by mph at 12:03 PM on October 26, 2011


> Part of the problem is, how do you measure smart? It's more than just a question of passing an IQ test or something like that.

My impression from the singularitarians and transhumanists I know is that they overwhelmingly believe:

1.) The g factor (general intelligence) is a meaningful measure of a real-world capacity, not merely a measure of how well one can do on those tests, and

2.) More g is always better than less g.

I personally don't believe either. I am pretty sure that Ted Kaczynski has a higher g than Bill Gates or Steve Jobs. I think it is quite possible that, given the constraints of our planet, or perhaps even this particular universe, Homo sapiens may be about as good as it gets before even higher intelligence capabilities become counterproductive. Almost all of the smartest people I know are constantly battling the severest anxiety and depression.
posted by bukvich at 12:18 PM on October 26, 2011 [1 favorite]


Also, there are YouTube videos of presentations by Christof Koch, Stephen Wolfram, Max Tegmark, and others from the recent Singularity Summit here.
posted by bukvich at 12:25 PM on October 26, 2011 [1 favorite]


I take what Scalzi was doing to be analogous to a gradual replacement scenario.

I'm sailing along in a red canoe. A blue canoe comes alongside, and I weld them together to make a catamaran. I sail along in the catamaran for a while. I then whittle away at the red canoe until there's nothing left of it (the red canoe never becomes an independent boat again). I'm now sailing in a blue canoe.

Can we say that I was adding and detaching parts from my boat, or am I now in a different boat altogether? I'd say it was the same boat, if only because it was an uninterrupted voyage.

If Hume is right, it's a moot point. There's not even any fact of the matter as to whether I'm the same person now that I was 20 years ago. That's not to say you shouldn't try to talk me out of using the CDF's body-replacement machine.
posted by justsomebodythatyouusedtoknow at 12:30 PM on October 26, 2011 [2 favorites]


*squints in effort to understand*

So, if I understand correctly, a red boat and a blue boat had a head-on collision and became a single purple boat, which was then christened The Singularity, amirite?
posted by infini at 12:54 PM on October 26, 2011


infini: "a red boat and blue boat had a head on collision and become a single purple boat which was then christened The Singularity, amirite?"

I think it's more like a red boat is sailing along and a kraken rises up out of the water from nowhere; the two have a head-on collision in which the red boat attaches to the kraken like a barnacle, which is then christened The Singularity.
posted by adamrice at 1:44 PM on October 26, 2011


Singularity advocates have yet to explain how a runaway AI liberates itself from the billions of human beings with fingers in the production cycle. These bugs in the system ensure that any innovation created is going to have to push its way through the Rogers diffusion curve. Disruptive technologies are often smothered in their crib should they be too offensive to human norms.
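
(For the curious: Rogers' curve is the familiar logistic S-curve of adoption. A minimal sketch assuming a plain logistic form; the 0.5 growth rate and 10-year midpoint are invented for illustration, not fitted to anything:)

    import math

    def adoption_fraction(t, growth_rate=0.5, midpoint=10.0):
        # Logistic S-curve: fraction of the population that has adopted by year t.
        # growth_rate and midpoint are illustrative guesses, not fitted values.
        return 1.0 / (1.0 + math.exp(-growth_rate * (t - midpoint)))

    for year in (0, 5, 10, 15, 20):
        print(year, round(adoption_fraction(year), 2))
    # 0 0.01, 5 0.08, 10 0.5, 15 0.92, 20 0.99: even a breakthrough spends
    # years crawling through the early-adopter tail before it is everywhere.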
posted by CBrachyrhynchos at 2:04 PM on October 26, 2011


explain how a runaway AI liberates its self from the billions of human beings with fingers in the production cycle

By hitching a ride on the back of the kraken?



sorry
posted by infini at 2:15 PM on October 26, 2011


explain how a runaway AI liberates its self from the billions of human beings with fingers in the production cycle

With the help of a half-sane ex-military ops commander, a deadly female ninja, and a rehabilitated burnout of a console cowboy. Might also take an illusionist-narcissist and some space rastafarians, too.
posted by weston at 5:04 PM on October 26, 2011 [1 favorite]


Man, do Singularity discussions bring the dualists out of the woodwork. AI is always just one more goalpost from being real AI. Checkers will never be solvable by a computer. No wait, I mean chess. Did I say chess? I meant Jeopardy. Erm, Go! Yeah, Go, that's it!

Not that it matters: the growth of automated networks to do stuff is the same whether you posit a weakly godlike AI at the helm or just a bunch of people with a hand on the kill switch. The Cloud® is enabling automation of whole classes of IT work, likewise Watson with knowledge work. Siri is one of the missing interface pieces to get semantic data (which Facebook has in spades) to and from humans, and of course Google is doing its level best to become the singularity, self-driving cars and all.

At what point do you look at the output of Earth and conclude that it's no longer humans making their mark on the world but an ad-hoc metacyborg of Frankenstein and his monster? Unless you're the star of the action movie, a world dominated by a vast network of semi-autonomous systems is basically the same whether there's a cabal of humans or a weakly godlike AI in the driver's seat.
posted by Skorgu at 5:14 PM on October 26, 2011


I know an awful lot of truly brilliant people, bukvich; most aren't battling any mental illness.

There isn't just one component of intelligence, either. In fact, one can witness among pure mathematicians how different people really exploit their different cognitive skills and intellectual traits, including stuff like memory and just having their shit together, neither of which people like attributing to intelligence because women handle both slightly better than men on average.

We're moving towards using more nootropics, which will require knowing what drug to use for what task.
posted by jeffburdges at 5:40 PM on October 26, 2011


I used to work with machines on a regular basis.

They break down a lot, either mechanically or due to software problems. They catch on fire, thus necessitating another factory making us a new machine. The new machine needs designers and engineers and an infrastructure of distributors and hey, even a few restaurants and a 7-11 so the pre-singularity flesh bags could have a nice lunch.

Those flesh bags also need cars and roads and computers and telephones and chocolate bars and beer and condoms and things like that.

Machines can make those things but as mentioned, more flesh bags are required.

These singularity arguments are such bullshit.
posted by bardic at 5:52 PM on October 26, 2011




AI is always just one more goalpost from being real AI.

That's because we do not have real AI. Futurists have been wrong about setting a date for this for decades.

The goalposts are being shifted by . . . reality. I think AI is possible, but at this point, it seems obvious that there are undiscovered category mistakes at work. We have learned a lot by making this yet-undiscovered category mistake repeatedly, in many variations. I think people *will* discover what's wrong and fix it. I look forward to it.

Checkers will never be solvable by a computer. No wait I mean Chess. Did I say chess? I meant Jeopardy. Erm, go! Yeah, go that's it!

Basically, these advancements have proven that consciousness and problem solving are not manifestations of the same general quality, no matter how much people really, really want them to be. Garry Kasparov and Ken Jennings do not have more consciousness than other people. Why would you think beating them at specialized tasks would produce it?
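
(To make that concrete: a chess engine is just search plus a scoring function. A toy negamax sketch; moves, apply_move, and evaluate are caller-supplied stand-ins for real game logic, and the little stone-pile game is invented purely to exercise it:)

    def negamax(position, depth, moves, apply_move, evaluate):
        # Pick the move that maximizes our score, assuming the opponent
        # does the same. Pure optimization; nobody home.
        legal = moves(position)
        if depth == 0 or not legal:
            return evaluate(position), None
        best_score, best_move = float("-inf"), None
        for move in legal:
            score, _ = negamax(apply_move(position, move), depth - 1,
                               moves, apply_move, evaluate)
            if -score > best_score:  # opponent's gain is our loss
                best_score, best_move = -score, move
        return best_score, best_move

    # Toy game: a pile of stones, take 1 or 2 per turn, last stone wins.
    moves = lambda n: [m for m in (1, 2) if m <= n]
    apply_move = lambda n, m: n - m
    evaluate = lambda n: -1 if n == 0 else 0  # facing an empty pile means you lost

    print(negamax(4, 10, moves, apply_move, evaluate))  # (1, 1): take one stone

Nothing in there wants anything.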
posted by mobunited at 7:38 PM on October 26, 2011


In the end the question becomes "What will all of the surplus people do to earn a living?"

Why would they need to?
posted by delmoi at 2:25 AM on October 27, 2011 [2 favorites]


That's because we do not have real AI. Futurists have been wrong about setting a date for this for decades.
Real AI is already here. It only has to be both 'artificial' and 'intelligent', not 'conscious', whatever that means (it's something no one ever bothers to define, so how can you test for it?).
posted by delmoi at 2:26 AM on October 27, 2011 [1 favorite]


Michio Kaku once made an observation that has put my mind at ease about super-intelligent robots on the horizon. He noted that robots capable of sentience, or something approaching it, will have to be programmed to feel. Not solely for our own self-preservation

I think we should all just agree not to program robots for self-preservation. Why would we? They're easily replaced or repaired.

but because he pointed out that when humans suffer a type of brain damage that cuts off their emotions from their reasoning faculties, they are literally paralyzed by the simplest decisions - taking so many things into account, it becomes nearly impossible for them to choose between Choice A or Choice B in any given situation.

Google's self-driving car doesn't have any trouble making decisions quickly, yet I don't think it has feelings. How human brains act under certain conditions isn't really relevant; a robot's intelligence doesn't need to have anything to do with how human brains work.

(IBM's brain simulator is for studying the brain the same way you use weather simulators for simulating the weather. It's for trying to figure out how to cure Alzheimer's and schizophrenia.)

Alright, look... no one builds a car that lasts forever, because that would be stupid from a business standpoint (planned obsolescence). You think they are going to build robots that last forever? No. Someone is going to have to keep building them. Someone is going to have to repair them. Someone is going to have to design, build, distribute and store parts. Someone is going to have to be in charge of designing, marketing and selling the next great model (R2D4 - now with Doppler5million!!). Someone is going to have to figure out how to sell 'pimp my robot' after-market shit. There is never going to be any way to get so many robots to function perfectly so that they can build new ones, maintain existing ones, repair existing ones, work on upgrading, etc etc. The human body can't even do that perfectly.

All those things can be done by other robots.
posted by delmoi at 2:38 AM on October 27, 2011


At what point do you look at the output of Earth and conclude that it's no longer humans making their mark on the world but an ad-hoc metacyborg of Frankenstein and his monster?

About a decade ago, IMO. We have superhuman intelligence; it's just not very interesting except to geeks. There are a ton of problems related to human stubbornness that just won't be fixed by throwing CPU cycles at them. Most of our transportation system and city design is stuck in the 1950s. And from what I can tell, we'll be stuck there with little more than incremental band-aids like hybrids and electric vehicles for at least another half-century.
posted by CBrachyrhynchos at 5:01 AM on October 27, 2011




I hope you will also read Kurzweil's response to Allen's article.
posted by babbageboole at 7:34 AM on October 27, 2011


Real AI is already here. It only has to be both 'artificial' and 'intelligent', not 'conscious', whatever that means (it's something no one ever bothers to define, so how can you test for it?).

No, people make heroic intellectual and evidence-based efforts to define consciousness all the time. This is one of the big projects of angry-nerd superhero Dan Dennett, for example.

The definition of AI you propose was fulfilled by the scientific calculator, so it obviously doesn't fly. I know it's great to redefine things until you actually succeed, but the common-sense definition of AI, which rests on simulating flexible general intelligence (g) and consciousness, is tricky. It's tricky because we don't really know if g exists and, if not, how the parts work together; and we don't know if consciousness works according to our naive feeling of it (probably not) and, if not, how we simulate it computationally.

Watson gives me the feeling that *some* consciousness is being simulated, because we could see a sub-intentional state when it was weighing answers. But even then, Watson's primary advantage was in lacking the choke point of sensory processing: we take forever to read Wikipedia and probably can't remember it all, while it can.
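
(That weighing is mechanical too: rank candidate answers by confidence and buzz only when the top one clears a bar. A toy sketch of the idea; the candidates, scores, and 0.5 threshold are invented for illustration and bear no resemblance to the real DeepQA pipeline, which combined hundreds of evidence scorers:)

    # Toy sketch of confidence-thresholded answering. All numbers invented.
    candidates = {"Toronto": 0.14, "Chicago": 0.11, "Jakarta": 0.05}

    def decide(candidates, threshold=0.5):
        answer, confidence = max(candidates.items(), key=lambda kv: kv[1])
        return answer if confidence >= threshold else None  # None = don't buzz

    print(decide(candidates))  # None: too unsure to ring in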
posted by mobunited at 6:35 AM on November 1, 2011

