One for the Copper Tops.
September 29, 2004 5:16 AM

Singularity, The. A black hole in the Extropian worldview whose gravity is so intense that no light can be shed on what lies beyond it. "Popular Science" talks about The Singularity, and asks "Is Science Fiction about to go blind?" Also, see previously, here and here.
posted by seanyboy (43 comments total)
 


Sterling gave an insightful and occasionally rather witty rant on the subject at a Long Now seminar. His key point (at least, my key takeaway) was that singularity theory assumes that a lot of people and institutions would simply let a singularity happen. (Non-trivial secondary point: No one really clearly defines what the hell the singularity is.)
posted by lodurr at 6:28 AM on September 29, 2004


Lodurr - I typed out the following and then hit the "review" button and noticed your comment. I'll read the Sterling piece. I think Sterling and I are thinking along similar lines.

_____________________

"Popular Science" on "The Singularity" - as refracted through the writing of Doctorow and Stross ? Is this a trick post ?

Gee whiz bang! - "One plot device that turns up frequently in Stross and Doctorow’s stories is mind uploading"

Certain professionals have - for over a hundred years now - plumbed the ramifications of the mind/body problem, and "mind uploading" might be regarded by many professional philosophers, eyes rolling, as quaintly Paleolithic. Yet perhaps not - here it comes! (maybe)

Beyond that minor point though, these guys are cute for the extent to which they seem unaware of that which lies beyond their purview - which they seem to view as expansive.

But, they may figure it out one day - if only they can wrest their attention away from their gadgets, for this singularity of focus renders them amphetamine-charged adolescents.

The Singularity? Yes, I believe it can be. Is it probable? Well, there might be a few little speed bumps on the fast track, and somehow, much of the surrounding discussion reeks to me of a 19th Century reductionism which much of the scientific community - but perhaps not all sci-fi writers - has repudiated.

If I had possessed the proper pedigree with which to do so, I would have posted a certain question - as "manifesto" - several years ago on Edge. Well, it showed up there anyway, in a certain form and a bit later, in that estimable publication. My question had to do with the ability to integrate speculation about "The Singularity" with awareness of a number of existing empirical realities and the currently heavily discounted chunk of real estate called, variously, "anomaly", "The Numinous", "fraud", "Opium of the Masses", and "cultural diversity", where "Spirituality" and "Religion" struggle to pay the rent.

Perhaps sci fi writers could be hewn into two rough camps: those who consider the future through the trajectory of complex systems, of meta-economies encompassing - at least - culture, politics, religion, and the natural world (which we meatspace beings might do well to notice now).......and the other camp, where the future is colored by the rosy and gunmetal hues of high technology and neat gadgets. I'm painting a caricature for clarity, yes, but there it is.
posted by troutfishing at 6:53 AM on September 29, 2004


I should have mentioned that the Sterling piece is a 70MB MP3 or OGG file... sorry...
posted by lodurr at 6:55 AM on September 29, 2004


Somebody (Lafferty?) once coined a phrase I've been fond of for a number of years: "Don't rely on things that can be switched off."

"Mind uploading" is interesting to me in that such scenarios tend to presuppose that the "mind" is then free to roam some kind of cyberscape wherein it makes its own reality. Gibson put some interesting limits on that in Count Zero, but then essentially rescinded them in Mona Lisa Overdrive.

I still think Poul Anderson gave one of the most plausible "mind upload" scenarios in his old novella "Call Me Joe". "Joe" is a crippled astronaut who controls a bio-engineered centauroid being on the surface of Jupiter. He spends more and more time in-link with the beast, to the dismay of his doctor; one day, he dies, and they're all astonished when the beast gets up the next day and behaves just as though Joe were still controlling it. "I am Joe," he explains, unhelpfully. Anderson was always pretty hard-edged, so there's no mystical connection presumed there; it was really just another ramification of the "transporter problem" to him. The beast became Joe; Joe died; the beast is Joe, and is alive. Those are not contradictions. There was no soul transferred.

What's really great about that story is that it plays to the strength of the story as thought-experiment. It lays out the mechanism as the elements of the plot, without going into a lot of detail on theory or explanation.

I think it's a really plausible case for the first "mind upload", but then, I tend to think the mind is something other than what most people think it is. I tend to think that the "thing" we identify as our mind is merely an epiphenomenon of our existence -- of our brains functioning in the world. Our consciousness ain't us, in other words. Where this gets *really* interesting for me is when I start to imagine "partial upload" scenarios. What if you start to rely on, say, extra memory? Or extra processing power, for precise mathematics or something. And then what if that's turned off?

I haven't read the singularity fiction, but I've read the high-level stuff, and the problem remains that it's high-level: There aren't people or their individual "will" (read: tendency to do messy things like make decisions and take actions) involved in these scenarios. There aren't messy counter-assumptions, e.g., that someone might build a sentient machine that (in accordance with Lafferty's other law, "Never make anything you can't unmake") could actually be turned off. No good designer of industrial machinery would ever forget the "kill" switch.
posted by lodurr at 7:12 AM on September 29, 2004


Perhaps it's just me, but I'm wondering if the singularity has become an emerging secular religion. It seems to have many of the characteristics of such a religion. Proponents of the singularity argue that they are the first to really explore the idea. Like new-age gurus, we have overstated claims of miracles such as mind uploading, and of holocausts such as grey goo, in which the possibilities from one branch of science lead to a willful ignorance of the limitations imposed by another. And then, there is this apocalyptic vision of a future that is inevitable but must be prepared for.

I think that most of the claims of singularity are highly overstated. People have been predicting that emergent technologies will fundamentally reshape the human landscape for centuries. Perhaps one of the reasons why the singularity prophets believe themselves to be singular, is because they themselves are blind to an extended history of science fiction utopias and dystopias that were rejected as being too naive. Everything will be different with nuclear energy. Everything will be different with space travel. Everything will be different with doomsday devices. Everything will be different with aviation. Everything will be different with a cheap route to Asia.

Secondly, many of the specific predictions cited by singularity advocates skim over the very real limitations that make them improbable. Treating human minds and computer programs as equivalent glosses over the basic problem that even some very simple real-world problems can't be solved by a computer in any realistic time frame. While I have no doubts that computers will get increasingly smart, I'm profoundly skeptical that it is possible to build a virtual model of a specific human mind.
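To make that intractability concrete with the classic textbook case (a Python sketch; the travelling-salesman problem stands in here for the general point, and the billion-checks-per-second machine is an assumed figure):

```python
# Brute-force travelling salesman: the number of distinct round trips
# through n cities is (n-1)!/2, which outruns any realistic hardware.
import math

CHECKS_PER_SECOND = 1e9  # a generously fast (assumed) machine

for cities in (10, 15, 20, 25):
    tours = math.factorial(cities - 1) // 2
    seconds = tours / CHECKS_PER_SECOND
    print(f"{cities:2d} cities: {tours:.2e} tours, ~{seconds:.2e} s to check them all")
```

At 25 cities the brute-force search already runs to roughly ten million years on that machine; cleverer algorithms push the wall back, but the combinatorial blowup itself is the point.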

Grey goo advocates perpetually ignore some of the basic lessons from evolutionary approaches to biochemistry. The most critical one seems to apply to any information-using system, from bacteria to the Ford Motor Company: efficiency and flexibility are competing goals. The demands of mere survival, much less competition, limit the amount of informational dead weight you can carry around.

By all means, I do think that things are going to change in the future. But I don't buy the claim that things are going to change in the future to such a degree that we can't map some form of continuity before and after.
posted by KirkJobSluder at 7:28 AM on September 29, 2004


People have been predicting that emergent technologies will fundamentally reshape the human landscape for centuries.

That they do. And they do. Reshape the human landscape, that is. Just not in the ways, or to the extent, that are predicted.

So much of what you say is preaching to the choir, to me. E.g., I have long believed that AI is highly probable; but that AI which much resembles human intelligence is a mistaken enterprise. (To paraphrase Wittgenstein, if a machine could think, we wouldn't recognize it was happening.)
posted by lodurr at 7:53 AM on September 29, 2004


lodurr: Have you read Edelman's "Wider Than the Sky"? Among other things it makes a nice summary argument against epiphenomenalism.
posted by abcde at 9:52 AM on September 29, 2004


Mind uploading makes for interesting sci-fi plot twists that treat human beings like computer programs. On the other hand, it seems kinda like weighing a rat after it dies to get the mass of the soul. I'm not sure we will be able to FTP the totality of the self like a Creed MP3. Just my gut feeling.
posted by inksyndicate at 10:16 AM on September 29, 2004


I have long believed that AI is highly probable; but that AI which much resembles human intelligence is a mistaken enterprise.

Thus enters the paranormal.

(To paraphrase Wittgenstein, if a machine could think, we wouldn't recognize it was happening.)

If a person could think, we wouldn't recognize it happening. See Turing.

All this naysaying regarding the singularity just speaks to a poor or unshared definition. If it's defined as interfacing with machines (note: machine meaning the non-biological, not an AMD Athlon) to the point where the two are indistinguishable, then, extrapolating from current technological trends, yes, it is inevitable. The fact is, the "singularity" has been happening for thousands of years - every artifact designed or used by humans is an extension of themselves. The singularity idea is simply a drastic extension of this, incorporating undiscovered technology. And to claim that this future technology will be incapable of modeling the brain in any practical capacity is to invoke a dualistic concept.
posted by iamck at 10:20 AM on September 29, 2004


"....Perhaps it's just me, but I'm wondering if the singularity has become an emerging secular religion." - KirkJobSluder, I think that's correct, and I consider that trend to flow along with many currents of Cornucopianism.

Also, I think we're living already on one singularity threshold: the death of nature.

"What if you start to rely on, say, extra memory? Or extra processing power, for precise mathematics or something. And then what if that's turned off?" - Lodurr, you can do a personal test, of a sort, on that question : by drinking several bottles of Nyquil nightime cold medicine, which will temporarily shut down many of your higher brain functions. (note : NOT an especially pleasant or wise thing to do, I've read. But unlikey to be lethal).

You'll be quite stupid and - what's more - you won't really care.
posted by troutfishing at 10:20 AM on September 29, 2004


Singularity thought is entirely incompatible with the belief that the world is about to enter a new dark age. You're probably not going to get a whole lot of love for it here, seanyboy.
posted by darukaru at 10:56 AM on September 29, 2004


The most thoroughly and plausibly-imagined treatment of 'uploading' I'm aware of remains Greg Egan's Permutation City and Diaspora. Run, don't walk.

I couldn't disagree more with the thesis of the popsci article... the faster things change, the more subjects open themselves up to imagination, not fewer. Stories may obsolesce more quickly, but that's not necessarily a drawback (past views of a future which is already past are still interesting. Especially the ones which either come close to the mark or miss it completely.) There's always been a lazy branch of SF that's not really concerned with "predicting the future" so much as with using futuristic-sounding stuff as a more or less interchangeable host for more or less interchangeable adventure stories.

The idea of a singularity as Vinge describes it does ring true for me, but only in a way. Small individual groups of people probably wouldn't lose continuity within that group -- but they could easily lose the ability to comprehend what might be going on in other groups. (It could be argued that this is already happening.) So more of a loss of cohesion for society (or the species) as a whole, than for individuals.
posted by ook at 10:58 AM on September 29, 2004


I first read Alvin Toffler's Future Shock about 5 years ago, which I suppose is the first time I heard of the singularity. Anyone who has not read Ray Kurzweil's Age of Spiritual Machines or Bill Joy's Wired Magazine article should take a look at them. For one, I agree with the idea that the world of 2020 may be as different from today as today is from the stone age.
Kurzweil makes the following point which I will try and summarize:

The human brain contains between 10 and 100 billion neurons, each with thousands of dendrite connections to other cells. Neurons send messages to other cells between 4 and 20 times a second, giving you between 200 billion and 200 trillion possible computations each second, a computation being defined as a binary operation between a neuron sending a signal to one of its neighbors or not.
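For what it's worth, the quoted range checks out under one reading of those inputs (a sketch; every figure is the summary's own rough estimate, and the low bound only works if it counts neurons times firing rate without the thousandfold fan-out):

```python
# All inputs below are the comment's rough estimates, not measured values.
neurons = 10e9     # low-end neuron count
rate_hz = 20       # signalling events per second
fanout = 1_000     # dendritic connections per neuron (order of magnitude)

low = neurons * rate_hz            # 2e11 -> the "200 billion" figure
high = neurons * rate_hz * fanout  # 2e14 -> the "200 trillion" figure
print(f"{low:.0e} to {high:.0e} 'computations' per second")
```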

If one looks at Deep Blue, the supercomputer that beat Garry Kasparov at chess in the late 90's, one could say that it is the computational equivalent of a salamander, as far as possible computations a second are concerned.

Assuming that Moore's law holds, one can say that within the next 20 years a $1000 computer will have the computational equivalent of the human mind. 40 years from now, a $1000 computer will have the computational equivalence of every human mind on earth.
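The extrapolation is easy to redo, and worth redoing because it is so sensitive to the inputs (a sketch; the 18-month doubling time, the ~2e16 ops/sec brain, the ~1e10 ops/sec per $1000 of 2004 hardware, and the 6.4-billion population are all assumed round figures -- nudging any one of them moves the answer by a decade or more):

```python
import math

BRAIN_OPS = 2e16        # assumed "computations"/sec for one brain
START_OPS = 1e10        # assumed ops/sec per $1000 of hardware, ca. 2004
DOUBLING_YEARS = 1.5    # classic Moore's-law doubling period
PEOPLE = 6.4e9          # rough 2004 world population

one_brain = math.log2(BRAIN_OPS / START_OPS) * DOUBLING_YEARS
all_brains = math.log2(BRAIN_OPS * PEOPLE / START_OPS) * DOUBLING_YEARS
print(f"~{one_brain:.0f} years to one-brain parity per $1000")
print(f"~{all_brains:.0f} years to all-of-humanity parity")
```

With these particular figures the crossovers land decades later than the 20- and 40-year claims, which is exactly the kind of sensitivity the skeptics below point to.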

That being the starting point of Kurzweil's book, the theories that follow are very well thought out and, to me, entirely possible, and I encourage everyone to read it.

I have always understood the singularity to be a moment in time at which the rate of change is so fast that no one has time to adapt to any new idea before the next one comes roaring along. Imagine if all the technological advances of the past 2000 years happened within a few months. I know that is a far stretch but that is ultimately what we are looking at. No longer constrained by natural evolution, human bodies will be adapted and augmented with technology. Physical place already means less and less; eventually there will be a metaverse as real as this world. Already IBM is designing computers to fix themselves; the day that they design themselves is not far off.

I do believe that within 20 years life as we know it will be unrecognizable. I believe that if humanity can make it that far, we could all live forever.

***And I know this is way off topic, but I would like to mention December 22, 2012. I hope that none of you think me a kook for bringing this up, but this has been a pipe dream for me for some time now, that some new age of man would dawn at this time.
posted by daHIFI at 11:07 AM on September 29, 2004


iamck: And to claim that this future technology will be incapable of modeling the brain in any practical capacity is to invoke a dualistic concept.

That only goes to show that you actually didn't read my post. Instead, you saw the words, "mind", "computer" and "skeptical" in the same paragraph and assumed that I'm talking about some magic juju present in human minds.

My point has nothing to do with any form of dualism, but with a basic problem in what singularity advocates propose. The assumption is that increasing computing power will be sufficient to defeat any mathematical problem. This is shown false by Poincaré's answer to the N-body problem.

Take a single object. We can describe the motion of that object using a very simple equation: F = ma.

Take two objects. Now we have the added complication that an attractive force exists between those two objects. We can calculate the motion of the two objects using some very basic calculus, based on the fact that all of the force will be directed to the gravitational center of the system. This was what made Newton famous.

Take three objects. Here is where things go to shit. Newton couldn't find a general solution; Gauss failed at producing a general solution. It took Poincaré to figure out why. It's been known for 115 years that the 3-body problem is chaotic except for a handful of cases. Chaotic here means that the evolution of the system over long time frames is highly sensitive to initial starting conditions. Outside of these few select cases, tiny errors lead to radically different end results down the road. Computing the exact state of the system at t+5 years requires an effectively infinite number of calculations. You can approximate, but over time, the approximations become less and less accurate.
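You can watch that sensitivity directly in a few dozen lines (a sketch, not a research-grade integrator; the unit masses, starting positions, and time step below are arbitrary demo values): two copies of a planar three-body system that start one part in a billion apart typically drift into visibly different states.

```python
import math

G = 1.0  # gravitational constant in demo units

def accelerations(pos, masses):
    """Pairwise Newtonian gravity on each body."""
    acc = [[0.0, 0.0] for _ in pos]
    for i in range(len(pos)):
        for j in range(len(pos)):
            if i == j:
                continue
            dx = pos[j][0] - pos[i][0]
            dy = pos[j][1] - pos[i][1]
            r3 = (dx * dx + dy * dy) ** 1.5
            acc[i][0] += G * masses[j] * dx / r3
            acc[i][1] += G * masses[j] * dy / r3
    return acc

def step(pos, vel, masses, dt):
    """One leapfrog (kick-drift-kick) integration step."""
    acc = accelerations(pos, masses)
    vel = [[v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1]] for v, a in zip(vel, acc)]
    pos = [[p[0] + dt * v[0], p[1] + dt * v[1]] for p, v in zip(pos, vel)]
    acc = accelerations(pos, masses)
    vel = [[v[0] + 0.5 * dt * a[0], v[1] + 0.5 * dt * a[1]] for v, a in zip(vel, acc)]
    return pos, vel

masses = [1.0, 1.0, 1.0]
pos_a = [[-1.0, 0.0], [1.0, 0.0], [0.0, 0.5]]
vel_a = [[0.0, -0.5], [0.0, 0.5], [0.3, 0.0]]
# Second copy: body 0 nudged by one part in a billion.
pos_b = [[p[0] + (1e-9 if i == 0 else 0.0), p[1]] for i, p in enumerate(pos_a)]
vel_b = [v[:] for v in vel_a]

dt = 0.001
for n in range(1, 20001):
    pos_a, vel_a = step(pos_a, vel_a, masses, dt)
    pos_b, vel_b = step(pos_b, vel_b, masses, dt)
    if n % 5000 == 0:
        d = math.dist(pos_a[0], pos_b[0])
        print(f"t={n * dt:5.1f}  separation of body 0: {d:.3e}")
```

In a typical run the separation climbs by many orders of magnitude, which is exactly why long-range forecasts of such systems demand absurd initial precision.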

No dualism, no magic soul-related juju. Just simple facts about physics and mathematics.

People who argue for mind-uploading are working on a fundamental fallacy: that technology on its own solves problems. The problems faced by people modeling the behavior of complex systems, whether we are talking about genomes, ecologies, galaxies, the optics of human hair, or implementing a specific human mind in software, have less to do with how many teraflops are thrown at the problem than with basic mathematical theory. In order for it to be possible, we need a new mathematics, not new computer design.

Of course it is possible to model the brain to a useful capacity. I would say that we already do so, but I was quite careful to phrase my claim in a very specific way: "...I'm profoundly skeptical that it is possible to build a virtual model of a specific human mind."
posted by KirkJobSluder at 11:15 AM on September 29, 2004


daHIFI: The human brain contains between 10 and 100 billion neurons, each with thousands of dendrite connections to other cells. Neurons send messages to other cells between 4 and 20 times a second, giving you between 200 billion and 200 trillion possible computations each second, a computation being defined as a binary operation between a neuron sending a signal to one of its neighbors or not.

You see, this is one of the reasons why I believe that worshipers of the singularity are overstating their case. Humans do not have the biggest brains on the planet. The capacity for operations per second is not terribly relevant to the issue of developing a super-human intelligence. (Though I would argue computers are already super-human for some types of problems.) In fact, Deep Blue is an interesting example, because Deep Blue has been surpassed not by throwing more transistors at the problem, but by advancing the theory in that domain.

I have always understood the singularity to be a moment in time at which the rate of change is so fast that no one has time to adapt to any new idea before the next one comes roaring along. Imagine if all the technological advances of the past 2000 years happened within a few months.

And here is the other fallacy. For all that technological advance, there have been relatively few real paradigm shifts. The web is faster and more democratic than telegrams, newspapers, broadsheets, the pony express and Roman graffiti, but functionally the exact same thing. You can fly around the world in less than 24 hours, but it is still functionally the task of moving material, people and wealth from place to place. The pace has accelerated, but not the basic ways in which humans interact. If you transported Willie Shakespeare in the middle of his career into the 21st century, he'd spend a few years brushing up on contemporary English euphemisms for sex and start writing about mistaken identities of lovers on the internet.

The notion of technology advancing rapidly in only a few months is such a crock of shit that I don't even know where to begin on it. Technology does not just happen when some nerd implements a new idea. It is an economic transition.
posted by KirkJobSluder at 11:53 AM on September 29, 2004


"...I'm profoundly skeptical that it is possible to build a virtual model of a specific human mind."

But when you reduce the human brain to an object with parts, then in theory it is, and in practice it will be, possible to replicate it. As for defeating ANY mathematical problem, that claim seems a little much...

For all that technological advance, there have been relatively few real paradigm shifts.

So if I have the ease of access to a medium such as the internet, whereas my grandparents' medium is the television, there is no cultural divide created by this? Is this not a paradigm shift?
posted by iamck at 12:22 PM on September 29, 2004


I'm going to advance some ideas here that I'm not sure I believe, so bear with me.

When I was in 10th grade, my Western Civ teacher put up a recently compiled list of estimated IQs of intelligent people from history. From memory, Goethe made the top at 210; John Stuart Mill was the highest English-speaker with 200; Einstein was up there around 180, and Shakespeare was pretty high too in part owing to his enormous vocabulary.

And of course you all read Flowers for Algernon, where the IQ 68 moron undergoes a black boxeration that triples his IQ to precisely 204.

That's all well and good for chess problems - more iterations faster, better retention - but what is this good for in the real world? As a neurologist, my work is something like 1% cogitation and 99% perspiration. When was the last time you had to stop and say "Hard problem. Must assume Rodin pose! Thoughtneurons - ON! HULK SMASH difficult problems with sheer intellect!"

I don't know about you, but the difficult things in my life don't particularly yield to better cogitation. Online Scrabble does.

Without defining intelligence too explicitly, can we stipulate that intelligence is a property of human brains and that evolutionary forces caused it to arise? In that case, it probably evolved to be just as long as Abraham Lincoln said a man's legs should be - long enough to reach the ground.

What exactly are we going to do with a man (/machine/cyborg) whose intelligence is three times as long as it needs to be to reach the ground? Would it be another Goethe? (I think not - and would a Goethe be useful today?) Or would it be an interesting curiosity - good for solving chess problems, the way a man with 8 foot long legs is good for anatomical aberration studies?

Could it be that the people who make People Magazine's 50 smartest people list (or some ideal People Magazine that actually listed 50 very smart people) - could those people be as good as it gets?
posted by ikkyu2 at 12:28 PM on September 29, 2004


I agree with the assessment that this is just a secular religion, fueled by an unhealthy (and under-educated) optimism about technology.

I don't see any scientists I respect weighing in on this, just a bunch of sci-fi writers whose ideas, it seems, should remain in the world of fiction.

I'm not usually a fan of the "negative ninny" remark, but I think it's important with posts like this that general readers understand that these are not ideas with serious scientific credibility, but more akin to, yeah, a religion.
posted by vacapinta at 1:16 PM on September 29, 2004


In order for it to be possible, we need a new mathematics, not new computer design.

And since we haven't had a new theory in math or physics in, say, the last 15 minutes....

IMO, the future is unpredictable in any specific detail beyond more of the same. Pre-Newton there were no scientists working on Newtonian physics, and so forth. But trends in culture and science have an almost thermodynamic tendency to continue, and so faster computers, new forms of media, greater connectedness yet greater emotional separation will all probably dominate in the near term.

Quantum computing is probably less than five years from being useful. This morning's Mercury News had a letter to the editor stating that if Bush wins on 11/2, California ought to secede. The Magic Book is a technology that allows a user to look through a handheld viewer with a built-in camera and see 3-dimensional models as if they float above the book.

As always, the biggest changes will blindside the vast majority, intellectuals included.

On preview: Vacapinta, Vinge himself is a pretty well respected physicist, as are many other SF authors, though not Doctorow or Charlie.
posted by billsaysthis at 1:27 PM on September 29, 2004


iamck: But when you reduce the human brain to an object with parts,

Which is another fallacy in this case: reductionism. Again, take a look at the three-body problem. This is really a simple problem. Three objects. Each object has a mass, a position, a momentum vector, and two force vectors. From a reductionist view, this should be a trivial problem, the kind of problem that shouldn't even require a computer, just a slide rule and some scratch paper.

And yet, when you look at the three objects interacting with each other, this is a problem that can't be solved exactly in the general case by any computer. The best supercomputer models can only make rough approximations.

in theory it is, and in practice it will be, possible to replicate it.

Chaos theory suggests that if the system is chaotic, any attempt to create an accurate long-term model of the system will require a prohibitive quantity of calculations.
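A cheap way to see that claim in action (a sketch using the logistic map, the textbook chaotic system, standing in for the three-body problem): a starting error of one part in a billion roughly doubles every iteration, so each extra step of accurate prediction costs about one more bit of initial-condition precision.

```python
# Two runs of the fully chaotic logistic map (r = 4), starting 1e-9 apart.
x, y = 0.400000000, 0.400000001
for step in range(1, 41):
    x = 4.0 * x * (1.0 - x)
    y = 4.0 * y * (1.0 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```

By around step 30 the two "models" of the same system disagree at order 1 -- they have nothing to do with each other anymore.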

As for defeating ANY mathematical problem, that claim seems a little much.

Some problems simply can't be solved no matter how much computer power you throw at them. This has been known for more than 60 years.
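The 60-year-old result alluded to here is presumably Turing's 1936 halting theorem; the heart of the argument fits in a few lines (a sketch: halts() is a hypothetical oracle that provably cannot be written, which is the whole point).

```python
def halts(program, arg):
    """Hypothetical oracle: True iff program(arg) eventually halts.
    No correct, always-terminating version of this can exist."""
    raise NotImplementedError("provably impossible in general")

def contrary(program):
    # Do the opposite of whatever the oracle predicts about
    # 'program' run on its own source.
    if halts(program, program):
        while True:      # oracle said it halts -> loop forever
            pass
    return "halted"      # oracle said it loops -> halt at once

# Ask the oracle about contrary(contrary): whichever answer it gives,
# contrary does the opposite, so no such oracle can be correct. More
# computing power never touches this; it's a logical impossibility.
```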

So if I have the ease of access to a medium such as the internet, whereas my grandparents' medium is the television, there is no cultural divide created by this? Is this not a paradigm shift?

Well, there is the whole fuzzy definition of paradigm shift coming into play. My grandparents lived long enough to see the web, and were quite comfortable making the obvious analogies to their prior experience. The technorati like to talk down the people who were born in the first two decades of the 20th century. However, I would argue that the first half of the 20th century involved sweeping economic and political changes that make our bitching and whining about how the internet has changed everything look trivial. My grandparents were witness to the influenza pandemic and the polio epidemics, the first air war, mutually assured destruction as a government policy, mass adoption of radio and electricity, the first effective anti-bacterial drugs, and the Great Depression. Are we really so naive as to believe that the cultural divide between the old and the young exists because of yet another technological device?

But my point in terms of a paradigm shift is that the web is not functionally different from the Gutenberg Bible or Egyptian scarabs. The medium has changed but the function has not. Fundamentally, you are still talking about information encoded in some form to be read by someone other than the person doing the encoding.

Technology exists in an intimate dance with economics. Thus, the reason why the idea of technology advancing out of control in a few months is such utter twaddle. I don't see the basic economic landscape in which super-intelligent machines would operate undergoing a radical shift. Corn and Soy in Nebraska needs to be traded for tractors from Illinois. The BBC needs someone (or something) in Nigeria to get the local perspective on Virgin Atlantic's new venture there.

And that is the primary reason why I don't see the dreaded generalizable machine super-intelligence replacing human beings in the near future. I think that what we are more likely to see is what we have now, specialized super-intelligence.
posted by KirkJobSluder at 2:29 PM on September 29, 2004


If a person could think, we wouldn't recognize it happening. See Turing.

Hmmm.... it always seemed to me that Turing's point was precisely the opposite: We can see people think. Or as good as.

I had no idea this thread would spin so far, or I'd have been more precise in my descriptions. When I say that I think making a replication of the human mind is a mistaken enterprise, I mean something pretty precise. I spent a lot of time thinking through this, particularly in response to Searle's arguments back in the mid-'80s. Something rang false for me in both the "Chinese Room" (John Searle) and "Chinese Nation Mind" (Ned Block) arguments, and after thinking through it for a long time, it hit me: Neither Ned Block nor John Searle really understood the terms of their own arguments.

The "functional isomorphism" argument is trivially true, and more or less unassailable. That's the error that both of them make. If you say that the room or the "chinese nation mind" is functionally isomorphic with a human mind, then it by definition has mental states.

Similarly, the "right stuff" argument was also pretty unintersting, as far as I could see, because as part of the original arguments we stipulated functional isomorphism -- and all the "right stuff" argument did was more precisely stipulate that. So it was redundant, and its conclusions more or less mistaken. It was sufficiently redundant as to be dishonest, IMHO. All the "right stuff" really amounted to was that you didn't do what couldn't really be done (i.e., making a mind out of cans and string -- much as you really can't make an internal combustion engine out of cheese).

And yes, I'm aware that a Turing machine blah blah blah.... The point is, Block, Searle, et al. all assumed that wasn't sufficient, but never stated that assumption. I.e., they didn't really understand their own discipline. In a sense, all of 80s phil-o-mind AI was mostly just tenure-keeperish wanking -- a game played by an in-crowd of clever young old-boys, who later got rightly shown up when the "applied AI" crowd ran off with a bunch of their funding.

Here's the thing: If you stipulate that you are creating a mind that is functionally isomorphic with a human mind, then I'll stipulate that such a thing is broadly logically possible. I actually don't really care if it's feasible, because the first question that I have in followup is: Why in the hell would anyone want to do that?

Really: Why would you want to make a first synthetic mind that's functionally isomorphic with a human mind? What does it prove? What would be more interesting is to make a mind that's not like a human mind; that's where you'll "learn" (i.e., decide) what actually constitutes a mind. Arguments about epiphenomenalism, soul, consciousness, etc., will continue probably long after all the projects we discuss here today either happen or are rendered n/a.

One of the cleverer ideas about the Singularity is the idea of emergent machine intelligence. But they approach it in a really unimaginative way, AFAICS. They always want this machine "sentience" to be like human "sentience." Well, why would it be? Why should it be? And will a "sentient" machine even be conscious in ways that we can comprehend? (And again -- why would it be?)
posted by lodurr at 2:40 PM on September 29, 2004


billsaysthis: And since we haven't had a new theory in math or physics in, say, the last 15 minutes....

The problem is that the kinds of theories required would be revolutionary, because they would undermine a heck of a lot of the last 100 years of theory showing that some of these problems are intractable.

As always, the biggest changes will blindside the vast majority, intellectuals included.

Certainly. Nobody saw AIDS coming, and yet, once it steamrolls over the demographics of large chunks of Africa, it will probably shape international policy for the next three generations. I don't think anybody saw European contact with the Americas coming. Most people are blind to the possibility of another pandemic that kills 1 in 10 people.

But because of the intimate dance between technology and economics, I really don't see how a technological singularity can happen.

Vinge himself is a pretty well respected physicist...

Which is probably the core problem. I just buzzed through When Life Nearly Died, which has a great 2.5-page list of all the half-baked theories regarding why the dinosaurs died off. Almost all of the theories were proposed by outsiders to the field of paleobiology, who approached the subject with equal parts bald speculation and bombast while making critical errors. The basic moral of the story: scientists should tread with caution when speaking outside their field. Benton notes that there seems to be a hierarchy of scientists in which physicists at the top of the pile can make grand announcements while forgetting that we piddly little soft scientists might know more about a given domain.

Skinner stuck his foot in his mouth with political philosophy, Dawkins keeps trying to make an impact on a field that doesn't need him with memetics, and Vinge is talking about technology adoption and society.

So I'm looking over Vinge's footnotes, and I can't find even the basics on how technology interacts with human societies. No Cook & Brown, no Rogers, not even the highly fashionable Lave & Wenger. He makes two claims. First, the physical design of computer technology will soon surpass that of human brains (mmmm, brraaaains!). That is well documented and well argued (although, as I've noted, more transistors does not mean more intelligent).

But his other claim - OMG, THIS WILL TRANSFORM HUMAN SOCIETY AS WE KNOW IT IN WAYS THAT WE CAN'T PREDICT - is just sort of thrown out there without any support. In fact, we can predict what will happen. We have some pretty darn good theories about what will happen. These theories have been strongly tested.

Technologies that are not compatible with local cultures are rejected. This is true whether you are talking about teaching people to boil drinking water, or installing a corporate knowledge management database. Does this mean that a radical change is not possible? Not really, but it does mean that we should be thinking on the span of decades rather than months.

Also, it is my understanding that it is not yet clear as to whether quantum computing can solve these problems either.
posted by KirkJobSluder at 2:45 PM on September 29, 2004


Encyclopedia Galactica's definition of Singularity.
[1] A point where the known laws of mathematics or physics no longer apply

[2] A state impossible to predict or comprehend by those that have not attained it.

[3] A toposophic grade of creative problem-solving, incomprehensible to those that have not attained that state. See also, S (Singularity Level)
*note: embedded links at the site*
I find the Orion's Arm site a fun read, and the prospect of a Singularity intriguing.

posted by Trik at 2:48 PM on September 29, 2004


lodurr: Ugh, I should stop writing on this.

One of the cleverer ideas about the Singularity is the idea of emergent machine intelligence. But they approach it in a really unimaginative way, AFAICS. They always want this machine "sentience" to be like human "sentience." Well, why would it be? Why should it be? And will a "sentient" machine even be conscious in ways that we can comprehend? (And again -- why would it be?)

One of the things that always baffles me is the notion that an artificial intelligence would be concerned with humans at all, or even aware of humans. For example, I can imagine the hyper-intelligent router of the future living on the internet. Its business is to trade packets of information with other entities like itself. It might even have an "emotional" life, joyfully swapping code with routers it likes, grimly going to war against a compromised node flooding the net with poison packets, holding a grudge against that site when the node is fixed. But it might not have any concept that the things it trades have any meaning other than some sort of economic value against other packets.
posted by KirkJobSluder at 2:58 PM on September 29, 2004


In fact, we can predict what will happen. We have some pretty darn good theories about what will happen. These theories have been strongly tested.

We have good, tested theories about how human societies will interact with superhuman intelligences?

But they approach it in a really unimaginative way, AFAICS. They always want this machine "sentience" to be like human "sentience."

You might try Greg Bear's books Queen of Angels and / (Slant in some markets), which have not-very-human AI's. For instance, Jill (the AI in QoA) is capable of perfectly modeling her own mental processes in real-time, which causes hijinks to ensue. Another AI is revealed to be of rather unorthodox construction, leading to peculiar blind spots mixed with, I guess, hyperacuity spots.
posted by ROU_Xenophobe at 3:02 PM on September 29, 2004


Gnnh... things can be technically possible without being economically viable. A lot of scientific speculation falls flat because it assumes that cutting edge developments will be produced en masse and disseminated worldwide. It doesn't take into account things like production costs, or the need for specialist maintenance. I mean, like, we've got the technology to produce all those 1950s silvery home-help droids, but there's not a lot of point. We don't need them enough to offset development and production costs.
Scientific theory might well achieve some exponential growth rate, but in terms of how that gets transferred to physical infrastructure... hmmph... sounds like techno-bollocks to me.
And that "Is SF About to Go Blind" article was really anti-Fantasy! My legions of winged chimps will *own* any robot horde that takes it on!
posted by RokkitNite at 3:46 PM on September 29, 2004


ROU_Xenophobe: We have good, tested theories about how human societies will interact with superhuman intelligences?

If said superhuman intelligence comes down from the sky in an intergalactic spaceship and says "take me to your chickens," then, that might be a concern.

However, what we are talking about here is a technological product. One of the claims about the singularity is that we don't know what is going to happen before, during, and after it happens. Vinge writes: "It is a point where our old models must be discarded and a new reality rules."

I don't think we are blind as all that. I've had the great joy of hearing internet economist Castronova speak, and he pointed out that economics only requires local abundance of one good and local scarcity of another. I don't see steel mills popping up in Nebraska and soy taking over the rust belt in any singularity. So there is one set of rules that probably won't change much.

What other rules? Well, what is the market for superintelligent machines without a leash that can disrupt the economy? What is the motive for making such a machine? These are also questions that fit well into market economics. Economics works much like an ecology: what is the niche for such a machine, and would it be competitive outside of that niche? (Humans are a singularity as nicheless animals. Somehow, everyone seems confident that it will happen twice.) There is already a bit of a backlash against Moore's law as the processing power of desktop computers increases beyond the need for additional functionality.

There's never been a superhuman intelligence, but we have 2000 years of history demonstrating what happens when something comes along to disrupt the rules of how people live their lives. Sometimes the disruption gets squished hard, and sometimes you have a complicated period of accommodation and negotiation.
posted by KirkJobSluder at 4:03 PM on September 29, 2004


... joyfully swapping code with routers it likes, grimly going to war against a compromised node flooding the net with poison packets, holding a grudge against that site when the node is fixed.

Oh, lordy, you just made me grin. Then frown, as the idea crossed my mind of Disney making a movie about it... we must kill, KILL this idea before it is appropriated by evil marketroids!
posted by lodurr at 4:13 PM on September 29, 2004


Well, what is the market for superintelligent machines without a leash that can disrupt the economy?

That's not really a fair question. I haven't read any stories where people are just out to build the Eschaton or the Mad Mind in an unleashed form just for kicks.

There are lots of reasons why you might want a superintelligent machine to talk to. The question isn't whether they'd have a leash. It's only whether, or how fast, they'd outgrow or outsmart it.

And you're assuming singularity through AI-in-a-box. Another theoretical possibility is singularity through massively upgraded human intelligence, where the superhuman intelligences are housed in-and-on human skulls -- humans with essentially infinite, exact memory combined with numerous and powerful processing aids. I don't think we can know how present societies would react to some of their members having strongly superhuman minds (ie, not just faster, but qualitatively smarter), given that the new superhuman intelligences will have their own motivations in the matter and will be awfully hard to leash.
posted by ROU_Xenophobe at 4:34 PM on September 29, 2004


Well, how is singularity through super-humanism different from atomic weapons? I mean, they both change the equation at a fundamental level. We have the literal capacity to destroy the world; yet we still live far more normal lives than were foreseen 70-80 years ago.
posted by lodurr at 4:43 PM on September 29, 2004


Unfortunately, Vernor Vinge is full of shit.

Fortunately, he's only full of shit in that (a) he doesn't really know what he's talking about; (b) he's overestimating the speed of the timeline; and (c) he makes some overly apocalyptic conclusions.
posted by azazello at 5:50 PM on September 29, 2004


ROU_Xenophobe: A problem with many of these scenarios is that they assume we wake up one morning to find that a super-human intelligent machine is ready to take over the world. I do think that we will see machines with superhuman intelligence, and I also think we will see malevolent superhuman intelligences, but I think that the process of getting there will involve something more akin to an ecological arms race, like what we are seeing with viruses and anti-virus software.

And you're assuming singularity through AI-in-a-box. Another theoretical possibility is singularity through massively upgraded human intelligence, where the superhuman intelligences are housed in-and-on human skulls -- humans with essentially infinite, exact memory combined with numerous and powerful processing aids. I don't think we can know how present societies would react to some of their members having strongly superhuman minds (ie, not just faster, but qualitatively smarter), given that the new superhuman intelligences will have their own motivations in the matter and will be awfully hard to leash.

Actually, my criticism applies to just about any human-created singularity event. In this particular case, we have an entire entertainment industry that explores the possible consequences of superhuman augmentation. Furthermore, of all the possibilities, this one would be the most understandable leading up to it. I don't think superhuman augmentation is possible without some pretty in-depth understanding of human cognition. And it is most likely to be developed in stages. The implications are already being debated in regards to wearable computers and smart phones. Quite a bit of augmentation would require extensive safety testing in advance. I really don't know how much humans can safely be augmented by technology.

But you hit on one of the assumptions that really bugs me. The progression seems to go like this:
Salamander - Dog - Chimp - Human - OMNIPOTENT INFINITY!

Another (IMO more likely) hypothesis is that while the access to data and data managing tools in such a scenario will change, the basic wetware operating system that has served us in a very robust manner for 50,000 years won't fundamentally change.

The third hypothesis, that a system of interconnected parts might become complex enough to become conscious and self-aware, is the least likely to be predictable. It is also the least probable. The road to vertebrate self-awareness probably was triggered in the Precambrian, when one breed of soft-bodied animal developed sensory organs capable of distinguishing other soft-bodied animals from the environment, and then found them good eating. That triggered the arms race that led to more and more complex body plans, ending at their peak with starfish, wasps, squids and vertebrates (to pick the most complex from separate branches of the family tree). Human-like intelligence didn't develop until a unique set of circumstances kicked a bright but not brilliant ape out of its niche. Intelligence appears to be the evolutionary offspring of poverty and conflict. But perhaps most importantly, that hypothesis sounds like spontaneous generation. A secular religion indeed!

I think the fourth hypothesis, of a nanotech singularity, is the least likely of all. Drexler seems to have backed away from the grey goo a bit. But he seems to be profoundly unwilling to augment his own thought experiments with a basic one from microbiology that I learned before the publication of his book: Why don't we see a superbug? Why do we see bacteria that eat wood, and bacteria that eat humans, but very few that do both?
posted by KirkJobSluder at 7:31 PM on September 29, 2004


But you hit on one of the assumptions that really bugs me. The progression seems to go like this:
Salamander - Dog - Chimp - Human - OMNIPOTENT INFINITY!


It annoys me too, except that's probably where the interesting stories are -- who'd want to read a whole book about a progression of Dog - Chimp - Human - SOMEWHAT SMARTER HUMAN! or HUMAN REALLY GOOD AT MATH!

In fairness, though, the progression is implicitly
chimp - human - smarter human better at designing upgrades - even smarter, even better - ... - OMNIPOTENT INFINITY!!! with the feedback loop of upgrades taking a "short" (weeks to a few decades) time.

Again, the setting of Queen of Angels and / might appeal to you, except for the rather strong nano. There's implants and AIs, but people still understand each other, more or less, even if some folks live in rather larger mental universes than others, and the AIs are limited in their own ways and kept carefully on leashes.
posted by ROU_Xenophobe at 7:57 PM on September 29, 2004


I'll say this again, for emphasis: "The Singularity" notion presumes continuity.

Systemic breakdown in the biosphere, sudden climate change, random chunks of large space junk impacting the Earth....whatever: with any of these possibilities, no "Singularity" - and the future then looks more like an iteration from "A Canticle for Leibowitz"
posted by troutfishing at 11:02 PM on September 29, 2004


I'd like to suggest one technological advance that could make some pretty sweeping changes in human society: an onboard processing system of some kind that made the wearer/user capable of discerning with extreme accuracy when someone else is lying to them.

I'm extrapolating from your thought, ROU_Xenophobe, about having a huge, perfect memory, and then coupling that with augmented powers of perception that would let us see the same (and more) physiological shifts that a present-day polygraph detects, along with voice-stress analysis and wireless access to records of a person's previous statements, actions, and general record for honesty - even criminal records, court documents, etc. Basically, the ability to assess someone's honesty, or lack thereof, in real time, to, say, 99% confidence of the truth or falsehood of any statement.

Such a device being available to a small group - say, a government - might change things quite a bit, if it can be shown to be that accurate. (Note that modern polygraphs are said to be extremely accurate by those who use them; though they are still not accepted as proof of truth or falsehood in American courts, if you fail a polygraph test you are pretty much assumed to be lying.)

Having such a system available to nearly everyone would completely transform almost every facet of human interaction, which is full of dishonesties and hidden information. Removing the ability of people to lie to you would fundamentally change your relationship to every person on earth.

Such a device is theoretically possible - there are some gadgets out there now which put a voice stress analyzer in your phone, supposedly to allow you to tell if the person you're talking to is lying (I have no idea whether they actually work or not). So I would guess that a gadget like this will be out on the market as soon as someone figures out how to make it small enough.

That would be something that could trigger sweeping changes pretty quickly, once (if ever) it got out into widespread public use. A component of the Singularity, perhaps?

For a little gedankenexperiment, imagine what your life would be like if you and only you had one. What would change? A lot, I bet! Now imagine if everyone else BUT you had one. Now imagine if only the police and government had one.

Of course, this would only work for spoken communication. It wouldn't affect our typing here on MeFi, would it? If that happened, do you think that people would stop speaking and only communicate through emotionless and "truth-signal-less" text - to preserve their ability to lie? Hmm!

On preview: troutfishing, hehehe... quite right. Although, if our technological civilization falls now, it will be quite a long time before we can get back out of the stone age... if only for the simple fact that there is no more easy-to-mine, high-purity iron, copper, tin, nickel, aluminum left. Nowadays we refine our metals mostly from ore with very small percentages of the actual metal in it - sometimes as little as 5% or less! No, if we go back to the Stone Age via any mechanism you mention, that will probably be the end of high-tech for humanity. No steel, no copper... back to wood, stone, bone, skins and shells.

Really, that's the eventual alternative to some form of Singularity, unless we can get off this rock and expand outward into space. We can stay an adolescent species if we manage to do that successfully.
posted by zoogleplex at 11:54 PM on September 29, 2004


Arthur C. Clarke says:
When a distinguished and elderly scientist says that something is possible, he's almost certainly correct; when he says something is impossible, he's very probably wrong.

It's also well known that we tend to overestimate progress in the near term and underestimate long-term progress.

So, since mind uploading, AI, etc. are not physically impossible, they'll probably happen, only somewhat further into the future than we predict.
posted by Meridian at 4:12 AM on September 30, 2004


if our technological civilization falls now, it will be quite a long time before we can get back out of the stone age... if only for the simple fact that there is no more easy-to-mine, high-purity iron, copper, tin, nickel, aluminum left

There are still plenty of high-quality iron ores around.

And there will be easy-to-mine, high-purity iron, copper, tin, nickel, and aluminum in the ruins of cities and industrial sites. On that front, they'd have it easier than we did, since they'll have all the iron and steel and aluminum they can stand without even having to smelt or refine.

The kicker might be petroleum, but there's still plenty of coal to get them to ~1900 technology and bootstrap themselves into some other sort of high-tech.
posted by ROU_Xenophobe at 6:35 AM on September 30, 2004


... except that's probably where the interesting stories are -- who'd want to read a whole book about a progression of Dog - Chimp - Human - SOMEWHAT SMARTER HUMAN! or HUMAN REALLY GOOD AT MATH!

I dunno, I thought that was pretty much where Robert Nylund ended up in Signal To Noise.

You can have a lot of really interesting consequences from incremental advances, and yet have no interesting consequences from groundbreaking advances. What's important is the impact these things have on your life, and on the structures of society. I remember being really mind-blown by Lynn White's Medieval Technology and Social Change when I was 17, as he described the social and cultural impact of, say, the stirrup's introduction to Europe. Or consider a mundane thing like refinement in battery technology, and how that's driven the use of cell phones.

Compare to that the impact on daily life in the industrial world of, say, penicillin. Huge impact, to be sure; but we don't perceive it, and the effect is really "same only more so": We're all safer from infection, and yet we're not aware of that at all, because it's like being aware of not carrying a weight when you've never had to carry it.
posted by lodurr at 7:42 AM on September 30, 2004


I think another fallacy expressed here is that more computations per second == really good at math. One of the amazing things about computer technology is that the Pentium 4 is not fundamentally all that different from the Babbage Difference Engine, the slide rule, or the abacus.

There is a great anecdote about Feynman being beaten by a man with an abacus at division, until Feynman chose an extremely large number for which there was a theoretical shortcut. One of the big tragedies of our school system is that most teachers don't teach math; they teach addition, subtraction, multiplication and division.

The most critical problems out there are qualitative, not quantitative. Scientific American had a very interesting perspective on Albert Einstein that argued Einstein was brilliant not because he was good at mechanical addition and multiplication (many of his peers were better), but because he had a knack for asking the right questions, at the right time, and for pushing on those questions long beyond the point when more career-minded physicists would have given up.

Statistically speaking, more data is not necessarily better. A sample of 10 will tell you the obvious. 100 will tell you something subtle. 1000 will tell you the trivial. 10,000 is useless.
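For what it's worth, the statistical version of that diminishing-returns point (a sketch assuming a population standard deviation of 1; "useless" at n = 10,000 overstates it, but the precision of a sample mean really does grow only with the square root of n, so each tenfold increase in data buys only about a threefold gain in precision):

```python
import math

sigma = 1.0  # assumed population standard deviation
for n in (10, 100, 1_000, 10_000):
    se = sigma / math.sqrt(n)  # standard error of the sample mean
    print(f"n = {n:6d}: standard error = {se:.4f}")
```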
posted by KirkJobSluder at 11:11 AM on September 30, 2004


I've known a bunch of guys who did higher math for a living, either as mathematicians or physicists. Almost without exception, they were mediocre at arithmetic. Some of them (a math instructor at U of Rochester and a particular Nuclear Engineering prof at RPI spring to mind) were really, really good mathematicians, though. (Not based on my eval -- based on the opinion of other professionals.)

Which is not to say that's a categorical virtue. I've known a bunch of engineers and a few other types (biologists, business types) who had a great, almost instinctive capacity for arithmetic. They could damn near feel when data was wrong.

I think both of these are cases that would be hard to postulate a mechanistic equivalency for. They're both organic, in the sense of being properties of a complex, evolved system -- i.e., a human mind. Evolved systems tend to have interactions and properties that would have been difficult to plan for.
posted by lodurr at 12:52 PM on September 30, 2004


"...I'd like to suggest one technological advance that could make some pretty sweeping changes in human society: an onboard processing system of some kind that made the wearer/user capable of discerning with extreme accuracy when someone else is lying to them." - lodurr, that's brilliant.

Here's the practical business model - which can be implemented right now with existing technology.....

Well, I'd better not say it.
posted by troutfishing at 1:20 PM on October 2, 2004


Oh, lordy, you just made me grin. Then frown, as the idea crossed my mind of Disney making a movie about it... we must kill, KILL this idea before it is appropriated by evil marketroids! Lodurr

I'll say this again, for emphasis: "The Singularity" notion presumes continuity.

Systemic breakdown in the biosphere, sudden climate change, random chunks of large space junk impacting the Earth....whatever: with any of these possibilities, no "Singularity" - and the future then looks more like an iteration from "A Canticle for Leibowitz"
Troutfishing


Speaking of breakdowns, trout: I noticed while reading this very, very fascinating thread that nobody has mentioned the side effects of the ongoing, supposedly ineluctable process leading us toward this so-called "Singularity". As in, what new ways to deceive the au naturel bumpkin populace will emerge from the ever-increasing technological abilities of the content producer? Will these abilities to convince via the ever-improving "tool" of media and VR, and the subsequent dumbing down of all those possessing gray matter, somehow make some of us, many of us, most of us, all of us forget that there are actual "systemic breakdown(s) in the biosphere", etc.?

Is that not a systemic natural breakdown in and of itself?

In some ways, before we get to this singularity, we may just find ourselves mired in a dystopic present that many other SF authors have also warned the world about. Personally, I'm afraid we'll get to the point where every comedic absurdity can be matched in reality -- only with far deadlier effect.

And nobody notices.
posted by crasspastor at 7:35 PM on October 2, 2004




This thread has been archived and is closed to new comments