Computational Theology
June 24, 2011 9:07 AM   Subscribe

Author and Mefi's Own Charles Stross presents Three Arguments Against The Singularity
posted by The Whelk (188 comments total) 44 users marked this as a favorite
 
Charlies should be Charles.
posted by ChurchHatesTucker at 9:19 AM on June 24, 2011


All Charlies should be Charles.
posted by pracowity at 9:21 AM on June 24, 2011 [2 favorites]


But can they prove they're Charles?
posted by Marisa Stole the Precious Thing at 9:22 AM on June 24, 2011


Really great read, by the by. Managed to condense three ideas I've had a hard time grasping into a form I could understand better, as well as the underlying flaws in them.
posted by Marisa Stole the Precious Thing at 9:23 AM on June 24, 2011


Mod note: THERE IS NO "I" IN CHARLES. GO OUT THERE AND WIN THAT GAME.
posted by cortex (staff) at 9:26 AM on June 24, 2011 [20 favorites]


One singular sensation
posted by lukemeister at 9:27 AM on June 24, 2011


I used to be a singularity but now I'm married.
posted by bwg at 9:27 AM on June 24, 2011 [1 favorite]


Great piece, and made me even more excited about Rule 34.

Which almost made me forgive him for MAKING BOB HOWARD SUFFER SO MUCH!
posted by lumpenprole at 9:28 AM on June 24, 2011 [1 favorite]


Managed to condense three ideas I've had a hard time grasping into a form I could understand better

I don't even get them well enough to get this.
posted by shakespeherian at 9:30 AM on June 24, 2011


One argument for the singularity: The Chandrasekhar limit.
posted by GuyZero at 9:30 AM on June 24, 2011 [4 favorites]


Also, cstross, "The Chandrasekhar Limit" is a great title for a book. Now all it needs is a few thousand words and an agent.
posted by GuyZero at 9:31 AM on June 24, 2011


GuyZero,

I don't want this thread to degenerate.
posted by lukemeister at 9:31 AM on June 24, 2011


GuyZero,

I don't want this thread to degenerate.
posted by lukemeister at 9:31 AM on June 24, 2011


... and apparently I'm a boson.
posted by lukemeister at 9:31 AM on June 24, 2011 [3 favorites]


tachyon.
posted by tommasz at 9:33 AM on June 24, 2011


I'm a
posted by tommasz at 9:34 AM on June 24, 2011 [19 favorites]


Argument 5: We'll all die before that because crypto-Kaczynski milleniarianism absolves me of responsibility and needing to care.
posted by This, of course, alludes to you at 9:34 AM on June 24, 2011 [2 favorites]


Interesting read, but I what I saw were lots of arguments for why the three concepts under discussion are are unlikely, difficult, improbable, ethically dubious, fraught, and/or undesirable. I saw none for why any of them are impossible, and I think that what's really interesting about these ideas (the singularity, mind-uploading, and reality simulations) is that -- as they are typically presented -- given the theoretical capability and a long enough time scale, each of them becomes essentially inevitable. Of equal importance is the idea that each need happen only once in order to fundamentally transform the human condition.

It was a good read with some fascinating caveats to ponder, but I'm not sure that I'm any more convinced either way as to the impending reality of the underlying concepts.
posted by Scientist at 9:35 AM on June 24, 2011 [6 favorites]


If we get mind-uploading, get get strong AI too -- an uploaded mind is an AI.

And soon we'd start "pruning" those uploaded minds to create more efficient AIs. That's where the real ethical battle starts: is it OK to make a copy of your own mind, remove its desire to laze about or think about sex, and set it to working for you for selfless 18-hour days?

Is it OK to take someone else's mind-state, prune away all the desires it came with, and wire its software "serotonin" reward loop to make it obsessively defeat capchas or mine for World of Warcraft gold? Or to create software analogues of Vernor Vinge's "Focused", autistic hyper-grad students who live to research.

What about making an idiot savant that exists only to move its robotic arms to produce an endless number of McDonald's burgers, until you turn it off at the end of its workday?

Is something that started out as a human mind-state, but now has a dog-level intelligence except for a limited ability to work in a factory or a whorehouse, a slave or a working animal?
posted by orthogonality at 9:35 AM on June 24, 2011 [8 favorites]


Here's a counter-argument that doesn't invoke philosophy or theology.

1. The brain is nothing more than a computer
2. With enough understanding of the physical brain, we can simulate a brain using a computer
3. A simulated brain given the right sensory inputs will develop intelligence
4. Computers get exponentially faster over time (Moore's law corollary, so far it holds true)
5. Thus, a simulated intelligence also gets exponentially faster over time
6. Human intelligences design computers
7. A simulated intelligence growing exponentially faster can exponentially increase the rate of computer development
8. Feedback loop -> singularity

#3 is the only really controversial statement, but it's plausible under the condition that we can simulate the exact initial conditions of an infant. At no point do we need to answer the question "what is consciousness?" or understand the brain's algorithms, in exactly the same sense that I can emulate Windows perfectly without understanding it; all that is necessary is an understanding of the underlying hardware.

This process seems inevitable, with the only possible way to stop it being to blast ourselves out of existence first.
posted by qxntpqbbbqxl at 9:39 AM on June 24, 2011 [4 favorites]


I used to be a singularity but now I'm married.

Well, there's one argument against it, anyway.
posted by fourcheesemac at 9:40 AM on June 24, 2011


human-equivalent AI is unlikely. The reason it's unlikely is that human intelligence is an emergent phenomenon of human physiology, and it only survived the filtering effect of evolution by enhancing human survival fitness in some way

That's pretty easy to get around if you have enough computing power. Create a simulated world, give entities in that world the same sorts of brains that pre-humans have, and simulate thousands of years of evolution. Obviously that requires a lot more sophisticated technology than we have today, but it's not inherently impossible for the reason that Stross says it is.

I think Ted Chiang's The Lifecycle of Software Objects explores a more realistic issue, which is that if you create the equivalent of a human baby in AI, you still need an intelligent adult to raise it just like you would need to do with an actual baby.

And soon we'd start "pruning" those uploaded minds to create more efficient AIs. That's where the real ethical battle starts: is it OK to make a copy of your own mind, remove its desire to laze about or think about sex, and set it to working for you for selfless 18-hour days?

Also due to the fact that a simulated mind could work a lot faster than a wetware mind, an 18-hour day in real time might be 10,000 years in simulated time.
posted by burnmp3s at 9:42 AM on June 24, 2011


Singularity != Signularity

Also, Black Holes are neither black, nor holes.

Thirdly, an uploaded mind is not the same mind. (aka Teleportation Kills)
posted by blue_beetle at 9:43 AM on June 24, 2011


4. Computers get exponentially faster over time (Moore's law corollary, so far it holds true)

This is not a corollary of Moore's Law, which only states that the density of transistors in a chip roughly doubles every eighteen months. And I don't think it applies here anyway. Given that any system can only be perfectly simulated by a system more complex than the original, and given that the brain is indeed a fantastically complex piece of work, I have real doubts that we can cram enough transistors into a machine to make this happen.
posted by xbonesgt at 9:48 AM on June 24, 2011


Otherwise you're missing out on the fertilizer in which the whole field of singularitarian SF, not to mention posthuman thought, is rooted.

The whole field is indeed rooted, and based on fertilizer; well played, that man.
posted by flabdablet at 9:56 AM on June 24, 2011 [1 favorite]


it's plausible under the condition that we can simulate the exact initial conditions of an infant

which, of course, we would never be able to gather enough information to do even if a feedback system as complex as a human brain were not so exquisitely sensitive to them.
posted by flabdablet at 9:59 AM on June 24, 2011


Heavier than air flight is unlikely. The reason it's unlikely is that wings and feathers are an emergent phenomenon of avian physiology, and they only survived the filtering effect of evolution by enhancing avian survival fitness in some way.

Or, if we do not yet know exactly how the brain works, it's a little ridiculous to assert that a brain is either the only or the best platform to implement consciousness. And a current and long-standing state of ignorance does not imply a permanent or enduring state of ignorance, as the history of powered flight in the last century might suggest.
posted by localroger at 9:59 AM on June 24, 2011 [3 favorites]


I remember reading that Moore's Law will not hold true indefinitely. There are physical limits that we will hit eventually. Also, it's not obvious to me that simply increasing intelligence will increase the rate of computer development at the same rate. I don't think "intelligence" is the only input to be considered.
posted by adamdschneider at 10:03 AM on June 24, 2011


Stipulate to the two non-controversial points: "mind uploading" will meet with hysterical opposition if it's ever feasible, and the proposition that we're living in a simulation is, in the current state of affairs, unfalsifiable. Sure.

As to the third point -- that strong AI of the kind that will bring about the singularity is unlikely to develop -- meh. First I don't think "human-equivalent" AI is necessary (see Watts, Peter), and even if it is, the notion that a human-like intelligence is impossible to build because it's a biologically emergent phenomenon that served an evolutionary purpose is silly. So is a kidney. Second, his argument seems to just glibly deny the premise of the singularitarians -- they assert that a new paradigm of AI is on its way, and his response is that no, it's not, because he likes the current paradigm just fine.

And that's leaving aside the question of whether the concepts of "consciousness" and "volition" are even all that meaningful to a die-hard materialist.
posted by eugenen at 10:03 AM on June 24, 2011 [1 favorite]


Also what localroger said.
posted by eugenen at 10:04 AM on June 24, 2011


1. The brain is nothing more than a computer

What's a computer?
posted by ennui.bz at 10:05 AM on June 24, 2011 [1 favorite]


A fundamental mystery when considering these singularity discussions and future predictions: When will computing go the way of the airplane?

From nothing, commercial flight technology advanced tremendously in the last century in terms of speed, maximum distance, passenger capacity, and safety. We moved from props to jets; new enabling alloys were developed and refined with new understanding of fracture and fatigue mechanisms. Concurrently, computing technology moved from mechanical calculators to vacuum tubes to solid-state transistors, which were steadily refined with new understanding of quantum physics and lithography.

Then things leveled off for the commercial airplane. Compared to the first prop planes, the new Airbus 380 isn't much of a different beast than the Boeing 747 forty years ago. We fly subsonic, because it costs too much to build a plane to exceed Mach 1. It is easy to imagine that commercial passengers will still fly subsonic a hundred years from now. (The richest passengers, I mean. Due to increasing fuel costs, the window of commercial flight will have closed for middle class and below.)

So when will things level off for computing? Exponential improvements to any particular technology are unsustainable; will transistors give way to a new paradigm, and how long will this new paradigm last? To build a facility to make competitive integrated circuits right now costs the GDP of small countries. The cleanliness requirements are such that a one-micrometer dust particle per cubic foot will sink your process. Up to this point, the returns have been worth it. Will we be able to transition past the approaching physical limits for transistor size reduction? If not, we might be using antique 2030 computers in 2070. And laughing at some of the arguments of the singularity and Omega Point folks.
posted by Mapes at 10:05 AM on June 24, 2011 [1 favorite]


Ray Kurzweil is gonna be pissed. Quick, someone send him a tweet.
posted by Fizz at 10:06 AM on June 24, 2011


This is not a corollary of Moore's Law

The current vector for computational power increase is parallelization (since we've hit a wall with clock speed). More transistors -> more parallel cores -> Moore's law corollary that (parallel) computing power is increasing exponentially w.r.t. time.

All indications are that the brain is a massively parallel system so a parallel architecture should be a reasonable platform for simulation.

We can already simulate a rat brain. Human brains are not getting faster, but computers are --- as long as that continues, we will reach a point where computers can simulate brains.
posted by qxntpqbbbqxl at 10:06 AM on June 24, 2011 [1 favorite]


Very interesting, and although I agree with his ultimate conclusion — that the Singularity and uploading are unlikely — I'm not sure I find the arguments in particular very convincing.

The ethical arguments, in particular, don't seem to hold much water. I am trying to come up with an example of a similarly tempting, useful technology that has been completely closed-off due to ethical concerns. I'm not coming up with any examples. There are currently restrictions on embryo research, but that's on research, and those against it get away with the restrictions because it doesn't really appear to a casual observer to be significantly impeding progress. Stem cells are a better example, where social conservatives have tried to restrict their use in research, but there has been much more pushback from the pro-research side because there are more obvious applications. And the benefits of stem cell research are pretty vague compared to the applications of strong AI or uploading.

If someone really ever got close to workable consciousness uploading, which implies immortality of a sort, although I think Stross is right that the religious community would attack it, they do not hold the reins in most developed countries: educated elites do, via economic power. If push came to shove, and you dangled the prospect of immortality in front of the world's billionaires, the religious protesters would be facing nerve gas and machine guns when they turned up with their pitchforks and torches.

(As an aside, I think contemporary observers of American politics generally overestimate the role of the religious lumpen masses in politics, because they are a convenient tool of powerful elites and serve to fracture what would otherwise be a dangerous working-class power bloc. They are kept like chained dogs in a state of constant rage over meaningless — to billionaires — social issues, while important things like tax policy gets hammered out by grown-ups over cigars and bribery. Mind uploading would definitely be a grown-up issue.)

The only threat to AI that I can imagine (and it's one that Ian McDonald posits in River of the Gods, so not my idea) is that if it were developed very suddenly, that the world's elites might try to curtail it in order to prevent disruption of the status quo. I doubt that they would be particularly successful in holding it off long-term, though, and I think that this sort of sudden discovery is unlikely anyway. More likely is that human-equivalent AI would emerge, if it emerges at all, incrementally, and takes nobody by surprise.

But anyway, I'm not bullish on either strong AI or mind uploading. Not because of ethical or even strictly technological concerns (I'm not a dualist so I see no intrinsic reason why uploading should be impossible), but mostly because I suspect the very rapid technological growth that has characterized humanity since the beginning of the Industrial Revolution is drawing to a close. We might someday get strong AI and uploading, but it will only be when a lot of other problems — including a host of demographic and ecological ones, plus energy-source issues — have been solved. You can't with a straight face take the rate of growth during the 19th and 20th centuries and extrapolate it forwards into the 21st, given how that growth was financed.

I also have a suspicion that there may be a sort of asymptotic limit to the technological progress of an industrialized society — which, incidentally, would explain the lack of ETs flying around in starships — as a result of increasing fragility as a society becomes increasingly specialized, and dependent on higher and higher technology. That fragility, coupled with people who will always upset the status quo using whatever means are available (dissidents, terrorists, etc.), may effectively cap progress by causing energy to have to be spent on redundancy rather than further specialization. E.g., we might not have the resources to devote to strong AI research, if all the best engineers are working on how to shield critical utilities from the Quebecois Liberation Front's latest pocket EMP devices, or even just keeping Lulzsec from hacking into Tyson's computer systems to produce penis-shaped chicken tenders.

Also, nuclear war.
posted by Kadin2048 at 10:08 AM on June 24, 2011 [12 favorites]


if you create the equivalent of a human baby in AI, you still need an intelligent adult to raise it just like you would need to do with an actual baby.

While I've long thought this to be true, the benefit is that you can then make a zillion perfect copies of the result.
posted by a snickering nuthatch at 10:08 AM on June 24, 2011 [2 favorites]


... a human mind would lumber about in a massively inappropriate body simulation, analogous to someone in a deep diving suit plodding along among a troupe of acrobatic dolphins.
posted by flabdablet at 10:09 AM on June 24, 2011


... wings and feathers are an emergent phenomenon of avian physiology, and they only survived the filtering effect of evolution by enhancing avian survival fitness in some way.

Retorts like this seem to imply that human heavier-than-air flying machines rely on features and flapping wings.

I.e., the discussion of probabiliy (or lack thereof) was in regard to the idea that (mind uploading) = (strong AI) -- i.e., strong AIs will be functionally isomorphic with meat-minds. Which, if we take the aircraft analogy a step further, is contrary to precedent.
posted by lodurr at 10:10 AM on June 24, 2011


the benefit is that you can then make a zillion perfect copies of the result

I think you'll find that doing so would breach your EULA.
posted by flabdablet at 10:11 AM on June 24, 2011 [1 favorite]


"We are not evolved for existence as disembodied intelligences..."

We're not evolved for sitting in front of a computer all day, but I'm hanging in there. Being bodiless can't be too much worse than being in a cube for eight hours a day. Sign me up for uploading!
posted by diogenes at 10:11 AM on June 24, 2011 [3 favorites]


While I've long thought this to be true, the benefit is that you can then make a zillion perfect copies of the result.

Chiang's story addresses that issue as well. It's a really great story, I highly recommend it to anyone who is interested in these topics.
posted by burnmp3s at 10:12 AM on June 24, 2011


I.e., the discussion of probabiliy (or lack thereof) was in regard to the idea that (mind uploading) = (strong AI) -- i.e., strong AIs will be functionally isomorphic with meat-minds.

The whole idea of the Singularity is that artificial and uploaded minds won't be isomorphic with meat-minds, they will be better.
posted by localroger at 10:15 AM on June 24, 2011


There's a slight irony in stross taking this view, in that his big breakout (at least in America) was with a book of extropian tales involving, among other things, mind uploading.

Not that I want to hold a writer to the limitations his earlier story ideas, and I mean that seriously: expecting an SF writer to be consistent with the ideas they work through in their stories is a bit like expecting C. S. Lewis to believe in Narnia. (Further disclaimer: comparisons between Charles Stross and C. S. Lewis are not meant to imply equivalence.)
posted by lodurr at 10:17 AM on June 24, 2011


The whole idea of the Singularity is that artificial and uploaded minds won't be isomorphic with meat-minds, they will be better.

It is in fact that uploaded minds will be minimally isomorphic. That's the only way uploading could work.

As for artificial minds, I was referring only to the argument that uploading implied strong AI. "Artificial" minds are a separate issue. I'm quite certain there will be artificial minds capable of much faster, clearer, albeit quite different reasoning from mine. But I'm likely to have less insight to how they work than I would to how a lion's worked.
posted by lodurr at 10:21 AM on June 24, 2011


I think that where cstross is going with this piece is best captured in a comment he makes half way down:
I'm not convinced that the singularity isn't going to happen. It's just that I am deathly tired of the cheerleader squad approaching me and demanding to know precisely how many femtoseconds it's going to be until they can upload into AI heaven and leave the meatsack behind...
In other words, "bloody fanboys, go read this and stop thinking I'm one of you". That's an excellent goal, because the femtosecond counters are exceptionally annoying. What that means though is that his argument is more forceful than his weighing of the probabilities strictly allows for.

For instance, he argues that the critical path to super-intelligent AIs goes through human-equivalent AI, which isn't something that we actually need.

His argument for why we don't need it is that a conscious, volitional agent is an electronic pain in the ass – a car shouldn't "argue with me about where we want to go today". That's not a killer argument: it presents a use case in which you don't want a certain kind of volitional behaviour, and then says that a conscious, volitional agent is itself bound to cause this behaviour. No it isn't. It can't be – we already have agents that are a little bit self-aware and somewhat capable of making and following goals, and they don't do this.

But what if his use case was completely at odds with a volitional AI? Would that invalidate the idea that there could be demand for such a thing? Of course not – you can just think of another use case. Where is the argument that there is no use case for which a decent amount of self-awareness is wanted? No autonomous rovers on Mars? No autonomous miners in the asteroid belt, or a kilometre under China? (No exocortex?)

cstross is telling a story of the non-Singularitarian future, just as he told the opposite story in Accelerando. But stories make certain paths seem more likely, just by calling them to our attention. How likely is it, really, that our future will be dominated by a highly self-improved version of an artificial pet? Yet I can accept it easily when reading his books.

If this article stops most people from thinking that cstross is a cheerleader for the Nerd Rapture, it will surely count as a success. But it doesn't deal with the broad categories of possibility that must be in order to push the needle towards 'unlikely' on the Singularity Probability Scale.
posted by topynate at 10:21 AM on June 24, 2011 [3 favorites]


1. The brain is nothing more than a computer

What's a computer?


It's a kind of brain.
posted by Mister_A at 10:26 AM on June 24, 2011 [7 favorites]


I remember reading that Moore's Law will not hold true indefinitely. There are physical limits that we will hit eventually.

Stross mentions heat dissipation in the main link, and even Kurzweil (the wildest and wooliest of the Singularitarians) accepts that there are limits to Moore's Law. He just thinks that we'll switch to some different sort of substrate to run computing on before we hit the applicable limits to transistors or other presently available options. Some future jump equivalent to the leap from vacuum tubes to transistors, essentially.

Frankly, I agree with Scientist. These are some very clear, concise, and humorously written reasons why the Singularity's main concepts are neither inevitable nor even necessarily likely to come about. Impossibility remains unproven, though.

This reminds me of that other (and earlier) "inevitablility" of the 20th century, nuclear warfare. There were plenty of excellent arguments made back in the Cold War years (mostly by anti-nuclear activists, but by some academic groups and think tanks as well) that in the absence of massive and immediate nuclear disarmament by both sides, nuclear conflict between the Soviet Union and the United States was simply unavoidable. And it's clear, looking back, that the possibility was never very distant, always within easy reach of becoming a reality. But it did not happen.

In a thirty-years-from-now world of competing economic and military powers, it seems unlikely that these sort of technologies could be repressed, any more than gunpowder or the emergence of banking systems could have been repressed in early modern Europe. If we see strong enough international institutions emerging over the next few decades, backed by greater cooperation between security organizations, it might be possible to do something analogous to Tokugawa Japan's repression of gunpowder weapons, or Ming China's outlawing of foreign sea trade. Likewise, if we're still binging on fossil fuels and untroubled by climactic changes or demographic pressures on food and water resources* we'll have a very different relationship to these technologies than if we're trying to get by in a post-Collapse society of one description or another.

Upshot is that uploading and AI are not, to my mind, either inevitable or impossible. Whether they appear and when, and how they are applied and by who, depend on an enormous range of factors, from economic viability to the environmental and social state of the world when they arrive.

*okay, just writing that made me spew soda through my nose.
posted by AdamCSnider at 10:27 AM on June 24, 2011 [1 favorite]


I don't understand how people can believe their consciousness would be transferred to a computer instead of copied. If they don't believe this, then what's the draw? You're still going to die, but there will be some other asshole living indefinitely in a computer somewhere who has your memories and personality. I guess that's supposed to be a comfort or something?
posted by ODiV at 10:29 AM on June 24, 2011 [16 favorites]


Finally, the simulation hypothesis builds on this and suggests that if we are already living in a cyberspatial history simulation... we might not be able to apprehend the underlying "true" reality.

Whenever I try to understand quantum mechanics I get the feeling that we might not be able to apprehend the underlying "true" reality. It's just gets so weird. If we are code trying to understand the code we're in, I imagine that's pretty much what would happen when we got to a certain level.

Oddly, even thought I'm an atheist, it's when I'm reading about quantum mechanics that the idea of God creating the universe seems most plausible. It's that feeling that if we live in a designed universe, at some point we'll get to a level where the design stops making sense, and maybe that's where we are.

What really would be the difference between a God that created a physical universe and the developers that created a simulated universe? In both cases some entity has to enter all the variables. I supposed the difference is that God would have just made everything "be," while the developers of a simulation would have to enter the data into something. But God would still have to "enter" the variables into something.

I have no idea what I'm on about. I think I need a walk.
posted by diogenes at 10:31 AM on June 24, 2011 [1 favorite]


I saw none for why any of them are impossible

I don't think it's possible to say that they're logically impossible, but Stross' point is that, for practical purposes, they basically are impossible for a variety of previously unconsidered reasons.

what's really interesting about these ideas (the singularity, mind-uploading, and reality simulations) is that -- as they are typically presented -- given the theoretical capability and a long enough time scale, each of them becomes essentially inevitable.

I think there's a fallacy in here. You're basically saying that over an infinite timespan, every possible becomes necessary, which plays fast and loose with the concept of "infinity". A set can be infinite in size and still exclude things--the set of even numbers is infinitely large, but contains no odd numbers. More practically, a die being rolled an infinite number of times does not make any particular sequence necessarily occurring--it's not statistically impossible for a fair die to come up sixes every time. The property of infinity does not presume the inclusion of all possible things.

1. The brain is nothing more than a computer

You said #3 is controversial, but this is one is moreso. A computer is an implementation of our model of computability, and it's not clear (I'd say it's doubtful) that our current model of computability is capable of handling intelligence or consciousness. Research into AI keeps running into walls that are mathematically defined, where the complexity of intelligent behaviour scales exponentially. Moore's law won't help with this, even if it does extend indefinitely. And Moore's law doesn't say that processing power doubles, it says that the ratio of processing power to cost doubles. In other words, processing power will not, in itself, scale indefinitely.
posted by fatbird at 10:38 AM on June 24, 2011 [5 favorites]


The current vector for computational power increase is parallelization (since we've hit a wall with clock speed). More transistors -> more parallel cores -> Moore's law corollary that (parallel) computing power is increasing exponentially w.r.t. time.

Adding more cores to a system gets around the problem to some extent, sure, but doubling the number of cores in a system does not give you double the computing power - you have to wait longer for data to arrive from faraway cores (the speed of light is really fast, but not instantaneous) and you have to spend more resources on things like data coherency. So I don't think that throwing more cores at the problem will necessarily work. And once you move away from transistor-based computing, Moore's Law doesn't apply anymore.

All indications are that the brain is a massively parallel system so a parallel architecture should be a reasonable platform for simulation.

Agreed. But I don't think we will be able to simulate a human brain using a transistor-based system.

We can already simulate a rat brain.

Hardly. We can simulate a fraction of the neurons contained inside one part of a rat brain. Fully simulating an entire human brain, which is orders of magnitude more complex, is nowhere near close to happening.
posted by xbonesgt at 10:39 AM on June 24, 2011 [3 favorites]


ODiV, you're so right. Whether my personality lands in a computer or not, there will come an instant where the line of consciousness I've followed since birth STOPS and I'll be dead. It doesn't matter if my consciousness is still running somewhere else; I will never experience that person.
posted by Buckt at 10:39 AM on June 24, 2011 [1 favorite]


I don't understand how people can believe their consciousness would be transferred to a computer instead of copied.

One interesting thought experiment: What if there was a way to convert your brain into a machine neuron by neuron? Assuming that the artificial neurons could replace the existing ones smoothly with no alteration in your brain's function, you would still be you for the entire process, but at the end you would have an artificial brain.
posted by burnmp3s at 10:40 AM on June 24, 2011 [4 favorites]


I've tried reading "The Coming Technological Singularity" a few times and all I can say is "meh." It's like I'm missing the religious upbringing common to technologists (or futurists or whatever label is appropriate), just like sitting down and paging through the bible I only see historical folklore, morality lessons, and pure fantasy instead of, what other people call, the word of god. If anyone has any other quality links besides the three that Charles Stoss mentions in his opening paragraph I'd be glad to read them, otherwise I'm not sure why I can't dismiss this modern technological eschatology as casually I dismiss the Book of Revelation. It just seems to me a mixture between the Matrix and a Y2K fin de siècle.

Also, last night I finally finished reading Gödel, Escher, Bach after 10 years of it sitting on my bookshelf. I found much of Hofstadter's speculations on AI to be quaint and his lay-persons explanations of DNA and genes to be quite dated. I think the ideas about the singularity will feel the same to technologists/futurists in 30 years.
posted by peeedro at 10:45 AM on June 24, 2011 [2 favorites]


Orthogonality writes: And soon we'd start "pruning" those uploaded minds to create more efficient AIs. That's where the real ethical battle starts: is it OK to make a copy of your own mind [...]

First of all, this is why we need to crush software patents NOW. Because the first company to digitize any part of the human mind will patent it, and by the time the technology trickles down to such people as you and I, not only will the process be encumbered, but you yourself will be encumbered.

You aren't going to digitize yourself because legally you won't have the rights to yourself. But for a small (enormous) fee and by accepting certain waivers and signing an EULA on yourself, a company might assist you.
posted by George_Spiggott at 10:45 AM on June 24, 2011 [1 favorite]


I don't understand how people can believe their consciousness would be transferred to a computer instead of copied.

As I understand it, the argument is a sort of modification of the Ship of Theseus argument. Your brain cells are switched out, one at a time, over a matter of seven years or so, right? Presumably you consider yourself to be the same person in some sense, based on continuity of consciousness or whatever. If you don't, okay, but you probably won't be worried about "copy" vs. "transferral" in that case, so just skip over all this.

Now imagine that over the same seven year period, you replaced each cell with a synthetic cell, which replicates exactly what a natural, biological neuron does. Is the key critical element here that the natural cell is replaced with something that does exactly what it does? Most people would say yes.

If so, what if you don't take seven years? What if you do it all in seven minutes in a lab? What if the "artificial neuron" is actually a program running on a computer which interacts with the rest of your natural neurons through a little node in your skull? As long as it replicates what the original neuron did, most people would say the difference doesn't matter. And once all your neurons have been replaced, you could switch out each of the artificial neuron programs, move them to physically distant locations, etc. limited only by the need to maintain the same speed and pattern of interaction as the original natural neural network.

What this handwaves, of course, is the massive technical disparity between this scenario and where we are now, and the theoretical limitations that may affect it/ Can we build machines or programs with that degree of fidelity? Can we "swap out" a neuron fast enough to avoid fucking pu the brain in small but significant ways all through this process? Does the physical biology of the brain - the actual unique properties of biological as opposed to silicate matter - have any subtle effect on the brain's function, and can those properties be replicated exactly? We don't have answers for most of these questions and any one "no" could derail the entire project permanently.
posted by AdamCSnider at 10:45 AM on June 24, 2011 [6 favorites]


I don't understand how people can believe their consciousness would be transferred to a computer instead of copied.

Consciousness isn't a thing. It's not something I have, it's something I do. If a computer does it, does that mean I'm doing it? Well, that's more a linguistic point. If "I am conscious" really means "this body does conscious thought with reference to 'I'", then "this computer does conscious thought with reference to 'I'" may be harder to represent in non-technical English, but it's just as well defined.
posted by topynate at 10:46 AM on June 24, 2011 [1 favorite]


burnmp3s got there before me, I see. Curse these slow and clumsy biological hands!
posted by AdamCSnider at 10:46 AM on June 24, 2011


Also, be sure you jump at the right time and with the right information. Don't be that guy who only runs on Symbian.
posted by George_Spiggott at 10:47 AM on June 24, 2011 [3 favorites]


.. but what I really want to know is what the relationship is between CASE NIGHTMARE GREEN and what posthumanists call the Singularity. I would love to hear (perhaps I missed it in the Laundry books) what Bob Howard thinks of Nerd Rapture True Believers.
posted by jepler at 10:51 AM on June 24, 2011 [1 favorite]


Consciousness isn't a thing. It's not something I have, it's something I do. If a computer does it, does that mean I'm doing it? Well, that's more a linguistic point. If "I am conscious" really means "this body does conscious thought with reference to 'I'", then "this computer does conscious thought with reference to 'I'" may be harder to represent in non-technical English, but it's just as well defined.

But the concern is about whether the particular 'I' that I experience would be carried over to the computer. You have consciousness just as much as I do, but I don't experience the 'I' of your consciousness, just as I wouldn't experience the 'I' of the computer's consciousness, even if it had been copied from my brain.
posted by shakespeherian at 10:51 AM on June 24, 2011


We can already simulate a rat brain.

We can simulate a slice of a rat brain, and that requires one of the most powerful supercomputers around, plus training the model on extensive electrophysiological recordings from an actual brain slice maintained in vitro. I'm not saying this research isn't cool or exciting but it bears repeating that we are nowhere near simulating an arbitrary, entire brain.

(on preview, what xbonesgt said)
posted by en forme de poire at 10:51 AM on June 24, 2011 [1 favorite]


In my opinion, it all comes down to this: the experience of being a person has far more to do with embodiment, which is all about navigating space and negotiating with physics, than most extropians seem to believe. Bodies, viewed as a computational matrix, are staggeringly more energy-efficient than anything designed to emulate what they do (including implementing consciousness) could be. They're already running on the bare metal of the Universe.

The standard thought-experimental thing you can use as a drop-in replacement for one of my neurons is of no use if it dissipates more power than the neuron it's replacing. Any computation-based consciousness within a computational fabric of comparable complexity to the physical and biological systems that support all the existing ones is going to be outperformed, joule for joule, by those existing ones. This is true no matter how smart its designer might be. It's just how things work.

I have yet to encounter any writing on the convergence between information science and neuroscience that's (a) been informed by actual research rather than a techno-Raptural wish that death were not a part of life and (b) suggests otherwise.

On preview:

Kurzweil ... thinks that we'll switch to some different sort of substrate to run computing on before we hit the applicable limits to transistors or other presently available options

We already did that; the alpha release came out three and a half billion years ago. Kurzweil needs to eat some acid and spend some time looking at his hand.
posted by flabdablet at 10:51 AM on June 24, 2011 [5 favorites]


That is an interesting thought experiment, burnmp3s. Whenever I head down paths like that I end up paranoid about whether or not my consciousness itself carries over and whether I am in fact the same "person" as I was last month or even yesterday.

Thanks for the explanation, AdamCSnider.

I guess I kind of likened consciousness transferal to "teleporters" that would work by scanning you, creating a new you on the other side and then vaporizing the current you. The "Ship of Theseus" presents an interesting view and when I think about it that way I (again) end up worrying that my consciousness is not really any different that the vaporize and re-create teleporter model.
posted by ODiV at 10:52 AM on June 24, 2011


Also: Outside 2.0. That shit is detailed.
posted by flabdablet at 10:55 AM on June 24, 2011


The singularity* happened years ago. I've been living inside MetaFilter since late 2006.

*there is no singularity
posted by It's Raining Florence Henderson at 10:59 AM on June 24, 2011


Whenever I try to understand quantum mechanics I get the feeling that we might not be able to apprehend the underlying "true" reality. It's just gets so weird. If we are code trying to understand the code we're in, I imagine that's pretty much what would happen when we got to a certain level.

Well, only because your ancestors didn't live or die based on their understanding of quantum mechanics. They lived or died based on their understanding of classical interactions, mostly those happening within 100 meters and involving animals within an order of magnitude of their own size. (Yes, some died because of microscopic attackers too, but evolution gave them other tools, like the immune systems to deal with that.) Intuitive physics, even though it's often wrong, is a good enough approximation of what happens to human-sized entities in terrestrial Earth-like conditions.

Unsurprisingly, those good rules of thumb don't work at well in other domains. That some of those other domains are "more real" was irrelevant to great-great-g-g-g-g-grandmother's survival.

Presumably, once we have a better handle on the mechanics of intelligence, we can simulate more "quantum mechanical" environments and evolve intelligences that intuitively "understand" those environments.
posted by orthogonality at 10:59 AM on June 24, 2011


1. The brain is nothing more than a computer

What's a computer?

1.1. It's a kind of brain.


1.01. What's a brain?
posted by ennui.bz at 11:00 AM on June 24, 2011


Would Sleep Paralysis be a possible side effect?
IF
posted by clavdivs at 11:03 AM on June 24, 2011 [1 favorite]


Okay, now I'm trying to figure out what would stop the consciousness transferal specialists (patent pending) from just copying and nuking your brain if it was cheaper than doing the gradual crossover. It's not like the end result would look any different, right?

Come to think of it, why nuke your brain? They can just capture you and use you for slave labour in their mining camps. And no one will ever come looking for you.

Maybe some sort of narrative could be told to you throughout the process and if you can remember the narrative afterwards then you are the same person. But what if they start being able to implant memories (a la Blade Runner).

This sounds like a pretty cool novel actually. I'd like to read it.
posted by ODiV at 11:03 AM on June 24, 2011 [1 favorite]


Get a computer! Morans.
posted by peeedro at 11:03 AM on June 24, 2011 [2 favorites]


I don't understand how people can believe their consciousness would be transferred to a computer instead of copied.
"The capability to serialise and store a human mind-state is unprecedented. We believe that the only reason we can do this today, without legal obstruction, is that the law of man does not know that it is possible. If it was known to be possible, there is an excellent chance that it would be illegal. This is despite the facts that you are a consenting adult and neither of us have raised significant ethical objections.

"A mind-state is not a legal human being. It does not hold rights, including the legal right to exist. It is copyrighted binary data, and it would be protected by copyright law. You would be the copyright owner. Copyright law varies in severity depending on geographical location, but it extends little beyond fines and jail time, whereas the destruction of a mind-state could be seriously construed as murder. While you have a contract with Mr. Hunt to protect and ensure the integrity of the binary data you'll be storing at his data centre, once the serialisation procedure is published, he may find that he has no legal course of action but to destroy it.

"And finally, much like cryogenic storage, the technology to reactivate a stored mind-state, either in a computer simulation or in a real human body, does not yet exist. For all we know, it may never exist.

"These are not the risks. These are the knowns. We can put numbers to all of these possibilities. They are the safe outcomes; the eventualities in which your mind-state is lost forever, and you continue with your life as normal, and all we have wasted is time.


"The simplest way to put it is this: once digitised, your mind could be sent anywhere, anytime. As you've mentioned yourself, it's thought that within a few decades it will become possible to store an arbitrary amount of data in a single fundamental particle, itself stored in a device as small as a basketball... or a thumb... or a fingernail. You will be copied and copied and copied all over the world. Copies of your mind-state - the first digitised human mind-state in history, remember - could survive until the end of human civilisation. After you go to sleep this afternoon, one of you will wake up tomorrow morning. There is, let us say, a one in a million chance that you will wake up tomorrow morning. The rest of you are embarking on a subjectively instantaneous one-way journey into the uttermost unknown, where, beyond a few decades into the future, your single physical self will not be able to protect you. You will be completely without support or protection or preparation.

"We can't put a mind-state back into a body. But the hope is that one day it will become possible. Somebody could steal your mind and insert it into another body on the other side of the world. Under their terms. And do anything they liked to you. They could kill you. Then they could find another body, insert your mind-state again, and continue to kill you. For ever.

"You could wake up in a digital world. Any of countless possible digital worlds. They won't be real, but you'll feel them for real. Imagine a virtual heaven. But now imagine a virtual hell. In a simulated environment, a malfeasor would have absolute, eternal, unbreakable control over you.
-Fine Structure: Postmortal
posted by Nonsteroidal Anti-Inflammatory Drug at 11:04 AM on June 24, 2011 [5 favorites]


I guess that's supposed to be a comfort or something?

I don't believe I'll ever see it, but hypothetically, if that was an option, hell yes it would be a comfort.

This situation may not necessarily apply to everyone, but in my case, Alzheimer's disease runs in my family. It is nearly certain, if I don't off myself in some other way, that eventually my brain is going to rot and take my consciousness along with it, in a very unpleasant fashion, and probably before I'm ready for it to go. The human brain is a pretty shitty consciousness-container, at least in my particular implementation.

So yeah, if that option existed, I'd take it. For no other reason than I'd be able to trust the machine to tell me when it's time to run a hot bath and put a Glaser Safety Slug into my soft palate, rather than having to gamble on whether or not I'll be able to do that tomorrow if I put it off today. I could potentially see something like that giving me — the actual meat-me — a significantly longer lifespan. What the machine consciousness would do after I've exited in a pink mist would be its own business, but it would've earned it at that point.

That's just one particular use case. I'm sure there are lots of others.
posted by Kadin2048 at 11:08 AM on June 24, 2011 [2 favorites]


Bodies, viewed as a computational matrix, are staggeringly more energy-efficient than anything designed to emulate what they do (including implementing consciousness) could be. They're already running on the bare metal of the Universe.

Why is it a given that the human body is the most energy-efficient possible expression of a computing platform? Even if evolution up to this point has produced the best possible computing platform it could, the available materials and mechanisms for producing a brain through animal cells is just a subset of the overall possible mechanisms for designing such a system.

The standard thought-experimental thing you can use as a drop-in replacement for one of my neurons is of no use if it dissipates more power than the neuron it's replacing.

Having a brain made out of replaceable parts would be useful from a practical point of view even if it used a lot more power than a normal brain.
posted by burnmp3s at 11:11 AM on June 24, 2011 [1 favorite]


Maybe there is only one consciousness, but it cycles through all the brains in the universe very fast, like a single core CPU cycling through threads to create the illusion of parallelism. If you upload your mind, but keep the original, you've doubled the amount of time the universal consciousness spends being you, which is arguably a win. This proposal neatly addresses most of the irritating questions about consciousness and mind uploading and so forth.

For example, if a single-core supercomputer runs multiple strong-AI threads through preemptive multitasking, does each strong-AI get a separate consciousness, or is it the CPU that retains the consciousness, and the threads are simply swapping its memories? (Universal consciousness (UC) answer: the threads all share the universal consciousness with everything else, so it makes no difference how many or few CPUs it takes to run them).
posted by Pyry at 11:12 AM on June 24, 2011 [1 favorite]


I think the creation of "machine consciousness" (let's just pretend that's a well-defined idea) in some form similar to what's described in this comment and similar ones up-thread might one day be possible, because that wouldn't really require us to develop a thorough understanding the higher-level organizational principles that apply to consciousness to pull it off.

Virtual neurons sufficiently able to virtually mimic the physical functions of real neurons should be able to replicate the processes of consciousness whether we understand those processes or not, if we were to adopt the kind of neuron-by-neuron replacement strategy some have proposed. This has already been done to a limited extent, with some success, in lab rats, where researchers have successfully mimicked and replaced the functionality of some part of the brain's hardware with electronics that perform identical functions.

So I think the development of machine consciousness might actually be possible, though it's not a certainty, but it's not inevitable, and I doubt even once we'd managed to completely transfer a living consciousness to a machine analog of the brain that we'd necessarily have enough understanding of what consciousness is or how it works to create it from scratch.

That is an interesting thought experiment, burnmp3s. Whenever I head down paths like that I end up paranoid about whether or not my consciousness itself carries over and whether I am in fact the same "person" as I was last month or even yesterday.

Of course you aren't. You literally aren't, and you figuratively aren't. But you are a physical continuation of the same person/processes you were then. Who you are now depends on and is informed by who you were then, but they aren't identical. Nothing paranoid about realizing that.
posted by saulgoodman at 11:13 AM on June 24, 2011


I don't understand how people can believe their consciousness would be transferred to a computer instead of copied.

There are basically two camps on this:
  1. those who, sloppily, assume that consciousness would be transferred because they either don't think it through or don't want to know otherwise;
  2. those who think it doesn't matter, because from the perspective of the uploaded mind, your consciousness was transferred.
This is akin to the transporter argument, wherein people who take position [2] often argue that whether or not you kill the original copy simply doesn't matter: having a perfectly true copy is just as good as having the original. That argument is a little harder to make for uploaded minds because the broad logical assumption of perfect similarity doesn't hold anymore: there's an obvious difference between the uploaded and original mind.

Extropians (from whom Stross is at pains to distiniguish himself) will tend to argue that it still doesn't matter, because the meat is inherently inferior.
posted by lodurr at 11:17 AM on June 24, 2011


I end up paranoid about whether or not my consciousness itself carries over and whether I am in fact the same "person" as I was last month or even yesterday

Think about candle flames (in still air, if death is a distressingly distracting thought to visit for you).
posted by flabdablet at 11:19 AM on June 24, 2011


I don't understand how people can believe their consciousness would be transferred to a computer instead of copied. If they don't believe this, then what's the draw? You're still going to die, but there will be some other asshole living indefinitely in a computer somewhere who has your memories and personality. I guess that's supposed to be a comfort or something?

I don't understand how people can believe their consciousness "wakes up" after anesthesia, instead of being utterly destroyed and then recreated. The same goes for sleep. Think about all the ways you've changed since birth -- hell, since this morning at 5 AM. How can any of us ever be sure that our consciousness is a single, continuous phenomenon?

All of this stuff depends greatly on human subjectivity... and human subjectivity is not known for rejecting comfort, much less the idea that "I" am "I".

In short: from vorfeed 2.0's perspective, she would be me. She would simply go on living accordingly, while my old body would quietly die. It'd be more of a mind-fuck if my old body didn't die, I suppose -- but even then, there'd be two of me, not "me and some other asshole with my memories and personality".
posted by vorfeed at 11:21 AM on June 24, 2011 [2 favorites]


If you prick vorfeed 2.0, does vorfeed 1.0 cry out in pain?
posted by Pyry at 11:25 AM on June 24, 2011


Virtual neurons sufficiently able to virtually mimic the physical functions of real neurons should be able to replicate the processes of consciousness whether we understand those processes or not, if we were to adopt the kind of neuron-by-neuron replacement strategy some have proposed.

Let's assume you mean this in a figurative sense (because this is almost certainly not how brains work), and accept broadly that if machine B is functionally isomorphic with machine A, you can 'upload concsiousness' from one to the other.

That's actually not a very interesting proposition, because it's just a hair -- and a not very important one -- from being a tautology.

Here are two interesting directions to take the question:

First, what if there's something about the meat that makes it special? I'm not talking about anything spiritual, here -- I'm simply talking about the level of complexity in a system. In discussions like this, it's customary to draw boundary conditions around the brain that don't include the body to any great extent. As many neurologists would be happy to tell you, though, the "brain" extends well beyond the skull. Furthermore, as noted above, the "neuron by neuron replacement" assumption (which is already questionable enough) very likely doesn't begin to capture how the brain is actually working. (viz recent discussions about the role of glia and plaques.)

Second, why do we care so much about uploading from meat to virtual hosting? To me it seems obvious it's about mortality. To me, the much more interesting question is how to make something intelligent in general -- along with the related question of how we would know. (See: talking lion.)
posted by lodurr at 11:26 AM on June 24, 2011


I don't understand how people can believe their consciousness "wakes up" after anesthesia, instead of being utterly destroyed and then recreated.

It's definitely an interesting question. I think I pretty much am forced to believe this in order to stay sane and make productive life decisions. I don't believe I could function properly if I believed I would effectively cease to exist the next time I took a nap.
posted by ODiV at 11:28 AM on June 24, 2011 [1 favorite]


Concerining onesself with what consciousness "thinks" is a bit narrow of focus, IMHO. But then, I'm an epiphenomenalist: I don't see any reason to suppose my consciousness is the totality of my being. Or even really driving, much of the time.

If all you concern yourself with is concsciousness, I think you're missing 95% of what makes you who you are.
posted by lodurr at 11:28 AM on June 24, 2011 [1 favorite]



Here's a counter-argument that doesn't invoke philosophy or theology.

1. The brain is nothing more than a computer


Depends on what's meant by "philosophy"; forests have been put to the blade to provide paper for philosophy texts about just that.
posted by gimonca at 11:29 AM on June 24, 2011


people who take position [2] often argue that whether or not you kill the original copy simply doesn't matter

Well, it certainly matters to me.

there'd be two of me

This depends on a very loose definition of "me," namely that "I" start with a certain state and remain "I" no matter what divergence happens afterward. The other "you" would certainly be a different person by any practical definition.
posted by adamdschneider at 11:31 AM on June 24, 2011


Second, why do we care so much about uploading from meat to virtual hosting? To me it seems obvious it's about mortality.

There's a lot more to it than that. Assuming it's possible at all, the ability to experiment with variants on your psyche (and to revert afterward if you don't like it, or even auto-revert programatically just in case the result is so screwed up it lacks the ability to volitionally do so) to more directly apprehend data, and to alter one' subjective environment, to directly transfer selected memories and thoughts to friends, these are all really interesting and potentially cool possibilities.
posted by George_Spiggott at 11:33 AM on June 24, 2011


I don't understand how people can believe their consciousness "wakes up" after anesthesia, instead of being utterly destroyed and then recreated.

I do actually believe that. I have experienced the sense from time to time of "rebuilding" myself. It's a weird little mental hiccup, like deja vu. I have also sometimes had the distinct feeling that the "me" of today is not the same "me" as yesterday. Like there's a continuity gap in my personality. None of these are persistent feelings, and may very well be phantom sensations that have no basis in physiological reality. But just as a data point, there you have it. I do actually believe that I am "utterly destroyed and then recreated" when I lose consciousness.

Doesn't bother me at all.
posted by It's Raining Florence Henderson at 11:34 AM on June 24, 2011 [1 favorite]


I don't understand how people can believe their consciousness "wakes up" after anesthesia, instead of being utterly destroyed and then recreated.

Simple, because there's nothing to point to to say, "that used to be me," or even "hey, that thing is still alive over there, please turn it off". Continuity is self-evident in that case. The same hardware is running substantially the same software.
posted by adamdschneider at 11:35 AM on June 24, 2011


How can any of us ever be sure that our consciousness is a single, continuous phenomenon?

It's there every time we check.

That's pretty convincing to a lot of people; the concept of object permanence is generally pretty well in place by two years old IIRC.

But I'm with lodurr and IRFH on this question.
posted by flabdablet at 11:36 AM on June 24, 2011 [1 favorite]


If we're doing computational analogies, it seems to me that a consciousness inquiring as to its own continuity is in a similar position to a process that tests its own run state; it will never, ever see any state other than "running".
posted by flabdablet at 11:42 AM on June 24, 2011


I'm sure that it's already been said, here and/or elsewhere, but I don't think we're ever going to just upload, in one jump. I think if it happens it'll be a progressive cyberneticization, where the boundaries between you and the things you use to augment your life become less and less clear.

You won't just jump into a machine one day; more like one day you'll realize you haven't made any use of your bio-brain for years, so why are you even maintaining it anymore?
posted by George_Spiggott at 11:43 AM on June 24, 2011 [2 favorites]


P.S. wish teabaggers would come to that realization like right now
posted by George_Spiggott at 11:44 AM on June 24, 2011


Everybody wants prosthetic foreheads on their real heads.
posted by flabdablet at 11:46 AM on June 24, 2011 [3 favorites]


The part where Stross loses me is where he says; "This is obviously the most useful kind of machine intelligence, so therefore no one will make any of the other kinds."

I don't want my robot housekeeper to spend all its time in front of the TV watching contact sports or music videos.

Maybe Stross doesn't want that, but I guarantee you somebody does.
posted by straight at 11:47 AM on June 24, 2011


... I think if it happens it'll be a progressive cyberneticization, where the boundaries between you and the things you use to augment your life become less and less clear.

I've thought this for years. But what gets me is how casually people treat the idea. For all the lip-service in singularitarian fiction to the fundamental differentness of the post-cybernetic person's experience, there seems to me at base to be an assumption that we'll behave in the same ways, or at least in echoes of the same ways. It's mostly sloppy thinking and sloppier fiction, and certainly everyone doesn't do it -- Alan Moore's approach to Dr. Manhattan [which is a fundamentally similar scenario] is a step in the right direction, and Catherine 'no relation' Moore's approach in "No Woman Born" is a good corrective -- but there's just no good reason to suppose that a being as advanced as we would hope these changes would make us would be at all comprehensible to us.
posted by lodurr at 11:49 AM on June 24, 2011


Everybody wants prosthetic foreheads on their real heads.

ok, why is this exciting the song-lyric-recognition area of my brain?
posted by lodurr at 11:51 AM on June 24, 2011


They Might Be Giants, lodurr.
posted by shakespeherian at 11:52 AM on June 24, 2011


Try the song-lyric-recognition area of Google, metal grasshopper.
posted by flabdablet at 11:58 AM on June 24, 2011


The singularity will not exist because the complexity of the network and the latency of information, and variety of environmental and competitive factors will not enable a single super-organism to be dominant.
posted by humanfont at 12:02 PM on June 24, 2011


I confess I have not yet hard-wired Google into my brain. I fear the results when I do.
posted by lodurr at 12:08 PM on June 24, 2011


That rat cortical column simulation did near-real-time cell-level modeling of around 10,000 neurons using a 22 teraflop supercomputer. Under the (reasonable, but still unproven) assumptions that their modeling resolution was good enough and their software is scalable, a full rat brain in real time would require a larger supercomputer than any available today... albeit not by much.

It's another four or so orders of magnitude up from "rat brain" to "human brain"... but on e.g. Intel's roadmap that gives us the possibility of having enough CPU power for real-time human brain simulation in about 15 more years.

That's if RAM keeps up, anyway. I can't even seem to *find* numbers for memory usage on that cortical column, and it may be the limiting factor; RAM-per-processor-core keeps dropping and newer supercomputers aren't getting "bigger" as quickly as they're getting faster.
posted by roystgnr at 12:11 PM on June 24, 2011 [1 favorite]


For all the lip-service in singularitarian fiction to the fundamental differentness of the post-cybernetic person's experience, there seems to me at base to be an assumption that we'll behave in the same ways, or at least in echoes of the same ways.

Yeah, Vinge on it:

an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding

We're very good at imagining things, and none of the fictional scenarios I've seen so far have this quality. After all, someone non-singulartized penetrated them well enough to write about them. I do think that editable personality (being able to change whatever you like about your self) fits this definition, not as an idea, but rather as, "What would that even feel like?"
posted by adamdschneider at 12:15 PM on June 24, 2011


to be clearer, i'm not tarring vernor vinge or charlie stross with that brush. and i understand there are limits to what you can expect to accomplish in prose, w.r.t. getting inside the mind of a radically superior being. but a lot of the extropian transhumanist stories that i've read basically seem to assume that we'll be more or less like we are, but with better toys & more-instant gratification.
posted by lodurr at 12:24 PM on June 24, 2011


I do think that editable personality (being able to change whatever you like about your self) fits this definition, not as an idea, but rather as, "What would that even feel like?"

Well, this is where you really get into the continuity of consciousness scenario. Would you be able to remember feeling differently? It's hard enough to remember what you felt like in an earlier, non-edited version of yourself -- e.g., before you fell in love, left the church, found the church, found heroin, etc.

So editable personality is a much thornier problem, really, than functionally isomorphic "artificial" minds. I haven't seen it dealt with in any really penetrating way.

Which leads me to ponder again what I expect from a writer. I want the writer to lead me through a thought experiment, setting up their parameters in a believable way as they go. (To me, this is not just consistent, but congruent with "being true to your characters" or "telling a good story." I describe it as 'sound narrative reasoning.') Editable character changes the rules of that game pretty fundamentally -- we (meaning readers) might be so turned-off by a plausible account that we'd wall-test the book -- i.e., the more realistically a writer sets up the editable personality experiment, the less likely the book is to be commercially successful?

(Aside: You could argue that David E. Kelley's entire career has been composed of thought-experiments on the editing of characters, since none of his characters have ever honored a consistent character-logic for more than 5 minutes of screen time.)
posted by lodurr at 12:34 PM on June 24, 2011


And I certainly don't want to be sued for maintenance by an abandoned software development project.

I really want him to write this story.
posted by heathkit at 12:38 PM on June 24, 2011 [1 favorite]


Singularity means
1 (originally) - Emergence of super-human intelligence.
2 (popular version) - Ability for human thoughts, experiences and consciousness to be abstracted from human body.

Super-human intelligence
This does not mean that machines will be smarter than man, though that's the one everyone latches onto since it makes the best fiction. The most likely scenario is developing concepts that no single individual can comprehend alone. Or said another way, it would take a team of people to understand and utilize the concept or technology. This actually seems plausible to me. In fact one could argue its already true. The LHC, can you find a single expert that has the knowledge and skills to build, operate and interpret the results? Maybe, but I doubt it.

Digital Consciousness
This is the one that everybody gets hung up on. Most people seem to think like there's some big switch or event. I'm conscious in my body, and *click* now I have consciousness in some other thing. However, look at it from a different angle. Lets start with memories. How many people have memories stored digitally? Lots. How many people have expressed thoughts and feelings digitally? How many people feel compelled to share something of interest that happened to them in real-time (yes Twitter, I'm looking at you)? How much can I learn about some stranger by using Google, access to their emails, text messages, twitters, rss feeds, music stored, goods bought, social network, etc?
We are no where near to having consciousness as we understand it online. But lets take a hypothetical (and I seriously hope its a hypothetical). Let's imagine we build a engine that could take a personal identity and scrape all digital information about that identity. This data is fed into a heuristic engine that builds a likely profile of that person.
Note 1 - The profile may or may not actually match that of the person
Note 2 - Note 1 doesn't matter, what matters is that a very rudimentary sketch of a person can be created.
I don't think that we're at the point yet of where we could animate this hypothetical profile, imbibe it with the ability to develop responses to new, unanticipated events in a way that was self-consistent with its profile. But we're also not very far away from being able to do that (Jeopardy and IBM). This is not consciousness, but it would be a sketch of it, or a rudimentary model of it ... or perhaps more apt, a parody of it.

The two visions of the singularity have an interesting tension in my mind. Super-human intelligence would seem to be a phenomenon of merging, while digital consciousness seems to be a phenomenon of segregating what constitutes an entity from everything else. No conclusion there, just an interesting observation that raises the question of whether "self" will mean the same thing in the future? Ten years ago it was relatively easy to separate yourself from society, disentangle yourself from all social connections. It is orders of magnitude harder to do that today. Anyway moving on, as I've already rambled too much.

My point: By trying to put a concrete form on the end state of what singularity means can distract you from seeing the very real evolution of technology and knowledge that continues to accelerate. 400 years ago, the printing press. 100 years ago the telephone and telegraph. 75 years ago broadcast radio. 50 years ago broadcast TV. 25 years ago personal computing. 10 years ago the internet and Google. 5 years ago cell phones, WiFi and social media. 2 years ago smart phones and cloud APIs for just about everything. The ability to share and network continues to grow, where it stops no one will know.
posted by forforf at 12:42 PM on June 24, 2011


First, what if there's something about the meat that makes it special?

That's an argument that's been made before, but it seems purely speculative and has a whiff of the specious "man is nature's one special snowflake" scent to it. If we didn't know any better, we could ask the same things about any given calculator's ability to perform calculations. What if it's something about these specific electronic parts that makes it special? We can ask that.

But we know it's not true of the machines that can do the limited subset of human mental functions analogous to arithmetic calculation. More complex biological machines, like our brains, might be fundamentally different from calculators or similar physical machines in some special, magical way, but I don't see any scientific reason to assume so.

Doesn't it fall on those making this argument to demonstrate there's actually something special about the meat, if they think that's so, rather than just pointing out that if the meat were special, there'd be an argument to make? That's less a counter argument than it is speculative fiction, in the absence of any evidence for the suggestion.
posted by saulgoodman at 12:46 PM on June 24, 2011


Under the (reasonable, but still unproven) assumptions that their modeling resolution was good enough and their software is scalable, a full rat brain in real time would require a larger supercomputer than any available today... albeit not by much.

Don't forget that you would still need to train the brain with actual data, though. We'd have to collect decades of electrophysiological data to do something like this on a whole brain (typical electrophys experiments right now have very small n since it's such a difficult and laborious technique, though it does seem like they're making some progress there), even absent computational considerations.

Even having solved that problem, we would only have succeeded at simulating one rat brain; we wouldn't necessarily understand how to generalize from that to an arbitrary rat brain. Data-driven models can capture observed phenomena well, but you don't always get a lot of generally useful principles out of them.

I'm not saying these obstacles are insurmountable, and the neocortical column is a great proof of principle, but I do still think the sentiment that it's mostly a matter of scaling up is pretty optimistic given where we are right now.
posted by en forme de poire at 12:50 PM on June 24, 2011 [1 favorite]


An interesting question of whether meat is privileged as a basis for consciousness. Another is whether ontogeny must recapitulate phylogeny in this respect -- to create a virtual being with a human-like consciousness is it necessary to emulate a brain? Or is there some subset of states expressed in the brain which constitute consciousness and you merely need to replicate those in some abstract way? And how far down that road to you go before you cross the line between emulation and imitation? And is the distinction even meaningful? If you have a system that perfectly imitates a conscious person, have you in fact created a conscious person or merely a system that will fool anyone else? And once again, is there a difference? Only the emulation can subjectively "know" if it's conscious. And maybe not even that. Is there a difference between a mind and a perfect imitation of one?

Fun questions, and I don't pretend to have an answer to any of them.
posted by George_Spiggott at 12:56 PM on June 24, 2011


That's an argument that's been made before, but it seems purely speculative and has a whiff of the specious "man is nature's one special snowflake" scent to it.

Mostly I'd agree with you, but mostly such argument are exactly that. But there are two other, different kinds of "special meat" argument that you're dismissing with this resort to "magical thinking":

First, that the type of isomorphism required for the logical arguments to work will never, ever happen, and in the end is just an empty logic game. The "meat" is integrated into its reality in ways that the digital never will be (because it wouldn't make sense), but which have profound impact on the nature of our consciousness. We have eyes, for god's sake, that work by assembling a low-quality source image that's passed through a cloudy vitreous substrate, and we then do some pretty heavy image processing on the result, the upshot of which is that we aren't actually capable of even processing the raw image data.

Second, that the people championing uploadability really don't have a grasp of the complexity involved. There's growing evidence, as noted, that the brain is a hell of a lot more than a bunch of neurons: Glia, plaques, hormones all play a singificant role in brain function. Certainly you could model those things, too. When you're done with that, you can model the inputs, both sensory and otherwise. (Did you eat wheat today? Did you consume alcohol? Did you just walk up the stairs?)

Doesn't it fall on those making this argument to demonstrate there's actually something special about the meat, if they think that's so, rather than just pointing out that if the meat were special, there'd be an argument to make?

Sure. Which is why, when I raise the issue, I do just that.
posted by lodurr at 1:04 PM on June 24, 2011 [2 favorites]


Holy crap, I still haven't even looked up Vernor Vinge, Hans Moravec, or Nick Bostrom on Wikipedia yet. No singularity for me today.
posted by Xoebe at 1:04 PM on June 24, 2011


Oh, it also occurs to me (and probably many others in the past) that you could model a brain perfectly in a biomechanical sense and fail to model what makes it "conscious" in that fuzzy, magical way we prize even as we fail to understand it:

We might easily fail to model some other property which turns out to be essential, such as entropy or quantum effects. We might not even know we needed to model them, let alone know how to. The trouble is we don't really know what we are.
posted by George_Spiggott at 1:06 PM on June 24, 2011 [1 favorite]


Just to be clear, I don't think meat is privileged for consciousness at all. (Though I do think accidental organisms, such those produced by evolution, might be.) What I do think is that animal "meat" might be quite special when it comes to the question of uploading or accurately simulating an animal brain.

I think it's more or less inevitable (ecological singularity notwithstanding) that there will be sentient machines (though I don't assume sentience requires consciousness). With regard to uploading human brains, I also think that's conceivable, though a) I still wonder what the point is, because b) I think it's fantastically unlikely that what you'll end up with will bear any useful experimental resemblance to a real human mind.
posted by lodurr at 1:08 PM on June 24, 2011


I suppose it might be worth mentioning that I once wrote a 10,000 word online story about these very issues.
posted by localroger at 1:26 PM on June 24, 2011 [3 favorites]


We're barreling towards the singularity at breakneck speed. I don't know if it will happen in my lifetime, but it sure is approaching quickly.

When I look at my life, the pace of technological progress has been simply astounding. I'm the first person in my family who grew up with indoor plumbing. But I'm also the first to grow up with cell phones and the internet (sort of, those became popular when I was entering high school).

I wonder what kind of technology my kids will grow up with. Mind uploads? Sure, why not?
posted by ryanrs at 2:01 PM on June 24, 2011


This depends on a very loose definition of "me," namely that "I" start with a certain state and remain "I" no matter what divergence happens afterward. The other "you" would certainly be a different person by any practical definition.

This is very much the definition we operate on, though. Brain damage, drug use, mental illness, hormones, and the changes of age all produce noticeable, even insurmountable divergence, but we don't tend to claim that this makes "I" not "me" unless there seems to be little or no consciousness left.

"I" am not as "I" was at five, or fifteen, or twenty-five, in ways that are kind of creepy when you stop to think about it (I thought Tron Legacy had an interesting take on this -- an AI copy that had been in its thirties for a thousand subjective years, complete with the same hang-ups the original man, now grown old, had had at thirty). So no, when you prick vorfeed 2.0, vorfeed 1.0 would not cry, any more than the "me" I was at fifteen cries when I cut myself. We are "different people by any practical definition", yet we are the same person. That's the problem of continuous consciousness in a nutshell...
posted by vorfeed at 2:03 PM on June 24, 2011 [1 favorite]


1. The brain is nothing more than a computer

Hahahahahahahhahahah.... ah, hahahahaha *sighs*

I just love it how people just blindly assert that consciousness is a computable number. If that's so then let's see the proof.

AI: large bag of small cheap tricks.
posted by warbaby at 2:30 PM on June 24, 2011 [3 favorites]


We are "different people by any practical definition"

Not really, no. You are clearly a continuous organism, with ongoing access to all of the inputs from your senses. If you copy yourself, you don't have that from the other. Are you really saying that if, say, you could snap your fingers and make a perfect copy of yourself as you exist in this moment, but which is wholly separate from your body/mind as it exists at this moment, then it went off and had sex with someone while you read a book, that you would point to it and say, "that is me," and, "I just had sex with someone," as if they were your own experiences? It seems obvious to me that you (the person with whom I am conversing) would not have had that experience, nor could you pull up any memory of it.
posted by adamdschneider at 2:36 PM on June 24, 2011


That's what your motherboard said.
posted by It's Raining Florence Henderson at 3:00 PM on June 24, 2011


Not really, no. You are clearly a continuous organism, with ongoing access to all of the inputs from your senses.

I don't think there's any "clearly" about this. I perceive myself as being a continuous organism with ongoing access to all of the inputs from my senses, but as near as I can tell, this is not objectively true -- memory and consciousness is remarkably easy to fool. I used to tell one of my best friend's favorite stories for years as if it'd happened to me, until he finally called me out on it, and then I didn't believe it wasn't me until a third party confirmed it. Truth be told, I still don't quite believe it, precisely because I can "pull up a memory" of it happening to me. So it goes.

Are you really saying that if, say, you could snap your fingers and make a perfect copy of yourself as you exist in this moment, but which is wholly separate from your body/mind as it exists at this moment, then it went off and had sex with someone while you read a book, that you would point to it and say, "that is me," and, "I just had sex with someone," as if they were your own experiences? It seems obvious to me that you (the person with whom I am conversing) would not have had that experience, nor could you pull up any memory of it.

I think I'd have to say "that is me" and "one of me just had sex with someone". Like I said, there'd be two of me, and I'd have to do my best to deal with the situation that way. Having a "me-me" and a "she-me" (she would certainly not be an "it") is weird, but so's having a "past-me" and a "present-me" and a "future-me", or a "workday-me" and a "weekend-me", and we're all very used to that.
posted by vorfeed at 3:03 PM on June 24, 2011 [1 favorite]


Besides, if I could really snap my fingers and make a perfect copy of myself as I exist in this moment, but which is wholly separate from my body/mind as it exists at this moment, I suspect I'd be able to say "I just had sex with someone" in all honesty shortly thereafter.

Just sayin'.
posted by vorfeed at 3:03 PM on June 24, 2011 [4 favorites]


Ok, we disagree on the definition of the word "me," then.
posted by adamdschneider at 3:07 PM on June 24, 2011


2. With enough understanding of the physical brain, we can simulate a brain using a computer

With enough wheels, my grandmother would be a bus.
posted by escabeche at 3:12 PM on June 24, 2011 [3 favorites]


To elaborate, I consider it blindingly obvious that an exact copy forked from me would not deserve the label "me". It would not share my sensory inputs, it would not have the same memories as I do, as of whenever it was forked, it would certainly be thinking different thoughts at any given moment and be able to voice those thoughts, proving its selfhood, and killing it would not affect my integrity in any way. Not me.
posted by adamdschneider at 3:15 PM on June 24, 2011


Is not an identical twin simply an exact copy of you that forked from you right off the bat? A clone would be an identical twin forked from you at an even later date. Something whose consciousness is forked from mine would, as it diverges from me in experience, would very quickly become an independent being (though one with a fair few decades of shared memories with me).
posted by chimaera at 3:25 PM on June 24, 2011


I definitely find the Ship of Theseus method for "mind state" uploading to be a fairly attractive idea.
posted by chimaera at 3:26 PM on June 24, 2011 [1 favorite]


To elaborate, I consider it blindingly obvious that an exact copy forked from me would not deserve the label "me". It would not share my sensory inputs, it would not have the same memories as I do, as of whenever it was forked, it would certainly be thinking different thoughts at any given moment and be able to voice those thoughts, proving its selfhood, and killing it would not affect my integrity in any way. Not me.

This is obviously never going to be technically possible, but what if you were physically split in half length-wise, and your body spontaneously regenerated itself for both? So that one was left-original and right-copy, and the other was right-original and left copy. Are either of them you? Do you cease to exist?

Something whose consciousness is forked from mine would, as it diverges from me in experience, would very quickly become an independent being (though one with a fair few decades of shared memories with me).

Are you an independent being from your five year old self? Your experience has diverged greatly since then and in fact your body no longer shares much of the material from that version of yourself. The only difference is that you and your five year old self don't exist at the same time.
posted by burnmp3s at 3:34 PM on June 24, 2011 [1 favorite]


To elaborate, I consider it blindingly obvious that an exact copy forked from me would not deserve the label "me". It would not share my sensory inputs, it would not have the same memories as I do, as of whenever it was forked, it would certainly be thinking different thoughts at any given moment and be able to voice those thoughts, proving its selfhood, and killing it would not affect my integrity in any way. Not me.

Sure, but all of these would apply equally well (at the time, at least!) if you time-traveled to three weeks from now, met yourself, and then killed yourself. That doesn't mean you won't be "you" in three weeks.
posted by vorfeed at 3:54 PM on June 24, 2011


I knew it was me that did it! I would have gotten away with it, too, if it weren't for... ME!
posted by It's Raining Florence Henderson at 3:58 PM on June 24, 2011


I wouldn't worry that much about it taking a supercomputer from 2015 to simulate a rat brain. Once we've tested and verified that the model works, we can get down to the task of optimization. Prune the parts that don't seem to matter, lose a few decimals of precision and time granularity, and suddenly you might end up with something that can run on tomorrow's desktop PC.

Of course, it won't be a perfect model of a rat's brain anymore, but "perfect" is often the enemy of efficiency.
posted by ymgve at 4:15 PM on June 24, 2011


I'm really sorry that cstross isn't in this discussion -- although it seems pretty clear from this essay that he's not too interested in hashing out these issues, I think his books have provided some really interesting fodder.

One vignette in particular stands out to me -- near the end of Accelerando, there's a description of the childhood activities of the protagonist's son, involving assassination as a playtime game, since regular backups make "death" a relative triviality. It got me thinking about the difference between what consciousness is versus what it means. In the end, I think that (1) I agree with the argument that the persistence of consciousness has no ultimate reality, and that being vaporized and reconstituted would not be fundamentally different from going under anesthesia, and (2) the meaning we impute to consciousness is much more about "ownership" -- of memories, aspirations, personality.

That is, if you could fork yourself, would you really care that much about consciousness? Is it the experience of seeing out of your eyes that's really important, or is it your sense of your history, your anticipation of your future, your participation in the universe? If your duplicate had all of these -- had absolutely every quality that makes you you -- you might not see out of their eyes, but I'm not sure that stepping into the molecular disassembler would have the same emotional impact.
posted by bjrubble at 4:55 PM on June 24, 2011


I had an odd experience in my mid-30's which speaks to the "young me, old me" thing in a big way.

Things were not going well on several fronts; I would not find out for six or seven more years that I was pre-diabetic and that's why I was gaining weight, had no energy, and my body was starting to fail in several other worrisome ways. One day I was reading something that viscerally reminded me of a state of mind I hadn't experienced in many years, and my younger self walked in on my older self.

After a moment of confusion, I felt as if my consciousness had traveled forward in time to land in my 35-year-old body. In fact, I could place the point I had traveled from pretty accurately by the worries and concerns which were suddenly foremost in my mind -- all of which, I noted with a reaction of near euphoria, I had managed to completely solve. Young me found old me's problems relatively minor and old me to be in decent enough shape. You'd think young me would be a bit upset to lose 10 or 12 years of life in an instant, but I was actually bouyant with all that I'd accomplished, it seemed effortlessly to young me, in that time. I walked around my house gaping in wonder at all the crap I'd collected. All this stuff that was old and familiar was suddenly new and amazing.

I was worried that young me might have a problem doing old me's job but he didn't; he did have my memories and skills even if he marveled at the power of the computers I was using. The first night I went to sleep worrying that I might wake up old me again but I didn't. It actually took a couple of weeks before old me started creeping back in. That transition took a couple of days.

It's now been about as long since that event as that event was from the version of me who walked in, and there isn't much I wouldn't do if I thought it would let me have that experience again.
posted by localroger at 4:55 PM on June 24, 2011 [3 favorites]


I just love it how people just blindly assert that consciousness is a computable number. If that's so then let's see the proof.

I just love it how people just blindly think there are things in the Universe that aren't computable numbers. If there are let's see the proof.
posted by localroger at 4:59 PM on June 24, 2011 [1 favorite]


8/0
posted by It's Raining Florence Henderson at 5:03 PM on June 24, 2011


So if your forked self kills someone in a prosecutable way, should both of "you" be held criminally responsible?
posted by adamdschneider at 5:13 PM on June 24, 2011


So if your forked self kills someone in a prosecutable way, should both of "you" be held criminally responsible?

I'd say that the critical question would be whether the intent was formed before or after the fork. I think this would fit with both public policy and existing notions of mens rea.
posted by bjrubble at 5:31 PM on June 24, 2011


I'd still be uncomfortable with that. What if I really wanted to kill someone, changed my mind, but my other me didn't? Not to mention that it would be quite hard to prove exactly when you formed intent barring external evidence.
posted by ymgve at 5:47 PM on June 24, 2011


Didn't the fork then murder thing happened in a Doctorow book?
posted by ODiV at 6:01 PM on June 24, 2011


localroger: "I just love it how people just blindly think there are things in the Universe that aren't computable numbers. If there are let's see the proof"

Here you go.
posted by Proofs and Refutations at 6:04 PM on June 24, 2011 [3 favorites]


P&R, within the first line of your link the concept of a "real number" turns out to be central to whatever this Chaitin's Constant thing is. As you are probably aware real numbers do not exist in the Universe. I was asking for something that can be expressed as an assemblage of matter and energy. Something like a human brain.
posted by localroger at 6:19 PM on June 24, 2011


This chapter (and the next) of The Mind's I is a great exploration of a lot of these ideas. Really, the whole book is very thought-provoking on this subject and other mind things.
posted by OverlappingElvis at 6:25 PM on June 24, 2011


I wouldn't worry that much about it taking a supercomputer from 2015 to simulate a rat brain. Once we've tested and verified that the model works, we can get down to the task of optimization. Prune the parts that don't seem to matter, lose a few decimals of precision and time granularity, and suddenly you might end up with something that can run on tomorrow's desktop PC. Of course, it won't be a perfect model of a rat's brain anymore, but "perfect" is often the enemy of efficiency.

It was reading Errol Morris' series about his brother's possible invention of email that brought some of this stuff home to me in a different way. The whole thing is well worth reading, but there's one chunk of a couple paragraphs that's relevant here, and I beg your indulgence for quoting it. This is Morris talking to Jerry Saltzer, a early computer scientist who worked with his brother at MIT in the 1960s:
ERROL MORRIS: Van Vleck said that computer programs are like hieroglyphs. If you lose the people who have actually created them, they become difficult — maybe even impossible — to understand.

JERRY SALTZER: The programs were not terribly well-documented. A program operates in an environment. If you write a program today that’s supposed to run on your desktop computer, it depends on the existence of perhaps Windows or the Macintosh operating system. And if you want to try to get that program running 20 years later, you have to run it in the same environment, or you have to reconstruct the environment.

ERROL MORRIS: How difficult is that to do?

JERRY SALTZER: The environments are very complex. And most programs have errors in them, simply because people aren’t perfect programmers. But the errors interact with the environment in so many different ways, and sometimes they are benign, other times they cause trouble. The ways that cause trouble usually show up as problems on your screen, and they get fixed. But the things that are benign, don’t show up. And if you then attempt to run the same program in a reproduction of the original environment, the errors may no longer be benign, because you have not actually reconstructed the original environment perfectly. And so, somebody trying to get CTSS running again, for example, will be operating on an emulation of an IBM 7094, not on an actual IBM 7094. And the emulation isn’t perfect. And that imperfection may interact with an imperfection in the software to produce a completely unexpected result. Now, the question is: Well, what did you intend to happen in this case? And the answer is, “Well, go talk to the programmer.” “Yeah, but he died 30 years ago.”

ERROL MORRIS: Are you saying the meaning of a computer program is not recoverable?

JERRY SALTZER: It’s difficult. Without getting into the head of the original programmer, it can be challenging. On the other hand, a smart person can work around a problem or figure out what must have been going on in the mind of the original programmer. Actually, it’s a little bit easier to deal with a program from the 1960s than it is to deal with Aramaic.

This passage seems to me to highlight a number of practical problems that exist with the Singularity, or at least the mind-uploading aspect of it. Just about every single metaphor we've used in this discussion presumes a perfect copy. But can that be done, truly? Outside the skull, does a brain work? Outside the world? Who would be the "smart person" who would work around the problem and figure out what was going on in the mind of the original programmer? The computer within which the simulation was housed? How would it know how to do that? On what basis would it decided? Can we even comprehend such a being, comprehend enough to tell it it's wrong, that's not how my mind works, put me back? And in the analogy of brain-as-emulated program --- you can't emulate something on a machine less sophisticated than itself. If consciousness is an emergent property, wouldn't anything smart enough to contain and emulate a human brain also be conscious? If so, how do we know it'd be willing to just, you know, let us hang out in storage, be willing to dust the shelves and keep feeding us electricity for all time?
posted by Diablevert at 6:39 PM on June 24, 2011 [2 favorites]


I'd still be uncomfortable with that. What if I really wanted to kill someone, changed my mind, but my other me didn't? Not to mention that it would be quite hard to prove exactly when you formed intent barring external evidence.

Point taken, but evidence is rarely cut and dried, and "proof" is always a matter of judgment. I'd put a lot of weight on the public policy consequences of allowing free crimes by forking immediately before a crime so that one fork gets off scot-free. (Not to mention that "it wasn't me, it was the other fork" would be an obvious defense if guilt were not shared.)

If nothing else, a world with forking would provide a fascinating laboratory for examining just how much agency we really have in our actions. I suspect that instances of forks taking wildly divergent paths would be quite rare.
posted by bjrubble at 6:40 PM on June 24, 2011


Dammit, dammit dammit. Stupid typo. Did not properly close blockquote. The para that begins "This passage seems to me" is me and not MR. Saltzer, and if the mods were willing to descend and amend my error and delete this I would be most grateful.
posted by Diablevert at 6:41 PM on June 24, 2011


If nothing else, a world with forking would provide a fascinating laboratory for examining just how much agency we really have in our actions. I suspect that instances of forks taking wildly divergent paths would be quite rare.

Identical twins do it all the time. I mean, there's a million bajillion studies going on and on about the unexpected things they share. But they choose different professions, move different places, one may love to travel while the other's a homebody, one's more outgoing the other more reticent, etc.
posted by Diablevert at 6:43 PM on June 24, 2011


localroger, if you don't want a proof don't ask for a proof. Chaitin's constant is perfectly well defined. If I were to set up a large but finite simulation of the first n Turing Machines (say a few billion), and run them for a large but finite amount of time (say fifty years) then I would expect about 0.00787499699... of them to have halted (less, because some would not have halted yet, but the longer I ran them the closer I'd get). That's "something that can be expressed as an assemblage of matter and energy".
posted by Proofs and Refutations at 6:50 PM on June 24, 2011 [1 favorite]


Even assuming we don't have any free will, I think identical twins (and clones, and copies) would always end up different. After all, they are not getting the exact same sensory input, since they don't occupy the exact same physical space in the world. Over time all these differences add up, and the twins become distinct personalities. (Sometimes radically so, due to chaos effects)
posted by ymgve at 6:50 PM on June 24, 2011


Even assuming we don't have any free will, I think identical twins (and clones, and copies) would always end up different.

Eventually. But identical twins have no commonality of experience at all, so I don't think that's a very close analogy.

The standard conception of free will (which I think is hogwash, so I might be misrepresenting it) seems to hold that forks would make different choices essentially right off the bat. I think that they would exhibit remarkable "stickiness" of behavior, and demonstrate that "choice" is 99.99% state + input, with almost nothing of what free-willers would regard as agency.
posted by bjrubble at 7:18 PM on June 24, 2011


You are your brain. Without the brain there is no you. You can't copy it, because it is the brain. You could only copy it with a literal copy of your own brain in your own body.

I gotta say, I don't think you need to get into complex arguments to rebut singularity proponents. The phenomenon is hardly new; it's just a cargo cult for rich, white, western boys uncomfortable with their mortality and scared of the world they are helping to shape.
posted by smoke at 7:47 PM on June 24, 2011 [2 favorites]


Actually the choices it would make are irrelevant to me. I wouldn't experience it as "me" in any sense whatsoever.
posted by adamdschneider at 8:11 PM on June 24, 2011


You can't copy it, because it is the brain.

Oh, well that's airtight logic.
posted by vorfeed at 8:12 PM on June 24, 2011 [4 favorites]


Re the uploaded conciousness thing: What if, rather than copy the conciousness, you just made a lot of peripheral computational power available to it. Maybe as conciousness made more and more use of the additional computational resources, the conciousness would, over time, just sort of climb into the computer. And then at some point down the road the conciousness might notice that, "Hey my brain died three years ago."
posted by Trochanter at 8:45 PM on June 24, 2011 [2 favorites]


We are not evolved for existence as disembodied intelligences

Maybe you aren't. The sooner I can shed this meat suit the better. Unfortunately I recognize this desire for trancedence as essentially religious, and fear that despite my supposed belief in transhumanism it's yet another crutch to deal with the fear of death.
posted by Lovecraft In Brooklyn at 2:12 AM on June 25, 2011


P&R, here is my argument: As far as we know, the universe itself is a collection of objects interacting by relatively simple rules which are well within the scope of Turing machine emulation. Therefore, anything which actually exists within the universe must be computable, because the Universe is computing it. Anything which purports to make an exception to this is rather obviously wrong*, no matter how plausible it may seem.

I have read the Chaitin article three times and I cannot see where it has anything to do with anything that physically exists. Yes, you could build a billion Turing machines and blah blah, except that you aren't and it wouldn't matter if you did.

There seems to be a close parallel between your assertion that this is a physically real problem and some aspects of quantum mechanics where, while you may not know when a particular particle will disintegrate, you can know with great precision how many of a large number of them will over time. Your inability at a certain scale to know an individual solution does not the solution is not computable, it just means that you do not have access all the data the Universe is using to construct the solution.

* The one exception I would admit to this is the possibility that various formulations of metphysical woo describe the Universe better than Physics does. Human experience is full of holes which can make this seem plausible, but I somehow doubt that this is your point. Also, I would consider the statements "there are metphysical woo exceptions to the laws of physics" and "the Universe is an emulation" to be equivalent.
posted by localroger at 5:15 AM on June 25, 2011


Maybe you aren't. The sooner I can shed this meat suit the better. Unfortunately I recognize this desire for trancedence as essentially religious, and fear that despite my supposed belief in transhumanism it's yet another crutch to deal with the fear of death.

I dunno that being transformed into binary would ultimately mean transcending embodiement. Sure you shed the meat suit, but you still need power, you just don't get it from the relatively arduous process of digesting plant and animal matter. Ultimately there's a plug somewhere (or perhaps a solar panel). If there's a plug it can be pulled. You may well reduce the risk of death because of your new form, but you'll never eliminate it, so long as your existence is taking place in the universe.
posted by Diablevert at 6:29 AM on June 25, 2011


On further thinking, why the Chaitin construction does not prove anything about the Universe: If you build a billion physical Turing machines and run them for a few years, neither you nor the Universe is making a prediction about whether any particular one of them will halt. What you are doing is actually running the Turing machines -- which is the very definition of computable functionality -- and observing whether they halt. Your ability to do this has nothing to do with the Halting Problem since the Halting Problem doesn't say you can't actually run a Turing machine and see if it halts, it only says you can't look at the program by inspection and predict whether it will halt.
posted by localroger at 6:55 AM on June 25, 2011


localroger, how far does your finitism extend? Do you deny the validity of the real numbers? rational numbers? natural numbers after a certain maxmum? calculus? What kinds of mathematical arguments do you accept?
posted by Proofs and Refutations at 7:39 AM on June 25, 2011 [1 favorite]


As far as we know, the universe itself is a collection of objects interacting by relatively simple rules which are well within the scope of Turing machine emulation. Therefore, anything which actually exists within the universe must be computable, because the Universe is computing it.

This argument appears to be circular, and if it isn't, then it's because the first line contains some massive assumptions. It appears to embody scientific determinism as it was preached in the 19th century. If nothing else, the existence of quantum mechanics would appear to contradict that.
posted by fatbird at 7:47 AM on June 25, 2011


P&R, there is a point of departure beyond which mathematics continues to have an internal structure worth exploring, but at which point it stops being useful for describing the actual Universe. It turns out the Universe we can observe and interact with is quite demonstrably finite -- it has a finite size, the Hubble limit, beyond which there may or may not be more Universe but as far as we are concerned it is completely inaccessible. There is a quantum limit on what we can know about the state of particles within that finite space. It turns out you can completely describe the position of an at-rest proton within the volume of the Universe in about 330 bits. (You get slightly different answers depending on some assumptions, but the key is only slightly different.) There are a finite number of particles in that finite volume of universe, so the universe as far as we are concerned as mortal beings interacting with it contains a finite amount of information -- somewhere in the vicinity of 1081 bits, give or take a couple of orders of magnitude. The interactions between particles are very mechanical and fall off quickly enough with distance to often be handled in bulk without loss of observable resolution, e.g. you don't have to calculate the effect of every nucleon in the Sun to determine how the Earth is going to behave with respect to it. In fact, some aspects of the Universe such as the speed of light and Heisenberg uncertainty principle look like exactly the kind of design points God would include to keep the computational costs down.

The existence of quantum mechanics says only that there is state information which is inaccessible to us. Quantum effects actually add weight to the argument that the Universe is a machine, since they are a perfect screen behind which the workings might be hidden. My departure from 19th century determinism is that the Universe is chaotic, and the right kind of relatively simple machine can be its own simplest accurate emulation, so that the only way to see what the machine will do is to set it up and run it through all of its possible states. Such a machine is deterministic but not predictable, and so silly notions about the implications for free will and the purpose of running the machine become irrelevant. (Ironically enough, the first description of this phenomenon, made long before Benoit Mandelbrot came along, was by some fellow named Turing.)

The only question which is really pertinent at our scale of operation is whether the Universe is a relatively simple fixed-purpose machine, which presumably self-organized the order we observe today, or whether it is a more general purpose machine which is capable of selectively violating the apparent simple order it expresses on occasion. I don't think we can rule out the latter case completely, but if it is the latter case there are profound implications that go far beyond the existence of a possible OFF switch.
posted by localroger at 9:00 AM on June 25, 2011 [1 favorite]


On further thinking, why the Chaitin construction does not prove anything about the computabilty of consciousness: If you build a billion physical Turing machines and run them for a few years, neither you nor the Universe is making a prediction about whether any particular one of them will exhibit consciousness. What you are doing is actually running the Turing machines -- which is the very definition of computable functionality -- and observing whether they become conscious. Your ability to do this has nothing to do with the Consciousness Problem since the Consciousness Problem doesn't say you can't actually run a Turing machine and see if it becomes conscious, it only says you can't look at the program by inspection and predict whether it will become conscious.

If you assume your conclusion, exotropy makes sense. If you actually have to prove and test the assumptions of exotropy, it looks like handwaving nonsense. The basic assertions are unproven and based on current experience, untrue. If the proponents want to claim otherwise, the burden of proof is on them.

Claiming the moon is made of green cheese means at some point you are going to have to produce the cheese. Claiming the brain is "just" a computer, let's see it. Don't be pulling out a Polaroid photo of somebody and claim that you cloned him.
posted by warbaby at 10:24 AM on June 25, 2011 [1 favorite]


warbaby, I have no idea what point you are trying to make by replacing "halt" with "become conscious" in my point about whether the Universe can contain non-Turing-computable objects. It certainly doesn't seem to have anything to do with any point I was making, either about consciousness or the computability of objects that exist within the Universe.

As I said in my other reply to P&R, all of the universe's observed characteristics are compatible with it containing a finite amount of information on which a finite amount of processing is performed to effect the passage of time, and if that is the case consciousness must be a computable function since it exists within the Universe. That is far from the result of your ridiculous twisting of my words.

The last five hundred years have been a steady progression of mysteries being explained as measurable and predictable (or, as with the weather, unpredictable within predictable parameters) natural phenomena. Consciousness is one of the few mysteries left that you can talk about as if it might never be solved without sounding like a complete Luddite hick. I doubt that will remain the case for more than 50 years or so.
posted by localroger at 11:07 AM on June 25, 2011 [1 favorite]


LOL the use of "cyberspace" in the article.

Anyone who thinks that somehow, humans are exempt from the rule that every species eventually becomes extinct needs to retake biology.
posted by Ironmouth at 12:32 PM on June 25, 2011


The only question which is really pertinent at our scale of operation is whether the Universe is a relatively simple fixed-purpose machine, which presumably self-organized the order we observe today, or whether it is a more general purpose machine which is capable of selectively violating the apparent simple order it expresses on occasion.

There is no "universe" and it has no "purpose." This is projecting a human heuristic on to the totality of natural phenomena and then anthropromorphizing it. Only in our minds does the concept of "universe" exist, and acting as if it has a purpose is just assigning the shorthand we use to describe human behavior to inanimate objects.
posted by Ironmouth at 12:38 PM on June 25, 2011


Anyone who thinks that somehow, humans are exempt from the rule that every species eventually becomes extinct needs to retake biology.

This may be true, but it has little to do with the question at hand. "Eventually going extinct" is perfectly compatible with a technological singularity, for many values of "eventually". Asimov pointed this out in The Last Question, even.

And by the way, by your logic there is no "rule" that every "species" goes "extinct", either. These are just as much human heuristics as "universe" and "purpose" are.
posted by vorfeed at 1:13 PM on June 25, 2011 [2 favorites]


For over 20 years, Penrose's arguments that consciousness is non-algorithmic have been publicly debated. It's a strong position and one that the singularity fans don't seem to be able to address coherently.

If you want the long form on my disbelief on consciousness not being a computable number, read Penrose.

When I replace halt with become conscious, it's a simple semantic substitution of the successful operation of an algorithmic process. If we are talking about a computer becoming conscious, it's the same thing as halting in the halting problem. It means "works" as opposed to crashes or produce gibberish output.

As far as localroger's attempt to reduce things to integers, good luck. Real numbers are real. That's what Turing's use of the Cantor diagonal argument is all about.

Sort list of unlikely things:

Machine consciousness
Faster than light communication
Time travel
Safe atomic power
Reasonable Republicans
Non-gullible Libertarians
Courageous Democrats
posted by warbaby at 1:15 PM on June 25, 2011 [2 favorites]


I don't think that dualism is required to be skeptical of the ability to simulate, and therefore upload entire minds. The required breakthrough isn't technological, so throwing more hardware or clock cycles at the problem isn't necessary, it's mathematical.

Most complex systems are chaotic, the results of the system are strongly dependent on small changes in initial starting conditions. Differences in estimation of the starting conditions well below measurement error can result in radically different outcomes.

Now we can cheat with a variety of methods, but there's an inherent tradeoff when it comes to accuracy vs. ability to run the simulation. Which is why we have supercomputers work days on crude estimates on short-term physical processes that were understood in the gross abstract sense a half-century ago. And it's still often just easier to build the physical system and measure it with the best devices we have available. Modern encryption is based on simple concepts, but it's easy to construct cryptosystems that can't be cracked by any computer that we could build in this solar system.

My other argument against singularity is that disruptive technologies get smothered in their crib by their parents or are subverted into supporting the dominant economic models. Few of the techno-utopian dreams of the 80s came to pass, and we're largely back where we started with most markets dominated by a handful of corporate gatekeepers that have successfully appropriated electronic communication. Gibson on that score appears to be prophetic, although those players have little interest in doing anything with commando squads when backroom deals and lawsuits appear to work.
posted by KirkJobSluder at 1:21 PM on June 25, 2011


When I replace halt with become conscious, it's a simple semantic substitution of the successful operation of an algorithmic process. If we are talking about a computer becoming conscious, it's the same thing as halting in the halting problem. It means "works" as opposed to crashes or produce gibberish output.

The halting problem is that you can't write a program to evaluate if another program will eventually stop. But obviously any program can halt. So if you replace "halt" with "become conscious" then that just means that no program could check if another program could become conscious. Even if your substitution makes sense (which I don't think it does), it says nothing about whether or not a program can be conscious.
posted by burnmp3s at 2:15 PM on June 25, 2011


warbaby again:

Real numbers are real. That's what Turing's use of the Cantor diagonal argument is all about.

Wasn't someone recently saying something derogatory about taking your result as an assumption? In any case while real numbers have a very demonstrable usefulness in that the models we make with them describe reality to various degrees, that does not mean they are "real" in the sense that any particular real number decribes some thing that actually directly exists and can be measured. IRL all measurement is subject to error, which means that the world could indeed be made of integers. We have no direct evidence otherwise.

Until quantum mechanics came along there seems to have been a sense that the motion of particles was infinitely fine, and thus real numbers might actually describe the universe. QM kind of blew that up though. Alan Turing curb-stomped that idea and Claude Shannon started shoveling the dirt into its grave. The QM universe is much more likely to really be made of integers than the one it displaced.

The 16th through 20th centuries were about developing models with such fictions as real numbers which happen to approximate the bulk behavior of the relatively simple components of the universe. So far the 21st century is shaping up to be about ever more elaborate models detailing the behavior of systems when they depart from or can't be described by the convenient approximations of things like Calculus.

OOOH! A LIST!

Machine consciousness Obviously disagree
Faster than light communication Mostly agree
Time travel Yep
Safe atomic power Yep
Reasonable Republicans Yep
Non-gullible Libertarians Yep2
Courageous Democrats Sadly, Yep

Looks like we agree on more than we disagree.
posted by localroger at 2:52 PM on June 25, 2011


Oh, I forgot to mention: I'm aware of Penrose's arguments, and I basically find them ridiculous. The man knows far less about information theory than I do about physics.
posted by localroger at 2:54 PM on June 25, 2011 [1 favorite]


Also I think it's somewhat premature to look at today's computers and claim that we'll never be able to match the processing that happens inside a brain.

Let's say I have some decent AI that could do most of the things a living bat can do (flight control, echo location processing, etc.). Then I ask the best scientists in the world to make me an artificial bat that can match all of the characteristics of an actual living bat (size, weight, maneuverability, vision, hearing, energy intake, etc.). They wouldn't be able to fabricate anything close to as good as an actual bat. And yet it would be silly to claim that there is no possible way that mankind could ever fabricate artificial bat ears that could meet or outperform the standards of actual bat ears.

Yes, we can't fabricate a brain today, but we can't fabricate much simpler things than that anyway. A brain isn't made out of magic, it's a physical device. We're a long way from making something equivalent, but don't count humans out long term when it comes to making stuff.
posted by burnmp3s at 3:01 PM on June 25, 2011 [1 favorite]


burnmp3s: Also I think it's somewhat premature to look at today's computers and claim that we'll never be able to match the processing that happens inside a brain.

I'll make the argument that we already have super-human intelligences. They just are not that interesting to talk to because they didn't evolve for human sociability or human problems. We've already fabricated better brains, but the behavior of those brains only marginally approximate anthropic ideas of intelligence. It's not the intelligence that's flawed, but our ideals that it must work according to our standards.

Flight is a great example, why build ornithopters when you can get more power from rotary engines and more lift from either fixed or rotary wings? Why do the hard thing of emulating a fish's ability to transform turbulence into thrust when rotary propellers are easy and scale well?

The problem isn't fabrication, that was fixed a few years ago. The problem is simulation of a physically chaotic system. And I'm unconvinced that's a problem that scales linearly, putting it within reach of our ability to build super-human intelligences. You'll never build a perfect orrery of our solar system for the same reason.

There's no magic involved here, only hard math and physics.
posted by KirkJobSluder at 3:27 PM on June 25, 2011 [1 favorite]


Anyone who thinks that somehow, humans are exempt from the rule that every species eventually becomes extinct needs to retake biology.

Anyone who thinks that there is such a rule didn't take biology.
posted by George_Spiggott at 5:11 PM on June 25, 2011


"It turns out you can completely describe the position of an at-rest proton within the volume of the Universe in about 330 bits."

What? An at rest proton doesn't have a definable position.
posted by Proofs and Refutations at 7:38 PM on June 25, 2011 [1 favorite]


P&R, of course a proton has a definable position. It consists of a probability field which has a center and a falloff over distance. The center of that probability field is what most reasonable people would call its "position."

Really, your statement is technically true with respect to the math of QM, but baldly stated as you do is a perfect example of where following the math leads to absolute craziness in terms of physical observations. Do you really think we have no idea at all where the individual particles are in a salt crystal? Taken literally, your statement would imply that solid matter cannot exist. Obviously this is not how the Universe works.

So using the language in a reasonable manner, of course particles have positions, it's just that they have positions that we can't know beyond a certain accuracy. Which was actually my point; if you take the best possible measurement, throwing away all velocity information, your estimate of the position of a proton can be fully described in about 330 bits within the volume of the Universe. If you insist on knowing more about its velocity -- such as that you left it in its place in a crystal lattice -- you will need less bits for position but you will now be using some to describe velocity. In any case the amount of information you can collect about that particle is finite. That is the very essence of the Uncertainty Principle.
posted by localroger at 6:24 AM on June 26, 2011


Anyone who thinks that somehow, humans are exempt from the rule that every species eventually becomes extinct needs to retake biology.

I think the alligators and sharks missed that class.
posted by localroger at 7:51 AM on June 26, 2011


Using language in a reasonable manner there is a huge difference between a proton at rest and a proton whose velocity is unknown. It is very hard to have a productive conversation about technical topics when you refuse to use language precisely. Especially the kind of controversial claims you are making in this thread.

Regardless, this has become a derail from the thread, although I would be happy to continue this conversation over MeMail.
posted by Proofs and Refutations at 7:56 AM on June 26, 2011


P&R, I don't really think our conversation is a derail because it speaks directly to the subject of the thread -- titled by the OP "computational theology."

We are going to use words a bit differently because I learned computers and radio first and I see physics and math through the lens of information theory, so I tend to be precise about the information content of systems and sloppier about the less important matter of what that information represents. You're coming from the other direction, seeing information theory through the lens of math and to a lesser extent physics.

Ironmouth knew exactly what I meant by "general purpose" and you know exactly what I mean by "position." And while it was more technically incorrect you now know that by "at rest" I meant something more precisely like "measuring entirely for position accuracy and ignoring velocity."

Simple question: Do you believe it is possible to acquire an infinite amount of information about a particle through measurement? Because no matter what mathematical model you use if your answer to that is "yes" you are using a different version of the Heisenberg Uncertainty Principle than they were teaching 25 years ago.

In the world that has been revealed to us in the wake of Godel, Turing, Claude Shannon, and Benoit Mandelbrot, I would consider an infinite Universe the extraordinary claim. But you obviously disagree, and neither of us is likely to convince the other.
posted by localroger at 8:48 AM on June 26, 2011


No, no measurement can provide infinite information. I am quite happy to agree with you there.

I am still unclear as to your exact position on the use of mathematical infinities. Is it meaningful? If so, how if the universe is finite? If not, why is it so useful in calculations?

My main objection to a discrete universe is that General Relativity strongly implies continuous spacetime, and it's a pretty strong theory. I also haven't ever seen a discrete probability theory married with QM, although I'm not familiar with the more exotic variations there, so if you have an example I'd be fascinated.

I have to admit at being truly perplexed by your citing of Godel, Turing, Shannon and Mandelbrot in an argument against infinity. All of them used infinities in their mathematics to produce useful results. Hell, it's hard to think of a mathematician more convinced of the reality of infinity than Godel.
posted by Proofs and Refutations at 9:21 AM on June 26, 2011


Flight is a great example, why build ornithopters when you can get more power from rotary engines and more lift from either fixed or rotary wings? Why do the hard thing of emulating a fish's ability to transform turbulence into thrust when rotary propellers are easy and scale well?

We can make things that seem much more impressive animals because we have access to enormous amounts of energy and much better materials. For example a blue whale has to eat around 8000 pounds of krill per day to get the equivalent of 50 gallons of gasoline in energy. Good luck designing a submarine the size of a blue whale that runs on 50 gallons of gas per day. Or try making a device to capture energy from sunlight and store it, using only the material you can dig out of a patch of soil. Oh and make sure you can design all of them to make copies of themselves automatically.

Yes, there are definitely ways that we are better at creating devices that solve certain tasks (wheels are better at moving in a straight line on a flat surface than legs are) but my point is that when you talk about directly matching a device in nature, we can come up with a crude device to solve the same sorts of problems, but we not even close to being able match most natural device with all of their complexities and constraints from scratch. When we have the ability to construct devices that can actually match natural equivalents, it will make our current fabrication skills look like the stone age.

I'll make the argument that we already have super-human intelligences. They just are not that interesting to talk to because they didn't evolve for human sociability or human problems. We've already fabricated better brains, but the behavior of those brains only marginally approximate anthropic ideas of intelligence. It's not the intelligence that's flawed, but our ideals that it must work according to our standards.

Just on a raw computing power perspective, our computer "brains" that we have built are nowhere near the level of actual human brains. There's just way more information and processing going on in 100 billion analog neurons connected by over quadrillion synapses than there is going on in a cpu. And that's just the hardware side, our software is nowhere near as sophisticated at solving comparable problems. For example, each of our optic nerves is a bundle of over 1 million individual nerve fibres, and the brain processes all of that visual information in very complex ways many times per second, much more complex than our best visual processing-specific AI. Our thinking machines are better at certain task like doing a lot of mathematical calculations and storing the results indefinitely, but again I would say that is basically the same situation as that we can create a space satellite that runs on solar power and yet we can't fabricate the equivalent of a tree. We will get there eventually but you have to look at our current methods as being fairly primitive with a lot of room for improvement.
posted by burnmp3s at 9:33 AM on June 26, 2011 [1 favorite]


burnmp3s: How much of that complexity though is an artifact of the fact that it's a massively parallel adaptation of less-than-ideal systems. Organic systems are complex not because they're miraculously designed, but because they carry with billions of ears of evolutionary legacies with them.
posted by KirkJobSluder at 9:53 AM on June 26, 2011


Organic systems are complex not because they're miraculously designed, but because they carry with billions of ears of evolutionary legacies with them.

I think more of the complexities of higher-level animals is due to the amount of problems they have to solve. Try building a machine that can reach a top speed of 50 MPH, can handle very difficult terrain with minimal wear and tear, processes plants for all of its energy needs, can heal significant damage itself, and can replicate itself, and make it less complex than a horse. We have created less complex devices to perform some of those tasks, but not all of them at the same time. Or going from the other direction, make a device that can replicate itself inside a hostile host body and transfer itself across a global population that's less complex than a flu virus. Or make a machine that converts carbohydrates to alcohol and carbon dioxide that is less complex than yeast. There's not a lot of waste or inefficiency in a modern organism given the constraints they evolved in, especially compared to devices we can create under similar constraints.
posted by burnmp3s at 10:20 AM on June 26, 2011 [1 favorite]


I am still unclear as to your exact position on the use of mathematical infinities. Is it meaningful? If so, how if the universe is finite? If not, why is it so useful in calculations?

Infinities arise within models that we build to approximate the behavior of things we observe. When those models are useful, the infinities they contain are useful. But there is another one of those use-of-word distinctions that will probably cause us friction over the idea of whether things like infinities or real numbers "actually exist."

I learned enough radio theory to get my ham license, which included a lot of information about sidebands and bandwidth, and I taught myself to program several early computers in assembly language before I was taught Calculus. This meant that before I learned Calculus I had a very seat of the pants appreciation of finite math, and while I saw the convenience of Calculus for describing RL system behavior, I never thought for a moment that physical systems were acting that way because they were really being iterated at an infinitesimally small scale. I saw the infinity as an approximation that made the math easier.

To take a less controversial example, Kepler's laws do not describe planetary motion because the planets are really measuring the area of ellipses they traverse; that just happens to be a model which is usefully parallel to behaviors that arise for a totally different reason, as Newton showed. But Newton's laws only accurately describe planetary motion if you account for perturbations which nobody took very seriously for hundreds of years. Only in the last few decades, using computer models, have we realized that planetary orbits are wildly unstable, that none of Sol's planets are anywhere near the distances at which they formed from the Sun, and that solar systems which get unlucky in the matter of interorbital resonances can be ripped apart by near-collisions of their gas giants. That is an example of something that could not be derived from equations at all, even though equations are used in the computer model; mathematical models like Kepler's only work when physical systems happen to converge on the model, and outside of certain parameters such models have always failed.

I tend to regard all models that require infinities, singularities, infinitely precise real numbers, and so on as being like Kepler's laws; they may usefully describe the Universe, but not necessarily because the Universe really works that way.

The philosophy of mathematics that tended to regard inifity as a real thing was not informed by firm understanding of the Hubble Limit, the Heisenberg principle, or decent estimates of the Universe's rate of expansion limiting the amount of mass it can contain. When I first learned of Kurt Godel, it took me several months to figure out what his point was, because when I figured it out I realized it was something I considered so obvious as to be a waste of so much time to "prove." But I didn't grow up in a world where something like the Principia Mathematica was still thought to be a worthwhile pursuit.

So Godel is important to me mostly because he laid the groundwork for Turing, who laid the groundwork for Von Neumann and an industry that made it possible for Mandelbrot to make his discovery: You don't need infinities, much less gods, to get great unpredictable complexity out of a fundamentally simple, finite system.

I am completely unsurprised at the difficulties resolving QM and General Relativity, because they are two models which happen to describe the Universe at different scales but neither is actually a description of how the Universe works. Computers modeling swarms of particles with finite math can quickly converge on solutions that are very closely parallel to Calculus-like mathematical models, even though the model is nowhere to be found in the simulation. I tend to think a similar phenomenon is at work in the Universe, with the bulk behavior of particles being an emergent property.

I am of course a product of my time, but I'd say that everything that has been learned about the Universe since Einstein and Hubble is ever more suggestive of such a finite underpinning.

Incidentally, a good example of someone with similar ideas fleshed out much more thoroughly than mine and much better educated than me, would be Stephen Wolfram.
posted by localroger at 10:48 AM on June 26, 2011


Thanks, that was very helpful in understanding where you're coming from. All I can really say in response is that mathematical infinities produce astonishingly accurate answers to an astonishingly wide array of problems. It's simply not plausible to me that there isn't some reason for this correspondence.

Now it's entirely possible that somewhere out there is a theorem that describes how the very useful reasoning we perform with infinities could be performed with purely finite methods, but no-one's found it yet, and many mathematicians have looked very hard indeed. Without that kind of bridging law, without some explanation of why infinities work (in the manner of that provided by limits for the successes of calculus despite it seeming to rely on "the ghosts of departed quantitites") I find myself compelled to believe there's something there.

In short, we have good reason to believe we're talking about something when we talk about infinities, and we've yet to find something finite it could be. Until that point I find ruling out infinity of all kinds premature.

As for Wolfram, he's a very smart man who's made some excellent software, but it's been nine years since "A New Kind of Science" without any significant results following from its lead. I won't say there's definitely nothing there, but I'm not particularly hopeful.

Finally, Godel's proof is subtle and non-trivial, requiring quite careful attention to detail to carry out. If it seems obvious, it's because popular treatments elide significant complications in the proof. Unless you've actually worked through a version of the proof, I'd be very cautious about relying on second-hand accounts.
posted by Proofs and Refutations at 11:34 AM on June 26, 2011 [2 favorites]


I agree that Wolfram's particular theory isn't very impressive, but I mentioned him because he is an example of someone very well versed in the math who is convinced that the Universe can in fact be described by a finite system. In fact, the gist I got out of ANKOS is that he isn't even insisting that his particular finite system is the basis for the Universe, only that it is a potential candidate.

Now it's entirely possible that somewhere out there is a theorem that describes how the very useful reasoning we perform with infinities could be performed with purely finite methods, but no-one's found it yet

I think you slightly missed my point here. When we study the mathematics of these models, we aren't studying the Universe; we are studying the model. I would agree that a finite system probably can't duplicate the entirety of such a model. My dispute is that the model isn't necessarily the Universe, and at some point the model will fail because it doesn't describe how the Universe really works. There may be no bridging law because there is no real relationship except coincidence between the model and the real system. Now it may seem that the relationship between a model and RL behavior is so precise and useful that there must be some relationship between the model and reality, but I'd say that it's still a pretty big universe and that given enough time and space the unlikely becomes inevitable.

Godel's proof is subtle mainly because it has to work within a particular type of framework to show something that framework can't do. Since it would never have occurred to me to believe that such a framework could be used to construct a proof of its own validity, I tend to slide across the exact methodology Godel used, realizing it must have been very clever to convince someone so deluded as to try to make such a proof that they were wasting their time.
posted by localroger at 12:36 PM on June 26, 2011


localroger: "Godel's proof is subtle mainly because it has to work within a particular type of framework to show something that framework can't do. Since it would never have occurred to me to believe that such a framework could be used to construct a proof of its own validity, I tend to slide across the exact methodology Godel used, realizing it must have been very clever to convince someone so deluded as to try to make such a proof that they were wasting their time"

Yeah, that's not it at all. Dan Willard has found arithmetic theories that can prove their own consistency. Whilst weaker than the Peano Arithmetic, they are still theories of the integers with addition and multiplication (defined as the inverses of subtraction and division, respectively). Godel's theorems are often interpreted to imply that there are no self-verifying theories, but this simply isn't the case.

A high-level description of a result is no substitute for following the exact implications of a proof, it's too easy to elide important details as you did here.
posted by Proofs and Refutations at 8:40 PM on June 26, 2011 [1 favorite]


KirkJobSluder: How much of that complexity though is an artifact of the fact that it's a massively parallel adaptation of less-than-ideal systems. Organic systems are complex not because they're miraculously designed, but because they carry with billions of ears of evolutionary legacies with them

Excellent point. And I would make the related point that intelligences exist within a support-context. E.g., our brains exist within the context of our bodies and our material and immaterial cultures, not to mention societies. Similarly, a 'super-intelligent machine' is really a complex set of programs running in a complex relationship with one another, within the larger context of input and output, which all exists within a human context.

The boundary conditions get pretty hairy after a very short while.

The larger point that I would want to make would be that discussions about intelligence, machine or otherwise, stop being interesting when you take them out of context -- but within their context, they're quite difficult to have because you can't easily tease out what's contextual and what's inherent.
posted by lodurr at 5:59 AM on June 27, 2011


Excellent point. And I would make the related point that intelligences exist within a support-context. E.g., our brains exist within the context of our bodies and our material and immaterial cultures, not to mention societies. Similarly, a 'super-intelligent machine' is really a complex set of programs running in a complex relationship with one another, within the larger context of input and output, which all exists within a human context.

lodurr: I agree with all your points. So do you agree that machine intelligence, were such a thing possible, would be situated within the context of a specific culture (the culture of the humans spawning it) too, and that in addition to the interrelationships among its various programmatic components, an intelligent machine's relationship to the human world it inhabits would likewise provide key inputs to the socialization and cultural acclimation processes of any true, universal AI machine that was also designed to mimic the processes and subjective experiences of a sentient, human-like machine consciousness. In other words, yes, natural human intelligence is dependent on and substantially informed by cultural context, but any machine intelligence we might conceive would be, too, so those objections don't hold up.

I don't know. I'm fuzzy on the idea we could ever actually achieve the kind of real, independently-minded and self-aware form of AI that philosophers, sensationalist science reporters and theoretical scientists are always conjuring up for the sake of argument, except possibly in the case of part-by-part substitution of a living brain with electronic components. That sort of machine consciousness--machine consciousness through gradual replacement of biological components with electronic ones--I think we might be able to achieve one day, though it's far from certain and anything but inevitable.

In the meantime, we probably will soon have extremely powerful inductive reasoning machines that can do things like process and synthesize facts on request, carry on polite conversation, summarize information on random topics, or otherwise interact intelligently with human interlocutors in real time--at least, we'll get that quite a bit sooner than we'll get sentient computers with mood swings and life ambitions.
posted by saulgoodman at 8:55 AM on June 27, 2011


In other words, yes, natural human intelligence is dependent on and substantially informed by cultural context, but any machine intelligence we might conceive would be, too, so those objections don't hold up.

That depends on what you mean by "substantially informed." As I read that term, I don't see the problem. Lots of things that are radically different from one another are "substantially informed" by the same influences.

In any case, you're focusing on some structures that are relatively abstracted from the individual (like culture), when I would stop way before that at the body.

I don't know. I'm fuzzy on the idea we could ever actually achieve the kind of real, independently-minded and self-aware form of AI that philosophers, sensationalist science reporters and theoretical scientists are always conjuring up for the sake of argument....

Loosly, I think we surely "could." My big issue is with whether we ever would, and I find that remarkably unlikely. And uninteresting, at least to me: Wintermute is much more interesting to me than VirtualLodurr. (Don't have my copy of Neuromancer handy or I'd quote the Dixie Flatline's Construct on trying to grok the motives of an AI.)

... except possibly in the case of part-by-part substitution of a living brain with electronic components. That sort of machine consciousness--machine consciousness through gradual replacement of biological components with electronic ones--I think we might be able to achieve one day, though it's far from certain and anything but inevitable.

I actually think it is more or less inevitable, but with the same caveats you introduce in the first half of that statement: It won't look like we're conditioned to expect it to look. One of the things that will come along with the gradual replacement scenario is personality editing, and other variations on the capability to make designed changes in yourself. Even relatively minor changes could have a big impact. Think for a moment about the delicate hormonal balances that keep a brain humming along in normal-mode, and the importance of relative timing in different areas of the brain. An improvement in efficiency in one subsystem could throw the whole system out of whack, flip you from "normal" to "boderline" [BPD]. ("Here's yer problem: Someone set this doll to 'Evil.'")

--
*Hell, it's arguably here, now, depending on where you set your boundary conditions for mind and organism: You can at least see analogs for how it might change things in the way that human decision-making processes have been automated, in various industries and organizations. (Not that I really want to get into that whole inevitable argument about whether it makes sense to call "group mind" a "mind", because I don't personally push the boundary conditions out that far. I'm just devils-advocating on that one.)
posted by lodurr at 9:16 AM on June 27, 2011


« Older You know who else owned things with swastikas on...   |   Don-8r: the alternative to Chuggers Newer »


This thread has been archived and is closed to new comments