On phenotropy and causal bandwidth
November 20, 2003 8:18 PM   Subscribe

Jaron Lanier talks about philosophy, computer science and physics. Suppose poor old Schroedinger's Cat has survived all the quantum observation experiments but still has a taste for more brushes with death. We could oblige it by attaching the cat-killing box to our camera: so long as the camera can recognize an apple in front of it, the cat lives.
posted by kliuless (11 comments total)
 
well, i tried to understand it, but i just kept thinking about flo control.
posted by quonsar at 8:31 PM on November 20, 2003


Of course, you could always throw Schroedinger's Cat in front of Clay Aiken's car...
reference to previous thread; too lazy to link
posted by wendell at 8:34 PM on November 20, 2003


Jaron Lanier and Steve Mann should have a fight to the death. I'd pay a few bucks to watch.
posted by krunk at 8:55 PM on November 20, 2003


Remember: Every time you masturbate, Schroedinger kills a kitten.
posted by arto at 11:45 PM on November 20, 2003


This is a nice speculative article, but it's hard to see how to create non-linear information models without at least some understanding of the linear models (e.g. the work of Shannon). The only alternative model that Lanier proposes seems to rely on systems that can deal with shades of information, or context-sensitive information. This is of course what our own brain excels at.
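
As a toy illustration of what I mean by the linear models, here's a minimal sketch (Python; the sample strings are made up) of Shannon's entropy measure - average bits per symbol, computed with no notion of context at all:

```python
from collections import Counter
from math import log2

def shannon_entropy(message: str) -> float:
    """Average bits per symbol, assuming symbols are independent and
    context-free -- exactly the assumption the alternative models drop."""
    counts = Counter(message)
    total = len(message)
    return -sum((n / total) * log2(n / total) for n in counts.values())

print(shannon_entropy("abracadabra"))   # ~2.04 bits/symbol
print(shannon_entropy("aaaaaaaaaaa"))   # 0.0 -- no surprise, no information
```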

Our brain is excellent at processing information because although it is in one sense a universal machine, it is also a machine wired from the ground up for context (the context being our survival and propagation on this spinning sphere filled with other life-forms). And so it builds models. But these models, and models of models, are essentially part of our cognitive framework, our astounding ability to detect the emerging shape of patterns. If the information is not there, we guess at it and proceed.

Anyone who has played the game of seeing shapes in clouds understands that we are adept at imagining and constructing signal even when the input is deliberately noise. It's like a one-time cryptographic pad - the receiver knows the message is coming. We can do this because we are born with a model of the universe (the sender of the message). Studies show that babies are surprised when objects behave in physically unintuitive ways (e.g. things abruptly disappearing or transforming).
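
To make the one-time pad analogy concrete, a minimal sketch (Python; the messages are invented): without the shared pad - the prior model - any message of the same length is an equally valid reading of the ciphertext, so the receiver's expectations do all the work.

```python
import os

def xor(data: bytes, pad: bytes) -> bytes:
    """One-time pad: XOR each byte with a random pad of equal length."""
    return bytes(a ^ b for a, b in zip(data, pad))

message = b"an apple"
pad = os.urandom(len(message))          # the shared prior
ciphertext = xor(message, pad)

print(xor(ciphertext, pad))             # b'an apple' -- receiver with the pad
# Without the pad, *any* 8-byte message is consistent with the ciphertext:
fake = b"a banana"
fake_pad = xor(ciphertext, fake)        # a pad that "decrypts" to the decoy
print(xor(ciphertext, fake_pad))        # b'a banana' -- pure construction
```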

Here's an even clearer example of this point of view: There's no reason for the present moment to exist except for consciousness. Why bother with it? Why can we talk about a present moment? What does it mean? ... Is it still possible to say that fundamental particles simply move in their courses and there wasn't necessarily an apple or a computer or a recognition event?


This is philosophical speculation, but it is one of the questions we are now approaching in physics: How do we create an ultimate model of reality that is divorced from our own cognitive preconceptions? That is, what is a construct and what is real, and is that question even meaningful?
Put more prosaically, we know that 'blue' and 'red' don't exist in nature - they are just things in our heads - nature knows only a smooth electromagnetic spectrum. Are things like 'time', 'space' and 'present' similar constructs, and if so, to what degree?

Much of the physics side of the article is a mild re-hash of an excellent article by Bekenstein himself in Scientific American, which I highly recommend everyone read.
posted by vacapinta at 12:03 AM on November 21, 2003


I can't help but wonder, given the notion of software architecture ol' natty dread outlines--i.e., a myriad of tiny modules communicating via pattern recognition--what the potential problems might be. Sure, each "module" will have some margin for error built in, but there will still be the potential for an error to spiral out of control into disastrous consequences. As Bruce Sterling might say, "What's the possible Chernobyl of this?"
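
A minimal sketch of what such a module boundary might look like (Python; the dispatcher, the operations and the cutoff are all invented for illustration) - the same fuzziness that forgives a typo can also produce a confidently wrong match:

```python
import difflib

OPERATIONS = {
    "open_valve":  lambda: "valve opened",
    "close_valve": lambda: "valve closed",
    "read_gauge":  lambda: "42 psi",
}

def dispatch(request: str, cutoff: float = 0.6) -> str:
    """Route a request by pattern similarity instead of an exact protocol.
    The cutoff is the module's built-in margin for error."""
    matches = difflib.get_close_matches(request, list(OPERATIONS),
                                        n=1, cutoff=cutoff)
    if not matches:
        return "no confident match -- degrade gracefully"
    return OPERATIONS[matches[0]]()

print(dispatch("opne_valve"))   # tolerates the typo: "valve opened"
print(dispatch("lose_valve"))   # near miss resolves to close_valve -- the
                                # kind of error that could cascade downstream
```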

Having said that, better pattern recognition could have obvious benefits for some applications--ask anyone who's ever used a spam filter or a voice recognition system.
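
A spam filter is the same trick with words as the patterns - a minimal naive-Bayes-style sketch (Python; the training messages are made up, and the smoothing is deliberately crude):

```python
from collections import Counter
from math import log

SPAM = ["cheap pills online", "online casino win", "cheap online offer"]
HAM  = ["meeting notes attached", "lunch on friday", "notes from the talk"]

def word_counts(docs):
    return Counter(w for d in docs for w in d.split())

spam_counts, ham_counts = word_counts(SPAM), word_counts(HAM)

def spam_score(message: str) -> float:
    """Log-odds that a message is spam, with crude add-one smoothing.
    Positive means 'looks spammy' -- a tiny pattern recognizer."""
    score = 0.0
    for w in message.split():
        p_spam = (spam_counts[w] + 1) / (sum(spam_counts.values()) + 2)
        p_ham  = (ham_counts[w] + 1) / (sum(ham_counts.values()) + 2)
        score += log(p_spam / p_ham)
    return score

print(spam_score("cheap pills"))      # > 0, flagged
print(spam_score("friday meeting"))   # < 0, passes
```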

And I'd like to see what happens if this guy ever hooks up with the genetic algorithm folks...
posted by arto at 12:21 AM on November 21, 2003


I should also add that I get skeptical when I read any essay that invokes Moore's Law. My own view is that it is just a local observation (local time-wise) and has no foundation as a general principle (for one, the universe has scale). I see it as pseudo-science.

It is like the financial types who predicted that the stock market would by now be stratospheric, based on the trendlines of the late '90s.

What we are undergoing now is the technological equivalent of the Cambrian explosion. But, that too leveled off.
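
The numbers make the point quickly - a minimal sketch (Python; the growth rate and ceiling are arbitrary) comparing naive exponential extrapolation against a logistic curve with the same early behavior:

```python
from math import exp

def exponential(t, r=0.35):
    """Naive extrapolation: growth compounds forever."""
    return exp(r * t)

def logistic(t, r=0.35, capacity=100.0):
    """Same early growth rate, but with a ceiling (the universe has scale)."""
    return capacity / (1 + (capacity - 1) * exp(-r * t))

for t in (0, 5, 10, 20, 40):
    print(t, round(exponential(t), 1), round(logistic(t), 1))
# Early on the two curves are nearly identical; by t=40 the exponential
# predicts ~1.2 million while the logistic has leveled off at 100.
```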
posted by vacapinta at 1:51 AM on November 21, 2003


vacapinta, that is exactly what Moore's Law is. Moore even said as much. There's no science behind it, though you can look at science and determine why it works right now and extrapolate that it will work for some time in the future.
posted by substrate at 5:39 AM on November 21, 2003


vacapinta - I've no doubt that Lanier is thoroughly skeptical of "Moore's Law". Lanier's "One half of a manifesto" was largely aimed at deflating the millenarian, singularity-oriented mania of what Lanier termed "cybernetic totalism". In short, he pointed out that - even if Moore's Law (really just an ongoing trend) holds true into the foreseeable future - it doesn't point towards some onrushing singularity, because the software which runs on our ever more powerful hardware sucks. As computers become faster, software becomes correspondingly more bloated and infested with programming errors. As Lanier correctly points out (in my opinion), this current approach will not lead to a singularity.

But Lanier is missing quite a bit of the larger picture. Evolutionary software - such as the programs Danny Hillis was "breeding" a few years ago (when he admitted he didn't have a clue how the successful ones actually worked) - may deliver the grail of functionally sentient machines. And, of course, humans can mess with their genome to increase brain power - and so progressively push the computational limits of our basic organic form (maybe also aided by interfaces with computers) until we can muddle through to the point of creating sentient machines or, more to the point, the novel software to breathe life into the machines.
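
For flavor, here's a minimal sketch of that kind of "breeding" (Python; the target string, population size and mutation rate are all arbitrary) - selection plus random mutation, with nobody ever writing the solution by hand:

```python
import random

TARGET = "sentient machine"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def fitness(candidate: str) -> int:
    """Count positions that already match the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly perturb characters; most mutations are harmful."""
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(ALPHABET) for _ in TARGET)
              for _ in range(200)]

generation = 0
while max(map(fitness, population)) < len(TARGET):
    survivors = sorted(population, key=fitness, reverse=True)[:20]
    population = [mutate(random.choice(survivors)) for _ in range(200)]
    generation += 1

print(f"bred in {generation} generations")
```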

But - back to Lanier's current Edge writing - it seems odd to me that Lanier doesn't recognize that his question - "How many bits can you subtract until images become unrecognizable?" (or simply, "What are the really essential bits?") - has already been studied extensively in the field of image compression. It is the very principle behind the JPEG compression scheme: discard all the information the human eye does not regard as "essential" to the coherence of the image. It turns out that you can remove an amazing amount.
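
The experiment is easy to run on a desk instead of a cat - a minimal sketch (Python, assuming the third-party Pillow imaging library is installed; the synthetic gradient is a stand-in for the apple photo) that re-saves a picture at falling JPEG quality and watches the byte count collapse:

```python
import io
from PIL import Image  # third-party: pip install Pillow

# A synthetic 256x256 gradient stands in for the apple photo.
img = Image.new("RGB", (256, 256))
img.putdata([(x, y, (x + y) // 2) for y in range(256) for x in range(256)])

raw_bytes = 256 * 256 * 3  # uncompressed RGB size
for quality in (95, 75, 50, 25, 5):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    kept = buf.tell()
    print(f"quality {quality:2d}: {kept:6d} bytes "
          f"({100 * kept / raw_bytes:.1f}% of raw)")
# Most of the discarded bits were never "essential" to the eye --
# the image stays recognizable long after the byte count craters.
```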

I have a sneaking fantasy that - in his spare time - Lanier lights up a fat one and goes down into his basement where kidnapped neighborhood cats sulk inside "cat killing" boxes, kept alive only by tiny feeds, several hundred bits wide, depicting grainy images of apples - from which Lanier experimentally plucks individual bits at random to see what will happen...

ZZZZAPPP! - "Oh shit. I guess that bit's essential....."

But Lanier's a nice guy. So he probably uses a virtual cat.
posted by troutfishing at 6:37 AM on November 21, 2003


I wrote that before checking out quonsar's link-that-can't-be-beat.
posted by troutfishing at 6:49 AM on November 21, 2003

