NATURE OF ABERRATION: Believes inanimate objects display active hostility. This is not directed at himself personally. In fact, he believes he can circumvent it more readily than most, but expresses concern for the safety of the human race. Discusses this belief with scholarship and detachment.
They are liquid, semi-visible goliaths that rage through the streams and chunks of ordinary traffic, with the effervescent tendrils of mile-long tails whipping behind them like Chinese dragons. Though composed of hundreds of pounds of steel, glass and plastic, they are able to pass through solid objects. They are bound by the laws of the highway, but not by any conventional notion of time or space.
They are Aggregate Traffic Animals: a menagerie of emergent beasts drawn from the interacting behaviours of many individual human beings driving many individual cars with many individual goals, their collective activity giving rise to something with greater presence, power and purpose than the sum of its constituents.
[I]f you are able to de-emphasise the organism itself you are free to appreciate the idea of beaver ponds as artificial lakes generated by beaver genes, or to see a spider's web as an arrangement of silk drawn by DNA. By extending the lines with which we bound the traditional phenotype, we define new organisms, merging technology and individuals into communities the same way that ancient micro-organisms interacting inside bilipid membranes fell into symbiotic lockstep dances to found the first stable cells.
So what falsifiable (measurable) evidence would convince you of, say, a Level III presence?
Will we develop artificial intelligences based on silicon? I don't know, and I'm quite serious about that. I'm not certain it will ever happen, and I don't think it will happen soon. The first rigorous work on the entire question of artificial intelligence was done by Alan Turing, who proposed what we now know of as the Turing Test. More or less, it was an attempt to deal with the basic question, "Can computers think?"
What Turing proposed was that if it were possible to create a machine whose behavior so closely emulated that of a human that an observer could not, after extended observation, determine whether a given being was human or machine, then the difference between "thought" and whatever it is that the machine did would no longer be important.
The result is often misquoted as "...then computers will have achieved the ability to think", but Turing was not so careless or incautious. Turing was trying to work around the fact that there exists no rigorous consensus definition of "thought", and therefore it was both impossible and pointless to even contend that computers were thinking. Turing attempted to describe an objective and unambiguous criterion for a certain level of capability of machine performance that didn't require any consensus definition of "thought".
Since then, researchers have worked on various approaches to artificial intelligence, including concentration on specific functions such as object recognition, understanding of spoken language, and certain specialized kinds of high-level decision making (e.g. playing chess). But all attempts to develop more general and versatile artificial cognition were spectacular failures, and what has emerged is a cautious conclusion: true intelligence probably can't be based solely on deductive processes. True intelligence probably requires inductive reasoning. Somewhat more speculatively: true intelligence may not be possible in any deterministic system. True intelligence may require some controlled degree of indeterminism in system execution.
In other words, true intelligence may be analog, not digital.
None of this has been proved, but the case that induction is indispensable is looking better and better, and induction by its nature includes a degree of analog calculation, which neurons represent using pulse code modulation. Can a digital simulation of those analog components serve the purpose, as could be implemented on existing commercial computer designs, none of which use pulse code modulation?
Not known, but because of the problem of the "butterfly effect" I'm skeptical. Digital simulations of analog systems always include small initial errors, and as digital calculations iterate ever more deeply, that error grows until it swamps the signal, at which point the digital simulation will have no better than a random chance of matching the analog system it is trying to simulate.
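The error-growth argument above can be made concrete with a toy chaotic system. Here is a minimal sketch using the logistic map; the particular map, starting point, and error size are illustrative choices of mine, not anything from the original argument:

```python
# Sketch of the butterfly-effect objection: iterate the chaotic logistic
# map from two starting points separated by a tiny "digitization" error
# and find how quickly the trajectories stop resembling each other.

def divergence_step(x0, eps=1e-12, r=4.0, max_steps=200):
    """First iteration at which two trajectories started eps apart
    differ by more than half the signal range [0, 1]."""
    a, b = x0, x0 + eps
    for step in range(1, max_steps + 1):
        a, b = r * a * (1.0 - a), r * b * (1.0 - b)
        if abs(a - b) > 0.5:
            return step
    return None

# An initial error of one part in a trillion comes to dominate the
# signal within a few dozen iterations.
print(divergence_step(0.4))
```

Because the error roughly doubles each iteration in a chaotic regime, shrinking the initial error by another factor of a million buys only about twenty more faithful steps, which is the sense in which more precision alone cannot rescue the simulation.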
If that's the case, then no amount of digital hardware, no matter how fast, parallel or well connected, can ever really be intelligent in the way that we are, with the degree of capability and versatility we have. I cannot say for certain that's the case, but I have a strong suspicion that it is...
Computer systems may be developed which can pass the Turing test over a brief period when behavior is restricted to a small intellectual realm (e.g. half an hour of discussion about the plays of Shakespeare), but none which would pass an extensive examination with no limits at all on material. Until we develop an entirely new computing technology (possibly PCM based) which is truly analog but not limited in the way that early analog computers were, I think a true artificial intelligence will not appear.
Whatever "true intelligence" might mean, if it means anything at all.
However, that doesn't mean that we will not see superhuman intelligences appear soon. For though I don't think it is likely that the internet will make possible even a human-level artificial intelligence, it may make possible the creation of a human hive-mind that transcends the intelligence of an individual human.
Networking of computers won't do it, but networking of humans might.
For instance, in 2002 researchers analyzed some 300 million packets on the internet to classify their origins. They were particularly interested in the very small percentage of packets that arrived malformed. Packets (the message's envelopes) are malformed either by malicious hackers trying to crash computers or by various bugs in the system. It turns out some 5% of all malformed packets examined in the study had unknown origins, traceable neither to malice nor to bugs. The researchers shrugged these off; the unreadable packets were simply labeled "unknown." Maybe they were hatched by hackers with goals unknown to the researchers, or by bugs not yet found. But a malformed packet could also be an emergent signal, a self-created packet. Almost by definition, such packets will not be tracked or monitored, and when seen will be shrugged off as "unknown."
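The study's methodology isn't described here, but at the simplest level a packet is "malformed" when its header fails a basic validity check. A hedged sketch of one such check, the IPv4 header checksum from RFC 791 (the example header contents are invented for illustration):

```python
import struct

def ipv4_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words (RFC 791 header checksum)."""
    if len(header) % 2:
        header += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total >> 16:                      # fold carries back in
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def is_well_formed(header: bytes) -> bool:
    # With the checksum field filled in, the sum over the whole
    # header folds to zero if and only if nothing was corrupted.
    return ipv4_checksum(header) == 0

# A minimal 20-byte IPv4 header (checksum field initially zero) ...
fields = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 20, 0, 0, 64, 6, 0,
                     bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
# ... with its checksum computed and inserted at bytes 10-11.
good = fields[:10] + struct.pack("!H", ipv4_checksum(fields)) + fields[12:]
bad = bytes([good[0] ^ 0x01]) + good[1:]    # a single flipped bit

print(is_well_formed(good), is_well_formed(bad))
```

A checksum failure only says a packet is damaged, not why; classifying the 5% of "unknowns" would require deeper inspection than this check can provide.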