Checkmate, Deep Blue
August 22, 2005 1:52 PM

Arimaa is the first game designed specifically to be hard for computers to play, while easy for people. With its billions of combinations and its push-me-pull-you, conditional-value strategy, it's too much for brute-force computing. And yet, it's simple enough for a child to play (or at least to explain). Play it now against people from all over the world (and lackwit computers).
posted by klangklangston (103 comments total)
 
In an attempt to show that computers are not even close to matching the kind of real intelligence used by humans in playing strategy games, we have created a new game called Arimaa

The entire home page of the game site is in this vein. I'm not sure they've actually proved that no computer can win this game yet, but surely it's not difficult to invent any number of games, playable on a chess board, in which the statistical complexity is greater than that of chess, and therefore perhaps harder for computers to play.
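
(To put rough numbers on that: here's a back-of-envelope Python sketch of game-tree sizes. The branching factors and game lengths are ballpark figures I'm assuming for illustration, not exact counts.)

import math

# Game-tree size is roughly branching_factor ** plies; work in log10
# so the results stay printable. All figures below are rough guesses.
games = {
    "chess":  (35, 80),      # ~35 legal moves per position, ~80-ply games
    "go":     (250, 150),    # 19x19 board
    "arimaa": (17000, 90),   # one Arimaa "move" is up to four steps
}

for name, (branching, plies) in games.items():
    log_size = plies * math.log10(branching)
    print(f"{name:>7}: roughly 10^{log_size:.0f} lines of play")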

I doubt people will flock in droves to a game invented just for the purpose of foiling our computer overlords. Creating something interesting for people does not seem to have been part of the inventors' requirements.
posted by beagle at 2:10 PM on August 22, 2005


Octi, one of my favorite games, was designed to be computer resistant.
posted by etc. at 2:11 PM on August 22, 2005


Isn't Go already an example of an easy-to-learn strategy game resistant to traditional brute force search tree analysis? I don't see the point of artificially creating a game that people are way better than computers at when an elegant, age-old, popular game of this sort already exists.
posted by jcruelty at 2:22 PM on August 22, 2005


Beagle: Go ahead and play the game. The game is fun on its own, despite the almost cultish ad copy that's on their front page.
posted by klangklangston at 2:23 PM on August 22, 2005


It's interesting. And from the page, it seems that spurring new AI research is one of the primary aims of the creators of this game.

There's always room for another game.
posted by teece at 2:32 PM on August 22, 2005


Computers have yet to prove capable of the kind of lateral, nuanced, and multi-leveled thought for which the human brain is eminently qualified. The best games involve balancing a variety of strategies and a little intuition. The current state of artificial intelligence is proof enough that certain parts of the brain can be simulated, but the whole system cannot be emulated.

Playing games against humans is far more satisfying. What good are bragging rights against a solid-state opponent?
posted by ToasT at 2:33 PM on August 22, 2005


The current state of artificial intelligence is proof enough that certain parts of the brain can be simulated, but the whole system cannot be emulated.

You forgot to add the word "today" at the end of that sentence.
posted by Bort at 2:41 PM on August 22, 2005


Arimaa looks like Stratego on crack.

Anyway, if computers were really smart they could solve Sokoban :)
posted by sandking at 2:48 PM on August 22, 2005


A computer is just as intelligent as a hammer, or a washing machine. A computer is a tool, a piece of dead machinery. It doesn't "think" any more than a hammer thinks. We may trick ourselves into believing that a computer "thinks" or is "intelligent" because it appears to use words or symbols that are meaningful to us (not to the computer) and because we can't see exactly how it works (a hammer is instantly understandable). It isn't meaningful to use the word "intelligent" for non-living things.
posted by Termite at 3:02 PM on August 22, 2005


Err... how about 'tag'? It's simple, fast, heaps o' strategy, and I defy any computer to catch a bunch of kids who can vault walls and dumpsters in order not to be 'it'.
As for rules-on-the-fly, I doubt a computer could argue the "no tag back 'cos it's dumb" rule for 15 minutes at the top of its lungs, and then go back to playing like nothing ever happened.
posted by Zack_Replica at 3:17 PM on August 22, 2005


Bort: I suspect that true emulation is not feasible for the same reason that it's not feasible to create a perfect computational simulation of a supercell thunderstorm. At some point in the process of converting analog systems to digital, you are going to lose precision, and if cognition is chaotic to any degree, that lost precision will be a killer.
posted by KirkJobSluder at 3:20 PM on August 22, 2005


Actually the evader-pursuer problem is an area of active research in algorithmic robotics... I remember studying it back when I was in school. Has definite military applications. Tag... you're dead.
posted by jcruelty at 3:22 PM on August 22, 2005


Termite: I hear that from time to time. I'm curious: are you religious? That is, are you saying that human intelligence is the result of some divine spark, and thus not reproducible by humans?

If so, OK, we're done talking.

If no, well, then you're quite obviously wrong. As this machine inside my skull thinks just fine, and I am, as far as I know, "intelligent" to some degree.

I agree with the folks that made up this game that chess is an easy way to fool someone into thinking a computer is smart. But I don't think it follows to say that a computer can't be made smart.
posted by teece at 3:44 PM on August 22, 2005


It isn't meaningful to use the word "intelligent" for non-living things.

I'm in the camp that Intelligence is what intelligence does.

Just because we each have a pretty spiffy meat information processing facility doesn't mean that technology won't exceed humankind's powers of knowledge & reason.

IOW, I think in 50 to 100 years you'll be wanting to revisit that statement.

I also think NLP (natural language processing) will someday be the next "internet" tech boom, since communication facilitation is a very real need. I think we'll have a machine pass the Turing Test in my lifetime, and if I could bet money on it I'd put my money on Google accomplishing this.
posted by Heywood Mogroot at 3:49 PM on August 22, 2005


oh, a Glimpse of the future (the first search return)... Now imagine that working with any question that has a factual answer.
posted by Heywood Mogroot at 3:55 PM on August 22, 2005


teece: I hear that from time to time. I'm curious: are you religious? That is, are you saying that human intelligence is the result of some divine spark, and thus not reproducible by humans?

Skepticism on this does not require religion. There are hundreds of very simple physical systems that appear to be very difficult to model.

I agree with the folks that made up this game that chess is an easy way to fool someone into thinking a computer is smart. But I don't think it follows to say that a computer can't be made smart.

I think we already have smart computers, but I don't think that the smart silicon computers we have are analogous to the NCOH computers we are.
posted by KirkJobSluder at 4:00 PM on August 22, 2005


KirkJobSluder: You cannot predict what a particular chaotic system will do because of lack of infinite precision, but you can certainly model one: "It is true that chaos theory dictates that minor changes can cause huge fluctuations. But one of the central concepts of chaos theory is that while it is impossible to exactly predict the state of a system, it is generally quite possible, even easy, to model the overall behavior of a system."
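
(A minimal Python sketch of exactly that distinction, using the logistic map as a stand-in chaotic system -- not a claim about brains:)

# Logistic map x' = r*x*(1-x) at r=4, a textbook chaotic system.
# Two all-but-identical starting states diverge completely, yet an
# aggregate property of the system barely moves.
def trajectory(x, r=4.0, steps=10000):
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.3)
b = trajectory(0.3 + 1e-12)     # a tiny perturbation of the initial state

print("difference at step 100:", abs(a[100] - b[100]))      # order 1: prediction fails
print("long-run means:", sum(a) / len(a), sum(b) / len(b))  # nearly equal: the model holds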
posted by Bort at 4:04 PM on August 22, 2005


Heywood Mogroot: Just because we each have a pretty spiffy meat information processing facility doesn't mean that technology won't exceed humankind's powers of knowledge & reason.

As far as I can tell, it did sometime back in the 80s. Of course, we've had machines that beat human powers of locomotion for almost 200 years. And yet, most people end up walking or running somewhere over the course of the day.

oh, a Glimpse of the future (the first search return)... Now imagine that working with any question that has a factual answer.

Facts and $1.50 will get you a cheap cup of coffee, and that's just about it.

Bort: You cannot predict what a particular chaotic system will do because of lack of infinite precision, but you can certainly model one:

Certainly. And I think that such models are going to be quite useful for understanding cognition at some point in the future. However, creating a model useful for hypothesis construction and testing is a very different prospect from creating a replica.

In addition, much of the religious faith in a mind-upload "singularity" depends on the ability to model particular chaotic systems, as opposed to just creating something slightly better than the observational-behaviorist black box.
posted by KirkJobSluder at 4:11 PM on August 22, 2005


Heywood, ok, let's imagine:

How many civilians have been killed in the occupied territories in the last 5 years?

How many terrorists have been killed in the occupied territories in the last 5 years?

How many freedom fighters have been killed in the occupied territories in the last five years?

Sure, a representation could be made so that the noun-phrase "occupied territories" could be shown to be a couple of strips of land around 31.47N and 35.10E -- we might even be able to do this via some sweet tricks of looking at news stories over the last 10 or so years, and figuring out that the particular "occupied territories" some Americans use to refer to parts of the world is indeed that one (and not some other occupied territories, such as Western Sahara, or, more confrontationally, Taiwan).

But you damn sure can't answer those factual questions above. Can't be done. I'm not saying that a person could either, btw. But a person would know that he couldn't. Some "facts" are just a helluva lot more factual than others. And, the sad thing is, when you start trying to rigorously define words, in the way that would be required for putting them in an NLP (natural language processing) data structure, the words get slippery. Check "a bachelor is an unmarried man."
posted by zpousman at 4:12 PM on August 22, 2005


Well, I find it a little depressing that a mostly beloved childhood game can be reduced to a series of scripts.
We also used to play tennis ball tag, in which you threw the ball as hard as you could at the other guys. Bet that computer'd fold after being pegged in the hard drive! heh.
/pout
posted by Zack_Replica at 4:19 PM on August 22, 2005


However, creating a model useful for hypothesis construction and testing is a very different prospect from creating a replica.

But what is it about infinite precision that makes you think that losing it would make emulating cognition/intelligence/consciousness (take your pick) not feasible? Would an analog computer allow for emulation of these things?

To me the particular state of my brain is not what defines "me". My current state can be reset, if you will, by a sufficient jolt of current or a strong blow to the head. What defines me is a particular (extremely complex) algorithm and information combination. I don't believe I need infinite precision to define that any more than I need pi to infinite precision to define a circle.
posted by Bort at 4:32 PM on August 22, 2005


Skepticism on this does not require religion. There are hundreds of very simple physical systems that appear to be very difficult to model.

Um, skepticism is not what Termite offered, KirkJobSluder. Termite said: It isn't meaningful to use the word "intelligent" for non-living things. That's a blanket denial of the possibility of real AI, to me at least, unless we are just quibbling about the semantics of "intelligence" (are we?).

Skepticism about whether or not humans will ever figure out what makes a human brain intelligent is one thing. But saying it's impossible for a machine to be intelligent is quite another. And yes, indeed, it does require a belief in something religious or mystical.

A human brain is a hunk of bio-matter. It got there somehow, and does what it does somehow. Unless you say the intelligence is the cause of a divine spark, it is absolutely necessary to admit that the human brain is not fundamentally different than any human-built machine (it's simply very complex, and hard for us to figure out, and made in a way and with materials we don't usually use). It's a piece of matter, governed by the same physics as human machines, that somehow gives human beings "intelligence." It's not a fundamentally unique thing in the universe unless you believe it holds a mystical soul (which is fine, but then there isn't much point for us to talk about it in this context).

Again, skepticism isn't what I was talking about. True AI has been much harder than golden era sci-fi authors envisioned. It may be so hard that it won't happen any time in the near future, or maybe never, for whatever reason. But saying that true AI is impossible is not skepticism: it's mysticism. Intelligence has risen naturally in our brains by accident. By whatever process that is, it can be re-created by humans, at least in theory.

(What I am trying to say is something akin to this: human beings can, in theory, create a star, a shining one like our Sun. It is not a mystical thing. In practice, human beings may never have the resources or knowledge necessary to create a star, but that is a fundamentally different thing from saying human beings can't create a star, and referring to any such endeavors as a priori fruitless).
posted by teece at 4:37 PM on August 22, 2005



Termite: I hear that from time to time. I'm curious: are you religious? That is, are you saying that human intelligence is the result of some divine spark, and thus not reproducible by humans?


I think Termite's thinking of the Chinese room argument, which doesn't require any religious conviction at all.
posted by fnerg at 4:41 PM on August 22, 2005


Good lord what were they thinking when they recorded the voice track in that tutorial?
posted by Wolfdog at 5:00 PM on August 22, 2005


teece, I generally agree with your point, but he didn't actually say that humans couldn't build something intelligent, he said that intelligence isn't meaningful for something that isn't alive.
Under that definition, we might be able to construct a living machine and have it be intelligent.

It seems most likely to me that intelligence is an emergent property of the complexity of our brains. But I cannot completely discount the possibility that it is intrinsically related to the stuff that makes up the brain. It may be that we can't build anything truly "intelligent" (I'll leave the definition of intelligence up to the reader) without using organic, living tissue. Just like we can't build a good computer out of granite, because granite just doesn't have the necessary electrical properties to work for that task.

That, of course, opens up the big question of what properties are necessary and sufficient for Intelligence. I am not ready to answer that question yet...
posted by afflatus at 5:02 PM on August 22, 2005


Bort: But what is it about infinite precision that makes you think that losing it would make emulating cognition/intelligence/consciousness (take your pick) not feasible? Would an analog computer allow for emulation of these things?

Well, I must say that the process of making an analog computer is quite fun, but programming it for 18+ years is much more difficult.

I do think that since we eat, breathe, walk and swim in analog computers, that these do become valid sources of knowledge.

As an example, the latest edition of New Scientist has a very interesting article describing how supercomputer jobs running for months are just now in the ballpark of precision to replicate an event that takes place hundreds of times a second in a supercollider, with a model that is considerably less complex than the human brain. Interestingly, the next step for producing higher precision is increasing the complexity. We are not even close to being able to predict the action of quarks in a proton in real-time. It is quite possible, even probable, that more than trivial models of actual human cognition are mathematically intractable.

What defines me is a particular (extremely complex) algorithm and information combination. I don't believe I need infinite precision to define that any more than I need pi to infinite precision to define a circle.

You do, if you want to have any hope that your software doppelganger will make any of the same decisions, and therefore can be said to be more like you than a random person off the street. But you say that state does not matter, but information does. What's with that?

teece: Skepticism about whether or not humans will ever figure out what makes a human brain intelligent is one thing. But saying it's impossible for a machine to be intelligent is quite another. And yes, indeed, it does require a belief in something religious or mystical.

Without a definition of "intelligence" I don't think either of these statements are very meaningful. For example, I think we already know both how to make intelligent machines, and what makes the human brain intelligent. I don't think that either of these are revolutionary miracles.

It's not a fundamentally unique thing in the universe unless you believe it holds a mystical soul (which is fine, but then there isn't much point for us to talk about it in this context).

I think the universe results in quite a lot of fundamentally unique things. This does not require any mystical mumbo-jumbo, just a recognition that we live in a chaotic universe rather than a clockwork one.

My opinion is that trying to emulate human cognition rather than just cognition is most likely a waste of time. Likewise, we've learned a heck of a lot through nuclear fusion in the lab rather than striving to produce a star.
posted by KirkJobSluder at 5:11 PM on August 22, 2005


I don't think this game is so intractable as to be impossible for a computer to beat a skilled human. It just shifts the search for a good method into different ways of thinking about the game.
posted by KirkJobSluder at 5:33 PM on August 22, 2005


We are not even close to being able to predict the action of quarks in a proton in real-time. It is quite possible, even probable, that more than trivial models of actual human cognition are mathematically intractable.

this is a critical point, but i don't think that it's a proper framing of the question, as long as we're still talking in the context of Go, etc.

fundamentally, the architecture of a "computer" is extraordinarily narrow -- it's really nothing more than a single-neuron brain in some senses of the word. hence, its utility for approaching problems like chess, where the goals and rules are easy to define but hard to solve (this is what KJS is referring to when he says "model"), but not so much for problems like Go -- with Go, the issue is definitely not that the problem is mathematically intractable, it's that we've yet to develop a good way to "model" the problem -- problems of positionality make it prohibitively difficult.

the human brain, on the other hand, is not just comparable to a massively parallel supercomputer (especially when you consider that the word "parallel" in this context essentially means trying to make a bunch of separate processors synchronous -- i.e. turning them once again into one, faster processor), it's really not even parallel at all. neurological research indicates that a large part of the brain's processing capacity is based in the fact that it is asynchronous -- you can get radically different outputs depending on the rates and times when the individual neurons fire.

i'm not qualified to talk about these things, being neither an AI nor a neurology researcher, but i do take issue with what KJS is talking about insofar as the problem (as i understand it) isn't that games like Go are mathematically intractable, it's that the tools we have available to solve them (right now, computers) don't have the capabilities to do so.

think of the difference between trying to put a nail or a screw into a board and only having a hammer available. that's the best way i can think of to oversimplify the chess/go problem.
posted by spiderwire at 5:37 PM on August 22, 2005


spiderwire: i'm not qualified to talk about these things, being neither an AI nor a neurology researcher, but i do take issue with what KJS is talking about insofar as the problem (as i understand it) isn't that games like Go are mathematically intractable, it's that the tools we have available to solve them (right now, computers) don't have the capabilities to do so.

No, I'm not saying that Go is mathematically intractable. I'm saying that it's quite possible that non-trivial models of the human mind are intractable.

I think it might be quite possible to make a computer play world-champion class Go. But I think that a computer Go champion and a human Go champion might think in very different ways. This seems to be the case with Chess.
posted by KirkJobSluder at 5:48 PM on August 22, 2005


KJS: This seems to be the case with Chess.

actually, iirc the neurological research indicates that grandmasters tend to play almost entirely from memory rather than dynamic problem-solving (as most people do when playing chess), which would seem to indicate the contrary. (your 'this' is dangling, so i might have missed your meaning)

i'm just not sure what you mean by "mathematically intractable." this is pointed out upthread, but per chaos theory there's a big difference between "mathematically intractable" and "model"-able. (we can right now, in fact, write programs whose operation is "mathematically intractable" but that still produce a rational output. they don't even necessarily have to have an analog component.)

since the conversation is in the context of problem-solving, maybe the distinction we're missing is that a problem doesn't have to be "model"-able to be soluble, especially when we're talking about real-world tasks like games, or even game theory.
posted by spiderwire at 6:08 PM on August 22, 2005


in fact, as long as we're on the subject, chess itself is a mathematically intractable problem, at least in terms of our current technology. it's just that computers are able to evaluate potential outcomes from the current board setup far better than humans are, which is a direct result of their architecture and how it interacts with the rules of the game.
posted by spiderwire at 6:10 PM on August 22, 2005


Wolfdog: They were thinking that it instills the idea that even a child can understand the principles behind their game.

Although why a child reading a script is supposed to instill that particular idea is beyond me.
posted by Mr.Encyclopedia at 6:11 PM on August 22, 2005


sandking: Anyway, if computers were really smart they could solve Sokoban :)

They can't?
posted by JHarris at 6:19 PM on August 22, 2005


But you say that state does not matter, but information does. What's with that?

Yes, I explained that very poorly. Let me try a different analogy. I am a stream. My state (at least, the way I meant state above) is the location and velocity of all of the water molecules that I'm made of. My "information" is the abstract stuff that defines me - the banks, the bed, the spring, the way water flows down hill. You can shake up my state, with an earthquake for example, but once that stimulus subsides, the stream still exists.

I hope that is a better example. "I" live at a higher level than the infinite precision of whether neuron #132532 is firing once every 0.023 seconds or once every 0.024 seconds.

You need a lot of precision to calculate what will happen to a quark, just like you need a lot of precision to calculate the path of a water molecule down the stream. But you don't need as much precision to calculate higher-level phenomena of the stream, like how much pressure it produces, its temperature, etc.
posted by Bort at 6:20 PM on August 22, 2005


spiderwire: actually, iirc the neurological research indicates that grandmasters tend to play almost entirely from memory rather than dynamic problemsolving (as most people do when playing chess), which would seem to indicate to the contrary. (your 'this' is dangling, so might have missed your meaning)

I don't think we are communicating. It seems to be fairly commonly accepted among chess players and chess programmers that human players and computer players think in fundamentally different ways at all levels of play. The breakthroughs in recent chess engines such as Deep Fritz and Hydra have not done much to shed light on how grandmasters actually think.

i'm just not sure what you mean by "mathematically intractable." this is pointed out upthread, but per chaos theory there's a big difference between "mathematically intractable" and "model"-able.

Perhaps I'm not using the terms properly. With any model there is a tradeoff between acceptable levels of error and the complexity of the model. Even with relatively simple systems, this tradeoff involves non-linear increases in computing power. It is quite possible that creating non-trivial models of the human mind may be one of those cases that can't be solved without either unreasonable levels of error or complexity well beyond what is realistic with a computer.

And we can in theory put an upper bound on what is realistic. For example, we can create numbers that can't be factored before the heat-death of the Universe, assuming the minimum possible time per iteration.
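
(To put magnitudes on that: a quick sketch with deliberately generous made-up numbers. Real factoring algorithms do far better than the trial division assumed here, but the qualitative point survives.)

# Trial division on a 1024-bit semiprime must test candidates up to
# about 2**512. Grant an absurd 10**18 trials per second and compare
# against the age of the universe (~4.3e17 seconds).
trials = 2 ** 512
rate = 10 ** 18
seconds = trials / rate
AGE_OF_UNIVERSE = 4.3e17
print(f"~{seconds:.1e} seconds, or {seconds / AGE_OF_UNIVERSE:.1e} universe-lifetimes")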
posted by KirkJobSluder at 6:38 PM on August 22, 2005


It's going to be a lot of work just to be able to answer questions like "What is the second tallest mountain in Asia?" Forget about answering questions like "Who was married to Mahler and Gropius?" for a long, long time if ever.

"ever" is a very long time. I took an NLP class in college, and while the task is daunting I think it is solveable. Well, what I'm saying that if & when it is solved well enough it's going to be quite a 'singularity' event that changes how the world works.
posted by Heywood Mogroot at 6:48 PM on August 22, 2005


Facts and $1.50 will get you a cheap cup of coffee, and that's just about it.

Fact parsing and machine translation are very similar tasks, or at least subsets of the same field. The knowledge amplification of fact parsing is very useful, IMV it is a greater jump in utility from WWW -> machine comprehension than paper almanac -> WWW.

With 12 languages with over 100 million native speakers each, I think machine translation will also have a very, very large market impact someday, again opening the world up as much as, if not more than, the www did.
posted by Heywood Mogroot at 6:58 PM on August 22, 2005


Bort: Yes, I explained that very poorly. Let me try a different analogy. I am a stream. My state (at least, the way I meant state above) is the location and velocity of all of the water molecules that I'm made of. My "information" is the abstract stuff that defines me - the banks, the bed, the spring, the way water flows down hill. You can shake up my state, with an earthquake for example, but once that stimulus subsides, the stream still exists.

In the case of the stream, the errors in motion of the individual molecules tend to cancel themselves out, so we can successfully average out individual state. However, the stream analogy does not apply to all physical phenomena -- phenomena that are the cascading effects of small events, for example.

I hope that is a better example. "I" live at a higher level than the infinite precision of whether neuron #132532 is firing once every 0.023 seconds or once every 0.024 seconds.

Well, I don't know what "you" have to do with it. Let's leave "you" out of the picture for a bit. Quite a bit of cognition seems to work by amplifying very small effects. The precision of neuron #132532 may matter quite a bit when it acts in competition with #132536. The neuron that is first out of the gate or the strongest will have its signal amplified and translated into a behavior.
posted by KirkJobSluder at 6:59 PM on August 22, 2005


i think we're all agreed on the difference between precisely "modeling" the human mind in the sense of being able to determine its state, versus being able to perform or out-perform the human mind at simple tasks using various sorts of computers.

i just wanted to take one second to answer this:

I don't think we are communicating. It seems to be fairly commonly accepted among chess players and chess programmers that human players and computer players think in fundamentally different ways at all levels of play. The breakthroughs in recent chess engines such as Deep Fritz and Hydra have not done much to shed light on how grandmasters actually think.

in a programming sense, this might be true, but there have been neurological studies on how grandmasters think, and the most salient fact i can recall about those studies was that chess grandmasters tend to play the game using the memory parts of their brains rather than the processing parts (this was from MRIs, iirc). this means that chess, for humans at the high level, is less about "thinking" in the sense that we're talking about here and more about just memorizing. (i can go get a cite for this study if this is a sticking point)

my point was that human memory is arguably the most "computer-like" part of our brains -- and at least at a very broad level, our own ability to play chess seems dictated by how well we're able to memorize, well, something, be it strategic states of the game or whatever -- and that the humans who are best at this particular game use the computer-like parts of their brains to play it.

point being (again, trying to bring this back into the original discussion), chess is just a flat-out bad metric for... well, whatever it is we're trying to prove here. it seems clear that chess grandmasters are probably generally smart as a bunch, but the structure of the game of chess tends to emphasize things that we as humans just aren't very good at -- at least, not compared to a computer (to wit, information storage and synchronous processing).

chess isn't a useful metric for computer intelligence or for computer vs. human intelligence because it's not a good metric for intelligence, period. this isn't to say that a knowledge base isn't a component of intelligence somewhere, but... well, you can take it from there.
posted by spiderwire at 6:59 PM on August 22, 2005


However, the stream analogy does not apply to all physical phenomena. Phenomena that are the cascading effects of small events for example.

i could be wrong, but this should apply to fluid dynamics as well. it's all about cascading effects, but the outcomes of those cascading effects can be modeled.

you could make a similar argument about the human brain -- since a single neuron has to be the "starting point" for a cognitive process, it's by definition a system that functions based on "cascading effects," but the outcomes are ultimately rational and bounded -- we can sit here and type stuff on the intarweb....
posted by spiderwire at 7:02 PM on August 22, 2005


levity here. by this guy.
posted by spiderwire at 7:05 PM on August 22, 2005


spiderwire: The meat of most modern chess programs centers on running simulations of the game forward from the current position several moves to identify the best lines for both sides. Database searches usually come in at the opening, where the possibilities are constrained by the starting position, and the endgame, where there are fewer pieces to keep track of.
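
(The core of that forward simulation fits in a few lines. A minimal negamax sketch in Python, using Nim -- take 1-3 stones, taking the last stone wins -- as a stand-in for a real engine's move generation and evaluation; an actual chess program would cut the recursion off at a fixed depth and apply a heuristic score there:)

# Negamax: a position's value is the best of the negated values of the
# positions you can hand to your opponent.
def negamax(stones):
    if stones == 0:
        return -1            # the side to move has already lost
    return max(-negamax(stones - take)
               for take in (1, 2, 3) if take <= stones)

for stones in range(1, 9):
    print(stones, "stones:", "win" if negamax(stones) > 0 else "loss")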

I would also disagree with human memory being "computer-like." The fact that grandmasters are leaning pretty hard on memory is not surprising to me because grandmaster prep involves playing and analyzing a huge volume of games. We know that human memory is not information retrieval, it's information reconstruction. In fact, I would argue that the nature of human memory is a key reason for the death of the "mind as computer" metaphor.
posted by KirkJobSluder at 7:32 PM on August 22, 2005


Arimaa: because a game in which a computer can win is not worth playing. (wow, do we have a depressing future ahead)
posted by dreamsign at 7:34 PM on August 22, 2005


The meat of most modern chess programs centers on running simulations of the game forward from the current position several moves to identify the best lines for both sides.

yeah, i'm a programmer, i kind of understand the basic theory :) (but it's really a lot more than that -- and again, i'm not speaking as an expert -- but it also involves a great deal of creative optimization of routes, e.g. culling of non-promising outcomes, etc.)

but again, i'm not discussing what the specific algorithms do. i'm merely saying the humans who are good at chess become so by being more computer-like. put another way, i'm trying to point out that if we're really talking about "intelligence" here, we have to talk about what that means, and that chess is a remarkably poor criterion because humans only get good at chess by doing less thinking and more remembering. (and yes, to some degree this is a fuzzy distinction, but the fact is that as a percentage, grandmasters as they get better think less and recall more, whereas computers get good at the game by doing the opposite, as you point out.)

the reason that games like Go are a much better metric for intelligence is that success is predicated almost entirely on thinking rather than remembering. as far as i'm concerned, this is part of the reason why computers stink at it so badly -- compared to humans, they're not very good at actual problem-solving. they're just not structurally designed for it. synchronous processing is good for information processing and retrieval, as well as deductive decision making, but not as good at adaptation, creative problem-solving, induction, etc. this is part of why NLP, for example, is such a goddamn hard problem, as well as getting a computer to play Go decently.

as far as decisionmaking that's not bounded by simple rules -- well, the Go and NLP problems make it plain that we're not even very good at writing those programs yet, let alone do we really have any architectures capable of executing them properly. once we get there we can actually start talking about computer "intelligence."

but chess doesn't have much if anything to do with it. it's a tiny step into the realm of actual thinking, but not much more so than a search engine is a step towards NLP. it really just reinforces what we already knew -- that computers are better at certain processing tasks than humans are -- but all that means is that we managed to redefine a cognitive task as a processing task due to the relatively small bounds of the problem. it doesn't mean that our computers or the way we write programs are really all that much closer to real cognition.
posted by spiderwire at 7:57 PM on August 22, 2005


spiderwire: but again, i'm not discussing what the specific algorithms do. i'm merely saying the humans who are good at chess become so by being more computer-like. put another way, i'm trying to point out that if we're really talking about "intelligence" here, we have to talk about what that means, and that chess is a remarkably poor criterion because humans only get good at chess by doing less thinking and more remembering. (and yes, to some degree this is a fuzzy distinction, but the fact is that as a percentage, grandmasters as they get better think less and recall more, whereas computers get good at the game by doing the opposite, as you point out.)

Well, to be honest, I think "intelligence" is not well defined, and "thinking" is also not well defined here. For example, human memory is a very interesting phenomenon. We don't just pull an experience from a file; we recreate past experience around details that match what we are currently evaluating. The end result is that a memory is different every time we remember it (1). "Remembering" is thus an amazingly creative process, both inductive as we try to build generalities around our experience, and deductive as we retrofit past reality to match our current theories of self and the world.

I agree that "intelligence" is a bad term, but I don't think that we can really say that chess is a bad field because it emphasizes certain types of intellignce. Go also emphasizes some types of intelligence, as does Arimaa. The thing is, it's really easy to play a shell game and say, that once we have a computer go master, then that was a bad metric because it can't create an ellegant math proof, or analyze the narrative structure of The Saragossa Manuscript.

(1) This is one of the reasons why I think memetics is a dead-end theory. Ideas and genes are not analogous to each other.
posted by KirkJobSluder at 8:40 PM on August 22, 2005


I would have liked to take part in this discussion, but when I posted last time it was midnight (Europe) and time to log off.

teece: I'm curious: are you religious?

No, I'm not religious.

A clock keeps perfect time, but it doesn't know what time it is, it doesn't think, it's not intelligent. To use a clock, we don't have to believe that it consciously knows what time it is, we're not tempted to believe it is intelligent.

A computer is the same thing as a clock: it's a tool, it's a piece of machinery. When I use Word, my computer doesn't think about the words I type any more than a clock thinks about what time it is. But a computer is much more complex than any other tool and we're tempted to believe that it is - or could be - intelligent.

Computers behave as if they interact with us, and since we are geared into interacting with living, conscious beings, it's hard to resist the illusion that the computer is conscious. For example: last time I wanted to ask for a phone number, the operator had been replaced with a computer. I asked for foo company, the computer "listened", then read a number of possible matches and "asked" which one I meant, then it "told" me the number. When these things get a little more advanced it will be hard to resist the illusion that the computers are actually talking to us.

Heywood Mogroot: I'm in the camp that Intelligence is what intelligence does.

Of course a lot of this is about how you define "intelligence" or "consciousness". Definitions are not neutral. If you want computers to be intelligent/conscious, you come up with a definition of consciousness (the Turing test) that makes the computer pass the test.

Today there is a strong tendency to believe/hope that computers will be intelligent and conscious. We desperately want them to become conscious. I wonder why.
posted by Termite at 1:06 AM on August 23, 2005


KJS: (1) This is one of the reasons why I think memetics is a dead-end theory. Ideas and genes are not analogous to each other.

The analogy breaks down depending upon what sort of fidelity we require in the viral transmission and reproduction of the gene/meme.

Yes, ideas are not identically experienced or understood in the various "hosts"; but neither are plain words. And that doesn't stop language being a fairly stable vehicle for transmitting meaning.
posted by spacediver at 1:22 AM on August 23, 2005


Termite: A computer is the same thing as a clock: it's a tool, it's a piece of machinery.

Well, if we want to expand the metaphor of "computer" to accommodate any machine that serves a function, then would you not agree that our bodies are computers?

If our bodies are not machines, what are they?
posted by spacediver at 1:27 AM on August 23, 2005


Bort: You need a lot of precision to calculate what will happen to a quark, just like you need a lot of precision to calculate the path of a water molecule down the stream. But you don't need as much precision to calculate higher level phenomenon about the stream, like how much pressure it produces, its temperature, etc.

Interesting - I guess it depends on whether you consider your sense of identity/flavour of consciousness/etc. as being reflected in the exact "state of all the particles", or in the higher level emergent states. The latter seems more intuitive, as my own introspection doesn't reveal a constantly fluctuating sense of identity.

However this may be a limitation on my part, as perhaps I am simply not sensitive enough to discern this fine flux. I have read that experienced Buddhists, while meditating on the concept of "change", are able to experience this flux - it is as if they can observe their own minds with a supremely fine temporal resolution. I believe this led to the insight, thousands of years ago, that there is no such thing as a self, as it is constantly dying and being reborn. (or something along these lines)
posted by spacediver at 1:34 AM on August 23, 2005


that there is no such thing as a self, as it is constantly dying and being reborn

whoa
posted by Heywood Mogroot at 3:11 AM on August 23, 2005


"Today there is a strong tendency to believe/hope that computers will be intelligent and conscious. We desperately want them to become conscious. I wonder why."

Reminds me of the old joke:
"Do you think there's intelligent life on other planets?
I don't know, but I haven't given up hope on finding it on Earth yet."
posted by spazzm at 5:14 AM on August 23, 2005


To clarify:
We need all the intelligence we can get.
posted by spazzm at 5:15 AM on August 23, 2005


"It isn't meaningful to use the word "intelligent" for non-living things."

Debating that statement without a clear definition of what it means to be "living" or "alive" is pointless.

Good luck.
posted by spazzm at 5:18 AM on August 23, 2005


Spacediver: If our bodies are not machines, what are they?

We are alive and computers are not. I don't believe in some mystic spark of life - call us living machines if you want - but there's a clear difference between living organisms and man-made objects, such as a pair of scissors, a car or a computer. A computer is just another tool in the toolbox. It's not alive, even if we desperately want it to be. It doesn't think, any more than a pair of scissors thinks. There is so much work going on to perfect the illusion that computers think - and think better than us - that we need to remind ourselves of this. That's all I'm trying to do.

What is life? To decide if a virus is alive or not (it doesn't eat, it can't reproduce on its own) is perhaps tricky, but to draw the line living/non-living between a man and a computer is pretty straightforward, and we should not use the difficulties in the case of the virus to pretend that the line between the man and the computer is hard to see.

For me, intelligence is connected to being alive; being intelligent is what we do when we use the best of our abilities - memory, experience, associations, fantasy, judgment.
posted by Termite at 7:28 AM on August 23, 2005


spacediver: Yes, ideas are not identically experienced or understood in the various "hosts"; but neither are plain words. And that doesn't stop language being a fairly stable vehicle for transmitting meaning.

Here is the problem. There has been about 150 years of research exploring the nature of ideas and symbols (semiotics). There has been an equal history of research exploring genes and inheritance (quantitative genetics). Memetics comes through late in the game and proposes that semiotics can be reduced to quantitative genetics. After all, signs and genes are both just packets of information.

But quantitative genetics works because of some key assumptions about the nature of the information passed from generation to generation. Many of these assumptions (context independence, minimal errors, uniform mutation rates, independent reassortment) don't work well with other forms of information.
posted by KirkJobSluder at 8:20 AM on August 23, 2005


A computer is just as intelligent as a hammer, or a washing machine. A computer is a tool, a piece of dead machinery. It doesn't "think" any more than a hammer thinks. We may trick ourselves into believing that a computer "thinks" or is "intelligent" because it appears to use words or symbols that are meaningful to us (not to the computer) and because we can't see exactly how it works (a hammer is instantly understandable). It isn't meaningful to use the word "intelligent" for non-living things.

Well, if you say it, it must be true!
posted by delmoi at 8:54 AM on August 23, 2005


Termite, if machines ever approach living organisms in terms of complexity, intricacy, robustness, novel behavior, etc. - say, in 5000 years - would you be willing to attribute life to them then? If not, why not?

No one is saying (in this thread, at least) that computers as they are today are alive or have human intelligence - just that they may some day. There is no known reason that a machine cannot one day be considered alive, even if we desperately want there to be.
posted by Bort at 9:11 AM on August 23, 2005


Bort: Termite, if machines ever approach living organisms in terms of complexity, intricacy, robustness, novel behavior, etc. - say, in 5000 years - would you be willing to attribute life to them then?

This question needs more time than I have right now, so for now I'll just say yes I would. Machines that are like us in every respect could perhaps be considered alive. But I doubt that such machines are possible and above all I'm suspicious of the motives for creating them.
posted by Termite at 9:24 AM on August 23, 2005


Termite:

You seem to be defining a computer as something that is essentially not alive. As I said, let's not talk about computers, but let's talk of machines.

Are you now drawing the distinction between man made, and naturally made machines?

I am not claiming that currently existing computers and artificial machines are alive, or conscious, or intelligent. I am responding to your general claim that "computers are not living things".

Computer is a bit of an ambiguous word, so let's use the word machine.

Surely you would not claim that "machines are not living things", since we are machines, and we are living.

So are you qualifying your claim to say that "artificial machines are not living things"?

And if so, are you talking about currently existing artificial machines, or artificial machines in general?

If the latter, then you must provide a reasonable argument as to why, in principle, an artificial machine could never be alive.

For example, suppose we had the technology to assemble a human body atom by atom. I'm pretty sure that the result would be a living creature no different from the human it was copied from.

Would you agree with this, and if not, why?

posted by spacediver at 9:25 AM on August 23, 2005


KJS: Memetics comes through late in the game and proposes that semiotics can be reduced to quantitative genetics.

Aye, I'd agree that if one were trying to use the same quantitative methods for modeling the transmission of memes as are used for genes, then one is being quite stupid. For one thing, even if genes combine to form blended phenotypes, the genes themselves can be reproduced in original unblended form in further generations (as far as I understand - I have a very basic understanding of genetics). With memes, if an idea evolves, then it is that evolved structure which is passed along and down from host to host - a true Chinese whispers phenomenon. (is this what you mean by "independent reassortment"?)

I wasn't aware that people tried to use the same models for transmission between genes and memes - I learned about memes in Dawkins' "Selfish Gene", and have not further explored the theory - i.e. I haven't studied the quantitative aspects behind it.
posted by spacediver at 9:33 AM on August 23, 2005


It is quite possible, even probable, that more than trivial models of actual human cognition are mathematically intractable.

You can't be serious, can you? The method the brain uses to process information is known, and it is not mathematically intractable (I assume you mean non-differentiable, like the Navier-Stokes equations).

It's called a neural network. No, you can't model it with a simple formula like F = MA, but it can be simulated neuron by neuron. The number of neurons in the brain is limited, so once you have a computer powerful enough to simulate each neuron in the brain, you will have a computer that can simulate a brain fully. These numbers are known, not some mystical figure in the future. There are about 100 billion neurons in the brain. I don't know the exact amount of CPU time it takes to simulate a single one, but it's not that much.
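
(Back-of-envelope, since I don't know the per-neuron cost either -- every figure below is an assumption for illustration:)

neurons = 100e9           # ~10^11 neurons, the standard textbook figure
updates_per_sec = 100     # assume each neuron updates ~100 times per second
flops_per_update = 1000   # assumed cost of simulating one neuron update

required = neurons * updates_per_sec * flops_per_update
print(f"~{required:.0e} floating-point operations per second")   # ~1e16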


I think the universe results in quite a lot of fundamentally unique things. This does not require any mystical mumbo-jumbo, just a recognition that we live in a chaotic universe rather than a clockwork one.


That sounds like mystical mumbo-jumbo to me. Chaos theory is not a panacea for philosophical discussions; it's a branch of mathematics. Maybe you read Jurassic Park, but that doesn't mean you actually understand the math, which is, IMO, necessary to claim it backs up your point.

Simple appeals to chaos theory by non-mathematicians are 'mystical mumbo-jumbo' as much as Descartes' ruminations on the soul. (It's in the pineal gland!)

Spiderwire:

fundamentally, the architecture of a "computer" is extraordinarily narrow -- it's really nothing more than a single-neuron brain in some senses of the word. hence, its utility for approaching problems like chess, where the goals and rules are easy to define but hard to solve

The architecture of the computer makes no difference. A computer is nothing like a single neuron, which can only add. It would take several dozen neurons to emulate a 32-bit logic gate, and modern CPUs have millions of logic gates. It could be done, of course, but it would be much slower.

Anyway, that's beside the point I'm trying to make. Once the rules of a single neuron's behavior are known, the information that makes up that neuron can be stored in RAM, and the CPU can go to one saved neuron, set its inputs, calculate its outputs, and then go on to the next neuron in memory. One by one, at an extremely fast rate (flickering between neurons hundreds of millions of times per second). The result would be the same.
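
(A minimal sketch of that loop, with a weighted-sum threshold unit standing in for whatever the real per-neuron update rule turns out to be -- the rule itself is an assumption here:)

import random

N = 1000                                 # a toy network, not a brain
weights = [[random.gauss(0.0, 0.1) for _ in range(N)] for _ in range(N)]
state = [random.random() for _ in range(N)]

def step(state):
    new_state = []
    for i in range(N):                   # visit each stored neuron in turn
        total = sum(w * s for w, s in zip(weights[i], state))
        new_state.append(1.0 if total > 0.0 else 0.0)   # fire or don't
    return new_state

state = step(state)                      # one serial pass over all neurons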

All computers that are compatible with the Universal Turing machine are compatible with each other. And it is impossible to build a computer more powerful (without respect to time) than a Universal Turing machine. Thus, a brain, existing in the real world, is compatible with a Turing machine.

KJS:
No, I'm not saying that Go is mathematically intractable. I'm saying that it's quite possible that non-trivial models of the human mind are intractable.

And you're wrong. Sorry. This stuff was proven decades ago.


Well, I don't know what "you" have to do with it. Let's leave "you" out of the picture for a bit. Quite a bit of cognition seems to work by amplifying very small effects. The precision of neuron #132532 may matter quite a bit when it acts in competition with #132536. The neuron that is first out of the gate or the strongest will have its signal amplified and translated into a behavior.


I really wish you would stop speaking so authoritatively, you really have no idea what you're talking about.

The end result is that a memory is different every time we remember it (1).

Gasp, an actual footnote!

(1) This is one of the reasons why I think memetics is a dead-end theory. Ideas and genes are not analogous to each other.

No, just more unfounded opinion. Reading science articles by journalism majors in USA Today and Time is not the same thing as actually knowing something.

that there is no such thing as a self, as it is constantly dying and being reborn.

Once you hit your 20s, your brain stops growing. Brain cells are never replaced when they die.

For one thing, even if genes combine to form blended phenotypes, the genes themselves can be reproduced in original unblended form in further generations (as far as I understand - I have a very basic understanding of genetics)

This is not true. There are some types of recombination, such as taking one piece of DNA, copying it, and sticking it back on the same 'side' of a chromosome, giving you two copies of the same chromosome. That type of recombination has happened many times in the history of eukaryotes (cells with nuclei, like humans, plants, and fungi).
posted by delmoi at 10:07 AM on August 23, 2005


giving you two copies of the same chromosome.

Er, I mean two copies on the same chromosome.
posted by delmoi at 10:07 AM on August 23, 2005


Anyway, this whole discussion has been very uninformed, with KirkJobSluder not defining specifically what he means when he says 'intelligence'. I don't think there is any reasonable definition of intelligence that a computer, properly programmed, could not pass.

If you can't come up with a definition, a specific test, then it really isn't anything more than mystical mumbo-jumbo.
posted by delmoi at 10:11 AM on August 23, 2005


Delmoi:

Once you hit your 20s, your brain stops growing. Brain cells are never replaced when they die.

That's not the point. The flux that may occur within a brain is not a function of the number/type/distribution of the brain cells. Remember, synapses are extremely plastic even into old age. As such, the precise physical state of the brain is in a constant state of flux, so long as your brain is active.

This is not true. There are some types of recombination, such as taking one piece of DNA, copying it, and sticking it back on the same 'side' of a chromosome, giving you two copies of the same chromosome.

I'm not exactly clear on what this process entails - does it mean that the original segment of DNA can never be reproduced in a future generation, just as the original meme is never reproduced exactly from host to host?

Remember the context of what we're talking about - the quantitative differences between modeling the transmission of memes vs. the transmission of genes.
posted by spacediver at 10:14 AM on August 23, 2005


Actually, Delmoi, the neurons are still replaced in your brain, just at a slower rate.
I saw the study on Metafilter, but now can't find it.
posted by klangklangston at 11:03 AM on August 23, 2005


delmoi: It's called a neural network. No, you can't model it with a simple formula like F = MA, but it can be simulated neuron by neuron. The number of neurons in the brain is limited, so once you have a computer powerful enough to simulate each neuron in the brain, you will have a computer that can simulate a brain fully. These numbers are known, not some mystical figure in the future. There are about 100 billion neurons in the brain. I don't know the exact amount of CPU time it takes to simulate a single one, but it's not that much.

The three-body gravitation problem is disturbingly simple. It is also impossible to solve analytically except for a handful of cases. It is all moot if the complexity of the problem does not scale linearly.

On uniqueness: That sounds like mystical mumbo-jumbo to me. Chaos theory is not a panacea for philosophical discussions; it's a branch of mathematics. Maybe you read Jurassic Park, but that doesn't mean you actually understand the math, which is, IMO, necessary to claim it backs up your point.

How many people have your exact pattern of fingerprints? How many places have the exact topography and geology of the Grand Canyon? How many thunderstorms produce identical patterns of lightning strikes?

The point is that given how many phenomena are strongly dependent on initial conditions, it's not surprising that we see a lot of uniqueness in the world out there.

On emulation: All computers that are compatible with the Universal Turing machine are compatible with each other. And it is impossible to build a computer more powerful (without respect to time) than a Universal Turing machine. Thus, a brain, existing in the real world, is compatible with a Turing machine.


The Turing machine, as you well know, is just a theoretical construct. The fact that a problem can be solved by a Turing machine means squat about whether it can be solved using any computer humans are likely to build before the sun burns out.

And you're wrong. Sorry. This stuff was proven decades ago.

Perhaps so. On the other hand, I've read authorities much wiser than me point out that simply factoring the product of two very large primes is likely beyond the possibility of any computer that can be constructed in the history of the universe. So I'll stick to my skepticism.

On memetics: No, just more unfounded opinion. Reading science articles by journalism majors in USA Today and Time is not the same thing as actually knowing something.

That's funny, I was wondering the same thing about you.

After all, my knowledge about human cognition is grounded in reading Peirce, Piaget, Eco (his non-fiction, not his fiction), Pinker, Damasio, and Loftus, just to drop the big names, along with about 200 papers in cognitive psychology in the last 10 years. (And yes, I actually have read these researchers, as opposed to just SciAm news about them.) Specific to memetics, I can contrast the rather naive conception of a meme with Eco's semiotics and Loftus' theory of memory. I can compare the "evolutionary" model of meme diffusion to Rogers' Diffusion of Innovations. (Which, rather like Darwin and unlike Dawkins, is actually built on a large body of evidence on cultural transmission.) In addition, I have a past life as a biologist, so I know a bit more about quantitative genetics and evolution than your average Joe reading USA Today and Time.

Now if you actually want to engage in an informed discussion about the nature of memory or the virtues of memetics as a theory contrasted with other approaches in Cognitive Psychology, I'm quite open to that and I'd like to hear your theoretical basis. I'm rather less interested in entertaining trash talk from someone who seems to have very little understanding of what I'm talking about.

This is not true. There are some types of recombination (such as taking one piece of DNA, copying it, and sticking it back on the same 'side' of a chromosome, giving you two copies of the same chromosome). That type of recombination has happened many times in the history of eukaryotes (cells with nuclei, like humans, plants, and fungi).

Certainly. But in quantitative genetics, this happens at such a low frequency that it's not a central focus. In fact, one of the things that led to the great synthesis of the early 20th century was the fact that you don't need to know what happens to individual chromosomes or even individuals in a population. You just ask, "if reproductive success is not equal over this distribution of phenotypes, what will be the distribution of phenotypes after N generations?"

Anyway, this whole discussion has been very uninformed, with KirkJobSluder not defining specifically what he means when he says 'intelligence'. I don't think there is any reasonable definition of intelligence that a computer, properly programmed, could not pass.

Well, if you had actually read what I was saying, you would find that I agree with you on both of these points. "Without a definition of "intelligence" I don't think either of these statements are very meaningful. For example, I think we already know both how to make intelligent machines, and what makes the human brain intelligent. I don't think that either of these are revolutionary miracles."

However, I think that creating an AI that can pass most or all reasonable tests of cognitive ability, and creating an AI that "thinks like a human being" are two very different projects. (And I must admit, I find the "Turing Test" to be a flawed test.)
posted by KirkJobSluder at 11:40 AM on August 23, 2005


Whoops, I should have said that Diffusion of Innovations is rather analogous to The Origin of Species in that it builds an argument for a theory from a large volume of case studies from all over the world. Memetics can trace its lineage back to a rather off-hand comment by Dawkins with rather weak anecdotal support.
posted by KirkJobSluder at 11:53 AM on August 23, 2005


Delmoi: All computers that are compatible with the Universal Turing device are compatible with each other. And it is impossible to build a computer more powerful (without respect to time) than a Universal Turing machine. Thus, a brain, existing in the real world, is compatible with a Turing machine.

Actually I remember reading that this isn't really true. I might be wrong, but I thought that connectionist (or dynamical) systems aren't classical symbolic processors, and therefore do not fit the Turing Paradigm. I guess it depends on what we mean by "computer".

Secondly, according to Jack Copeland:

The standard textbook proof that any finite assembly of Turing machines can be simulated by a single universal Turing machine involves the idea of the universal machine interleaving the processing steps performed by the individual machines in the assembly. The proof is sound only in the case where the machines in the assembly are operating in synchrony. Under certain conditions, a simple network of two non-halting Turing machines m1 and m2 writing binary digits to a common, initially blank, single-ended tape, T, cannot be simulated by universal Turing machine (here I am indebted to correspondence with Aaron Sloman). m1 and m2 work unidirectionally along T, never writing on a square that has already been written on, and writing only on squares all of whose predecessors have already been written on. (If m1 and m2 attempt to write simultaneously to the same square, a refereeing mechanism gives priority to m1.) If m1 and m2 operate in synchrony, the evolving contents of T can be calculated by the universal machine. This is true also if m1 and m2 operate asynchronously and the timing function associated with each machine, δ1 and δ2 respectively, is Turing-machine-computable. δ1(n) = k (n, k ≥ 1) if and only if k moments of operating time separate the nth atomic operation performed by m1 from the n+1th; similarly for m2 and δ2. Where δ1 and δ2 are both Turing-machine-computable, the universal machine can calculate the necessary values of these functions in the course of calculating each digit of the sequence being inscribed on T. If at least one of δ1 and δ2 is not Turing-machine-computable and m1 and m2 are not in synchrony, m1 and m2 may be in the process of inscribing an uncomputable number on T.

If anyone wants the original document, let me know.
posted by spacediver at 12:53 PM on August 23, 2005


Um, fun game? I have no idea what most of you are talking about ...

Believe it or not, I downloaded this game back in February and said "eh." Thanks for the reminder to give it another look.
posted by mrgrimm at 4:56 PM on August 23, 2005


" If at least one of 1 and 2 is not Turing-machine-computable and m1 and m2 are not in synchrony, m1 and m2 may be in the process of inscribing an uncomputable number on T."

Um, doesn't that contradict his original statement that the number was uncomputable even if both ?1 and ?2 are turing-computable?

If ?1 and ?2 are both turing computable, the results are highly surprising. Otherwise it's just equivalent to saying you can't accurately predict the outcome of a random process. Which is not surprising in the least.

I'd like very much to see the original document, spacediver. I've emailed you my address.

The interesting thing about the UTM is not that it can emulate any computer, but that it can perform any possible computation. This means that if something is governed by rules, the UTM can accurately predict what it will do.

Let's consider our options:
1. Reality is governed by unbreakable rules such as Laws of Physics.
2. Reality is not governed by rules, everything is random.

If 1 is the case, as it seems to be, any aspect of reality (even human thoughts) can be accurately reproduced by an UTM. If 2 is the case - well, then we're in trouble since this means, for example, that an unsupported object in a gravity field does not always fall but might suddenly not fall.

Now, quantum physics offers an interesting third option to the above dilemma, as suggested by Penrose and others. But I'll leave that side of the argument to someone else.
posted by spazzm at 8:10 PM on August 23, 2005


spazzm: The interesting thing about the UTM is not that it can emulate any computer, but that it can perform any possible computation. This means that if something is governed by rules, the UTM can accurately predict what it will do.

This reminds me of the punchline, "assume a spherical cow."
posted by KirkJobSluder at 8:20 PM on August 23, 2005


Are you arguing that reality is not rule-bound, KirkJobSluder?
posted by spazzm at 8:29 PM on August 23, 2005


spazzm: I'm quite ignorant on the finer details of computational theory, but does the UTM apply to non-symbolic paradigms of computing, such as connectionism/dynamical systems?
posted by spacediver at 9:33 PM on August 23, 2005


Spazzm:

1. Reality is governed by unbreakable rules such as Laws of Physics.
2. Reality is not governed by rules, everything is random.


I may be wrong here, or it may simply be a semantic issue, but are you sure these two options are exhaustive?

I guess it depends on what we mean by rules and "bound". I tend to view laws and rules as merely being descriptions of how reality unfolds - almost like an interpretive web through which we understand and predict phenomena. But to anoint these laws with a status of metaphysical primacy may be premature.

Or perhaps, as I suggested, my concerns are merely linguistic.
posted by spacediver at 9:37 PM on August 23, 2005


spazzm: Are you arguing that reality is not rule-bound, KirkJobSluder?

No, I'm just pointing out that a UTM is a theoretical construct, one that is not always compatible with such messy real-world constraints as the number of particles that exist on Earth.
posted by KirkJobSluder at 6:45 AM on August 24, 2005


spacediver: "[...]does the UTM apply to non-symbolic paradigms of computing, such as connectionism/dynamical systems?"

Yes. Any computation - although I'm not sure what you mean by "paradigm" in the sense of computation.
Connectionist computation can be implemented on a UTM - such networks can even be implemented on everyday PCs; that's what neural network research is mostly about.

About the exhaustiveness of my little dichotomy, you are completely correct.
But assuming that reality is part random, part rule-bound doesn't solve our little dilemma and prevent computers from thinking - it merely implies that a UTM would need to be hooked up to an appropriate source of randomness in order to think in the same way we do.
Alternatively it implies that our choices are random.

In either case, it raises some serious questions about free will.
posted by spazzm at 6:55 AM on August 24, 2005


KirkJobSluder: "No, I'm just pointing out that a UTM is a theoretical construct, one that is not always compatible with such messy real-world constraints such as the number of particles that exist on Earth."

Fair enough, I'd be willing to accept that the reason for the impossibility of computer intelligence lies in computational complexity.

But I'd like to see some proof.
Which will be hard to produce, given that a human brain has fewer connections than there are silicon atoms in a square metre of sand. And there's a lot of silicon on earth.
posted by spazzm at 7:06 AM on August 24, 2005


spazzm: Fair enough, I'd be willing to accept that the reason for the impossibility of computer intelligence lies in computational complexity.

I have said at least three times in this thread that I don't consider "computer intelligence" impossible, and in fact, I've said at least twice that we already have it by some measures.

Skepticism about our ability to create non-trivial models of a specific physical system says very little about the impossibility of "computer intelligence."

But I'd like to see some proof.
Which will be hard to produce, given that a human brain has fewer connections than there are silicon atoms in a square metre of sand. And there's a lot of silicon on earth.


Well, if you are looking for proof, I think you are in the wrong field.

But what we can do is point to analogous problems in building computer software models of physical systems, and to the existence of one-way problems based on very simple rules. At this point, I think the burden of evidence is on the people claiming that creating software models of the human brain is just a matter of throwing enough expertise, silicon, and watts at the problem. Especially given that a few more paradigm shifts might be in the cards for understanding how brains work.
posted by KirkJobSluder at 7:36 AM on August 24, 2005


"Well, if you are looking for proof, I think you are in the wrong field."

And what field would that be?
posted by spazzm at 7:38 AM on August 24, 2005


me:"[...] silicon atoms in a square metre of sand. "

What the hell have I been drinking? That's supposed to be cubic metre, of course. Sorry.
posted by spazzm at 7:44 AM on August 24, 2005


spazzm: And what field would that be?

Cognitive science, which is largely an empirical enterprise.
posted by KirkJobSluder at 11:01 AM on August 24, 2005


On memetics: [quote me: No, just more unfounded opinion. Reading science articles by journalism majors in USA Today and Time is not the same thing as actually knowing something.]

I wasn't talking about memetics there; I don't know much about it, but it does seem somewhat silly. The statement I made, and you quoted, was referring to your discussion of complexity, chaos theory, computability, etc., which you seem to have no idea about.
posted by delmoi at 12:21 PM on August 24, 2005


Spacediver:
Actually I remember reading that this isn't really true. I might be wrong, but I thought that connectionist (or dynamical) systems aren't classical symbolic processors, and therefore do not fit the Turing Paradigm. I guess it depends on what we mean by "computer".

Somebody may have thought that, but they were wrong.

Secondly, according to Jack Copeland:

I actually disproved that very point on MetaFilter (or it might have been kuro5hin; it was a while ago) when someone brought it up. If you had a Turing machine that could change its processing speed, then you could write a program that would solve all computational problems in constant time, whether they halted or not!

Since you can't do that, there must be some sort of granularity of time, which can be emulated by a Turing machine.


But what we can do is point to analogous problems in building computer software models of physical systems, and the existence of one-way problems based on very simple rules.



Except you haven't even shown why those things are mathematically analogous in any way, shape, or form. It's like saying that because a 16-year-old girl doesn't know how to clone a human in a laboratory, she can't make a baby by fucking. In other words, a complete non sequitur. Some physical systems are simple to model and others are hard. Why do you think the brain is hard? You don't say.

Turing Machines are a theoretical construct, yes, and they have the luxury of unlimited memory. If you limit their memory, however, they can actually be built. Not that anyone ever would, because anything they can do, you can do with a RAM machine (the theoretical model of modern computers).

And as far as modeling physical systems like thermodynamic systems goes, it can be done, on a small scale. But human neurons are much larger than individual molecules in a gas, and human brains are much smaller than a thunderstorm. The brain has about 10^11 neurons, while a thunderstorm has (I would estimate) about 10^30 to 10^32 gas molecules. So simulating the human brain should be about 10^20 times easier. Equivalent to saying that because you can't walk ten thousand lightyears, you also can't walk one meter.
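The exponent arithmetic, for anyone checking it against the figures in that paragraph (the storm count uses the rough midpoint of the estimate):

```python
neurons   = 10**11   # the brain figure above
molecules = 10**31   # midpoint of the 10^30 to 10^32 storm estimate
print(molecules // neurons)   # 10^20 -- the claimed factor
```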
posted by delmoi at 12:56 PM on August 24, 2005


Delmoi: That's kinda assuming that there's a linear scalability between brains and thunderstorms.

Oh, and thanks everybody. This thread has been really entertaining, and I've enjoyed reading it more than I've actually enjoyed playing the game.
posted by klangklangston at 1:18 PM on August 24, 2005


Delmoi: That's kinda assuming that there's a linear scalability between brains and thunderstorms.

I'm just pointing out the difference in complexity.
posted by delmoi at 1:53 PM on August 24, 2005


delmoi: I wasn't talking about memetics there; I don't know much about it, but it does seem somewhat silly. The statement I made, and you quoted, was referring to your discussion of complexity, chaos theory, computability, etc., which you seem to have no idea about.

Then why did you quote my discussion of memetics in composing your response?

Except you haven't even shown why those things are mathematically analogous in any way, shape, or form.

Basic network theory. For any network of g members, the maximum possible number of relations is g(g-1). Now granted, a network with the maximum number of connections is not very useful. But this provides a rough metric: unless something really funny is going on, the problem is not going to scale in a linear fashion.
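A quick numerical illustration of that metric - the maximum number of relations g(g-1) grows quadratically, so doubling the membership roughly quadruples the possible relations:

```python
# Maximum pairwise relations for a network of g members:
for g in (10, 100, 1_000, 10_000):
    print(g, g * (g - 1))
# 10 -> 90, 100 -> 9,900, 1,000 -> 999,000, 10,000 -> 99,990,000
```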

And this is aside from the problem that we don't really know how a brain works. And no, there is more going on than a naive neural net.

Turing Machines are a theoretical construct, yes, and they have the luxury of unlimited memory. If you limit their memory, however, they can actually be built. Not that anyone ever would, because anything they can do, you can do with a RAM machine (the theoretical model of modern computers).

Assuming enough RAM and time. For example, many cryptographers claim that a brute force attack against keys currently in use is not possible with any RAM machine that can be built. This is due to the fact that brute force attacks don't scale linearly with key size.
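The cryptographers' back-of-envelope version of that point, assuming (purely for illustration) a machine testing 10^9 keys per second - each added bit doubles the work, which is the opposite of linear scaling:

```python
# Worst-case time to try every key, at an assumed 10^9 guesses/second:
guesses_per_second = 10**9
for bits in (40, 56, 128, 256):
    years = 2**bits / guesses_per_second / (3600 * 24 * 365)
    print(bits, f"{years:.3g} years")
# 40 bits is minutes of work; 128 bits is on the order of 10^22 years
```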

And as far as modeling physical systems like thermodynamic systems goes, it can be done, on a small scale. But human neurons are much larger than individual molecules in a gas, and human brains are much smaller than a thunderstorm.

Impressive. But you are ignoring the fact that thunderstorms are modeled not with simulated molecules of gas, but with abstract rectangles of gas. A typical one I found had a vertical resolution of 500m and a horizontal resolution of 1500m, and a time resolution of .5 seconds. So it does seem that there are quite a few shortcuts taken with thunderstorms leading to some errors between the simulated and observed. The simulations are considerably less complex than the reality in the case of thunderstorms.

And as I've said earlier, I do think that computer simulations of human cognition will reach a point where we can do some useful hypothesis testing. But that's a different project from producing something that functions like a human being.

But the number of units is just one issue. Systems with a very small number of "atoms" interacting with each other can't be successfully simulated because errors in precision quickly multiply.
posted by KirkJobSluder at 3:03 PM on August 24, 2005


Basic network theory. For any network of g members, the maximum possible number of relations is g(g-1).

I assume you meant g(g-1), and the minimum is zero. In any event, what does that have to do with the brain? We know a lot about brain structure, and it's certainly not possible for an individual neuron to have 10^9 dendrites (connecting it to every other neuron). Neurons are limited to nearby neurons and to a finite number of connections with those, and there are also separate regions of the brain that do different things and communicate through standard channels. It's not a big, unstructured clump of neurons. IIRC most neurons have between 2 and 20 connections, and because the space is structurally limited as well, that cuts down greatly on the number of possible connections. (And yes, I know that different neurotransmitters have different distance and time effects, but each neuron can only emit one type of neurotransmitter.)
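Taking those numbers at face value - 10^11 neurons with a couple dozen connections each (spacediver questions the latter figure below) - the connection count is linear in the number of neurons rather than quadratic:

```python
g = 10**11            # neurons, per the figures upthread
k = 20                # connections per neuron (the upper figure here)
sparse = g * k        # connections under this assumption: 2 * 10^12
dense = g * (g - 1)   # fully connected maximum: ~10^22
print(dense // sparse)   # the sparse network is ~5 * 10^9 times smaller
```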

And this is aside from the problem that we don't really know how a brain works. And no, there is more going on than a naive neural net.

Such as?

Assuming enough RAM and time. For example, many cryptographers claim that a brute force attack against keys currently in use is not possible with any RAM machine that can be built. This is due to the fact that brute force attacks don't scale linearly with key size.

Yes, but so what? Again this is the 'can't walk 10k lightyears, can't walk a meter' argument.

Impressive. But you are ignoring the fact that thunderstorms are modeled not with simulated molecules of gas, but with abstract rectangles of gas. A typical one I found had a vertical resolution of 500m and a horizontal resolution of 1500m, and a time resolution of .5 seconds.

Look, I know what the Navier-Stokes equations are and how they can be modeled with cellular automata. That's not the point, and actually the smaller the squares, the more accurate the model, which is what you seem to be ignoring, or something. Frankly I'm having trouble understanding what it is you're trying to say, or why you're even bringing it up.


But the number of units is just one issue. Systems with a very small number of "atoms" interacting with each other can't be successfully simulated because errors in precision quickly multiply.


I simply don't believe you in this case. Do you have any kind of reference to back you up?
posted by delmoi at 4:16 PM on August 24, 2005


Spazzm:

spacediver: "[...]does the UTM apply to non-symbolic paradigms of computing, such as connectionism/dynamical systems?"

Yes. Any computation - although I'm not sure what you mean by "paradigm" in the sense of computation.
Connectionist computation can be implemented on a UTM - such networks can even be implemented on everyday PCs; that's what neural network research is mostly about.


Fair enough - I guess the next question would be whether a UTM can perfectly simulate an analogue machine (like a connectionist network that depends on analogue variables) - but then one wonders whether it needs to perfectly simulate it (perhaps just a super high resolution of digital sampling is enough to instantiate the connectionist system in question).


About the exhaustiveness of my little dichotomy, you are completely correct.
But assuming that reality is part random, part rule-bound doesn't solve our little dilemma and prevent computers from thinking - it merely implies that a UTM would need to be hooked up to an appropriate source of randomness in order to think in the same way we do.


Ah, I wasn't suggesting a partially random/partially ordered state of affairs. I was just suggesting that there might be a perfectly deterministic system which cannot be interpreted through rules. But perhaps this is a misguided intuition on my part. Either way, I don't really think that consciousness/intelligence is reliant on any such uncomputability. I think it's perfectly possible to instantiate consciousness in an artificial machine, although this is something of an opinion since, unlike Dennett, I can't claim to have understood what consciousness is yet.

I have also heard one speaker talk about how we need a new metaphor that goes beyond determinism/indeterminism - he suggested "organism" - kinda hard to see how anything other than determinism makes sense though...

(for the speaker, see Brian Swimme's video interview here)


In either case, it raises some serious questions about free will.

Bah - I think the concept of free will is pretty incoherent. Sure, we may have free will in the sense that we can plan and control for the future - as can a sophisticated machine - but I don't think it makes any sense to say that we are morally responsible for our actions in any way that warrants punishment beyond the realm of utility.

Also, I'm not so sure that we can confidently say that quantum processes are indeterminate or random. Sure, we can say that they are practically indeterminate in that we seem to have no way of predicting which way the waveform collapses. And sure, we can say that the pattern of collapse is random - but that doesn't necessarily imply that the process is random. After all, you can use nonrandom algorithms to produce random-looking output (there are different types of random).
posted by spacediver at 5:21 PM on August 24, 2005


Fair enough - I guess the next question would be whether a UTM can perfectly simulate an analogue machine (like a connectionist network that depends on analogue variables) - but then one wonders whether it needs to perfectly simulate it (perhaps just a super high resolution of digital sampling is enough to instantiate the connectionist system in question).

Any analog system is going to have noise. As long as the artifacts created by the digital system you're using are below the 'noise floor' of the analog system you're trying to simulate, the results will be the same.

(Also, all electrical systems are quantized at bottom, because charge comes in whole multiples of the electron's charge.)
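A rough numerical version of that noise-floor argument, assuming additive Gaussian noise and a uniform quantizer (all constants illustrative):

```python
import random

random.seed(0)
noise_sigma = 0.01   # assumed analog noise floor
step = 0.0001        # quantizer step, 100x finer than the noise

def quantize(x):
    """Round an analog value to the nearest digital level."""
    return round(x / step) * step

worst = 0.0
for _ in range(10_000):
    analog = random.gauss(0.5, noise_sigma)   # one noisy analog sample
    worst = max(worst, abs(quantize(analog) - analog))
# Worst quantization error is ~5e-5, two orders of magnitude below
# the noise the signal carries anyway:
print(worst, "vs noise floor", noise_sigma)
```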
posted by delmoi at 5:44 PM on August 24, 2005


spacediver:"Fair enough - I guess the next question would be whether a UTM can perfectly simulate an analogue machine (like a connectionist network that depends on analogue variables) - but then one wonders if it needs to perfectly simulate it (perhaps just a super high resolution of digitial sampling is enough to instantiate the connectionist system in question)."

And there's the possibility that reality is not analog, of course. If you get down to a small enough scale, energy is granular (quantum mechanics, see?). It's not such a big stretch to imagine that position and time might be granular as well.

spacediver:I was just suggesting that there might be a perfectly deterministic system which cannot be interpreted through rules.

That's a contradiction in terms. A system that is deterministic follows rules. If you know the rules, you know how the system will behave because you can emulate it.

delmoi:"Equivilant to saying because you can't [walk] ten thousand lightyears, you also can't walk one meter."

Well said.
posted by spazzm at 7:45 PM on August 24, 2005


delmoi: Any analog system is going to have noise. As long as the artifacts created by the digital system you're using are below the 'noise floor' of the analog system you're trying to simulate, the results will be the same.

(Also, all electrical systems are quantized at bottom, because charge comes in whole multiples of the electron's charge.)


makes sense - never thought about that aspect of it before.

spazzm: And there's the possibility that reality is not analog, of course. If you get down to a small enough scale, energy is granular (quantum mechanics, see?). It's not such a big stretch to imagine that position and time might be granular as well.

agreed.

That's a contradiction in terms. A system that is deterministic follows rules. If you know the rules, you know how the system will behave because you can emulate it.

Yes, that's why I thought it might be a misguided intuition. In some ways the idea of the universe-as-computer makes a lot of sense, but I also like to leave some element of mysterious unfolding...

you know what - perhaps we can have both in a sense.

If the microcosm is infinitely deep, and it is deterministic all the way through, that still doesn't mean you can't have an almost fractal-like beauty in its infinite complexity, since it never ends.

Course the interesting question is how deep we need to go in order to have the right sorts of emergent phenomena crop up in our digital simulations.
posted by spacediver at 7:53 PM on August 24, 2005


spacediver:" Afterall, you can use nonrandom algorithms to produce random output (there are different types of random)."

That's pseudorandom you're thinking of there. If you know the rules (random number generator algorithm) and the start conditions you can predict exactly the output of any pseudorandom number generator.
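A minimal linear congruential generator makes the point - pure rules, "random"-looking output, and the same seed replays the identical stream every time:

```python
def lcg(seed, n, a=1664525, c=1013904223, m=2**32):
    """Tiny linear congruential generator: x -> (a*x + c) mod m."""
    x, out = seed, []
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

print(lcg(42, 5))                 # looks arbitrary...
print(lcg(42, 5) == lcg(42, 5))   # ...but fully determined by the seed: True
```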

Quantum randomness is different in that, if our current theories are correct, they will always and forever be completely unpredictable - they have no underlying cause.
Since our current theories are very well-tested, we can confidently say that this is so.
How confident we are is another matter, of course, depending on whether one subscribes to the "only a theory" school of thought or not.
posted by spazzm at 8:02 PM on August 24, 2005


Spazzm:

Quantum randomness is different in that, if our current theories are correct, they will always and forever be completely unpredictable - they have no underlying cause.
Since our current theories are very well-tested, we can confidently say that this is so.


Hmm - I always understood the randomness assumption to be derived from statistical measures. For example, a photon travelling through a half silvered mirror will be reflected half the time, and transmitted half the time - but since there is a 50% chance, we have no way of telling in advance which it is.

We therefore call it random.

But the actual process underlying such collapse is a complete mystery for now - it just appears to produce statistically random outputs - meaning that if we are to assume a random process, we can expect, and indeed observe, a random output.

But say there is a theory that subsumes quantum mechanics that isn't based on random processes, but rather is rule based (we just don't know the rules).

I'm not a statistician, and maybe there are indeed ways of distinguishing the "random" output of unknown complex algorithms from the random output of truly random processes.

My knowledge of quantum physics is also very limited - read a couple of Penrose's books about a decade ago but nothing really since.
posted by spacediver at 8:43 PM on August 24, 2005


delmoi: No, g(g-1). But of course, if you knew what you were talking about, you would have known this.

Let's start over again. Earlier in this thread, someone stated that we would have AI if we just created a computer simulation of a human brain at the level of neuron activity. My argument is that "emulating" the human brain is likely to be the hard and expensive way to go about getting AI. It may even be an impossible way, except as a limited research tool for understanding how brains work.

Why do I think that emulating a human brain is likely to be hard and expensive? First, we don't have a complete theory of what goes on in a brain and how it works. Second, increasing the accuracy of simulations of real-world phenomena requires non-linear investments of computing power and memory. This means that for many physical systems, we may never have computer models that are "accurate enough" to beat observation of the real thing.

As I've said multiple times in this thread, I could be wrong. A grand theory of brain function could be just around the corner, and someone might cut through the difficulty of creating "accurate enough" computer models. I'm not willing to bet on it, though.

I don't even think we need to understand how the brain works to create AI, just as we don't need to fully understand how fish swim in order to cross the Pacific, how mammals walk in order to travel more than 10 miles, or how birds fly in order to create an aeroplane.
posted by KirkJobSluder at 10:48 PM on August 24, 2005


Delmoi:

IIRC most neurons have between 2 and 20 connections, and because the space is structurally limited as well, that cuts down greatly on the number of possible connections

I thought many had a thousand or more connections.
posted by spacediver at 11:03 PM on August 24, 2005


delmoi: No, g(g-1). But of course, if you knew what you were talking about, you would have known this.

Oops, you're right, what was I thinking? Anyway, this makes the problem far easier.

increasing the accuracy of simulations of real-world phenomena requires non-linear investments of computing power and memory.

Well, it depends on what you're simulating. All I know is that you've been posting all kinds of crazy innumerate nonsense throughout this thread in an annoyingly authoritative voice.
posted by delmoi at 8:07 AM on August 25, 2005


delmoi: Well, it depends on what you're simulating. All I know is that you've been posting all kinds of crazy innumerate nonsense throughout this thread in an annoyingly authoritative voice.

I'm not certain where you get that. I've repeated multiple times that I could be wrong in my current reading of cognitive science research. You may be correct that brains are nice, simple and trivial to simulate. I'll even state that I'd be quite happy to be wrong since my job involves finding new ways to "program" wet human brains.

I'll be more than happy to qualify my "innumerate nonsense" about computer simulations. I'd appreciate it though if you quit spouting all kinds of ignorant nonsense about cognitive science in an annoyingly authoritative voice.
posted by KirkJobSluder at 8:51 AM on August 25, 2005


I'd appreciate it though if you quit spouting all kinds of ignorant nonsense about cognitive science in an annoyingly authoritative voice.

The only "authoritive" statement I made about anything related to 'cognitive science' was this one

You can't be serious, can you? The method the brain uses to process information is known, and it is not mathematically intractable (I assume you mean non-differentiable, like the Navier-Stokes equations).

It's called a neural network. No, you can't model it with a simple formula like F = MA, but it can be simulated neuron by neuron. The number of neurons in the brain is limited [...]

I suppose that might be wrong, in theory, but I know of no evidence that contradicts it. All the research that's been done on the structure of the brain (as far as I know) shows it to be a NN (although more complicated due to different neurotransmitters/hormones). If you have some evidence to the contrary, I'd love to hear it.

Beyond that, most of my comments referred to mathematics and computer science.
posted by delmoi at 11:05 AM on August 25, 2005


KJS: I thought this thread had died, and maybe I've missed something you posted, but I think you are misunderstanding what chaos, quantum indeterminism, and mathematical intractability mean in practice.

It is quite possible, today, to create an upright pendulum with several joints, say three of them, and then oscillate the pendulum up and down. It is mathematically intractable to determine what that pendulum will do when you turn it on given a set of initial conditions. Even very similar initial conditions result in different outputs. That's chaos.
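Not the multi-jointed pendulum itself, but the same phenomenon in a few lines: a damped, periodically driven pendulum in a commonly cited chaotic parameter regime, integrated twice from starting angles a billionth of a radian apart:

```python
import math

# theta'' = -sin(theta) - q*theta' + F*cos(w*t); q=0.5, F=1.2, w=2/3
# is a standard chaotic regime for the damped driven pendulum.
def final_angle(theta0, t_end=200.0, dt=0.001, q=0.5, F=1.2, w=2/3):
    theta, omega, t = theta0, 0.0, 0.0
    while t < t_end:
        accel = -math.sin(theta) - q * omega + F * math.cos(w * t)
        theta += omega * dt   # crude Euler step -- fine for a demo
        omega += accel * dt
        t += dt
    return theta

# Two runs whose initial angles differ by 1e-9 radians end up nowhere
# near each other after 200 simulated seconds:
print(final_angle(0.2), final_angle(0.2 + 1e-9))
```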

Note, it is not at all impossible, or even hard, to build the chaotic pendulum (I've had a part in it).

It seems to me you are confusing the great difficulty in predicting or modeling something with math, and the actual building of something. I don't doubt that if we ever build an AI as intelligent as a human, that it will be essentially impossible to predict its behavior. But that is a very different thing from saying it is impossible to build the AI (or that it is even hard to build the AI).

Sorry in advance if I've misunderstood you. And I agree that "true" AI does not exist now and is quite non-trivial, but I see no reason to assume it is impossible. Indeed, the fact that you and I exist shows that AI is quite possible, unless you believe it is some divine spark that makes us intelligent.
posted by teece at 12:02 PM on August 25, 2005


Teece:

It seems to me you are confusing the great difficulty in predicting or modeling something with math, and the actual building of something. I don't doubt that if we ever build an AI as intelligent as a human, that it will be essentially impossible to predict its behavior. But that is a very different thing from saying it is impossible to build the AI (or that it is even hard to build the AI).

I may be wrong, but from what I can gather, KJS is not claiming it's impossible (in principle or in practice) to build an AI - he just seems to be advocating for an approach to AI that is not based on mere emulation of the human brain.
posted by spacediver at 1:34 PM on August 25, 2005


Note, it is not at all impossible, or even hard, to build the chaotic pendulum (I've had a part in it).

That sounds interesting. Do you have more info on this somewhere?
posted by delmoi at 9:00 PM on August 25, 2005


I don't have anything on the one I worked on, delmoi: it was a little, lame one that some friends were doing a senior undergrad physics lab with. But a "chaotic pendulum" search on Google turns up lots of neat stuff, and an "inverted pendulum" is what I was thinking of.

The specific pendulum I had in mind is described by David Acheson in From Calculus to Chaos, specifically in chapter 12. He built one, has pictures, and goes into intimate detail about the dynamics. His has 3 rods with 2 joints, hooked up to a frequency generator and a rotating arm at the base. The pendulum will actually "stand up" when activated in certain states. It's really fascinating, and does some amazingly counter-intuitive things. This page gives an idea of the inverted pendulum Acheson built. (Wait, here's Acheson's version, from this page of his.)

(Acheson says his book is a text of applied calculus for students with knowledge of elementary Calculus. In my experience, it can be a hard slog if you only have a couple semesters of calculus. He gets into ODEs and PDEs pretty heavily, and a lot is left unsaid on these subjects).
posted by teece at 9:29 PM on August 25, 2005


delmoi wrote: A computer is nothing like a single neuron, which can only add

and then a little later wrote:
(And yes, i know that different neurotransmitters have different distance and time effects, but each neuron can only emit one type of neurotransmitter).

It's been many years since my neuropharmacology courses, so I did some quick searching and found this thread, which accords more with my memory. Strictly speaking, each neuron can only add, but if it is adding positive and negative inputs, it's a bit disingenuous to describe it as simple addition. Also, again, each neuron only has one neurotransmitter, but as described in the thread, the modulating effects of neuro-peptides make it more complex than you imply. It's a matter of degree, which doesn't take everything away from your points, but I don't think it goes without saying.

I don't agree with all that is discussed in that thread - particularly the part that goes "It’s amazing, isn’t it? The functioning of the CNS is mind-boggling (pun intended). No one knows how it all comes together. I don’t think we will ever fully understand the functioning of complex nervous systems, which I find rather ironical: our brains cannot understand how the brain works!" - I'm more 'optimistic' that nothing is beyond understanding eventually.

Back on topic, I wonder about the use of 'evolutionary' software to come up with algorithms for solving Arimaa and Go etc. Obviously, you need to be able to create at least a working model, but then you pit different variations of the software against one another in a way that allows the 'better' programs to 'breed', and pit them against one another, seriatim. More complex than "The Prisoner's Dilemma", but same in kind, no?
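In that spirit, a bare-bones sketch of the evolutionary loop described above. Everything game-specific is stubbed out: play_match is a hypothetical placeholder (a real version would play actual games of Arimaa or Go), and the "weights" stand in for whatever parameters the evaluation function uses:

```python
import random

N_WEIGHTS, POP, GENERATIONS = 8, 20, 50

def play_match(w1, w2):
    """Stub: returns True if the w1 player beats the w2 player.
    Here it's just a noisy comparison; a real version plays games."""
    return sum(w1) + random.gauss(0, 1) > sum(w2) + random.gauss(0, 1)

def mutate(w, rate=0.1):
    """Breed a child by jittering a parent's weights."""
    return [x + random.gauss(0, rate) for x in w]

random.seed(1)
population = [[random.random() for _ in range(N_WEIGHTS)]
              for _ in range(POP)]
for gen in range(GENERATIONS):
    # Tournament: each program plays one random opponent per generation.
    wins = [0] * POP
    for i in range(POP):
        j = random.randrange(POP)
        if j != i and play_match(population[i], population[j]):
            wins[i] += 1
    # The 'better' programs 'breed': keep the top half, refill with mutants.
    ranked = [w for _, w in sorted(zip(wins, population), reverse=True)]
    top = ranked[: POP // 2]
    population = top + [mutate(random.choice(top))
                        for _ in range(POP - len(top))]
print(population[0])   # the current best guess at good weights
```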
posted by birdsquared at 11:21 PM on August 26, 2005



