September 11, 2012 6:25 PM Subscribe

The collectible card game Magic: The Gathering is Turing complete.

This means that one could, in theory, use a very particular arrangement of Magic: The Gathering cards to compute anything that any computer is capable of computing, albeit very slowly. This result seems to have been inspired by this post at Draw 3 Cards, an M:tG theory site (and cross-posted at StackExchange).

A few other games are Turing complete, including Minecraft, via its redstone circuits (more or less, since the Minecraft engine can't handle a truly infinite amount of memory). Quite a few other games, including Tetris and Minesweeper, are NP-Complete, which is similar sounding but unrelated.

(via Slashdot)
posted by jedicus (58 comments total)
31 users marked this as a favorite

Unlike cards or silicon.

posted by DU at 6:38 PM on September 11, 2012 [6 favorites]

Well, the obvious next step is to play Magic: The Gathering INSIDE Magic: The Gathering.

Looking forward to this.

posted by kilozerocharliealphawhiskey at 6:43 PM on September 11, 2012 [5 favorites]

Griphus, you are just faster on the draw. A friend of mine told me how he got about four nested games going with Shahrazad, mostly just to see if it was possible. It was, barely -- they started to run out of available flat space in the apartment.

posted by jiawen at 6:48 PM on September 11, 2012 [4 favorites]

Ah, the Turing Trap: where everything is possible but nothing is of interest.

posted by ubiquity at 6:49 PM on September 11, 2012 [19 favorites]

I'm sure I'm not the only Mefi that noticed that sendmail configuration language was Turing complete and contemplated writing a C compiler just to astound geeks around the world.

posted by sammyo at 6:52 PM on September 11, 2012 [2 favorites]

Actually, from my perspective, he's assuming I'm familiar with the basic idea of how Magic: The Gathering works. That said, alright, let's do this...

posted by rlk at 7:07 PM on September 11, 2012 [4 favorites]

The Tetris and Minesweeper comparison is apples vs. oranges. Those games are NP-hard *to find an optimal strategy*, which is a totally different creature from showing how to embed a Turing machine into an open-ended game world, which is a statement about the computational power of the game world.

posted by qxntpqbbbqxl at 7:09 PM on September 11, 2012 [1 favorite]

I'm delighted by the thesis and devastated to see that my "out after Ice Age" ass doesn't recognize a single card used in the machine.

posted by cortex at 7:15 PM on September 11, 2012 [4 favorites]

WE HAVE TO GO DEEPER!

posted by kafziel at 7:20 PM on September 11, 2012 [3 favorites]

Oh hell, not the Turing Fallacy *again*.

No, "Turing Complete" does *not* mean that theoretically it can compute anything that any computer is capable of. That is the Turing Fallacy.

What it means is, it can compute anything that can be computed on a Turing Machine.

posted by charlie don't surf at 7:22 PM on September 11, 2012 [3 favorites]

Anything a real computer can compute, a Turing machine can also compute. For example: "A Turing machine can simulate any type of subroutine found in programming languages, including recursive procedures and any of the known parameter-passing mechanisms" (Hopcroft and Ullman p. 157).(wikipedia)

posted by jepler at 7:29 PM on September 11, 2012 [1 favorite]
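As an aside, the quoted claim that a Turing machine can handle "recursive procedures" is easy to illustrate: any recursion can be mechanically flattened into a loop with an explicit stack, which is exactly the kind of bookkeeping a tape can do. A minimal sketch (the factorial example and names are mine, not from the thread):

```python
def fact_recursive(n):
    return 1 if n == 0 else n * fact_recursive(n - 1)

def fact_iterative(n):
    # same computation, with an explicit stack instead of the call stack
    stack, result = [], 1
    while n > 0:
        stack.append(n)
        n -= 1
    while stack:
        result *= stack.pop()
    return result
```

Both return 120 for n = 5; the point is only that recursion adds convenience, not computational power.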

Neat trick. I still prefer my Timmerian Fiends/Followed Footsteps combo.

posted by radwolf76 at 7:30 PM on September 11, 2012

Yeah, I was completely able to read the way play works, but had no recognition of any of the cards at all.

I guess that's part of the genius of M:TG... gameplay is basic, so it's possible to step out for a while and still understand the mechanics of play even if there is no context about the actual cards in use.

posted by hippybear at 7:38 PM on September 11, 2012

Sorry, jepler, that is not correct. Turing Complete systems can only compute a limited subset of all possible computer operations.

Read the entry Turing Completeness.

*In computability theory, a system of data-manipulation rules (such as a computer's instruction set, a programming language, or a cellular automaton) is said to be Turing complete or computationally universal if it can be used to simulate any single-taped Turing machine..*

..Turing completeness is significant in that every real-world design for a computing device can be simulated by a universal Turing machine. The Church–Turing thesis states that this is a law of mathematics—that a universal Turing machine can, **in principle**, perform any calculation that any other programmable computer can. Obviously, this says nothing about the effort needed to write the program, or the time it may take for the machine to perform the calculation, or any abilities the machine may possess that have nothing to do with computation.

Emphasis added. Unfortunately, this theory involves a very restrictive definition of exactly what a computer is. The actual definition reduces to: A General Purpose Computer is something that can be emulated by a Turing Machine.

The problem here is that Turing Completeness involves functions. A Turing Machine uses a function to map one data set onto another. It is extremely easy to write algorithms that are not functions and are not Turing Computable.

posted by charlie don't surf at 7:40 PM on September 11, 2012 [2 favorites]

Also this, if you really want to be a douche. (Moreso than, say, the douche who actually puts Shahrazad into their deck.)

posted by Dark Messiah at 7:52 PM on September 11, 2012

We're so overdue for another silver-border set.

posted by radwolf76 at 7:56 PM on September 11, 2012 [1 favorite]

Ice Age is an interesting, well, age. A lot of people stopped pretty much right after it.

I came back after Invasion to play occasional sealed deck tournaments, and I messed around with the (somewhat wretched, but still better than the current client) online MTG for awhile, and I've kept an ear to the ground over the years as new sets have come out... but I still vote for The Dark as my favourite set, for reasons I cannot articulate. Ice Age and Fallen Empires are up there as well.

posted by curious nu at 8:00 PM on September 11, 2012

Then why don't you give us a specific example so we can understand what you're talking about.

posted by metaplectic at 8:16 PM on September 11, 2012 [1 favorite]

Computable Functions

The basic characteristic of a computable function is that there must be a finite procedure (an algorithm) telling how to compute the function.

Note the two major properties:

1. The procedure must be finite.

2. The procedure must be completely specified.

The classic Turing Incomputable function is one that requires an endless supply of memory ("tape" in Turing jargon) and these can easily be generated by recursion or functions too complex to ever be completely evaluated (e.g. computing an irrational number like Pi). However, the other class of incomputable functions is a procedure that cannot be completely specified, usually because the procedure itself is infinitely long (e.g. calculating irrationals or uncountable sets). Or possibly it responds to inputs that cannot be accurately specified (e.g. a pseudorandom number generator seeded by atomic decay) or outputs that cannot be accurately and uniquely specified.

posted by charlie don't surf at 8:38 PM on September 11, 2012

Yes, that's why I describe NP-Completeness as "similar sounding but unrelated." I did so as a clarification since NP-Completeness in games has come up on MeFi before, and people who aren't so familiar with computer science might have been confused or thought there was some connection.

My statement was that being Turing complete means that the game can "compute anything that any computer is capable of computing." The fact that there exist non-computable functions does not contradict that statement, since neither a Turing machine nor a computer can compute them (unless you want to show me a computer that can solve the halting problem or somesuch).

It may also help to read "any computer" to mean "any existing computational device commonly known as a computer" rather than "any imaginable computational device."

posted by jedicus at 9:00 PM on September 11, 2012 [2 favorites]

Yeah I think the oldest card in the deck (if you can call it a deck) is Serra Aviary, from Homelands, but that came out just after Ice Age. Probably the most interesting use of a card from Homelands in history (for non-Magic players: Homelands is considered to be the worst of Magic's expansion sets).

In some sense it's not surprising that this was possible. There are something like 12,528 cards in the game. At some point things are bound to get goofy.

posted by jedicus at 9:18 PM on September 11, 2012 [1 favorite]

1. The procedure must be finite.

2. The procedure must be completely specified.

The classic Turing Incomputable function is one that requires an endless supply of memory ("tape" in Turing jargon) and these can easily be generated by recursion or functions too complex to ever be completely evaluated (e.g. computing an irrational number like Pi). However, the other class of incomputable functions is a procedure that cannot be completely specified, usually because the procedure itself is infinitely long (e.g. calculating irrationals or uncountable sets).

Simply proving that there are functions that are not Turing Computable doesn't prove your claim that the Church-Turing thesis is false; you also have to prove that something a Turing Machine can't compute can be computed using some other computational model, and by broad consensus there is no such model. The title of that wikipedia entry is "Computable Functions" and not "Turing Computable Functions", by the way!

posted by kiskar at 9:25 PM on September 11, 2012 [3 favorites]

Sidestepping the discussion about charlie don't surf's unorthodox ideas about computability, can anybody shed any light on the 2-state 3-symbol universal Turing machine implemented here? Wikipedia is rather vague, but has this fishy-sounding sentence: "The (2,3) Turing machine also requires an infinite non-repeating input". I take this to mean that the tape must be initialized (except for the region occupied by the program description) with some complicated pattern, which seems to be something that the MTG implementation neglected, as they say that "we need a simulation of an infinite tape of empty blue spaces".

posted by Pyry at 10:15 PM on September 11, 2012

Er, no, Pi is a computable number. A Turing machine has an infinite tape, so... no problem. I can't remember how you define something as computable without requiring the machine to stop, but you can. Probably because you can get arbitrarily close to Pi in a finite amount of time, but that's not quite right.

Turing spent an entire paper defining computability (in terms of Turing machines), and then

posted by BungaDunga at 10:17 PM on September 11, 2012 [2 favorites]

Oh, there you go (not in terms of Turing machines, but the idea's the same).

posted by BungaDunga at 10:30 PM on September 11, 2012
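BungaDunga's point, that Pi counts as computable because a finite procedure can get within any requested precision, can be made concrete. A small sketch using Machin's formula with plain integer arithmetic (the function names are mine, and the ten guard digits are a rough working margin, not a tight error bound):

```python
def arctan_inv(x, digits):
    # arctan(1/x) as a scaled integer, via the alternating Gregory series
    scale = 10 ** (digits + 10)        # ten guard digits
    power = scale // x                 # scale / x**n for the current odd n
    total, n, sign = power, 1, 1
    while power:
        power //= x * x
        n += 2
        sign = -sign
        total += sign * (power // n)
    return total

def pi_digits(digits):
    # Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
    pi = 16 * arctan_inv(5, digits) - 4 * arctan_inv(239, digits)
    return pi // 10 ** 10              # drop the guard digits

pi_digits(10)  # → 31415926535
```

Ask for more digits and the loop simply runs longer: a finite procedure for any fixed precision, which is all "computable" demands.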

Ok, so tracking down this exchange between Alexander Smith (who provided the purported proof of the (2,3) machine's universality) and others, it seems clear that for this (2,3) machine to work, the tape can't start out blank (as has been implemented in the magic cards), but must be initialized with a simple, but non-repeating, infinite pattern. So this does not yet prove that MtG is Turing complete, since without the initial pattern on the tape, it won't work at all. Maybe they could try to patch it to produce a properly initialized tape (which might be tricky, as the pattern is non-repeating), or try to implement a different (larger) universal Turing machine that can start with a blank tape (except for the program description).

posted by Pyry at 10:32 PM on September 11, 2012 [1 favorite]
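For anyone following along without the computability background: the object being embedded is tiny. A Turing machine is just a transition table driving a read/write head over a tape, as in this sketch. I've used the well-known 2-state busy beaver table rather than the (2,3) machine's, since the latter's initial-tape requirement is exactly what Pyry is questioning:

```python
def run_tm(rules, steps_limit=1000):
    """Run a Turing machine where rules[(state, symbol)] = (write, move, next)."""
    tape = {}                          # sparse tape; unwritten cells read as 0
    head, state = 0, 'A'
    for _ in range(steps_limit):
        if state == 'H':               # halt state
            break
        write, move, state = rules[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == 'R' else -1
    return tape, state

# 2-state busy beaver: halts after six steps leaving four 1s on the tape
bb2 = {
    ('A', 0): (1, 'R', 'B'), ('A', 1): (1, 'L', 'B'),
    ('B', 0): (1, 'L', 'A'), ('B', 1): (1, 'R', 'H'),
}
tape, state = run_tm(bb2)
sum(tape.values())  # → 4
```

The entire dispute above is about what `tape` has to contain at step zero for the particular (2,3) table to count as universal.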

The problem with the initial configuration of Smith's (2,3) machine is indeed a problem, but the author (of the Magic construction) has already promised a fix using the (2,18) machine by Rogozhin (which he calls Roganov, but I am fairly certain he must mean Rogozhin).

I am also fairly certain that this version 5 will be quite fiddly, and far less fun to read. In his place, I would probably first try to simulate another universal model, perhaps 2-tag systems. (The state construction with phasing could be used to distinguish between deleting the first and the second symbol of the input.)

posted by erdferkel at 11:51 PM on September 11, 2012 [1 favorite]

This thread is full of confusion regarding Turing-completeness. I'm loath to try and sort it out because I think the signal will get lost in the noise, but here goes: the Church-Turing thesis says that every effectively computable function (any function which can be computed on any conceivable computing apparatus) can be computed by a Turing machine. This is a thesis, not a theorem: that is, it isn't something we expect to prove, since we don't have a rigorous definition of "any conceivable computing apparatus", but something like a natural law which has been verified over and over again: each time we find a new model of computation, we have been able to simulate it with a Turing machine.

A "Turing complete" model of computation is one which can simulate any Turing machine. This is supposedly demonstrated here, by a rather roundabout method: the author shows that M:tG can simulate a particular *Universal Turing machine*, which in turn can simulate any other Turing machine.

In particular (and this is what I find interesting), as well as being able to simulate any Turing machine which computes a function, we could also simulate machines which don't terminate: which continue to compute forever. Crucially, Turing showed that it is *undecidable* whether a given machine, in a given state, with a given tape, terminates or not. Since M:tG can simulate any Turing machine, that means that there are game states for which it is undecidable whether or not the game will finish.

If we could replicate this effect with tournament legal decks (probably via token copies of the cards we need more than four copies of) then this leads to the following rather tasty scenario: imagine that the one halt state triggers an effect which deals lethal damage to us, and another deals lethal damage to our opponent. Then we could set up a game state where it is undecidable if the result is a draw, a win for us, or a win for our opponent...

posted by Omission at 12:50 AM on September 12, 2012 [4 favorites]

Can someone explain why this way of implementing a Turing machine is interesting? I think I'm missing the point because I don't know anything about 'Magic the Gathering'.

What we seem to have is a set of cards; using these cards and some special rules we can hand-simulate a Universal Turing machine. Is that it? If it is, so what? We could hand-simulate a Universal Turing machine with sticks and pebbles. Like I say, I think I'm missing the point.

posted by Segundus at 12:53 AM on September 12, 2012

Segundus: the cards are a game, the game has rules, and the rules can simulate the transitions of the machine without player choices. (Actually, that doesn't seem to be entirely true, as it matters in which order certain triggered effects resolve, and that seems to be up to the player).

We could also simulate a Turing machine with the pieces from a Monopoly set, but not using the rules of Monopoly itself...

posted by Omission at 1:04 AM on September 12, 2012

This wouldn't be a draw; the rules account for these kinds of situations.

Every time a player would get priority, state-based actions are checked. [CR 704, "State-based Actions".] Given a game with two players and two abilities on the stack that deal lethal damage—one to you, one to your opponent—the first one to resolve will decide the game. After the second ability put onto the stack resolves, but before a player gains priority, the game will check life totals. The player at 0 life loses before the stack finishes resolving, and his or her opponent immediately wins the game [CR 104.2a, "Ending the Game".]

posted by Mikey-San at 1:20 AM on September 12, 2012

what have i done

posted by Mikey-San at 1:21 AM on September 12, 2012 [1 favorite]

Sorry, I wasn't clear. We set up a computation which can either a) trigger a fireball to our head b) trigger a fireball to our opponent's head, or c) not terminate. The game has a predetermined end, but we have no choice but to play out the (possibly infinitely long) game to find out who wins...

posted by Omission at 1:26 AM on September 12, 2012

Ah. Yeah, I misread you. That would be the greatest kitchen table deck of all time, hands down.

posted by Mikey-San at 1:36 AM on September 12, 2012

I think that token copies won't work with this construction, as it requires the cards that make up the "engine" to phase out and back in again. If I recall correctly, phased-out tokens are just gone forever.

posted by erdferkel at 1:53 AM on September 12, 2012

erdferkel you are correct, tokens cannot return once they leave the battlefield.

posted by Neale at 2:08 AM on September 12, 2012

Phased-out tokens do indeed cease to exist, though the reason isn't that they leave the battlefield. Phasing doesn't cause a zone change at all, actually, but there is a special clause in the phasing rules for tokens that causes tokens to cease to exist when their status becomes "phased out".

The whole section on phasing just reads like an apology for an ancient design mistake. One would hope they won't make such a mistake again under the "New World Order" design principles.

posted by Mikey-San at 2:22 AM on September 12, 2012

Another reason why this construction could be interesting: According to the rules of MtG, any loop of mandatory actions causes a draw:

*104.4b If a game that’s not using the limited range of influence option (including a two-player game) somehow enters a “loop” of mandatory actions, repeating a sequence of events with no way to stop, the game is a draw. Loops that contain an optional action don’t result in a draw.*

If one is able to simulate a universal Turing machine in Magic, the applicability of this rule is undecidable (meaning that there is no way to tell if the sequence of actions stops, or if it doesn't stop, either by entering a periodic sequence of configurations or an infinite sequence of distinct configurations).

So we don't even require the fireball to anybody's head; the draw would follow from this rule.

Sadly, this does not work with the construction that simulates the (2,3) machine, as this machine never stops. (One can see this from the transition table that is mentioned in the article.) On the other hand, a simulation of something like Rogozhin's (2,18) machine would do the trick.

posted by erdferkel at 2:33 AM on September 12, 2012

Hate to nitpick, but using Spellbinder doesn't completely remove player input: the use of "may" in the language permits its controller to break the sequence at will.

I'm still amazed this works at all, though.

posted by fifthrider at 2:34 AM on September 12, 2012 [1 favorite]

It isn't. There's a famous classroom demonstration of a Turing Machine with a roll of toilet paper and pebbles. This system of cards is just a way to make that more convoluted.

I would have found this OP to be more interesting if it was entitled "Magic the Gathering is equivalent to toilet paper and rocks."

posted by charlie don't surf at 4:50 AM on September 12, 2012

But saying that magic the gathering is equivalent to toilet paper and rocks misses the important distinction made earlier by Omission: *We could also simulate a Turing machine with the pieces from a Monopoly set, but not using the rules of Monopoly itself...*

ObXKCD 1 2

But this stupid argument reminds me of my "bucket of plastic sporks" argument: we all agree that a bucket of sporks is, at the quantum level, passing through an essentially limitless number of states (well in excess of 2^(10^23), say), with an enormous number of transitions per second. There is only a vanishingly small probability that any state will be repeated, so with high probability there is a mapping from bucket-states to successive states in the execution of an arbitrary computer program. If only we could *find this mapping* we could do away with all the computers ever, and just use the bucket of sporks instead. Of course, this argument is stupid, and a bucket of sporks is not a model of computation.

posted by jepler at 5:27 AM on September 12, 2012

Not the same thing, but a friend of mine implemented the Euclidean algorithm using Magic cards. Now, admittedly, everyone thought this was such a massive waste of time that we never let him explain how it worked. But 'amusing to someone' sort of approximates 'interesting'.

posted by hoyland at 6:00 AM on September 12, 2012 [1 favorite]
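For reference, the Euclidean algorithm hoyland mentions is about the simplest thing one could card-ify; in code it's just:

```python
def gcd(a, b):
    # replace (a, b) with (b, a mod b) until the remainder hits zero
    while b:
        a, b = b, a % b
    return a

gcd(1071, 462)  # → 21
```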

I find it interesting on two levels. First, it's quite possibly the first game whose rules are so complicated that they (unintentionally) embody the model of computation upon which all computers are based. That's as opposed to Minecraft, which intended for redstone to perform computational tasks, and which is run on a computer anyway, which is kind of cheating.

Second, a common critique of Magic is that the neverending drive to churn out more expansions has led to the game becoming untenably baroque and crufty. That the game is now* (most of the necessary cards are fairly recent) Turing complete is sort of a mathematical proof that the game is too complicated.

* Bearing in mind that the construction in the article might not actually work but that there is reasonable hope for a future version that does.

posted by jedicus at 6:22 AM on September 12, 2012 [1 favorite]

Your interpretation misses the point. As stated in the explanations, the author tried to construct a situation in Magic where a long sequence of automatic actions and forced choices is triggered.

Now there are two possibilities: Either this sequence terminates at some point (and the turn proceeds normally), or it does not (which would trigger rule 104.4b I mentioned above, and result in a draw).

If one simulates the right kind of Turing machine, there is no general way of deciding which of these holds, as this would require solving the halting problem. So, apart from conceding, one would have no other option than executing this sequence, hoping that it terminates or arrives at a recognizable loop, while having no guarantee that this would happen.

While this situation is quite pathological and probably has no relevance to "real" play, it shows that the behavior that emerges from the mechanics of Magic can be surprisingly hard to predict.

posted by erdferkel at 6:25 AM on September 12, 2012

ubiquity: *Ah, the Turing Trap: where everything is possible but nothing is of interest.*

<nitpick>Perlis's original epigram (#54 here) actually refers to Turing tar-pits, not the Turing trap.</nitpick>

posted by stebulus at 7:27 AM on September 12, 2012 [2 favorites]

To restate what a couple other folks have said, the difference is in how much work you can make the preexisting and ostensibly unrelated machinery of the game do toward that simulation. With sticks and pebbles you can indeed hand-simulate a UTM, but the sticks and pebbles aren't providing anything other than a bit of friction and mass unless you've carved and organized them in a

Anyway, Turing implementations are mostly interesting as a fun exercise in relating/organizing/hacking systems. M:tG being inclined toward Turing Completeness isn't important or anything, it's just neat. It's a thought experiment, a lark. It's neat to prove (or disprove!) the ability to make an isomorphism between apparently unrelated systems.

posted by cortex at 8:17 AM on September 12, 2012 [1 favorite]

The "special rules" are just the rules of M:tG.

That fact isn't always clear in the explanation (under "Turing complete" in the post), e.g., here:

*In fact, we use six copies of Teysa, along with a Mirror Gallery to allow her to coexist with herself. They're modified to read as follows:*

- *Phased-In Teysa A says "Whenever a red creature dies, make a green Ally."*
- *[etc]*

The modification here is not external to the game; it is made possible by the powers of certain other cards, which are mentioned just before the bit I've quoted, but whose role here is not explicitly pointed out.

posted by stebulus at 10:23 AM on September 12, 2012 [1 favorite]

I can't help but wonder where Conway's Life fits into this - a game whose rules are so

posted by Slyfen at 1:14 PM on September 12, 2012

jedicus: *It may also help to read "any computer" to mean "any existing computational device commonly known as a computer" rather than "any imaginable computational device."*

That's pretty much the problem with jepler's Wikipedia quote above: it does not define what it means by "real computer."

*Anything a real computer can compute, a Turing machine can also compute.* (wikipedia)

I don't believe it's assured that a Turing Machine can replicate the results of analog computers, for instance.

posted by IAmBroom at 1:15 PM on September 12, 2012

Or many digital ones. This is the problem, not only does the program need to be completely specified and finite, the computer also needs to be completely and accurately specified. This is easy on a simple hypothetical computer. This is very difficult in the real world. Let me give you a strange example.

The first computer I owned was a SOL-20 with an 8080A 2 MHz processor and a whopping huge 32k of RAM. The 8080A instruction set is simple enough to function as a Turing Machine, although with a finite (and small) amount of storage. It has an interface to store data on an audio cassette, and a port to drive the cassette, so theoretically, given an infinite supply of tape, it might be considered a Universal Turing Machine. But it isn't.

There was a very obscure but common problem with these computers: soft errors. Most of the chips were packaged in ceramic, including the RAM and CPU. Unfortunately, the ceramic contained an unusually high concentration of radioisotopes, so occasionally an atom would decay, sending a particle into the thin circuit layers and, more rarely still, changing the binary state of a memory cell. A soft error could also occur in CPU registers or any other digital component on the chip. This was extremely frustrating back in the primitive early days of microcomputers. We'd leave programs running for days, or even leave the computer idling, and occasionally it would crash for no reason, even on very simple algorithms that could be mathematically proven to have no bugs. It was especially frustrating for my professors, who built satellite instrumentation and had to write microprocessor programs that could keep running accurately for years and recover from soft errors, even ones that happened inside the CPU's registers rather than out in RAM, where ECC could correct them.

Even today, on vastly improved computer hardware, soft errors of this type still occur, and they are totally unpredictable. Terrestrial background radiation or cosmic rays might zap your chips in just the right way to change a bit at random. Even systems with ECC RAM (which are relatively uncommon) can have soft errors. These errors might occur extremely rarely, and even more rarely in a position to alter the results of a program, but the probability of an error is not zero.

So there is the problem. How can you completely and accurately specify the function of a computer that is subject to random errors arising from random particle decay? Even if we knew the laws of physics completely enough to predict with total accuracy when a specific atom would decay, we'd still have to model every atom in the computer that could affect the outcome. I suspect this would require modeling every atom in the universe that could ever affect the computer, which basically means all the matter in spacetime. And let's throw Gödel in there: even if we could calculate the atomic decay using a small, finite computer, what's to say that computer is computing accurately when it too might have soft errors? We'd need a computer to check those calculations. And another computer to check that computer. And so on.

posted by charlie don't surf at 4:55 PM on September 12, 2012 [1 favorite]
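charlie don't surf's soft-error scenario is easy to sketch: a single random bit flip silently invalidates a program that is provably correct on an idealized machine. (A toy Python model, not an emulation of any real chip — the `memory` array and `cosmic_ray` function are made up purely for illustration.)

```python
import random

# A toy "memory": 32 bytes, all storing the value 7.
memory = bytearray([7] * 32)

def cosmic_ray(mem, rng):
    """Flip one randomly chosen bit -- the kind of single-event
    upset a decaying isotope or cosmic ray causes in real hardware."""
    byte = rng.randrange(len(mem))
    bit = rng.randrange(8)
    mem[byte] ^= 1 << bit

rng = random.Random(42)
cosmic_ray(memory, rng)

# A provably correct summation now returns the wrong answer;
# the fault is in the hardware model, not in the algorithm.
expected = 7 * 32
actual = sum(memory)
print(expected, actual)
```

Whatever bit gets flipped, the sum changes, so no amount of proving the summation loop correct protects the result.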

Similarly, an 88-key electronic keyboard can't play all the songs that a piano can, if only because a piano string can break and affect the performance in a way the synthesizer can't simulate.

posted by jepler at 5:24 PM on September 12, 2012

I don't know whether charlie don't surf is trying to make the point that TMs don't perform I/O (a radio-isotope decay being an input to the computer), or that TMs are not proven to be equivalent to Probabilistic Turing Machines (a radio-isotope decay being a [presumably small] chance that the machine will take an alternate transition to the high-probability one that is normally specified in the state table).

It apparently is an open question whether TMs and PTMs are equivalent:

One of the central questions of complexity theory is whether randomness adds power; that is, is there a problem which can be solved in polynomial time by a probabilistic Turing machine but not a deterministic Turing machine? Or can deterministic Turing machines efficiently simulate all probabilistic Turing machines with at most a polynomial slowdown? (wikipedia, of course)

It seems sort of like this is equivalent to asking whether there's any adequate PRNG (pseudorandom number generator); if you had an adequate PRNG, then you could make a tape for a UTM that executed a PTM with the same class of overhead as it takes to execute a deterministic TM. On the other hand, if you have this tape for your UTM that executes PTMs but you can't determine which part of it is a PRNG, you could just write a PTM that is itself an adequate PRNG: in state 0, write 0 and halt with probability .5; write 1 and halt with probability .5.

posted by jepler at 5:56 PM on September 12, 2012
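jepler's construction — replacing each probabilistic choice with a read from a pre-generated (pseudo)random bit tape, so a deterministic machine can stand in for a PTM — can be sketched like this. (A toy model, not a faithful Turing machine encoding; the `step` function and the one-state machine are assumptions for illustration.)

```python
import random

def step(state, bit_tape):
    """One transition of jepler's one-state PTM ("in state 0, write 0
    and halt with probability .5; write 1 and halt with probability .5"),
    made deterministic: the coin flip is replaced by consuming the next
    bit from a supplied tape of (pseudo)random bits."""
    if state != 0:
        raise ValueError("this toy machine has only state 0")
    symbol = next(bit_tape)  # "with probability 1/2" becomes "read one tape bit"
    return ("halt", symbol)

# A PRNG-generated bit tape: with a fixed seed every run is identical,
# so the "probabilistic" machine has been simulated deterministically.
rng = random.Random(1234)
tape = iter(lambda: rng.getrandbits(1), None)
result = step(0, tape)
print(result)
```

The question jepler raises is precisely whether such a PRNG tape can be "adequate" — indistinguishable, for the machine's purposes, from true coin flips.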

If the analog computer works with arbitrary precision, this is true AFAIK. If you restrict the precision (which, according to my lay person's understanding of physics, is quite necessary), you don't get more than with TMs.

Please be careful, you are mixing up concepts. The part you quote refers to

This is already a far more refined way of simulation, and a wholly different question. In the definition of PTMs without resource bounds that I would consider canonical, PTMs and TMs are equivalent (see this paper).

It's the same as with deterministic and nondeterministic TMs: Unrestricted, both are equivalent (which is easily shown), but if you impose a polynomial time bound, you get P vs NP (which is a more complicated question than in the unrestricted case).

posted by erdferkel at 3:27 AM on September 13, 2012

Indeed. In fact, it seems that (even with an unbounded tape) it is a weaker model than a UTM.

You don't need to specify the function, you just need to know that it (or, to be fair, a reasonably precise approximation) exists. Considering this, it is not difficult to construct an infinite class of probabilistic TMs, one for each combination of possible probabilities (up to an obscenely large precision). One of these PTMs simulates your computer with arbitrarily high precision. You won't know which one, but that is not Church's or Turing's problem. (Each of these PTMs can then be converted into a quite large and quite slow deterministic Turing machine, and again, you won't know which one you actually want.)

The Church-Turing thesis does not claim that Turing machines can simulate other models of computation efficiently, or that there is a constructive way of generating the right Turing machine from another model, or that you can check reliably whether your Turing machine simulates correctly, or that it would be a good idea to use a Turing machine simulation of some other device. It only states that such a machine exists, which is a pretty powerful statement, but in many cases pretty useless.

Why would we need to invoke Gödel? You can observe this already with Turing -- an immediate consequence of Turing's paper on Turing machines is that these equivalence questions are undecidable, even if you compare Turing machines to Turing machines instead of your computer with errors.

When one proves that some concept is Turing complete, the interesting news (apart from the technical details of the construction in the proof) is not that this concept can simulate Turing machines, or that you could simulate it in a Turing machine, but that this concept has the same bunch of problems and hard questions that Turing machines have, which means that you get a bunch of negative results for free (e.g., see my comment on rule 104.4b above).

The computability definition of "it is possible" for Turing machines is very generous, which makes many positive results utterly impractical (which is one of the reasons why the field of complexity theory exists, where we have the same phenomenon on a smaller scale), but on the other hand, this makes the negative results even stronger ("even under a very generous definition of possible, this is impossible").

posted by erdferkel at 3:30 AM on September 13, 2012 [1 favorite]
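One concrete "negative result for free" that erdferkel alludes to is the undecidability of halting: once a system simulates Turing machines, Turing's diagonal argument applies to it unchanged. A minimal Python rendering of that argument (the decider `halts` is hypothetical — the whole point is that it cannot exist, so the stub here only raises an error):

```python
def halts(program_source, arg):
    """Hypothetical halting decider: would return True iff the given
    program, run on the given argument, eventually halts.  Turing's
    argument shows no correct implementation can exist; this stub is
    here only so the sketch is runnable."""
    raise NotImplementedError("no correct halting decider can exist")

def diagonal(program_source):
    # If halts() existed, diagonal run on its own source would halt
    # exactly when halts() says it doesn't -- either answer from
    # halts() is contradicted, so halts() cannot exist.
    if halts(program_source, program_source):
        while True:  # loop forever when halts() claims we halt
            pass
    return "halted"
```

So a proof that Magic: The Gathering is Turing complete immediately implies that questions like "does this board position ever resolve?" are, in general, undecidable.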

Thanks, erdferkel. "can do" and "can do efficiently" are clearly different cans of worms.

posted by jepler at 4:58 AM on September 13, 2012

This thread has been archived and is closed to new comments

posted by polywomp at 6:32 PM on September 11, 2012 [3 favorites]