Tic-Tac-Toe, 1952; Checkers, 1994; Chess, 1997; Go, 2016
January 27, 2016 12:29 PM

The game of go, seemingly the last hold-out for games at which the most skilled humans can beat the best computer programs, has perhaps fallen at the hands of a computer.

AlphaGo, a program built on a Google AI algorithm, has beaten Fan Hui, the reigning European champion, in an even game.

This is a striking achievement, as previous computer go programs could beat professional human players only with a handicap.
posted by klausman (80 comments total) 25 users marked this as a favorite
 
I just finished writing this post. Glad I previewed!

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses ‘value networks’ to evaluate board positions and ‘policy networks’ to select moves ... Our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away. Paper in this week's Nature; here's the descriptive blog post from Google Research.

Also, missing the SkyNet tag.
posted by RedOrGreen at 12:34 PM on January 27, 2016 [6 favorites]


Tic-Tac-Toe, 1952; Checkers, 1994; Chess, 1997; Go, 2017

What about Global Thermonuclear War?
posted by The Bellman at 12:36 PM on January 27, 2016 [13 favorites]


was this in 2017?
posted by So You're Saying These Are Pants? at 12:37 PM on January 27, 2016 [2 favorites]


What's next up the complexity ladder from here? Poker?
posted by qntm at 12:38 PM on January 27, 2016


Next will be auto racing and finance.
posted by psycho-alchemy at 12:38 PM on January 27, 2016


was this in 2017?

Awesome! I was so excited about the news I forgot the year.
posted by klausman at 12:39 PM on January 27, 2016


Suddenly general AI doesn’t look all that far away, given the advances that modern deep learning is bringing to the field. We know it’s possible - we ourselves are such systems after all. It looks like we might be one or two algorithmic breakthroughs away from something that can be said to think for itself as much as we do.
posted by pharm at 12:41 PM on January 27, 2016 [4 favorites]


While Kasparov losing a short match to Deep Blue in 1997 is a dramatic milestone, human GMs were beating computers for a few more years after that. Deep Blue was something of an expert system, with custom search hardware, designed to play chess against Kasparov, not software running on a general-purpose PC.

By about the mid 2000s though, it was indeed all over, and human vs. computer chess matches became of no interest.
posted by thelonius at 12:42 PM on January 27, 2016


Mod note: Gonna go BACK in tiiiiiiiiiime. Carry on.
posted by cortex (staff) at 12:43 PM on January 27, 2016 [5 favorites]


Suddenly general AI doesn’t look all that far away

It'll probably arrive around the same time as commercial fusion reactors.
posted by thelonius at 12:44 PM on January 27, 2016 [14 favorites]


What's next up the complexity ladder from here? Poker?

IIRC, heads-up Texas Hold'em has been solved (or whatever term you would use for a game that has an element of chance). How are computers doing at Bridge?
posted by It's Never Lurgi at 12:44 PM on January 27, 2016 [1 favorite]


The solitaire game Join Five (also called Morpion) was specifically designed to be doable for humans, and very difficult to write an AI for.
posted by miyabo at 12:47 PM on January 27, 2016 [1 favorite]


Big deal. Show me a computer that can play Here I Stand without breaking down in digitized tears and I'll be impressed.
posted by Etrigan at 12:47 PM on January 27, 2016 [2 favorites]


Suddenly general AI doesn’t look all that far away, given the advances that modern deep learning is bringing to the field.

This is not a post from 1986, 1996, 2006, 2026...
posted by Sangermaine at 12:48 PM on January 27, 2016 [4 favorites]


What happens when a computer solves Call of Cthulhu? What, I ask you!!!
posted by thatwhichfalls at 12:48 PM on January 27, 2016 [1 favorite]


It'll probably arrive around the same time as commercial fusion reactors.

Which it will probably have designed.
posted by Devonian at 12:49 PM on January 27, 2016 [4 favorites]


GMs were beating computers for a few more years after that.

I love the idea of a human GM and a bunch of giant blinking monolith mainframes gathered around a kitchen table playing D&D.
posted by Sangermaine at 12:50 PM on January 27, 2016 [22 favorites]


What's next up the complexity ladder from here? Poker?
Calvinball
posted by This is Why We Can't Have Nice Things at 12:51 PM on January 27, 2016 [12 favorites]


I can still trounce an AI in 52 Pickup.
posted by zippy at 12:53 PM on January 27, 2016 [2 favorites]


What happens when a computer solves Call of Cthulhu? What, I ask you!!!

Case Nightmare Green
posted by fings at 12:55 PM on January 27, 2016 [13 favorites]


>> What's next up the complexity ladder from here? Poker?
> Calvinball


Yes, exactly. 'Go' was it, the whole enchilada, the game where intuition and aesthetics were essential to the play, and no computer was going to be able to brute force Go like Chess. Well, they didn't brute-force it - instead, this program is basically analyzing a position like a human would.
posted by RedOrGreen at 12:56 PM on January 27, 2016 [3 favorites]


What's next up the complexity ladder from here? Poker?

Mornington Crescent.
posted by Mr. Bad Example at 1:00 PM on January 27, 2016 [14 favorites]


The game of go, seemingly the last hold-out for games at which the most skilled humans can beat the best computer programs, has perhaps fallen at the hands of a computer.

There's still one more.
posted by Gelatin at 1:01 PM on January 27, 2016


What's next up the complexity ladder from here? Poker?

Double Cranko
posted by octothorpe at 1:02 PM on January 27, 2016


I still feel pretty good about my chances versus SkyNet in 1000 Blank White Cards.

Or I did until I saw the Deepak Chopra bot. Just add point numbers and seascapes to those and I'd be totally outmaneuvered. "Each of us is reborn in universal force fields. +300 points"....
posted by miles per flower at 1:04 PM on January 27, 2016 [2 favorites]


I love the idea of a human GM and a bunch of giant blinking monolith mainframes gathered around a kitchen table playing D&D.

DM: You attempt to open the pod bay doors, HAL.

HAL: (rolls a 1) I'm sorry, Dave, I'm afraid I can't do that.
posted by Gelatin at 1:05 PM on January 27, 2016 [7 favorites]


They haven't "solved" go any more than chess has been solved (i.e. come up with an always-optimal strategy). But they've annihilated the competition. Just the evaluation function *alone* with no lookahead beats most other go programs. And it apparently has no prior knowledge of the game, it's just a very well-trained deep neural network. Holy hell.

I'll look forward to the upcoming match with Lee Sedol to see if we will welcome our new computer go-playing overlords.
posted by RobotVoodooPower at 1:06 PM on January 27, 2016 [12 favorites]


I bet robots would fucking rule at Hungry Hungry Hippos.
posted by The Card Cheat at 1:07 PM on January 27, 2016 [4 favorites]


I'd be curious to know where card games like Magic or Hearthstone lie in terms of complexity for AI. Even if we just gave the AI one Hearthstone deck, it'd still have to identify not just the board state, but also attempt to determine what their opponent's deck might be.

Not that I'm looking forward to my favorite card games being overrun by bots, or even deckbuilding being done by AI, but I'm very curious!
posted by explosion at 1:07 PM on January 27, 2016


The emergence of strongish AI seems like a given at this point in history, but I'm not sure if this is a result that changes that evaluation in any special way. I mean, winning at go isn't noticeably different in kind from winning at chess, is it? It's a constrained system with a handful of well-defined rules.

I suppose another way to put this would be that if you believe strong AI is forever in the realm of the mythical, I'm not sure this changes anything that the last n "computers are better than humans at $game" outcomes didn't already cover. There's some kind of magical barrier between human cognition and the rest of computation or there's not, but "human now less efficient at finding a win state for yet another ruleset" doesn't exactly seem like it answers that question more now than it did 20 years ago.
posted by brennen at 1:08 PM on January 27, 2016 [9 favorites]


I could totally beat this thing at beer pong, though.
posted by dazed_one at 1:10 PM on January 27, 2016


And it apparently has no prior knowledge of the game, it's just a very well-trained deep neural network. Holy hell.

If true, that's huge. No brute forcing, no cramming game positions, but a beautiful and elegant solution that has big, significant implications for other domains.
posted by leotrotsky at 1:15 PM on January 27, 2016 [3 favorites]



The emergence of strongish AI seems like a given at this point in history, but I'm not sure if this is a result that changes that evaluation in any special way. I mean, winning at go isn't noticeably different in kind from winning at chess, is it? It's a constrained system with a handful of well-defined rules.

I suppose another way to put this would be that if you believe strong AI is forever in the realm of the mythical, I'm not sure this changes anything that the last n "computers are better than humans at $game" outcomes didn't already cover. There's some kind of magical barrier between human cognition and the rest of computation or there's not, but "human now less efficient at finding a win state for yet another ruleset" doesn't exactly seem like it answers that question more now than it did 20 years ago.


The 'how' here is different, and potentially a very big deal.
posted by leotrotsky at 1:16 PM on January 27, 2016 [4 favorites]


I would imagine even a pretty simple script that was able to figure out pot odds and count visible cards would do better at Hold 'em than most people sitting around a casino table at 6:00 PM on a weekday. Beating pros on a regular basis would be a lot harder... you do have a lot of hand-history training data for the pros, though.
posted by codacorolla at 1:16 PM on January 27, 2016
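Such a script is indeed simple. A minimal sketch, using the folk "rule of 2 and 4" to turn counted outs into rough equity (the helpers and thresholds here are illustrative, not from any real poker bot):

```python
def pot_odds(pot: float, to_call: float) -> float:
    """Fraction of the final pot you must contribute to call."""
    return to_call / (pot + to_call)

def equity_from_outs(outs: int, cards_to_come: int = 1) -> float:
    """Rough 'rule of 2 and 4' estimate: ~2% per out per card to come,
    ~4% per out with two cards to come."""
    return min(1.0, outs * 0.02 * (2 if cards_to_come == 2 else 1))

def should_call(pot: float, to_call: float, outs: int,
                cards_to_come: int = 1) -> bool:
    # Call when estimated equity beats the price being offered.
    return equity_from_outs(outs, cards_to_come) > pot_odds(pot, to_call)
```

So a flush draw (9 outs) happily calls a small bet into a big pot, and folds to a pot-sized bet. The pros, of course, are playing the player, not just the price.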


What's next up the complexity ladder from here? Poker?

Fizzbin.
posted by The Bellman at 1:22 PM on January 27, 2016 [3 favorites]


Fizzbin.

Except on Tuesday.
posted by Gelatin at 1:23 PM on January 27, 2016 [2 favorites]


From the project blog:

We trained the neural networks on 30 million moves from games played by human experts, until it could predict the human move 57 percent of the time

It's still a big deal. I've had to go back to worrying since this news.
posted by Strange_Robinson at 1:28 PM on January 27, 2016


This is not comparable to the strength of computers at chess in 1997. It's more like 1988, the first time a computer won against a grandmaster.

The link for Fan Hui informs us that he holds a rating of 2-dan professional. Although the gradations of pro ratings are subtle and largely mysterious to me, he would not be expected to win against the very strongest professionals, the ratings for whom apparently go up to 9-dan nowadays.
posted by sfenders at 1:30 PM on January 27, 2016 [9 favorites]


Attn go nerds: I found the SGF files here. Apparently AlphaGo doesn't know how to finish playing - in the only game that was decided on points (the rest were by resignation), they had to manually stop the game after the robot started playing weird throwaway nonsense.

for the uninitiated: unlike chess, where the game is obviously over the moment a king is checkmated, go only ends by resignation or when both players agree they can no longer make productive moves, at which point they both pass and tally up the points.

I've uploaded the five games to Eidogo in case anyone wants to take a gander:
Game 1 (Monday)
Game 2 (Tuesday)
Game 3 (Wednesday)
Game 4 (Thursday)
Game 5 (Friday)
posted by theodolite at 1:34 PM on January 27, 2016 [8 favorites]
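Mechanically, the "tally up the points" step is a flood fill once dead stones are settled: each empty region bordered by stones of only one colour counts for that colour. A toy area-count sketch (it ignores dead-stone resolution and komi, which are the actually hard parts, and the one AlphaGo apparently stumbled on):

```python
def score_territory(board):
    """board: list of equal-length strings, '.' empty, 'B'/'W' stones.
    Returns (black_territory, white_territory)."""
    rows, cols = len(board), len(board[0])
    seen = set()
    terr = {"B": 0, "W": 0}
    for r in range(rows):
        for c in range(cols):
            if board[r][c] != "." or (r, c) in seen:
                continue
            # Flood-fill this empty region, noting bordering colours.
            stack, size, borders = [(r, c)], 0, set()
            seen.add((r, c))
            while stack:
                y, x = stack.pop()
                size += 1
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols:
                        if board[ny][nx] == ".":
                            if (ny, nx) not in seen:
                                seen.add((ny, nx))
                                stack.append((ny, nx))
                        else:
                            borders.add(board[ny][nx])
            if len(borders) == 1:  # touches only one colour: territory
                terr[borders.pop()] += size
    return terr["B"], terr["W"]
```

Regions touching both colours (i.e., contested or neutral points) count for nobody, which is why both players keep playing until no productive moves remain.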


I could totally beat this thing at beer pong, though.

As xkcd tells us, our robot overlords can already beat us at beer pong.
posted by pharm at 1:35 PM on January 27, 2016


The full paper.
Gorgeous news (though I am slightly disappointed that I can't find any indication that the system behind it was developed in Go). It is a very exciting time to be following this, and the technique used to train AlphaGo should also be applicable for developing stronger meta-gaming systems.
posted by bouvin at 1:39 PM on January 27, 2016 [3 favorites]


What's next up the complexity ladder from here?

Cards Against Humanity.

Computers beat us at Mornington Crescent years ago the first time someone populated a Markov Chain script with a rude trainspotter's dictionary.

Robotic or human/robot remote piloted all-electric Formula racing sounds exciting in a way. You could take risks that you wouldn't take with live people in the cockpit, and the physical designs could radically change without people in them.

But this also obviously runs against a core tenet of Formula racing, the human element. The races themselves would likely be boring for humans to watch, but maybe not given a formula and ruleset that encouraged diversity in approach. It's not like F1 isn't fairly robotic even with humans behind the wheel.

Or, you know, allowed for full contact robot car wars. Imagine 300 mile per hour open wheeled electric downforce sleds allowed to nerf each other or even drive over or under or through.
posted by loquacious at 1:39 PM on January 27, 2016 [4 favorites]


When a computer can beat me at Calvinball, then I'll worry.
posted by SansPoint at 1:41 PM on January 27, 2016


According to the full paper, there were also five "informal" games, where I guess everyone wore flip-flops or something. The only difference seems to be that the informal games allowed much less time for each move - three byoyomi periods of 30 seconds, which is almost blitz speed. Fan Hui won two of those, which is interesting.
posted by theodolite at 2:02 PM on January 27, 2016


"human now less efficient at finding a win state for yet another ruleset" doesn't exactly seem like it answers that question more now than it did 20 years ago.

I think this is a real achievement, but... yeah. In each case, the winning machine is really lots of people analyzing a problem domain, coming up with strategies that are suited to automation on state-of-the-art software and/or hardware, and then building a system that implements them.

What some people seem to think that proves is that we'll someday build a general artificial intelligence.

What it seems to indicate to me is that there are probably very few domains (particularly domains with well-defined rules and predictable conditions) for which humans *can't* design an automated system that will do as well or better than a well-trained human given a decade or few of research and effort.

But there are enough Go-sized domains out there -- each different enough from one another that an outstanding performance in one area might be sub-human in another -- that I still don't see the straight line to strong AI that other people seem to see. And there are a lot of domains with poorly defined rules and difficult to predict conditions.

Dunno. Maybe it'll turn out there are techniques that work well enough across different domains that we'll see it, though. Hope we figure out an economic system that doesn't require most people to sell labor to be considered worth anything to anyone before then.
posted by wildblueyonder at 2:07 PM on January 27, 2016 [3 favorites]


I think this is a real achievement, but... yeah. In each case, the winning machine is really lots of people analyzing a problem domain, coming up with strategies that are suited to automation on state-of-the-art software and/or hardware, and then building a system that implements them.

Not this time: This system is built out of neural nets trained on real world go games & then on games played against itself. It’s not like the chess codes or the earlier attempts at go which try and codify elements of the game directly - it’s much closer to the way our brains codify knowledge about games.
posted by pharm at 2:16 PM on January 27, 2016 [2 favorites]


Hope we figure out an economic system that doesn't require most people to sell labor to be considered worth anything to anyone before then.

lol.
posted by brennen at 2:17 PM on January 27, 2016


What a phenomenal result! I mean they still have a ways to go before beating the very best human players, but it's such a leap over previous AI. Really interesting to marry some neural networks to traditional min-max strategies, is that a common technique? I was also surprised they were able to meaningfully train the thing with reinforcement learning. My understanding is that's basically tantamount to learning to play by only playing yourself. I'm surprised that leads to a global optimum. But I didn't read the full paper, so maybe I'm wrong.

The other commonly played game like Go or Chess that I believe is at the forefront of research is Shogi, aka Japanese Chess. Its piece-dropping mechanic means it has a branching factor way bigger than chess. I'm not up on the current state of research there.
posted by Nelson at 2:24 PM on January 27, 2016 [1 favorite]


"Strong" and "general" AI are not well-defined concepts, so this can't be a move toward either. This is actually because we don't have a good enough definition of human intelligence to be able to quantify what strong AI would mean.

As far as a move forward in AI, it's hard to tell. It sounds like this one worked roughly like this: existing neural networks (which are good at pattern matching) were fed thousands and thousands of Go games, and used that information to play games and work out strategies that could win. These things are more proof of work done than advances in themselves.

These are programs where interconnections similar to human neurons are simulated in a traditional computer. So they "teach" the network what to look for and because of how its nodes are interconnected it becomes "better" at recognizing patterns. That's pretty cool, right? And researchers and big companies alike keep pushing this technology forward, and it has a bunch of uses, both obvious and not.

But it's never going to start "thinking" like a person does, and even a more refined network won't do that. It's just a guessing machine, albeit a really, really good one. Creating a better neural network is not going to change that.
posted by graymouser at 2:43 PM on January 27, 2016 [2 favorites]


What a phenomenal result! I mean they still have a ways to go before beating the very best human players, but it's such a leap over previous AI. Really interesting to marry some neural networks to traditional min-max strategies, is that a common technique?

I don’t know, but the paper implies that it’s novel:
“In this work we have developed a Go program, based on a combination of deep neural networks and tree search, that plays at the level of the strongest human players, thereby achieving one of artificial intelligence’s “grand challenges” 32–34. We have developed, for the first time, effective move selection and position evaluation functions for Go, based on deep neural networks that are trained by a novel combination of supervised and reinforcement learning. We have introduced a new search algorithm that successfully combines neural network evaluations with Monte-Carlo rollouts. Our program AlphaGo integrates these components together, at scale, in a high-performance tree search engine.”
It’s certainly novel to Go playing programs, but then no-one had effective neural net position analysers for Go before.
posted by pharm at 2:44 PM on January 27, 2016
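For the curious, the combination the paper describes is usually implemented as a PUCT-style selection rule: at each tree node, pick the move maximizing mean value Q plus an exploration bonus driven by the policy network's prior P and shrinking with visit count N. A simplified sketch (the constant and the bookkeeping are illustrative, not the paper's exact formulation):

```python
import math

def puct_select(children, c_puct=1.0):
    """Pick which child move of a search-tree node to explore next.
    children: list of dicts with prior probability P (from a policy net),
    visit count N, and total simulated value W (from rollouts/value net)."""
    total_n = sum(ch["N"] for ch in children)
    def score(ch):
        q = ch["W"] / ch["N"] if ch["N"] else 0.0             # exploitation
        u = c_puct * ch["P"] * math.sqrt(total_n) / (1 + ch["N"])  # exploration
        return q + u
    return max(range(len(children)), key=lambda i: score(children[i]))
```

A rarely visited move with a high prior gets a big bonus and is tried; once visits pile up without good results, the bonus shrinks and the search settles on the move with the best observed value.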


Snark aside, this is still a big fucking deal. Prior to today the conventional wisdom was that it was doubtful that a computer would ever be able to beat even an intermediate level human player. Assuming this proves out(*), that's a massive change in the state of the art.

They say they aren't sure whether they're going to commercialize it. The interesting thing there is that the solution is really just a big matrix of numbers. To run the result you wouldn't need a massive supercomputer like with Deep Blue or Watson. The hard work is in the training, but the result should be rather compact. You could probably run it on a Raspberry Pi or an iPhone. If you want to talk about scary SkyNet shit, this matrix of coefficients is an impenetrable black box: nobody can really explain how it works or how it actually picks moves, just that it's really good at Go. It's probably the closest imaginable thing to an alien brain. (Although calling it a brain is probably not a good idea, because it can't do anything but this very specific singular task.)

(*) As a general principle, until it beats the world's best player you have to always account for an extremely small chance that it's a hoax and they just had the world's best player standing off in another room dictating moves.
posted by Rhomboid at 2:47 PM on January 27, 2016 [3 favorites]


.
posted by lastobelus at 2:54 PM on January 27, 2016 [1 favorite]


Dunno. Maybe it'll turn out there are techniques that work well enough across different domains that we'll see it, though.

There's no particular specialized knowledge used to build this system. I assume some of the researchers play Go, but they didn't need to really know the game to come up with this. All the parts are "off-the-shelf", so to speak. In other words there are not any new concepts here that aren't seen in the nets doing image classification or whatever else.

Also on preview: the value and policy functions alone are just big matrices and could probably give you good results, but the actual system uses those to guide Monte Carlo tree search, which still probably requires a lot of horsepower.
posted by vogon_poet at 2:54 PM on January 27, 2016 [1 favorite]


According to Mashable, for the Fan Hui match AlphaGo was run on "hundreds of CPUs".

Texas Hold'em is only (weakly) solved for heads-up limit. "Solving"* full-table no-limit would be some number of orders of magnitude more difficult.

*Where solving means prevailing against the machine would not be possible in a match of unbounded length.
posted by lastobelus at 3:09 PM on January 27, 2016


1202 CPUs and 176 GPUs. The iPhone version may be slightly less strong.
posted by sfenders at 3:11 PM on January 27, 2016 [5 favorites]


I guess I missed the part where they said it was a hybrid solution and assumed it was a pure neural net.
posted by Rhomboid at 3:15 PM on January 27, 2016


There's a single CPU version too. The Google Research Blog entry has more detail on it than the links in the FPP. In particular note the chart of rankings for "AlphaGo" vs "Distributed AlphaGo". The plain AlphaGo is on a single CPU and is estimated 2 dan. The distributed AlphaGo is estimated 5 dan.
posted by Nelson at 3:41 PM on January 27, 2016


Note that (per the paper) the non-distributed version was still run on 48 CPUs and 8 GPUs. A few steps up from a phone, sure, but not much in terms of orders of magnitude.

(I work at Google, but on something completely unrelated. This was an interesting surprise today.)
posted by ethand at 4:20 PM on January 27, 2016 [1 favorite]




Actually, now that I've read the paper it turns out there is a little bit of domain knowledge programmed in the rollout policy in the form of local patterns (a common computer go heuristic) but not nearly as much as other go engines.
posted by RobotVoodooPower at 5:56 PM on January 27, 2016 [1 favorite]


I'll start worrying when a computer can consistently beat a human at Candy Land.
posted by rlk at 6:36 PM on January 27, 2016 [2 favorites]


Actually, now that I've read the paper it turns out there is a little bit of domain knowledge programmed in the rollout policy in the form of local patterns (a common computer go heuristic) but not nearly as much as other go engines.

Good catch. Although it looks like they were at least able to beat previous performance using just the raw board as input.
posted by vogon_poet at 7:20 PM on January 27, 2016


loquacious: “Robotic or human/robot remote piloted all-electric Formula racing sounds exciting in a way.”
“Roborace designs ready soon,” Eurosport, 25 January 2016
The design of the first driverless car to be used in the new Formula E support series Roborace could be revealed as early as next month.
posted by ob1quixote at 7:34 PM on January 27, 2016


"My computer beat me at chess...so I beat it at kickboxing."

- Demetri Martin
posted by hearthpig at 7:53 PM on January 27, 2016 [5 favorites]


There are also a bunch of go-specific features added to the neural net input vector for each intersection like number of liberties, captures, ladders, legality of move, etc.
posted by RobotVoodooPower at 7:57 PM on January 27, 2016 [1 favorite]


The 1997 Kasparov match highlighted a huge advantage of computers in matches*: they have no history, no psychology, and no sense of shame. Kasparov resigned in a position where he could have made a draw in (iirc) game 2. People on the Internet figured this out while he was sleeping. The draw was a little tricky but certainly in the realm of things that Kasparov normally sees at once. In a 6 game match, his self-confidence destroyed after this error, he had a very unpleasant and difficult task in trying to recover. By the last game he was hopeless; he tried to swindle the thing into making a bad sacrifice, and it crushed him instead. He proceeded to embarrass himself with wild accusations of the IBM team cheating, since he thought Deep Blue's play was too human-like.

Beat a computer in a humiliating display of chess domination, and it's fine for the next match game. It will play at 100%. Its team will have patched up the bad line in its opening book, or fixed the error in the program's evaluation that led it to play so weakly.

*Match: a series of chess games, for example, one played to decide a championship; not a fancy word for a single game of chess, like people seem to love saying.
posted by thelonius at 8:33 PM on January 27, 2016


A few steps up from a phone, sure, but not much in terms of orders of magnitude.

Considering that each one of those CPUs is going to be better than the one in the phone, and then the GPU, it's got to be at least two orders of magnitude. That's a fairly big difference. Peering at figure 4, thinking about how other go programs' KGS ranks compare to how they play on my computer, and extrapolating wildly, I could imagine this approach reaching 3-5 kyu on a phone. Somewhere better than amateur shodan level on my not-the-best home PC. Still a big improvement. It'd be good fun for beginners like me to practice against, considering it plays more human-like moves than the others.
posted by sfenders at 8:58 PM on January 27, 2016
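Strength comparisons like the paper's figure 4 are on an Elo-like scale, so extrapolations of this kind rest on the standard Elo expected-score formula:

```python
def elo_expected(rating_a: float, rating_b: float) -> float:
    """Expected score (roughly, win probability) of A against B under the
    standard Elo model: a 400-point gap corresponds to 10:1 odds."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))
```

Equal ratings give 0.5, and a 400-point edge gives about 0.91 - which is why even "slightly less strong" versions lose most games to the full system.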


they have no history, no psychology, and no sense of shame.

Lee Sedol at Sensei's Library, AlphaGo's next opponent. "...he has chosen to live and die with showdown battles ... can be emotional and displays the lack of objectivity from time to time ... explosive, creative, daring, powerful, and flamboyant "

Should be an interesting match.
posted by sfenders at 9:01 PM on January 27, 2016 [2 favorites]


The emergence of strongish AI seems like a given at this point in history

I wish I had any such faith, but I think it's very likely we'll just keep accumulating task-specific solverbots for the foreseeable future.

Which, to be clear, is great! There are hundreds of tasks I would love to relieve humans of the error-prone drudgery of. But it's hardly the "hard takeoff" stuff of the SF novels.

There remain just as many volumes of unknown-unknown, what-is-even-conceptually-going-on questions as there have been for decades, when it comes to more-general models of intelligence. The neural net folks are finally getting their tools to perform well, but they're performing on just as circumscribed a set of tasks as they always have. Just doing better at them.
posted by ead at 11:47 PM on January 27, 2016 [7 favorites]


Probably people mean different things by "strong" AI, too.
posted by thelonius at 5:40 AM on January 28, 2016


Probably people mean different things by "strong" AI, too.

Yeah, it's kind of a mix of "robots that can think" from science fiction and "all-powerful singularity machines" from Kurzweil-style futurism. But there's no definition of strong AI that can be tested, so actual AI researchers just go on creating better rational agents and not worrying about it.
posted by graymouser at 5:45 AM on January 28, 2016


Weak AI: The current field of chatbots.
Narrow AI: Can play Go.
Strong AI: Reads metafilter, decides Go looks interesting, and learns to play.
posted by sfenders at 6:36 AM on January 28, 2016 [1 favorite]


People have been playing Mornington Crescent with computer assistance since 2008!
posted by Stark at 6:42 AM on January 28, 2016


Prior to today the conventional wisdom was that it was doubtful that a computer would ever be able to beat even an intermediate level human player.
Go AIs were already playing at the level of strong amateurs (for example, Zen is currently 6d on KGS). But there is indeed a big gap between that level and professional play.
posted by dfan at 7:07 AM on January 28, 2016


my friend, the nationally ranked Go player; YEAH WELL CALL ME WHEN COMPUTERS ENJOY PLAYING GO AND NOT JUST CALCULATING NUCLEAR MISSILE TRAJECTORIES
posted by The Whelk at 9:58 AM on January 28, 2016 [3 favorites]


I'd be curious to know where card games like Magic or Hearthstone lie in terms of complexity for AI. Even if we just gave the AI one Hearthstone deck, it'd still have to identify not just the board state, but also attempt to determine what their opponent's deck might be.

Not that I'm looking forward to my favorite card games being overrun by bots, or even deckbuilding being done by AI, but I'm very curious!


I suspect these games are 99 percent domain knowledge, 1 percent deep thought. The biggest challenge is formalizing the game state and rules. In the early days, we had Instants and Interrupts; at some point Mana Sources were added. I'm pretty sure there was a PC game that included AI, and that process led to introducing the stack in 5th ed.

But even after all these years, there's some weirdness. If I play a spell, and you counter it with Bone to Ash, can I play Deflection to change the counterspell's target to itself? And if I can, do you draw a card or not? There are several of these corner cases in MtG you still have to deal with when designing your AI.

But assuming the rules can be modeled, and your model is correct, playing the game is fairly simple. Deckbuilding is interesting, but it wouldn't be too hard to use some sort of genetic algorithm to produce a strong set of decks for a given AI. Metagame analysis could be interesting, but can sometimes come down to rock-paper-scissors.
posted by pwnguin at 1:57 PM on January 31, 2016
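A deck-evolving genetic algorithm of the sort suggested is a short loop once you have a fitness function (in practice, win rate over simulated games; everything below is an illustrative toy with the fitness supplied at the call site, and duplicates within a deck are allowed):

```python
import random

def evolve_decks(card_pool, deck_size, fitness, generations=50,
                 pop_size=20, rng=None):
    """Toy GA over decks (lists of cards): keep the fitter half each
    generation, breed children by one-point crossover, occasionally
    mutate a card to a random one from the pool."""
    rng = rng or random.Random()
    pop = [rng.sample(card_pool, deck_size) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, deck_size)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                  # mutation
                child[rng.randrange(deck_size)] = rng.choice(card_pool)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

With a numeric card pool and fitness = sum, the population drifts toward high-value decks within a few dozen generations; a real deckbuilder would plug in match simulations against a field of opponents instead, which is where the metagame rock-paper-scissors creeps back in.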


I'm pretty sure there was a PC game that included AI, and that process led to introducing the stack in 5th ed.

The stack may not have been called such, but from way earlier than 5th ed. the sequence of multiple faster-than-sorcery/summon spells being played had already been conceived of as occurring last-in-first-out. This was complicated by interrupts, but the general idea held.

I do agree that playing would be pretty simple. Metagame analysis would be interesting given more than one equally competent AI. Presumably, competent AI would be able to detect degenerate combos in need of nerfing via judge's ruling before humans would be, so I can see wacky situations developing between AI of the form: "HAL knows that Watson knows that comboing card X plus Y is totally broken and it also knows that card Z is an effective countermeasure, but it also knows that Watson knows that it knows that" etc. Like even if this is the only combo found strong enough to be worth considering in this manner, simply picking whether to mainboard the combo and sideboard the counter or vice versa is not easily decided IMO.
posted by juv3nal at 11:15 PM on January 31, 2016


The Dominion AI Provincial is apparently pretty good against top players, but there isn't the entrenched metagame for Dominion that there is for Go, Chess, or even Magic. One notable thing it can do is predict how valuable a card is. In the article I linked, they can reset the price of various cards and figure out whether Wharf, for instance, would still be worth a buy if it cost 6. It basically does this by iterating millions of games with various rulesets.

I wish that every pay version of online Dominion wasn't such trash (or that the free version hadn't been shut down).
posted by codacorolla at 6:55 AM on February 1, 2016 [1 favorite]




Can we make an AI that designs games that humans think are interesting to play?
posted by miyabo at 10:36 AM on February 3, 2016 [1 favorite]



