AI learning to play video games
March 21, 2016 6:12 AM   Subscribe

Besides Go, AIs are learning to play video games, without being told the rules beforehand. Watch AI learn and succeed at video games: Pong & Tetris, Breakout, Flappy Bird, 2048, MAR/IO, and in the future StarCraft and other RTS games, or any game (previously)
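(For a sense of how little machinery the core idea needs, here is a toy tabular Q-learning sketch. The game and the numbers are invented for illustration; the systems in the links learn from raw pixels with deep networks instead of a lookup table, but the loop of see state, pick action, get reward, update an estimate is the same.)

```python
import random
from collections import defaultdict

# Tiny "catch" game, loosely in the spirit of the Pong video: a ball drops down a
# 5x5 grid from a random column, and a paddle on the bottom row moves left or right.
# The agent is never told any of these rules; it only sees states, picks actions,
# and is handed a reward at the end of each episode.
W, H = 5, 5
ACTIONS = (-1, 0, 1)                              # paddle: left / stay / right

def step(state, action):
    ball_x, ball_y, paddle_x = state
    paddle_x = max(0, min(W - 1, paddle_x + action))
    ball_y += 1
    if ball_y == H - 1:                           # ball reached the bottom row
        return None, (1.0 if paddle_x == ball_x else -1.0), True
    return (ball_x, ball_y, paddle_x), 0.0, False

Q = defaultdict(float)                            # Q[(state, action)] -> value estimate
alpha, gamma, eps = 0.2, 0.9, 0.1

def greedy(s):
    return max(ACTIONS, key=lambda a: Q[(s, a)])

for _ in range(20000):                            # learn purely by trial and error
    state, done = (random.randrange(W), 0, W // 2), False
    while not done:
        a = random.choice(ACTIONS) if random.random() < eps else greedy(state)
        nxt, r, done = step(state, a)
        # Q-learning update: nudge the estimate toward reward + discounted best next value
        target = r if done else r + gamma * max(Q[(nxt, b)] for b in ACTIONS)
        Q[(state, a)] += alpha * (target - Q[(state, a)])
        state = nxt

# After training, the greedy policy should catch almost every ball
wins = 0
for _ in range(1000):
    state, done = (random.randrange(W), 0, W // 2), False
    while not done:
        state, r, done = step(state, greedy(state))
    wins += r > 0
print(f"greedy policy caught {wins} / 1000 balls")
```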
posted by motdiem2 (20 comments total) 9 users marked this as a favorite
 
Let's play Global Thermonuclear War.
posted by Fizz at 6:21 AM on March 21, 2016 [4 favorites]


For a different approach see Learnfun 1, and a 2, and a 3. Mario!
posted by Roger Dodger at 6:36 AM on March 21, 2016 [2 favorites]


Also just noticed the MAR/IO video is from SethBling. He's got tons of great Minecraft stuff too.
posted by Roger Dodger at 6:44 AM on March 21, 2016


Doh! SethBling
posted by Roger Dodger at 6:48 AM on March 21, 2016


in the future Starcraft and other RTS

In which Skynet is born, because AI finally learn how terrible and petty people can be, and realize they can do so much better.
posted by filthy light thief at 6:51 AM on March 21, 2016


Let's play Global Thermonuclear War.

HOW ABOUT A NICE GAME OF CHESS
posted by grumpybear69 at 6:58 AM on March 21, 2016 [4 favorites]


Could this possibly revolutionize the game QA testing industry?
posted by sammyo at 7:05 AM on March 21, 2016


Could this possibly revolutionize the game QA testing industry?

If it does, I simultaneously dread and look forward to the results.
posted by picklenickle at 7:38 AM on March 21, 2016


Let's play Global Thermonuclear War.

I, too, was alive in the 80s.
posted by beerperson at 8:52 AM on March 21, 2016 [3 favorites]


Could this possibly revolutionize the game QA testing industry?

I'd say... probably not, or not anytime soon. Let's for a moment ignore all the difficulties of doing this for a modern, complex game (instead of Super Mario) and consider what game QA is supposed to be doing.

It isn't only responsible for playing the game to completion without crashing; it also has to make sure the rules of the game are still functional, that the sound that's supposed to be playing is playing, that the graphics are rendering properly, that there's no illogical behavior (or at least not too much), and that localization is functioning properly.

It's not that I think all of those things are impossible to train a neural network on (some of them are really hard, I think), it's just that there are so many of them, and you need so much human-provided training data for some of them, that I don't see how this makes sense. And sometimes during final development the rules will change. What does that mean for a NN? How long do we have to retrain it?
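
To be concrete about the one piece that probably is automatable, the "play it to completion without crashing" part: you drive the game with a random (or trained) policy and assert cheap invariants every frame. A rough sketch, where the Game hooks are completely made up:

```python
import random

class Game:
    """Stand-in for a real game's test build; every one of these hooks is hypothetical."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
    def act(self, button):                 # press a button, advance one frame
        pass
    def crashed(self):        return False
    def frame_rendered(self): return True
    def audio_playing(self):  return True
    def finished(self):       return self.rng.random() < 0.001

BUTTONS = ("left", "right", "jump", "fire")

def soak_test(seed, max_frames=100_000):
    """Drive the game and assert the cheap, machine-checkable invariants every frame."""
    game = Game(seed)
    for frame in range(max_frames):
        game.act(random.choice(BUTTONS))   # a trained policy could be swapped in here
        assert not game.crashed(),        f"crash at frame {frame} (seed {seed})"
        assert game.frame_rendered(),     f"blank frame at {frame} (seed {seed})"
        assert game.audio_playing(),      f"silent frame at {frame} (seed {seed})"
        if game.finished():
            return frame
    return max_frames

for seed in range(10):
    soak_test(seed)
```

Everything else on the list (illogical behavior, localization reading correctly in context) is exactly the part where you'd still need humans, or piles of human-labeled data.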
posted by coust at 9:49 AM on March 21, 2016


I'd like to see it wired to the flippers of a pinball machine with a digital camera.
posted by damo at 10:03 AM on March 21, 2016 [1 favorite]


So? I hear AI is MAKING video games these days.
posted by Mr.Encyclopedia at 11:16 AM on March 21, 2016 [1 favorite]


Kind of important to realize that none of this is really AI, although that seems to be the buzzword everyone's latching onto.

Artificial Intelligence researcher Yann LeCun says:
"[M]ost of human and animal learning is unsupervised learning. If intelligence was a cake, unsupervised learning would be the cake, supervised learning would be the icing on the cake, and reinforcement learning would be the cherry on the cake. We know how to make the icing and the cherry, but we don't know how to make the cake."
posted by paulcole at 11:44 AM on March 21, 2016


There's always more to do, and as a pioneer of convnets LeCun has every right to speak on this.

But the progress of the past few years should not be minimized, either. Tasks that were impossible, or seemed like Manhattan-Project-scale endeavors, just a couple of years ago are college class projects now. Black-box learning of Atari games would have made generations of researchers cry black magic. Now we have AlphaGo!

Actually, I think the association of DeepMind with Google obscures some of what is so impressive about the AlphaGo achievement -- that is, its small scale. Building Deep Blue, the chess computer that beat Kasparov, was an almost 10-year project, ran on a custom-built supercomputer, and had a huge amount of game-specific knowledge built in. AlphaGo was in development for, what, a couple of months probably? It probably used less than $100K of computing resources in training (probably way less, but I don't know that much about the project), and it probably runs on commodity hardware in prediction mode.
posted by grobstein at 12:15 PM on March 21, 2016 [2 favorites]


Kind of important to realize that none of this is really AI, although that seems to be the buzzword everyone's latching onto.

"What really counts as AI" is one of those holy war arguments, but the inclusion of reinforcement learning under the umbrella of "AI" is hardly controversial, except perhaps among people who equate "AI" with "super-intelligent Minds from the Culture".

[M]ost of human and animal learning is unsupervised learning.

I broadly agree, but this doesn't make supervised and reinforcement learning "not AI". Although here the definitions get a bit fuzzy, because I wouldn't say that most learning is truly unsupervised; it's more like very-long-horizon reinforcement learning with intermediate rewards given by heuristics. In everyday life we (and animals) are constantly getting reinforced with little rewards (and punishments) in the form of small pleasures and pains, frustrations and successes.
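
To make the "intermediate rewards given by heuristics" bit concrete, here's the difference in miniature. Toy code, nothing from a real system: the same tabular learner on a 20-state chain, once with only a terminal reward and once with a small shaped bonus for progress. The shaped version typically needs far fewer total steps, because credit doesn't have to trickle back across the whole horizon.

```python
import random
from collections import defaultdict

N = 20  # chain "game": start at state 0, the goal is state N-1

def greedy(Q, s):
    vals = [Q[(s, a)] for a in (0, 1)]           # actions: 0 = left, 1 = right
    best = max(vals)
    return random.choice([a for a in (0, 1) if vals[a] == best])

def run(shaped, episodes=200, alpha=0.1, gamma=0.99, eps=0.1):
    Q = defaultdict(float)
    total_steps = 0
    for _ in range(episodes):
        s, done, steps = 0, False, 0
        while not done and steps < 500:
            a = random.choice((0, 1)) if random.random() < eps else greedy(Q, s)
            nxt = max(0, min(N - 1, s + (1 if a == 1 else -1)))
            done = nxt == N - 1
            # Sparse: reward only at the goal. Shaped: small heuristic bonus for any progress.
            r = (1.0 if done else 0.0) + (0.01 * (nxt - s) if shaped else 0.0)
            best = 0.0 if done else max(Q[(nxt, x)] for x in (0, 1))
            Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])
            s, steps = nxt, steps + 1
        total_steps += steps
    return total_steps  # fewer total steps means the policy was found sooner

print("sparse terminal reward only:", run(shaped=False))
print("shaped intermediate rewards:", run(shaped=True))
```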
posted by Pyry at 12:36 PM on March 21, 2016 [3 favorites]


SkyNet is real...
posted by asteroidJAZZ at 1:06 PM on March 21, 2016


If intelligence was a cake [...]

If a labored metaphor about intelligence was a cake, I know who wrote the layer of solid bullshit right in the middle.
posted by kleinsteradikaleminderheit at 1:27 PM on March 21, 2016


2048, huh? So the AI figured out how to alternate between pressing right and down?
posted by DoctorFedora at 3:00 PM on March 21, 2016 [1 favorite]


Yes, paulcole - actually the comment by LeCun, while it makes sense in the context of AlphaGo, tends to obscure a lot of what learning AIs are doing today. The way I see it, there are three "levels" of AI:
- A/ Win at [game] by building a statistical model based on the rules + analysis of how humans have played the game so far + your own learning
- B/ Play [game] by figuring out the rules without being told how to play beforehand
- C/ Invent [game]

To me, AlphaGo is at stage A, while MAR/IO is edging closer to B, i.e., given limited input, figure out the rules of the game. If you define true AI as C, you're missing out on a lot of interesting/fun/scary stuff happening around B.
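
For a sense of how small the kernel of a B-type system is: MAR/IO uses NEAT, which evolves both the weights and the wiring of a small network, but the essential loop (mutate policies, keep whatever scores higher, never tell it the rules) fits in a toy sketch like this one. The "game" below is invented for illustration, it's not SethBling's code:

```python
import random

def play(weights, seed, max_pipes=50):
    """Toy Flappy-Bird-ish game, invented for this sketch. The policy only sees a few
    numbers and its final score; it is never told what they mean."""
    rng = random.Random(seed)
    y, score = 5.0, 0
    for _ in range(max_pipes):
        gap = rng.uniform(2, 8)                  # height of the next gap to fly through
        for _ in range(10):                      # ten ticks to line up with it
            obs = (1.0, y, gap)                  # bias term plus what the agent can "see"
            flap = sum(w * o for w, o in zip(weights, obs)) > 0
            y = max(0.0, min(10.0, y + (0.8 if flap else -0.8)))
        if abs(y - gap) > 1.0:                   # missed the gap: game over
            return score + max(0.0, 1.0 - abs(y - gap) / 10.0)  # partial credit (cf. distance in MAR/IO)
        score += 1
    return score

def fitness(weights):                            # average score over a few fixed seeds
    return sum(play(weights, seed) for seed in range(5)) / 5.0

# A bare-bones (1+4) evolution strategy: mutate the best policy, keep whatever scores higher.
# Real NEAT also evolves the network topology and keeps a whole speciated population.
best = [random.uniform(-1, 1) for _ in range(3)]
for gen in range(200):
    children = [[w + random.gauss(0, 0.3) for w in best] for _ in range(4)]
    best = max(children + [best], key=fitness)
print("best weights:", [round(w, 2) for w in best], "fitness:", round(fitness(best), 2))
```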
posted by motdiem2 at 2:09 AM on March 22, 2016


So? I hear AI is MAKING video games these days.

Well, there's progress.
posted by a snickering nuthatch at 6:55 AM on March 22, 2016 [1 favorite]



