Do you want Skynets? Because that's how you get Skynets.
January 27, 2019 11:09 AM

A few days ago, Starcraft's Twitch channel posted a series of games between Starcraft professional TLO and an AI known as AlphaStar, created by the not-at-all-ominously-named DeepMind research group. It's an interesting watch if you like professional Starcraft or just want to witness yet another vector of human extinction being born.
posted by Laura Palmer's Cold Dead Kiss (48 comments total) 16 users marked this as a favorite
 
MetaFilter: yet another vector of human extinction being born
posted by Fizz at 11:34 AM on January 27, 2019 [10 favorites]




The one thing I take comfort in is that they can't handle warp prism harass and they don't have enough respect for how force fields can be used at choke points. I envision a future human resistance based around narrow corridors, sliding doors, and helicopters which act like they're going to land but never actually do.
posted by Balna Watya at 11:38 AM on January 27, 2019 [14 favorites]


On the one hand, the agent has some unfair advantages here. The big one is that it can see every tile of the map at once, as long as it has a unit there. It's not actually cheating, since it can't see anything that should be impossible, but it is an advantage: a human would have to constantly jump around to view different locations. This ability was turned off for the match it lost.

Similar to OpenAI's Dota agents, it also has inhumanly perfect clicking. They did insert latency, so it's not totally cheating, but it still did things that would be nearly impossible for any human, and it leaned on them heavily.

It will be interesting to see if someone can come up with a way for an agent to control games like these that people agree is "fair", so that it only has human-like input capabilities. I've seen some people suggest introducing random noise that gets worse as the frequency of actions increases.
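Just to make the noise idea concrete, here's a toy sketch (my own invented Python, not anything DeepMind has published) where the click error grows as the recent action rate climbs:

    import random

    class NoisyClicker:
        """Toy model: click precision degrades as the action rate rises.
        All constants are made up for illustration."""

        def __init__(self, base_sigma=1.0, sigma_per_apm=0.05, window=60.0):
            self.base_sigma = base_sigma        # pixels of jitter at rest
            self.sigma_per_apm = sigma_per_apm  # extra jitter per action/minute
            self.window = window                # seconds of history to track
            self.action_times = []

        def click(self, x, y, now):
            # Estimate recent APM from a sliding window of action timestamps.
            self.action_times = [t for t in self.action_times if now - t < self.window]
            self.action_times.append(now)
            apm = len(self.action_times) * (60.0 / self.window)
            sigma = self.base_sigma + self.sigma_per_apm * apm
            # The faster the agent acts, the less precise each click becomes.
            return (random.gauss(x, sigma), random.gauss(y, sigma))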

On the other hand, this was their prototype system after under a year of development and a few months of training. There will be contrarian naysayer takes, but this is really impressive, and it will get better, too.

The key thing about these games is that you can easily simulate them as much as you need. Each agent got about 200 years' worth of training time. Obviously that doesn't work for real-world tasks unless you have a very good, yet computationally cheap, simulation.
posted by vogon_poet at 11:49 AM on January 27, 2019 [5 favorites]


It will be interesting to see if someone can come up with a way for an agent to control games like these that people agree is "fair", so that it only has human-like input capabilities. I've seen some people suggest introducing random noise that gets worse as the frequency of actions increases.

Worth emphasizing that this is a huge achievement that seemed unreachable as recently as a year ago, even with superhuman speed and vision.

That said, DeepMind tried to limit the AI in ways that would make its input-output capabilities on par with humans. All the agents are limited to a human-like APM (actions per minute) in the 200s. As has been pointed out above, though, the comparison between AI APM and human APM is not really apples to apples. The large majority of human pro actions are redundant or meaningless; they click to keep their hands warm and so on. So if the AI gets the same number of actions as a human, it is actually getting a much larger number of effective actions. This problem could be addressed by throttling the AI's actions even further down, perhaps to an average of 60 or so. The training process is long and expensive enough that DeepMind probably didn't actually try this; maybe they would worry that their agent couldn't win under those restrictions. But maybe it could.
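To put toy numbers on the effective-actions point (these figures are assumptions for illustration, not measurements from the games):

    # Hypothetical numbers, for illustration only.
    human_apm = 350        # raw actions per minute for a fast pro
    spam_fraction = 0.5    # share of actions that are redundant "keep warm" clicks
    ai_apm = 280           # a capped agent APM, roughly "in the 200s"

    effective_human_apm = human_apm * (1 - spam_fraction)
    # Every agent action is deliberate, so its cap is also its effective rate.
    print(effective_human_apm, "vs", ai_apm)  # 175.0 vs 280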

Regarding using the camera like a human vs. having direct map access: the final exhibition game vs. MaNa was played by an agent that had to use the camera like a human. Of course, that agent lost, but in testing against the other AIs it was about as strong as the direct-map-access agents. If you can scan the camera around quickly and accurately, there's no particular reason to think you can't approximate the same level of awareness that is achieved with direct map access.
posted by grobstein at 11:59 AM on January 27, 2019 [1 favorite]


If the AI adheres to the rules of the game, why should it be limited? What’s the point of that? (Asking for a friend. Totally not a robot/reptilian/sentient cloud.)
posted by Don.Kinsayder at 12:04 PM on January 27, 2019 [4 favorites]


All of these discrete games are on a ticking clock. There's no way to compete with modern neural nets, and they are only getting better. AlphaGo Zero, the successor to AlphaGo, beat AlphaGo 100 games to 0 and required zero human training data.
posted by lazaruslong at 12:10 PM on January 27, 2019


That said, DeepMind tried to limit the AI in ways that would make its input-output capabilities on par with humans. All the agents are limited to a human-like APM (actions per minute) in the 200s.

But that still allows for short (under five seconds) bursts of really high APM. A more realistic restriction would be to set a hard lower limit on the time between clicks.
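A minimal sketch of that restriction (hypothetical Python; with a 200 ms floor the agent can never exceed 300 APM, bursts included):

    import time

    class RateLimitedController:
        """Enforce a hard floor on the time between actions."""

        def __init__(self, min_interval=0.2):
            self.min_interval = min_interval   # seconds; 0.2 s => max 300 APM
            self.last_action = float("-inf")

        def act(self, action):
            now = time.monotonic()
            wait = self.min_interval - (now - self.last_action)
            if wait > 0:
                time.sleep(wait)               # block until the interval has elapsed
            self.last_action = time.monotonic()
            return action()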
posted by quaking fajita at 12:17 PM on January 27, 2019 [1 favorite]


I would be interested to see how well a "cyborg" team of a human player + AI does. That is, instead of trying to hobble the AI to a human APM, let the human use an AI as well to execute perfect micro.
posted by Pyry at 12:52 PM on January 27, 2019 [2 favorites]


The question of restrictions reveals an interesting dimension. Like, the point of developing AI like this isn’t to win at video games. Instead, games represent an accessible venue for human vs computer competition and skill assessment on a gradient. It was checkers in the 60s, it’s Starcraft now. Or I should say, it was Starcraft.

Restricting the AI APM misses the point - AI is by design not nearly as constrained by metrics like that. Hobbling the machine is irrelevant. The goal in the immediate limited scope is to build an AI that can beat the best humans. There's no reason to limit the APM, which is one of the ways that we are outclassed by the AI.

The next step is to make AlphaStar Zero, which will be better than AlphaStar (as well as all conceivable humans) AND will require zero human training data.

Discrete games are not human-winnable ever again. The only limiter is whether DeepMind et al. have chosen to focus on your game of choice yet.
posted by lazaruslong at 12:58 PM on January 27, 2019 [1 favorite]


There’s no way to compete with modern neural nets

At discrete tasks, right? Where the rules are clearly defined and everyone agrees on what the objective is, and both sides know they're in opposition to each other, and it's a mano a mano contest, sure. Thankfully, that is not what strategic or tactical thinking actually is.
posted by wibari at 12:59 PM on January 27, 2019 [1 favorite]


Yes, so far at discrete tasks (the sentence right before the quoted one).
posted by lazaruslong at 1:00 PM on January 27, 2019


Restricting the AI APM misses the point - AI is by design not nearly as constrained by metrics like that. Hobbling the machine is irrelevant. The goal in the immediate limited scope is to build an AI that can beat the best humans. There's no reason to limit the APM, which is one of the ways that we are outclassed by the AI.

There is a reason. The reason is to ensure the AI wins by better decision-making rather than just being able to execute faster. The former is a much more interesting breakthrough.
posted by grobstein at 1:02 PM on January 27, 2019 [13 favorites]


Sorry that may have come across as snarky.

Regarding strategic or tactical thinking, I reckon we would need to have a discussion regarding those definitions. One would certainly say that chess and go require strategic and tactical thinking. But no human will ever beat AI at those games again.

I dunno. It’s a very interesting time in this field. This stuff is taking place in the domain of games but has implications for the slope of development for strong AI, which unlike chess is an extinction level threat. I’m fascinated and horrified.
posted by lazaruslong at 1:03 PM on January 27, 2019


Grobstein: that's fair. The tech description does say that the AI still operates with an economy of attention and 350 ms of lag, so I'm not sure we can say its success is due to faster execution, but it's a fair point from a research perspective. Practically speaking, though, whether the winner uses the advantages inherent in being a machine will be irrelevant; being superior at the game is all that will matter.
posted by lazaruslong at 1:06 PM on January 27, 2019


Personally, I don't have much doubt that even if AlphaStar isn't currently better at short-term "decision-making" than humans, something like it will get there soon. The more interesting question to me is whether, by playing huge numbers of games and exploring large parts of the strategy space, it can figure out new and interesting play styles that humans haven't yet discovered.

For instance, what stood out to me as most unusual about AlphaStar's play in these games was its tendency to build way too many probes early on. As the commentators said, conventional wisdom is that this is a mistake, because you get diminishing returns on mining income. But the advantage seems to be that it can lose some probes without taking too much of a hit to its income, which in turn lets it get away with being more aggressive.
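Here's a toy mining model that shows the effect (the saturation constants are invented, not the real SC2 economy):

    # Toy model: invented constants, not Starcraft's actual mining curve.
    def income(workers, patches=8, per_worker=1.0):
        saturated = 2 * patches   # first two workers per patch mine at full rate
        full = min(workers, saturated)
        extra = max(0, min(workers - saturated, patches))  # a third worker adds ~half
        return per_worker * (full + 0.5 * extra)

    print(income(16))      # 16.0 -- the "optimal" worker count
    print(income(24))      # 20.0 -- +50% workers for only +25% income...
    print(income(24 - 8))  # 16.0 -- ...but lose 8 probes to harassment and
                           # income merely falls back to the optimum, not below it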

Maybe it's too soon to ask this question, but since these games were played, have any pro Protoss players talked about or experimented with this strategy?
posted by teraflop at 1:13 PM on January 27, 2019 [2 favorites]


But, I mean, most of the best SC players are the best because their actions per minute are so high. There isn't that much strategy. In the first game against MaNa, MaNa loses because he gets outmaneuvered in the micro at his base entrance. The only really strategic moment was actually a human mistake: MaNa didn't think the AI would come up a ramp, because the human meta has basically decided not to go up ramps. But the AI saw that it could outfight the units present and went up the ramp anyway, despite the taboo.

I think this is true of nearly all live video games: the best FPS players have razor-thin reaction times and amazing aim, the best RTS players have super high APM, the best MOBA players as well. Once an AI can manage the basic actions of the game and can execute a basic strategy... at that point we need to engineer in flaws to make it "fair".
posted by French Fry at 1:14 PM on January 27, 2019


teraflop: yes yes yes yes! You are talking about one of the things that is most fascinating to me.

Consider: in games like Go and Chess and Starcraft, the best players in the world did not get to be the best without studying and ingesting all of the wisdom built up over time by other players. For Go, that's a few thousand years. For Starcraft, that's 1998.

When AlphaGo Zero was teaching itself to be the best Go player on the planet, it independently re-discovered many of the human strategies that were developed over centuries. In many cases, it discarded those strategies as inefficient and invented new ones. Human Go players today are studying the games of AlphaGo Zero in order to learn from the AI. That's where we're at with Go, which has a search space of ~10^170, compared to chess's ~10^50. And that's after humans had a few thousand years to amass knowledge and build on it. Starcraft has had 21 years.

There's zero chance that AI fails to come up with strategies that humans have not yet discovered. Within a year or two, Team Liquid and all the rest will be studying AlphaStar games to get an edge on their human competitors.
posted by lazaruslong at 1:31 PM on January 27, 2019 [1 favorite]


If the AI adheres to the rules of the game, why should it be limited?

As far as I can tell the AI doesn't even use a physical screen, mouse, and keyboard, so it's not so much that it's following the rules as that it's following different rules.
posted by ODiV at 1:37 PM on January 27, 2019 [1 favorite]


teraflop and lazaruslong: throughout the games, the commentators repeatedly noted that the AI "overproduced" probes and that the pro community would consider that to be an inefficient strategy.

In the final game of the video, which the human MaNa won, he used the AI's strategy of overproducing probes. As the AI continues to learn the game, I suspect that the entire meta of the pro Starcraft scene will be upended.
posted by Laura Palmer's Cold Dead Kiss at 1:54 PM on January 27, 2019 [2 favorites]


Yes, I totally agree. The meta will be redefined by what the AI does, and how humans study and replicate it.
posted by lazaruslong at 1:56 PM on January 27, 2019


Something else to consider is that victory states and conditions are fundamentally different for humans and AI. Humans tend to think qualitatively - victory is the result of a bunch of woolly decisions that produce anything from a squeaker to total domination.

This is not how AI works. In complex games, AI tends to win by a tiny margin. It will therefore make choices that can seem nonsensical to human players, because it's looking at the entire game state and its potential futures and calculating toward a victory in which the material advantage is tiny tiny tiny but the chance of winning is very high. That's just not how humans usually think.
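In sketch form, the difference is just which quantity gets maximized (hypothetical Python, with the evaluation functions left abstract):

    # Hypothetical sketch: same position, two objectives.
    def human_style_move(state, moves, material_gain):
        # Humans tend to reach for the move that maximizes their advantage.
        return max(moves, key=lambda m: material_gain(state, m))

    def agent_style_move(state, moves, win_prob):
        # A value-trained agent maximizes estimated win probability,
        # happily trading material for a sliver more certainty.
        return max(moves, key=lambda m: win_prob(state, m))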
posted by lazaruslong at 2:01 PM on January 27, 2019 [2 favorites]


Regarding strategic or tactical thinking, I reckon we would need to have a discussion regarding those definitions. One would certainly say that chess and go require strategic and tactical thinking. But no human will ever beat AI at those games again.

I'm inclined to argue that humans have to use their strategic and tactical thinking to play chess and go with their human brains.

But also, there's some underlying mathematical structure to the games, and there are ways to approximate aspects of that structure without recourse to any strategic or tactical thinking.
posted by vogon_poet at 2:12 PM on January 27, 2019


Hmm. Could you expand on that a bit?
posted by lazaruslong at 2:17 PM on January 27, 2019


AlphaGo/AlphaZero do use Monte Carlo tree search which actually does seem human-like to me (I have definitely played out different possibilities in chess). You can reasonably call that "tactics" IMO.
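For anyone who hasn't seen it, a bare-bones UCT-style sketch looks like this (toy Python; a real two-player implementation would also flip the result between plies, among other things):

    import math, random

    class Node:
        def __init__(self, state, parent=None):
            self.state, self.parent = state, parent
            self.children = {}            # move -> Node
            self.visits, self.wins = 0, 0.0

        def ucb(self, c=1.4):
            if self.visits == 0:
                return float("inf")       # always try unvisited children first
            return (self.wins / self.visits
                    + c * math.sqrt(math.log(self.parent.visits) / self.visits))

    def mcts(root_state, game, n_sims=1000):
        # `game` is an assumed interface: is_terminal, legal_moves, play, winner.
        root = Node(root_state)
        for _ in range(n_sims):
            node = root
            # 1. Select/expand: descend by UCB, stopping at a fresh node.
            while not game.is_terminal(node.state):
                for m in game.legal_moves(node.state):
                    if m not in node.children:
                        node.children[m] = Node(game.play(node.state, m), node)
                node = max(node.children.values(), key=Node.ucb)
                if node.visits == 0:
                    break
            # 2. Rollout: play the rest of the game out at random --
            #    literally "playing out different possibilities".
            state = node.state
            while not game.is_terminal(state):
                state = game.play(state, random.choice(game.legal_moves(state)))
            result = game.winner(state)   # assumed +1/0/-1 from the root player's view
            # 3. Backpropagate the outcome up the selected path.
            while node is not None:
                node.visits += 1
                node.wins += result
                node = node.parent
        # Recommend the most-visited root move.
        return max(root.children, key=lambda m: root.children[m].visits)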

But actually, they were also able to just learn a value or policy network that's still pretty good on its own. Meaning it's just a big neural network (i.e., a series of linear operators with some squashing nonlinearities) that takes in a board position and spits out either the probability of winning or the move that will hopefully be closest to winning. There's no memory of past moves and no model of the opponent.

That aspect of the system doesn't seem tactical or strategic at all: it just seems like a really good approximation of some really weird function.

Interestingly, DeepMind has been putting out papers where they try to hybridize deep learning with old-school symbolic AI. So who knows what the future holds.
posted by vogon_poet at 2:35 PM on January 27, 2019 [1 favorite]


Eh, I don't think that's correct.

There's a big difference between the neural networks used in AlphaGo and AlphaGo Zero. AlphaGo Zero introduced the Two-Headed Monster, which combined the policy and value networks into one network. This network does in fact learn from past moves and opponent models - it's just that the past moves and opponent models all come from self-play games, from which it learned the game tabula rasa, not from human training data. It's not just really powerful roll-outs in MCTS with a shitload of computing power - this is a fundamentally different paradigm we're talking about. In fact, as the power of DeepMind's AlphaGo Zero network went up, its energy consumption dramatically decreased.
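For the curious, the shape of that combined network is roughly this (a skeletal PyTorch sketch; the sizes here are invented, and the real network is a much deeper ResNet with ~17 input feature planes):

    import torch
    import torch.nn as nn

    class TwoHeadedNet(nn.Module):
        """One shared trunk, two output heads, in the spirit of AlphaGo Zero."""

        def __init__(self, board_size=19, channels=64):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            )
            flat = channels * board_size * board_size
            # Policy head: a logit for every board point (plus pass).
            self.policy = nn.Sequential(
                nn.Flatten(), nn.Linear(flat, board_size * board_size + 1))
            # Value head: a single scalar in [-1, 1], the expected outcome.
            self.value = nn.Sequential(
                nn.Flatten(), nn.Linear(flat, 1), nn.Tanh())

        def forward(self, board):
            x = self.trunk(board)
            return self.policy(x), self.value(x)

    # One forward pass: a position in, (move logits, win estimate) out.
    net = TwoHeadedNet()
    logits, value = net(torch.zeros(1, 3, 19, 19))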
posted by lazaruslong at 3:00 PM on January 27, 2019 [1 favorite]


AI has always been good at games (arguably games are the only thing AI has ever really been good at), yet, despite the huge advances that deep learning has made, I still wouldn't trust a learned system to robustly distinguish cheese and petrol.
posted by Pyry at 3:53 PM on January 27, 2019 [4 favorites]


Sure, I guess there's some history of all the training games and opponents "baked in" to the network. But having trained the network, you can just hand it a game state as a big array and it will spit out the probability of winning and which move to take. The architecture is apparently just a variant of ResNet, originally designed for image classification tasks, so it's hard to see how anything in the computation the network performs can be thought of as strategic. It just seems like a general regression problem.

Once training is done, if you were to drop the MCTS entirely, effectively doing a zero-step rollout, the policy network would probably still play a very good game of Go, yet its decisions would depend on nothing other than the current game state, whereas a human would be thinking in general terms of how their opponent has been playing over the whole game, what both of their long-term goals might be, and so on.
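Something like this, in sketch form (hypothetical code, reusing a two-headed net like the one sketched upthread):

    import torch

    # Hypothetical zero-step rollout: no search, no memory of past moves --
    # just one forward pass from the current position.
    def greedy_move(net, board, legal_mask):
        logits, _value = net(board)   # (1, num_moves) logits plus a scalar value
        logits = logits.masked_fill(~legal_mask, float("-inf"))  # never pick illegal moves
        return int(logits.argmax())   # best move, judged from this state alone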

I guess another way to say it is that I think strategy, tactics, and planning are a very human way to store and access information on what moves to make. But these neural networks don't need to rely on them -- they can store the information in some other way, in their weights, explicitly implementing a function. Both make use of something like MCTS rollouts, though.
posted by vogon_poet at 3:53 PM on January 27, 2019


Although I guess I don't know for sure that there isn't something like planning being represented in those layers that nobody understands. Nor will anybody ever, probably. The joy of deep learning!
posted by vogon_poet at 3:57 PM on January 27, 2019 [1 favorite]


Sure, I guess there's some history of all the training games and opponents "baked in" to the network. But having trained the network, you can just hand it a game state as a big array and it will spit out the probability of winning and which move to take.

Yes... that's the point of these networks. Having been trained, they do stuff better than people. I'm not following what logic then concludes that the AI is not doing strategy.
posted by lazaruslong at 4:19 PM on January 27, 2019


Put another way: Take a high level Starcraft player. Having been trained on tons of games of Starcraft, hand them a game state and they will spit out what move you should take and how likely you are to win. The second answer will be qualitative. The AI will do the same thing, with a quantitative second answer, and will beat the human. I'm not seeing a big difference there, save who the winner is.
posted by lazaruslong at 4:20 PM on January 27, 2019 [1 favorite]


Meanwhile, Skynet's air force prototypes are coming along nicely.
posted by homunculus at 4:48 PM on January 27, 2019 [3 favorites]


It's the process by which that happens that is different, I think. (no way to know for sure)

The neural network just evaluates a function consisting of repeated convolutions and matrix multiplications that have been carefully tuned to be correct.

The human has a whole set of mental representations of different goals and strategies, and of the connections between them and possible moves, and considers all of these in light of experience to decide what to do. I don't think there's anything like those mental representations inside the network.
posted by vogon_poet at 4:49 PM on January 27, 2019


I think perhaps you may be placing too much importance/value on the qualia of human methods versus those of machines. It seems like they are both using similar techniques to achieve similar ends, though you're quite right that humans experience the process of using those techniques differently. Also, if the idea is that the network doesn't have mental representations, that's a non-starter and not useful to discuss, because the network doesn't currently have a mental anything. When you say that the neural network "just" evaluates a function, etc., the "just" reads as a negative, and I'm not sure what it means. The humans lose against the AI. By most metrics of game playing, that means the strategy employed by the opponent is superior.
posted by lazaruslong at 4:54 PM on January 27, 2019 [1 favorite]


Discrete games are not human-winnable ever again. The only limiter is whether DeepMind et al. have chosen to focus on your game of choice yet.


G L O B A L
T H E R M O N U C L E A R
W A R

Y / N ?
posted by darkstar at 6:05 PM on January 27, 2019 [3 favorites]


This is a thread full of AI and computer games nerds, so I'm going to talk about crochet.

I periodically get the itch to pick up new hobbies, and a bit over a year ago, I became interested in picking up textile arts. You know: weaving, knitting, that sort of thing. Each kind of art is more or less similar—they all produce fabric—but they each do so in different ways, using different tools. I ended up settling on crochet. Why?

Well, I know it isn't true in any rational sense, but for me, every time we build a machine to do a task, I feel like a little piece of our humanity and history has been stolen from us. After Deep Blue, Chess lost something. After Chinook, Checkers lost something. After AlphaGo (and doubly so after AlphaGo Zero), Go lost something.

I don't quite know what it is, exactly, that was lost. It's not like I can play Checkers or Chess or Go at even a barely competent level. But it's certainly true that we humans spent thousands of years learning to play Go, and that a few computer nerds with a couple years and a few million dollars of CPU time managed to supersede all of that history and knowledge and culture by literally throwing it all away and starting from scratch. And the new history and knowledge and culture isn't ours, it's AlphaGo Zero's. And, hell, it's probably all lost anyway, because it's tons of data and it'll take too many human lifetimes to pore over those games and understand how Zero developed. We're merely left with the end result, in all its platonic purity.

And it is beautiful, in its way. And certainly those who remain playing these games will study the machine and learn from the machine and play better. Maybe some kids will get really excited about the new depths we can fathom, but I expect there will be more kids who simply won't get into the game, because it's not our game any more.

Or something. I don't know. As I said, it's more of an irrational feeling than anything concrete. But I know I'm discouraged when a machine is better than I'll ever be at any particular thing I may wish to learn to do. And sure, machines are presently silver bullets at specific problems, but every year goes by and the space of things that still belong to us shrinks a little more.

Anyway, I settled on picking up crochet because, unlike weaving and knitting, nobody's managed to automate it yet. Whenever I see something with the distinctive texture of crocheted fabric, I can smile to myself, and say "somebody made that with their own hands."

That's dear to me. And every time a machine takes another thing from us, it's dearer.
posted by ragtag at 6:47 PM on January 27, 2019 [8 favorites]


I still wouldn't trust a learned system to robustly distinguish cheese and petrol.
posted by Pyry


You just need a switch!
posted by Carillon at 6:48 PM on January 27, 2019 [1 favorite]


Global Thermonuclear War

DeepMind are trying hard to not have their stuff used by militaries.

"Executives at DeepMind, an A.I. pioneer based in London that Google acquired in 2014, have said they are completely opposed to military and surveillance work, and employees at the lab have protested the contract. The acquisition agreement between the two companies said DeepMind technology would never be used for military or surveillance purposes."
posted by rhamphorhynchus at 7:07 PM on January 27, 2019 [2 favorites]


Basically, despite the devs’ efforts to nerf AlphaStar’s abilities, they still left it with four clear-cut advantages which undermine the AI strategy claims.

1. The AI is allowed to see the whole map (at least, of what is potentially visible) at once.
2. The AI is allowed a 100% accurate click.
3. The AI is allowed to process its tactical decisions at superhuman speed.
4. The AI is allowed a maximum burst click rate which far exceeds what is humanly possible.

The last three result in a “micro” (an ability to micromanage individual units in a battle) with inhuman precision and timing, which is crucial to winning encounters.

As a result, the AI does not actually have to employ as much insight or strategy beyond playing an otherwise conventional game, because it can (1) completely dominate in micro, (2) have a much higher efficiency in build/train/move actions, and (3) engage the human opponent on multiple fronts simultaneously.

Notably, in the final exhibition match, when the devs removed just one of those advantages (#1), the human won. Watching that match, it was clear that the AI, not being able to see everywhere at once, was confused by the human player’s actions, just as a less-skilled human might have been in the fog of war resulting from limiting the view.

That’s not to say that the AI isn’t an incredible technical development. But its strength, so far, is still its mechanical advantage of speed and accuracy, as one would expect of a machine.
posted by darkstar at 7:19 PM on January 27, 2019 [5 favorites]


Discrete games are not human-winnable ever again. The only limiter is whether DeepMind et al. have chosen to focus on your game of choice yet.

Holy shit, I just realized how the (human) world ends.

It's great to talk about the game state space of Go and Chess, and even when computers win at Starcraft, we're all still fine co-existing as, 'eh, whatever, they're just games'. But what game requires the AI to pass a six-sided Turing test, be proficient in persuasion and negotiation, and have just enough Machiavellianism and treachery to be truly dangerous?

We are so fucked if DeepMind ever cracks Diplomacy.
posted by Arandia at 1:24 AM on January 28, 2019 [5 favorites]


According to this Vox article, AlphaStar has been given a slower-than-human reaction time and the latest version has to manipulate the camera rather than viewing the whole map at once. It does still have a superhuman ability to precisely coordinate and maneuver units, but for the most part I'd say it really is playing fair, at least in the most recent games.

Just because I want to see if they can do it, I'd love for DeepMind to take on robot soccer next.
posted by Kilter at 5:24 AM on January 28, 2019


Yeah, and fog of war is still in effect. Even with the version that didn't have to manually move the camera, the agent still 'switched contexts' at around the same rate as human players. And this graph from DeepMind suggests that with more training, the camera-moving version will be as strong as the zoomed-out one. They did restrict the APM on average, and gave the AI a 350 ms lag.

I will say I agree with darkstar that the burst APM is a problem, and should be restricted much more than it is.
posted by lazaruslong at 5:34 AM on January 28, 2019 [1 favorite]


Well, I know it isn't true in any rational sense, but for me, every time we build a machine to do a task, I feel like a little piece of our humanity and history has been stolen from us. After Deep Blue, Chess lost something. After Chinook, Checkers lost something. After AlphaGo (and doubly so after AlphaGo Zero), Go lost something.

I get this feeling too. After losing to AlphaGo, Ke Jie said "I would go as far as to say not a single human has touched the edge of the truth of Go." This is a sad thing.

At the same time, I'm not very good at video games, and a long time ago friends started to exclude me because they were spending all their time playing either Starcraft II or MOBAs and I wasn't good enough to play with them. So especially for these video game tasks, I always root for the computer. I want all their competitive rage to have been for nothing, and I am confident in saying that DOTA2 is not important to human culture. (A little less so about SCII, but still...)
posted by vogon_poet at 12:24 PM on January 28, 2019 [1 favorite]


I feel that way too, but it's worth noting that many Go players, for example, feel the opposite way. It's a very exciting time for high level players because they are learning new strategies and creative gameplay from the AI.
posted by lazaruslong at 12:32 PM on January 28, 2019 [2 favorites]


I was chatting with a younger colleague about DeepMind, and I mentioned Deep Blue, which had defeated Garry Kasparov; he replied, 'oh yeah, I've got that on my phone'.
posted by not_that_epiphanius at 2:11 PM on January 28, 2019


For those who didn't dig into all of the videos in the article, this one's really worth a watch: micro'd zerglings taking down siege tanks.

This shows just how absurd computer micro is, and why the AI researchers would want to restrict it in order to make the AI learn more macro.
posted by explosion at 8:14 PM on January 28, 2019 [6 favorites]


Oh man I got goosebumps watching that. Fuuuuuuuuuuuuck. AI is an existential threat to humanity but goddamn it's beautiful.
posted by lazaruslong at 6:55 AM on January 29, 2019 [1 favorite]


AI has always been good at games (arguably games are the only thing AI has ever really been good at), yet, despite the huge advances that deep learning has made, I still wouldn't trust a learned system to robustly distinguish cheese and petrol.

Dad?
posted by Literaryhero at 3:24 AM on February 6, 2019



