“There Is An Entity That Cannot Be Defeated”
November 29, 2019 8:49 AM

 
I hope he'll be alright. Giving up an identity held since childhood could be rough.
posted by otherchaz at 9:06 AM on November 29, 2019 [2 favorites]


This is how you get supervillains.
posted by Pastor of Muppets at 9:07 AM on November 29, 2019 [10 favorites]


I don't understand this decision. Professional chess has survived being totally dominated by AI just fine. The game has changed some but it's just as competitive and vital as before.

Professional Go is a huge thing in Japan, Korea, and China. Top-ranked Go players can make a very good living giving lessons, and the very top folks like Se-dol can do well with tournament prizes. AI or no AI. Arguably AI has made the game even more interesting; take the move 37 referenced in the article, which, once studied, taught human Go players new theories for approaching the game. Why quit now?

The article frames it as him realizing he can never be the best, so might as well quit now. But that's a little like a marathon runner quitting because a bicycle can beat him at a 26-mile race. Weird. I suspect there's more going on with Se-dol's psychology, but this article is too busy with its basic "AI beats humans" view.
posted by Nelson at 9:31 AM on November 29, 2019 [10 favorites]


I think there's more going on here than what has been detailed in the article. From another article:

He added that his decision was also inspired by his dispute with the KBA over how the organization uses membership fees. According to Yonhap, Sedol is suing the organization for his fees.
posted by lazaruslong at 9:34 AM on November 29, 2019 [5 favorites]


Some other random thoughts:

1. Lee famously lost his series to DeepMind agent AlphaGo. The documentary on Netflix is pretty good, and I recommend it!

2. AlphaGo was made obsolete very quickly by AlphaGo Zero. Not only did Zero absolutely crush AlphaGo (100-0), but it did it while also a) learning much, much faster, b) using ZERO human training data [which is actually bonkers], and c) consuming less energy.

3. I think it's important to be generous with the Go community, especially for the next couple decades. Go as a game is at least an order of magnitude more complex than chess. It is also ancient. Much in the same way that chess players study and have access to a rich history of tradition and a library of openings, strategies, and defenses, so too do Go players. In the course of AlphaGo and AlphaGo Zero's development, the AI learned or (with Zero, which had no human training data) re-discovered many of these strategies. It has also discarded some established Go game theory wisdom/strategies as inefficient, while inventing new stuff that human players never considered. After AlphaGo, there was disappointment and resentment for sure, but also a lot of excitement that the AI was making novel contributions and revisions to the canon. Zero is the strongest player in history and will likely only ever be beaten consistently by improved versions of itself. But for many Go players, that just means there's someone new in town to learn from.
posted by lazaruslong at 9:44 AM on November 29, 2019 [27 favorites]


Zero is the strongest player in history and will likely only ever be beaten consistently by improved versions of itself. But for many Go players, that just means there's someone new in town to learn from.
Until a man is twenty-five, he still thinks, every so often, that under the right circumstances he could be the baddest motherfucker in the world. If I moved to a martial-arts monastery in China and studied real hard for ten years. If my family was wiped out by Colombian drug dealers and I swore myself to revenge. If I got a fatal disease, had one year to live, and devoted it to wiping out street crime. If I just dropped out and devoted my life to being bad.

Hiro used to feel this way, too, but then he ran into Raven. In a way, this was liberating. He no longer has to worry about being the baddest motherfucker in the world. The position is taken.
posted by tclark at 9:51 AM on November 29, 2019 [35 favorites]


I can sympathize with Go players who are feeling down at this turn of events. The hero of a certain Saturday morning series in the late 80s informed his fans that no machine will ever out-think a human. Guess not.
posted by Fukiyama at 10:11 AM on November 29, 2019 [2 favorites]


It’s funny how there was no era where competition between programs and top Go players was viable and interesting. They went straight from the era where the best programs were not a serious opponent for elite players to the AlphaGo era. With chess, there were about 10 years when the man-machine games were close.
posted by thelonius at 10:11 AM on November 29, 2019 [10 favorites]


I'll cheerfully second the recommendation for the Netflix documentary, it really is a fascinating story. One thing that struck me was how personally Lee took the games, and the defeat, so this is a little worrying. I really felt for him when I watched the doc.

But yeah, absolutely amazing work from the AlphaGo team. I can't help wondering how they feel about this - I know I've had some long nights considering the people my code has made redundant in the past.
posted by Gamecat at 10:15 AM on November 29, 2019 [3 favorites]


It’s funny how there was no era where competition between programs and top Go players was viable and interesting. They went straight from the era where the best programs were not a serious opponent for elite players to the AlphaGo era. With chess, there were about 10 years when the man-machine games were close.

I was thinking about the difference between how Lee has responded to AI dominance and how chess players have responded, and I think this point maybe explains a lot. Chess had some time to get used to it, and to get used to using computers as a tool even before they were unbeatable.

Also I think computers finally became unbeatable in chess a little more quietly? Kasparov vs. Deep Blue was the big exhibition, and it was still relatively close, in fact it took two matches, and then from that point incremental advances in engine design and Moore's Law just marched on until even your home computer could beat top players. AlphaGo was dominant out of the box.
posted by atoxyl at 10:30 AM on November 29, 2019 [3 favorites]


I put the end of GM vs computer competition at the match that Michael Adams played against Hydra in 2005, iirc. Adams was still a top ten player, and he just got wrecked.
posted by thelonius at 10:41 AM on November 29, 2019


I can't read Korean so can't check original sources on this, but I'd be surprised if Lee's decision was really primarily motivated by AlphaGo. Though Lee is clearly one of the top players of the last 20 years (and has a strong claim to be the best overall in that timeframe), he was already no longer world #1 when he played AlphaGo in 2016, and he's had a long career. It's not weird for him to retire at this point.

Also, anecdotally, I've heard that many top-tier pros are actually excited about the existence of good AIs, as they can now train more effectively (instead of only being able to train vs. each other, which is obviously much harder to arrange). There are even pro-strength open source AIs now. The biggest western go server (which, to be clear, has a fraction of the userbase of the big East Asian servers) has hooked a strong AI up to automatically review games and identify mistake moves, which is super useful for studying (here's a random game between two 9-dans I found, but you could look at any 19x19 game).

Overall, yeah, it's "sad" that a 3000-year-old game has been "beaten" by AI, but honestly to me it seems like the go community is energized by that development, since it'll help all of us get better than we already were.
posted by acroyear2 at 10:43 AM on November 29, 2019 [2 favorites]


yes, exactly!
posted by lazaruslong at 11:08 AM on November 29, 2019


It’s sort of like giving up painting because someone invented the camera.
posted by fishhouses at 12:01 PM on November 29, 2019 [3 favorites]


"Traditional" chess engines are also relatively transparent. Fundamentally the concept is brute force applied to basic chess theory that was developed by human masters - and then that concept was improved upon via algorithmic optimization and statistical analysis and controlled refinement of parameters through self-play and so on. That never quite worked for Go until we got to the black box neural networks.
posted by atoxyl at 12:03 PM on November 29, 2019 [4 favorites]


Zero absolutely crush[ed] AlphaGo (100-0), but it did it while … using ZERO human training data

If Zero learned anything from its wins against AlphaGo, then it has acquired all of AlphaGo's 100,000 human training games.
posted by scruss at 12:29 PM on November 29, 2019 [1 favorite]


acroyear2 linked this but I'll call it out more explicitly: Leela Zero is an open source AlphaGo Zero clone that's been training on donated computer resources. It continues to get stronger (ignore the absolute number on the Elo y-axis, it's nonsense, but the relative growth is real.)
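
For what it's worth, the reason the absolute axis is meaningless: Elo is purely relative. Shift every rating in a closed pool by the same constant and every prediction stays identical, so a pool of networks that only ever play each other can sit anywhere on the scale. The whole update rule is a couple of lines - a quick illustrative sketch:

    # Standard Elo: expectations and updates depend only on rating *differences*.

    def expected_score(r_a, r_b):
        """Expected score for A against B (1 = win, 0.5 = draw, 0 = loss)."""
        return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

    def update(r_a, r_b, score_a, k=32):
        """A's new rating after a game with result score_a."""
        return r_a + k * (score_a - expected_score(r_a, r_b))

    print(expected_score(2800, 2700))    # ~0.64 - only the 100-point gap matters
    print(expected_score(12800, 12700))  # same ~0.64 - the absolute level is arbitrary
    print(update(2800, 2700, 1.0))       # the winner gains about 11.5 points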

Also relevant, a new result from the DeepMind team: Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
Constructing agents with planning capabilities has long been one of the main challenges in the pursuit of artificial intelligence. Tree-based planning methods have enjoyed huge success in challenging domains, such as chess and Go, where a perfect simulator is available. However, in real-world problems the dynamics governing the environment are often complex and unknown. In this work we present the MuZero algorithm which, by combining a tree-based search with a learned model, achieves superhuman performance in a range of challenging and visually complex domains, without any knowledge of their underlying dynamics. MuZero learns a model that, when applied iteratively, predicts the quantities most directly relevant to planning: the reward, the action-selection policy, and the value function. When evaluated on 57 different Atari games - the canonical video game environment for testing AI techniques, in which model-based planning approaches have historically struggled - our new algorithm achieved a new state of the art. When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
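
The interface described in the paper boils down to three learned functions - a representation function h (observation to hidden state), a dynamics function g (hidden state plus action to next hidden state plus reward), and a prediction function f (hidden state to policy and value) - with all planning done inside the learned model. Here's a toy numpy sketch of that shape, with untrained random stand-ins for the networks and naive exhaustive lookahead instead of MCTS; emphatically a cartoon, not DeepMind's code:

    import numpy as np

    rng = np.random.default_rng(0)
    N_ACTIONS, HIDDEN, OBS = 4, 8, 16

    # Untrained stand-ins for the three networks (real MuZero trains these end to end).
    Wh = rng.normal(size=(OBS, HIDDEN))                      # h: observation -> hidden state
    Wg = rng.normal(size=(HIDDEN + N_ACTIONS, HIDDEN + 1))   # g: (state, action) -> (next state, reward)
    Wf = rng.normal(size=(HIDDEN, N_ACTIONS + 1))            # f: state -> (policy logits, value)

    def representation(obs):
        return np.tanh(obs @ Wh)

    def dynamics(state, action):
        one_hot = np.eye(N_ACTIONS)[action]
        out = np.concatenate([state, one_hot]) @ Wg
        return np.tanh(out[:-1]), out[-1]          # next hidden state, predicted reward

    def prediction(state):
        out = state @ Wf
        logits, value = out[:-1], out[-1]
        policy = np.exp(logits - logits.max())
        return policy / policy.sum(), value

    def plan(obs, depth=2):
        """Exhaustive lookahead entirely in latent space - no game rules anywhere.
        (Real MuZero runs MCTS here, guided by the policy head; this ignores it.)"""
        def search(state, d):
            _, value = prediction(state)
            if d == 0:
                return value
            best = -np.inf
            for a in range(N_ACTIONS):
                nxt, reward = dynamics(state, a)
                best = max(best, reward + search(nxt, d - 1))
            return best
        root = representation(obs)
        scores = []
        for a in range(N_ACTIONS):
            nxt, reward = dynamics(root, a)
            scores.append(reward + search(nxt, depth - 1))
        return int(np.argmax(scores))

    print("planned action:", plan(rng.normal(size=OBS)))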
posted by Nelson at 12:38 PM on November 29, 2019


If Zero learned anything from its wins against AlphaGo, then it has acquired all of AlphaGo's 100,000 human training games.

Zero reached the Elo needed to achieve those wins against AlphaGo by only playing itself, starting tabula rasa. You can read more about it here.
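
To make "only playing itself" concrete, here's a cartoon of the self-play loop on a trivial take-1-or-2 pile game: a MENACE-style count table stands in for the deep network and plain reinforcement stands in for MCTS-guided training, but the overall shape is the same - start from nothing, generate games by playing yourself, learn from the outcomes, repeat. Toy code under those stand-ins, not anything resembling DeepMind's:

    import random
    from collections import defaultdict

    PILE = 10  # players alternate taking 1 or 2 stones; whoever takes the last stone wins

    def legal_moves(stones):
        return [m for m in (1, 2) if m <= stones]

    class TinyPolicy:
        """Stand-in for the neural network: a table of move counts per position."""
        def __init__(self):
            self.counts = defaultdict(lambda: {1: 1, 2: 1})   # uniform start = tabula rasa

        def choose(self, stones):
            moves = legal_moves(stones)
            weights = [self.counts[stones][m] for m in moves]
            return random.choices(moves, weights=weights)[0]

        def train(self, examples):
            # "Gradient step": reinforce the moves that led to a win for the mover.
            for stones, move, mover_won in examples:
                if mover_won:
                    self.counts[stones][move] += 1

    def self_play(policy):
        """Play one game against itself; return (position, move, did the mover win)."""
        stones, history, player = PILE, [], 0
        while stones > 0:
            move = policy.choose(stones)
            history.append((stones, move, player))
            stones -= move
            player ^= 1
        winner = player ^ 1   # whoever just took the last stone
        return [(s, m, p == winner) for s, m, p in history]

    policy = TinyPolicy()
    for _ in range(2000):              # no human games anywhere in the loop
        policy.train(self_play(policy))

    # After training, winning positions tend to heavily favor the correct move
    # (leave the opponent a multiple of 3) - figured out purely from self-play.
    print({s: policy.counts[s] for s in range(1, PILE + 1)})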
posted by lazaruslong at 1:00 PM on November 29, 2019 [4 favorites]


acroyear2 linked this but I'll call it out more explicitly: Leela Zero is an open source AlphaGo Zero clone that's been training on donated computer resources. It continues to get stronger (ignore the absolute number on the Elo y-axis, it's nonsense, but the relative growth is real.)

The chess version of Leela has also reached the point of being competitive with - though not yet quite consistently better than, last I checked - the top "traditional" chess engines in engine tournaments.

(Google released a paper some time ago showing their AlphaZero approach trouncing the top traditional engine Stockfish, but some players believe the test conditions weren't quite fair).
posted by atoxyl at 1:35 PM on November 29, 2019 [2 favorites]


That's the super exciting part about Zero to me - the clear potential for generalization. Applying nearly identical methodologies to other games (albeit specific types of games for now) is a big step. If DM can figure out how to generalize to something like protein folding... well, that will change the world.

Some cursory googling shows that apparently they are well on their way with AlphaFold.
posted by lazaruslong at 1:53 PM on November 29, 2019 [3 favorites]


One thing that still amazes me is that neural networks actually seem to work. And the big innovation is just "we made them a lot bigger".

I studied neural networks back in the 90s. They seemed very promising as a research topic in that era. There was the initial excitement about them in the 50s, then Minsky and Papert's hugely damaging Perceptrons book came out and people got busy with other approaches. But neural networks had a resurgence in the 80s / early 90s when backpropagation became commonly understood. That had some successes until they maxed out again and everyone got discouraged.

And then the current resurgence starting around 2010, when folks began to realize that the problem with existing neural networks was that they were just too small. Those things I was studying in the 90s had two, maybe three layers. And literally megabytes of training data. Laughably small by modern standards. Modern networks like AlphaGo Zero have dozens of convolutional layers, composed in complex ways. And in supervised learning they are trained on terabytes of data.
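
For a sense of scale: the entire "core algorithm" of that 90s-era setup fits in a screenful of numpy - a toy two-layer sigmoid net learning XOR by backpropagation. The same forward-multiply / backward-chain-rule loop is what now runs at absurd size:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR

    # One hidden layer of 8 sigmoid units - roughly the scale of a 90s experiment.
    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    lr = 2.0

    for step in range(20000):
        # Forward pass: two matrix multiplies with a nonlinearity in between.
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # Backward pass: chain rule, layer by layer (squared-error loss).
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

    print(out.round(2))   # usually converges to roughly [[0], [1], [1], [0]]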

The amazing thing is the core algorithms seem to scale. The bigger the network, the more it can learn. It just took us 60+ years to realize it and to have computers big enough to implement it.

The downside is it's very hard to understand what the network is doing or how it works.
posted by Nelson at 4:41 PM on November 29, 2019 [8 favorites]


It’s sort of like giving up painting because someone invented the camera.
The metaphor I always liked was weightlifters getting discouraged by the existence of forklifts.
posted by Aardvark Cheeselog at 5:36 PM on November 29, 2019 [3 favorites]


The downside is it's very hard to understand what the network is doing or how it works.

That is... scary.

I am old enough to have spent the first decade of my life without television, let alone computers & internet. I worry we are starting to lose control of technology.

I am no technophobe. My first career was in electronics, and I love the internet, retina screens, modern medicine, electric cars, etc.

Just not an uncritical technophile either. The risks are piling up faster than we can assess and manage them, and increasingly so, IMHO.
posted by Pouteria at 6:04 PM on November 29, 2019 [2 favorites]


The metaphor I always liked was weightlifters getting discouraged by the existence of forklifts.
Reminds me of the Warm Considering playing the elevenstring.
posted by rhamphorhynchus at 6:21 PM on November 29, 2019 [2 favorites]


The downside is it's very hard to understand what the network is doing or how it works.

This made me think of a passage from The Human Condition (Hannah Arendt, 1958):
The first boomerang effects of science's great triumphs have made themselves felt in a crisis within the natural sciences themselves. The trouble concerns the fact that the "truths" of the modern scientific world view, though they can be demonstrated in mathematical formulas and proved technologically, will no longer lend themselves to normal expression in speech and thought. The moment these "truths" are spoken of conceptually and coherently, the resulting statements will be "not perhaps as meaningless as a 'triangular circle,' but much more so than a 'winged lion'" (Erwin Schrödinger). We do not yet know whether this situation is final. But it could be that we, who are earth-bound creatures and have begun to act as though we were dwellers of the universe, will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do. In this case, it would be as though our brain, which constitutes the physical, material condition of our thoughts, were unable to follow what we do, so that from now on we would indeed need artificial machines to do our thinking and speaking. If it should turn out to be true that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become the helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.
posted by aws17576 at 11:31 PM on November 29, 2019 [7 favorites]


The downside is it's very hard to understand what the network is doing or how it works.

Sort of? I think it's a situation where the answers are precise but unsatisfying.

How does it work? Well, a lot of matrix multiplication and nonlinear gates... (with backprop to figure out what should be in the matrices.) Why does it think this picture is a dog? Because its features correlate well with the features it observed during training.

There are already tools that get to about human levels of explanation. Why did you think this is a dog? Well, this region is the dog's face, and there's a tail over here. Why do YOU think it's a dog?
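
The simplest of those region-level tools is an occlusion map: cover up one patch of the input at a time, re-score, and see where the prediction drops. A toy sketch with a fake stand-in "classifier" - the probing loop is the point, not the model:

    import numpy as np

    def fake_dog_score(img):
        """Stand-in classifier: responds strongly to a bright top-left 8x8 'face' region."""
        return img[:8, :8].sum()

    def occlusion_map(img, score_fn, patch=4):
        base = score_fn(img)
        heat = np.zeros_like(img)
        for r in range(0, img.shape[0], patch):
            for c in range(0, img.shape[1], patch):
                covered = img.copy()
                covered[r:r + patch, c:c + patch] = 0.0                    # blank out one patch
                heat[r:r + patch, c:c + patch] = base - score_fn(covered)  # drop in score
        return heat

    img = np.random.rand(16, 16)
    heat = occlusion_map(img, fake_dog_score)
    print(np.round(heat[::4, ::4], 2))   # big drops only in the quadrant the "model" cares about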

Building more efficient networks will, imho, lead us to build things that are also more traditionally explainable and expert-driven. For example, getting keyboard prediction into a cheap phone involves doing some pretty smart clustering, directly driven by linguistics research. And so on.

I get that the apocalyptic mode is a bit of an easier position to accept. But Arendt wasn't talking about the current crop of AI research. And indeed, this situation - where we "will forever be unable to understand, that is, to think and speak about the things which nevertheless we are able to do" - didn't really come to pass over the last sixty years. Most people don't understand how their phones work, but this is arguably not much different from people not understanding how fire works. And they're certainly able to speak, think, and argue about the implications of phone technology.

[/unpopular puffin]
posted by kaibutsu at 10:32 AM on November 30, 2019 [2 favorites]


Oh I don't have an apocalyptic view of understanding what a network is doing. But I sure am interested in a deeper understanding than "that's what it was trained to do". It's just such a complex system. Kind of like natural intelligence, huh?

For example, the intermediate convolution layers in the AlphaGo-style AIs seem to be doing particular kinds of feature detection on the board state: recognizing small groups with life, recognizing large structures with influence near the edge, etc. But so far the state of the art isn't very good at teasing out the details of exactly what. "What is the neural network doing?" is an active area of research and I'm optimistic we'll get some good tools eventually.
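
For what it's worth, the usual first step in that kind of probing is just pulling out the intermediate activations and staring at them - e.g. with forward hooks in PyTorch. A toy stand-in network on a fake 19x19 board, nothing like AlphaGo's real architecture, just to show the mechanics:

    import torch
    import torch.nn as nn

    # Tiny untrained convolutional stack over a 3-plane 19x19 board encoding.
    net = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    )

    activations = {}

    def grab(name):
        def hook(module, inputs, output):
            activations[name] = output.detach()   # stash this layer's output for inspection
        return hook

    for i, layer in enumerate(net):
        if isinstance(layer, nn.Conv2d):
            layer.register_forward_hook(grab(f"conv{i}"))

    board = torch.randn(1, 3, 19, 19)   # fake board planes
    net(board)

    for name, act in activations.items():
        # Each channel is a 19x19 map; in a trained net these are the candidate "feature detectors".
        strongest = act.abs().mean(dim=(0, 2, 3)).argmax().item()
        print(name, tuple(act.shape), "most active channel:", strongest)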
posted by Nelson at 10:39 AM on November 30, 2019



