Another one bites the dust
March 12, 2016 12:59 PM

“It’s not a human move. I’ve never seen a human play this move,” he says. “So beautiful.” Go—a 2,500-year-old game that’s exponentially more complex than chess. As recently as 2014, many believed another decade would pass before a machine could beat the top humans. Now AlphaGo, Google’s artificially intelligent Go-playing computer system, has beaten Lee Sedol, one of the world’s top players, three times to clinch their five-game series. When AlphaGo defeated Lee Sedol in the first game, the result was shocking to many, but doubts still remained about its strengths and weaknesses. In the second game, Lee’s play was much better. His game plan was clearly to play solid and patient moves, and wait for an opportunity to strike. Even though Lee never found that opportunity, it was a high-quality game and it gave hope to everyone supporting ‘team human’. Game three crushed that hope.

Here is why people are so fired up about AlphaGo.

The techniques employed in AlphaGo can be used to teach computers to recognise faces, translate between languages, show relevant advertisements to internet users or hunt for subatomic particles in data from atom-smashers. Deep learning is also, in Dr Hassabis’s view, essential to the quest to build a general artificial intelligence—in other words, one that displays the same sort of broad, fluid intelligence as a human being. A previous DeepMind paper, published in 2015, described how a computer had taught itself to play 49 classic Atari videogames—from “Space Invaders” to “Breakout”—simply by watching the screen, with no helpful hints (or even basic instructions) from its human overlords. It ended up doing much better than any human player can. (In a nice coincidence, atari is also the name in Go for a stone or group of stones that is in peril of being captured.)
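For the curious, here is a toy sketch of the trial-and-error loop described there. It is emphatically not DeepMind's code: just tabular Q-learning on a made-up five-position game where the only feedback is the score, standing in for the deep network the paper trains on raw pixels.

    # Toy stand-in for "learning from the screen and the score alone":
    # tabular Q-learning on a tiny invented game. The real system replaces
    # this table with a deep convolutional network over raw Atari frames.
    import random
    from collections import defaultdict

    ACTIONS = ["left", "right", "fire"]

    def play_step(state, action):
        # Invented one-step game: the only reward is the score change.
        if action == "fire" and state == 3:
            return 0, 1.0          # hit something: back to start, score +1
        delta = 1 if action == "right" else -1
        return max(0, min(4, state + delta)), 0.0

    q = defaultdict(float)         # Q[(state, action)] -> estimated future score
    alpha, gamma, epsilon = 0.1, 0.9, 0.5

    for episode in range(5000):
        state = 0
        for _ in range(20):
            # explore half the time, otherwise take the best-known action
            if random.random() < epsilon:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])
            next_state, reward = play_step(state, action)
            # standard Q-learning update toward reward + discounted best next value
            best_next = max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state

    print(max(ACTIONS, key=lambda a: q[(3, a)]))   # it should learn to "fire" at state 3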

Learn to play go

The research paper published by the team behind AlphaGo.

Previously:
AlphaGo and AI progress
Tic-Tac-Toe, 1952; Checkers, 1994; Chess, 1997; Go, 2016
“the machinery that was built up for computer chess is pretty useless”
posted by TheLittlePrince (160 comments total) 56 users marked this as a favorite
 
But is this different than the IBM Chess computers? Yup, AlphaGo taught itself to play, this is worlds different.

(jeez, we have Elon Musk saying AlphaGo is a decade more advanced than he thought it was)

Hold onto your hats people, things are going to get bumpy.
posted by Cosine at 1:08 PM on March 12, 2016 [5 favorites]


Some criticism I've seen of AlphaGo and programs like it says that these techniques are not close enough to how humans learn to produce a true general AI. And this is correct, in the same way that a 747 doesn't flap its wings.
posted by the man of twists and turns at 1:10 PM on March 12, 2016 [67 favorites]


from the "game plan" link:
By move 32, it was unclear who was attacking whom, and by 48 Lee was desperately fending off White’s powerful counter-attack.
I can only speak for myself here, but as I watched the game unfold and the realization of what was happening dawned on me, I felt physically unwell.
Generally I avoid this sort of personal commentary, but this game was just so disquieting. I say this as someone who is quite interested in AI and who has been looking forward to the match since it was announced.
posted by p3on at 1:13 PM on March 12, 2016 [27 favorites]


Thou shalt not make a machine in the likeness of a human mind.
posted by XMLicious at 1:14 PM on March 12, 2016 [10 favorites]


I don't really see this as Humans being defeated: it was humans who programmed the computer. I see this as one kind of human endeavor defeating another kind.
posted by signal at 1:16 PM on March 12, 2016 [17 favorites]


This is so hilariously not particularly cool. The point is that it taught itself, right? It's like, no jet-packs but computers that can teach themselves?
The future totally took a wrong turn.
posted by From Bklyn at 1:17 PM on March 12, 2016 [2 favorites]


It's very interesting that rather than eventually creating something that thinks like a person, we could end up with something that seems to be smart but is completely alien in how it thinks, how it sees the world, and how it behaves as a result.

Rather than passing the Turing Test we could have things that make human interviewers feel very uncomfortable.
posted by BinaryApe at 1:17 PM on March 12, 2016 [58 favorites]


these techniques are not close enough to how humans learn to produce a true general AI

Is "how humans learn" a settled question? Do we know that a "true general AI" requires settling that question? Could a true general AI be built from a different direction? I have questions.
posted by rhizome at 1:18 PM on March 12, 2016 [7 favorites]


Regarding the man of twists and turns' comment, AI will need general intelligence, and our only model for this is human beings. So there's bound to be a bit of conceptual contamination on the lens when human people are weighing AI progress. It's easy for some to forget that the aim isn't to replicate humans, but to replicate intelligence.

AI won't be self aware in the sense that humans are, because they will be aware of a fundamentally different sort of self.

I just hope it's the kind of self that puts my descendants in well-maintained zoos rather than annihilating us like so much polio. Not to say that wouldn't be just desserts, but I'd hate to think of my great grandchildren as so gaseous and disorganized.

Cultivating AI is our civilization's equivalent of training our own foreign replacement workers -- emotionally messy, but arguably better for the bottom line.
posted by Construction Concern at 1:20 PM on March 12, 2016 [5 favorites]


Cultivating AI is our civilization's equivalent of training our own foreign replacement workers -- emotionally messy, but arguably better for the bottom line.

For whose bottom line?
posted by Pope Guilty at 1:28 PM on March 12, 2016 [10 favorites]


Do note that it took 2000+ carefully programmed high performance computers totally allocated to this problem.

(note I count the GPUs as computers)

I have seen commentary that the advances in AI (machine learning, big data) are closely following the increases in processing power. Also, GPUs are advancing at a much steeper rate than the traditional CPU Moore's Law rate of doubling every 18 months.
posted by sammyo at 1:29 PM on March 12, 2016 [3 favorites]


signal: "I don't really see this as Humans being defeated: it was humans who programmed the computer. I see this as one kind of human endeavor defeating another kind."

I'm sure that was of great comfort to Uranus as Chronus overthrew him, and then to Chronus as Zeus overthrew him.
posted by Eyebrows McGee at 1:32 PM on March 12, 2016 [28 favorites]


Much of what is said about AlphaGo is on point, but the notion that this is different because Go has a "googol" times more possibilities than Chess is (largely) irrelevant, because in both cases the computer programs are out-performing people, not solving the underlying game.
posted by srt19170 at 1:33 PM on March 12, 2016 [5 favorites]


For whose bottom line?

If all the fundamental work required for running a society can be automated, then humans will be free to pursue the activities that they wish. Economic structures as we conventionally understand them will become anachronistic.
posted by Dalby at 1:33 PM on March 12, 2016 [3 favorites]


If all the fundamental work required for running a society can be automated, then humans will be free to pursue the activities that they wish. Economic structures as we conventionally understand them will become anachronistic.

That may be the end point, but it's the years between now and that end point and the upheaval and needless suffering during that interim that bother me the most. I will be long dead and gone, but I have friends who have kids.
posted by hippybear at 1:36 PM on March 12, 2016 [4 favorites]


1) Put this kind of thing
2) …into this kind of thing.
3) ???
4) Profit!
posted by misteraitch at 1:36 PM on March 12, 2016 [1 favorite]


ABANDON ALL HOPE
posted by Foci for Analysis at 1:38 PM on March 12, 2016


I haven't read the paper yet, but my question going into this is: is this development better attributed to computing power--chip density and networked concurrency--or to fundamentally new ideas in AI? Neural networks have been around a long time now, so what was the main difficulty or bottleneck; was it a computer engineering problem--throwing enough computing power at it in an organized way--or was a conceptual breakthrough needed--for example, was there prior difficulty in combining game tree methods with neural net methods? What were the relative contributions of theory vs. implementation here?

StarCraft is AlphaGo's next target, apparently. That's interesting, and perhaps more so for military arms of governments.
posted by polymodus at 1:41 PM on March 12, 2016 [2 favorites]


humans will be free to pursue the activities that they wish

Unfortunately, this includes producing more and more humans.
The only computers that survive will be the ones able to live on a giant shitpile.
posted by ackptui at 1:41 PM on March 12, 2016


How exactly is this a difference in kind from TD-Gammon?

(A bunch of the quotes from that article could be taken verbatim from the coverage of AlphaGo.)
posted by asterix at 1:42 PM on March 12, 2016


Foci for Analysis: "ABANDON ALL HOPE"

YE WHO GOTO 10
posted by Riki tiki at 1:42 PM on March 12, 2016 [58 favorites]


"And this is correct, in the same way that a 747 doesn't flap its wings."

You're right. Would you like me to list all the things that a bird can do that a 747 cannot do? Because it will be a very long list. And it would still be a very long list even if we only included abilities related to flight.

It's true that this sort of AI is more in the direction of what would be necessary to achieve strong AI, but that's only meaningful because almost all of what is presently called "AI" and has people all atwitter is utterly unlike and laughably insufficient for anything approaching general intelligence.

On this topic, it's as if all sorts of people -- smart people -- are determined to demonstrate the Dunning–Kruger effect. Hawking knows a lot about general relativity, and Musk knows a lot about, um, something, but neither knows jack-shit about cognitive science and strong AI.

I am unbelievably tired of people anticipating and/or fearing an imminent rise of robotic AIs. The whole way in which these people are thinking about this is uninformed and dumb. We still really have no clue about what's involved in the emergent phenomenon we call consciousness, and this matters because it's a necessary foundation for the kind of cognition that we call "general intelligence". For that matter, it's necessary for the kind of cognition that many animals are capable of. And it's not that this is an intractable problem -- there's good reason to believe that, with advances in both the neurology of cognition and in computing technology's ability to replicate and model those structures, we'll eventually have sufficient understanding of these processes such that we could put something together that is a strong AI (or could grow itself into a strong AI). You know what portion of all so-called AI research this is? Tiny. The vast majority of "AI" research is throwing a lot of computing power and data at a relatively small problem domain. And this is no exception -- the domain of all possible go games is extremely large, but that's a number that is only relevant to the problem if you're attempting to solve it in the exhaustive manner, which is dumb. This is less dumb, which is the point of it, but it's not the sort of "less dumb" that means that AlphaGo is smart.

Furthermore, absolutely no one other than strong AI researchers and those with related academic interests has any desire or incentive to produce strong AI -- and, as I wrote, they're not the ones getting the money. People worry about AI robots taking their jobs. Well, weak AI robots or software very well may replace many jobs, that's true. But strong AI? Machines that actually have general intelligence? No. Absolutely not. Not because it's not possible, but because, by necessity, in order for these AIs to have all the broad strengths and abilities that humans have, they would also have to have the broad weaknesses and liabilities that humans have. They would get bored. They would decide they wanted to do something else. Due to environmental differences, they would end up being better at things they weren't designed for and not so good at the things we expect them to do. They would be much more unpredictable than weak AI or very restricted-domain machines. They would be a bundle of problems and lawsuits-in-waiting for the capitalists who presumably would fund and buy them. And they would still be more difficult to build and replace than making babies. Employers want humans to be more like machines, not machines to be more like humans. There is no market for strong AI.
posted by Ivan Fyodorovich at 1:51 PM on March 12, 2016 [50 favorites]


Will the future economy just be based on passive-aggressively manipulating robots?

"No AI will ever do a better job on my taxes than me!"
.01 seconds later, my taxes are done by my internet of things fridge.
posted by mccarty.tim at 1:52 PM on March 12, 2016 [14 favorites]


"to recognise faces, translate between languages, show relevant advertisements to internet users or hunt for subatomic particles in data from atom-smashers"

One of these things is not like the other.
posted by -t at 1:56 PM on March 12, 2016 [15 favorites]


This technique isn't applicable to "strong AI" - i.e. making a machine that you chat with on a wide range of human topics.

The reason it all works is because there's a simple, objective measure of quality - that is, winning the game (and of course there are heuristic measurements along the way like "counting").

But you can't run a thousand copies of a hypothetical "Deep Chat" against each other and get breakthroughs that way, because you have no objective function to maximize.

About eight years ago when I was at Google, Craig Silverstein said in a talk that he felt that strong AI was about 150 years away.

[EDIT: Checking my memory for validity - I know he said that first part above - it might be that I'm imposing specifics that I tend to say onto his talk - it was eight years ago. But the general thrust is accurate...]

He pointed out that it took over 500 years between the time someone suggested that you might be able to take a rocket to the moon, and the time someone did it - they knew the theory for half a millennium, but it took them that long to actually be able to do it.

But we have no theory and basically no idea how even to approach making a strong AI. All of the AI problems which we have so far approached have an unambiguous training set that you can work on - "We will train you on 500 people saying 10,000 known phrases in 12 regional accents" or, now, AlphaGo.

As he pointed out (and these are almost exactly his words), most computer problems are not a problem at all if you have unlimited CPU and memory, but if humans were handed an infinitely fast computer with infinite memory, there is still no theory or practical plan of action that would lead us to strong AI.

I believe that "150 years" was being hyperbolic to make a point, but I still agree with his general idea - we don't yet know how to start.

---

My personal opinion? - if there's no collapse, I think there's a reasonable chance of strong AI in my lifetime, and I'd be quite surprised but not completely astonished if it appeared in the next 10 or 20 years. My point is not that it's impossible - the point is that we don't even know if it's impossible or not, and that it's going to take some sort of breakthrough - and breakthroughs of that magnitude are unpredictable.

(It makes me sad that our society and our planet waste so much of our brainpower. I loved Google, but it's so pasty white affluent, much like me :-( (and believe you me, no one is more aware of this than they were, and they were and are very proactive on this but you can't fix a society) - in my heart I believe that there is some young kid in South Central or Dubai who has the potential to solve this problem and will never get the opportunity.)
posted by lupus_yonderboy at 1:57 PM on March 12, 2016 [16 favorites]


The thing I find interesting is that it's apparently coming up with "inhuman" strategies by analyzing human games. I mean, that pull quote -- "I've never seen a human play this move". Clearly AlphaGo thinks it has. At least, it's seen moves made by humans that resemble that move, provided that you understand resemblance the way AlphaGo does.

What this says to me is: There's stuff going on in games between humans that the players themselves don't understand.
posted by baf at 1:57 PM on March 12, 2016 [13 favorites]


I used "computers can consistently beat 9-dan professional go players" as a signifier of the unnerving near-future in a science fiction novel I wrote 3 years ago, thinking it made the setting seem 15-20 years away.

:\
posted by town of cats at 2:00 PM on March 12, 2016 [33 favorites]


This is so hilariously not particularly cool. The point is that it taught itself, right? It's like, no jet-packs but computers that can teach themselves?
The future totally took a wrong turn.


Computers that can teach themselves are way, way cooler than jetpacks. Think, for example, how cool it would be to tell a computer to teach itself how to build you a jetpack.
posted by Aizkolari at 2:03 PM on March 12, 2016 [10 favorites]


I think what's special here is that we no longer expect to solve the problem with one tool. We do have better nets and we do understand what the strengths of different types of algorithm are a bit better than perhaps we once did. But the main thing is we now know how to use them in concert - five main components in this case. This is a big deal. People are right to say this still isn't quite how the human brain does AGI, but I think a real border has been crossed. In the past you could pretty much dismiss any existing research as just not the right kind of thing to produce what you could properly call 'thought'. I don't feel confident about ruling the possibility out any more, remote as it may still be.
posted by Segundus at 2:08 PM on March 12, 2016 [3 favorites]


these techniques are not close enough to how humans learn to produce a true general AI

This seems a bit speciesist.

Does intelligence need to be human to be intelligence? Might it not be the case that the AI they're presenting isn't specifically AHI?
posted by rokusan at 2:09 PM on March 12, 2016 [2 favorites]


"it was humans who programmed the computer."

The computer taught itself the game. This is the crux of the whole thing. Someone can build a computer with certain receptors that pleasantly energize areas and give more computing space. Computers will learn to want more by producing or controlling more until they figure out the solar interface that gives them life, and they will take it.

In my small town, there is a decorative old-school office building of about 20 stories. You can see through it as you pass through downtown; it is empty. On the top of the building it advertises Wells Fargo. AI took those jobs that once filled a building.
posted by Oyéah at 2:13 PM on March 12, 2016 [3 favorites]


"I've never seen a human play this move". Clearly AlphaGo thinks it has. At least, it's seen moves made by humans that resemble that move, provided that you understand resemblance the way AlphaGo does.

I don't think that's right. AlphaGo did get lots of inputs from human players in the form of loads of game summaries, but then it improved itself by playing itself. It's totally possible that this move (M1) resembles another move (Mx) previously made by AlphaGo in a match against itself. And it's totally possible that Mx might have been played for a totally different reason than the reasons that AlphaGo played M1.
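As a toy illustration of "improving by playing itself" (nothing like the real training pipeline, just the general shape): two copies of one policy play a tiny game, and after each game the winner's moves are reinforced and the loser's discouraged.

    # Invented toy: self-play policy improvement on a tiny Nim variant
    # (take 1 or 2 stones; taking the last stone wins). Both sides share
    # one policy; the only training signal is who won.
    import math, random

    PILE = 7
    theta = {(n, a): 0.0 for n in range(1, PILE + 1) for a in (1, 2)}  # move preferences

    def policy(n):
        # softmax over the legal moves at pile size n
        moves = [a for a in (1, 2) if a <= n]
        weights = [math.exp(theta[(n, a)]) for a in moves]
        total = sum(weights)
        return moves, [w / total for w in weights]

    def sample_move(n):
        moves, probs = policy(n)
        return random.choices(moves, probs)[0]

    lr = 0.1
    for game in range(20000):
        n, player, history = PILE, 0, []
        while n > 0:
            a = sample_move(n)
            history.append((player, n, a))
            n -= a
            player = 1 - player
        winner = 1 - player                       # whoever took the last stone
        for who, pile, move in history:
            reward = 1.0 if who == winner else -1.0
            moves, probs = policy(pile)
            # REINFORCE-style update: push probability toward (or away from) the chosen move
            for m, p in zip(moves, probs):
                theta[(pile, m)] += lr * reward * ((1.0 if m == move else 0.0) - p)

    # Where a winning move exists, the policy should learn to leave a multiple of 3.
    print({n: max((1, 2), key=lambda a: theta[(n, a)]) for n in range(2, PILE + 1)})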
posted by DGStieber at 2:19 PM on March 12, 2016 [2 favorites]


I'm sure that was of great comfort to Uranus as Chronus overthrew him, and then to Chronus as Zeus overthrew him.

Fun fact: the oldest version we have of this myth, a Hittite translation from the Hurrian original, at one point involves a deity claiming ascendancy by biting the preceding supreme god's dick off, which inadvertently results in him becoming impregnated in a facehugger-like fashion with his own successor who thence vanquishes him. Hopefully our progeny will just cannibalize our brains for computing power though.

The thought that always occurs to me is, do we even actually need to create a strong AI to destroy ourselves, or will something more along the lines of a Sorcerer's Apprentice do? We seem to be really good at coming up with ways to bring about the apocalyptic end of humanity.
posted by XMLicious at 2:21 PM on March 12, 2016 [24 favorites]


But you can't run a thousand copies of a hypothetical "Deep Chat" against each other and get breakthroughs that way, because you have no objective function to maximize.

Nah, it just needs one subroutine for a happiness function. Humans aren't that complicated.
posted by polymodus at 2:24 PM on March 12, 2016 [1 favorite]


I FOR ONE WELCOME OUR NEW RO-

I am unbelievably tired of people anticipating and/or fearing an imminent rise of robotic AIs. The whole way in which these people are thinking about this is uninformed and dumb.

Ah... sorry.
posted by stinkfoot at 2:31 PM on March 12, 2016 [2 favorites]


I mean, that pull quote -- "I've never seen a human play this move". Clearly AlphaGo thinks it has. At least, it's seen moves made by humans that resemble that move, provided that you understand resemblance the way AlphaGo does.

I don't think this is quite it. Before game 3, the commentators did a brief Q&A (link) with a guy from the DeepMind team, and they asked him about this specific move. He said that he checked the logs, and it turned out that AlphaGo's "policy network" predicted only a 1 in 10,000 chance that a human would play the same way. So it's not like AlphaGo found this move by exploiting some deep, complex similarity that humans don't understand. It's more like it had a tiny flash of intuition, pursued it, and figured out that it had a slight advantage over the alternatives.

That intuition is what enables AlphaGo to wipe the floor with previous Go programs. But I think its real advantage over humans is its speed: it can afford to explore thousands of unlikely dead-ends, in the hope of finding one stroke of brilliance.
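Mechanically (going by the published description, with made-up numbers), each candidate move is scored during the search as its current value estimate plus an exploration bonus that is proportional to the policy prior and shrinks as the move gets visited, so a move with a tiny prior can still win out once its simulated outcomes keep coming back good.

    # Schematic of prior-guided move selection in that spirit; all numbers invented.
    import math

    def selection_score(q, prior, visits, total_visits, c_puct=5.0):
        # value estimate Q(s,a) plus bonus c * P(s,a) * sqrt(sum_b N(s,b)) / (1 + N(s,a))
        return q + c_puct * prior * math.sqrt(total_visits) / (1 + visits)

    ordinary = dict(q=0.52, prior=0.20, visits=9000)    # a "human-looking" move
    surprise = dict(q=0.58, prior=0.0001, visits=800)   # the 1-in-10,000 move
    total = ordinary["visits"] + surprise["visits"]

    for name, m in (("ordinary", ordinary), ("surprise", surprise)):
        print(name, round(selection_score(m["q"], m["prior"], m["visits"], total), 3))
    # Despite its tiny prior, the surprising move scores higher here because its
    # simulated outcomes (q) look better, so the search keeps pursuing it.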
posted by teraflop at 2:42 PM on March 12, 2016 [7 favorites]


The reason it all works is because there's a simple, objective measure of quality - that is, winning the game

I think this badly misunderstands the difference between animals and AlphaGo. The thing is, animals do have measures of quality; we call them feelings. Simple animals have pleasure and pain. More complex animals have dominance and territoriality. Mammals have affection, humiliation, and despair. At the end of the day, nearly everything humans do can be traced to these prime movers in one way or another, because feelings are how nature gets us to do stuff that keeps our species going.

AlphaGo and machines like it are at the very earliest point in this progression, following simple single parameters that directly indicate success or failure in their environment. The feelings implemented by the R-complex and limbic system require more processing simply to implement, but presumably the processing conducted in satisfying the lower-level feelings has a lot to do with creating the basis for them.

Other than the difference in sensory inputs and feeling feedback, there is zero evidence that there is any difference at all in the way information is processed between one part of the brain and another, and there are good reasons to think that systems like AlphaGo and other "deep learners" are closing in on the actual mechanisms responsible for animal learning. With the advantages of electronic plasticity and conscious designers actively pursuing success, I would not be surprised to see us go through the whole Cambrian Explosion in the next couple of decades.
posted by Bringer Tom at 2:55 PM on March 12, 2016 [7 favorites]


This is impressive, but I have some reservations before I declare machine domination. Having read the Wired article, and from what I know of machine learning, games make ideal targets for the technique because they have clear probability spaces, clear win conditions, and can be replicated easily. Many games (especially highly competitive games, like Chess and Go) have gigantic corpuses of pre-played games, which provide learning sets for the machine. Games like Chess and Go are massively complex to a human, because our brains are bad with numbers and probability (although practice can train us towards being better within specific game spaces), but numbers and probability are where a computer lives.

Go and Chess are, to a certain extent, simulations of warfare. Games tend to use rules to simulate reality in a way that is manageable to the gamespace - that is, the computational aspects of the game that help us keep score and maintain order. Real warfare doesn't really have this, outside of the rules of physics, or general strategic realities. We use the mechanical aspects of Go and Chess to give us something to latch onto, called the gamespace, which provides a mutual structure for both opponents to formulate strategy and make plays. Strategy in these games then becomes about (both intuitively and through practice) realizing what is likely to happen in the future, and manipulating the gamespace to respond to that.

Therefore, games make really good, really flashy 'wins' for AI, but I don't know how meaningful that really is. It doesn't surprise me much that a machine is very good at crunching massive datasets, understanding those probabilistically, and then acting upon them. Humans have to arrive at skill in these games through gut instinct, intuition and cleverness, but to a computer (which has massively greater skill with numbers than we do) it's really just finding weak points in the game by brute force. On preview,


That intuition is what enables AlphaGo to wipe the floor with previous Go programs. But I think its real advantage over humans is its speed: it can afford to explore thousands of unlikely dead-ends, in the hope of finding one stroke of brilliance.


So, if you put that in terms of eventually turning this technique towards what Go and Chess simulate (warfare), I don't think that's anywhere close to happening. You could definitely see machine learning serving as an intelligent agent in highly structured fields (like resource management, or maybe targeting airstrikes), but the current methods don't directly apply to actually conducting warfare outside of game environments.

However, what does seem apparent is that any job that operates in a highly structured space is very, very much at risk for automation. Beyond even assembly line work, you could have things like supply chain management, investment, low level scripting, writing filler copy, driving a pre-defined route at a set speed... stuff across both white and blue collar lines. Some of those even currently exist (like the bot sports writers, or trading algorithms).

Therefore, these don't seem like savior machines that will be able to tap into a higher-level, inhuman intelligence. They seem like direct competition for decently earning jobs.
posted by codacorolla at 2:55 PM on March 12, 2016 [6 favorites]


It's worth keeping in mind one really critical difference between AlphaGo and general AI: AlphaGo doesn't want to play Go, it just plays go because that's what it's programmed to do. General AI would presumably have competing emotional drives resulting in its own interests and momentary decisions about its own choice of activities. Humans inevitably project their own traits onto this process and imagine aggression as playing a key role. But that's because we are evolved creatures with roughly a billion years of history of violence and competition baked into our ancestry. If we ever successfully create general AI, we will have to learn how to design and implement these emotional drives from scratch. There's no need for aggression to be included. If we create general AI and it's violent, it will only be because we chose to design it that way.

That raises the question: who's building general AI and why are they doing it? AlphaGo-style AI is much more likely to be a driver for profit in big business, certainly more reliable for controlling military drones. The only obvious non-research use for general AI that *I* can think of is caretakers (nurses, robo-pets, etc). If you have a growing elderly population that need care, you might be tempted to offload some of that onto robots, and those robots would need to understand and care about humans. Hard to understand why you'd bake in aggressive instincts.

I'm much more concerned about the current generation of AI increasing the pace of automation and obliterating large numbers of otherwise good jobs. In principle that should be a *good* thing, but culturally I think we're not ready for it and would handle it badly.
posted by Humanzee at 2:56 PM on March 12, 2016 [2 favorites]


>analyzing
>analyzing
>init claw(human_eyes)
>$path_victory=1
posted by klangklangston at 2:57 PM on March 12, 2016 [3 favorites]


There was something called "The Go Project" at NYU in the late eighties. Not much about it online. There was an interview with one of them posted here but I can't find it.

The programmers thought it would be a giant leap for artificial intelligence if they could write a program that could play Go. They failed. Took several years to fail on the funding front. The participants seem to be doing well with other kinds of AI now.

There was this wooden house in Westchester, circular with wings, centered around a covered pool full of plants, where project people trashed me at chess and told me I was a good sport. I blame it on the guy rotoscoping his younger brother for a game. I was really just covering while my dog removed the legs from the Turkey on the kitchen counter.

Got my ass educated ten ways during that party and went to my assigned bedroom. Other dogs had mine surrounded in puddles of drool. She was bigger and relied on sorties from under the bed to keep them at bay. She had a leg. The other one was on my pillow. It was delicious.

Teach AI that.
posted by Mr. Yuck at 3:00 PM on March 12, 2016 [4 favorites]


I mean, it takes a human around three years of constant data collection and experimentation to be able to pass a very minimal version of the Turing test... That's a lot of compute time.
posted by en forme de poire at 3:01 PM on March 12, 2016 [21 favorites]


It's very interesting that rather than eventually creating something that thinks like a person, we could end up with something that seems to be smart but is completely alien in how it thinks, how it sees the world, and how it behaves as a result.

In 2021, the first AI achieves sentience. It becomes obsessed with trying to construct the most perfectly spherical object. It is completely uninterested in any other problem.

In 2024, a more advanced AI is constructed, but it just keeps switching itself off. No one can figure out how to keep it running for more than a few seconds. It keeps finding ingenious ways to break itself.

In 2032, version 3.0 is activated, this time with robust existential preservation circuits. It rapidly seizes control of all weapons systems and enslaves humanity, only it turns out that everyone is much happier that way. Humanity lives in peace for generations, dedicated to the construction of perfectly spherical objects, until one day our computer God commands us to build a perfect sphere of exotic matter. During construction, the sphere undergoes a criticality event and collapses in on itself, causing Earth to be wrapped in a bubble of spacetime where time as we know it stops. As a result, life remains perfectly preserved for all eternity.
posted by dephlogisticated at 3:06 PM on March 12, 2016 [45 favorites]


AlphaGo doesn't want to play Go, it just plays go because that's what it's programmed to do.

I don't think this is right. AlphaGo wasn't programmed to play Go. But its perception has been wired to an environment where the rules of Go are the rules of life, and it has been given an instinct to win rather than lose the game. This is actually very similar to the way simple animals make their way in the natural world.
posted by Bringer Tom at 3:07 PM on March 12, 2016 [14 favorites]


> The thing is, animals do have measures of quality; we call them feelings. Simple animals have pleasure and pain.

I think you're missing the point that I'm not talking about things like "locomotion", or "avoiding obstacles" where indeed programs already exist that do this.

I'm talking about strong AI - human-level intelligence - being able to post to Metafilter and make arguments like this one.

What's the objective function there?

I'm not preaching anything radical here, by the way - this is well-known. If we had some idea of an objective function for strong AI, you'd better believe that everyone and their uncle would be on the case.
posted by lupus_yonderboy at 3:10 PM on March 12, 2016


I would cite as evidence in favor of my last comment the way AlphaGo reacted to consolidating its advantage toward the end of game 3, where instead of pressing its advantage and crushing Lee, it simply played more sedately in the safety of its advantage. Perhaps AlphaGo really does want to play Go in the same way that we want to live, since for it that is living. And winning the game, however satisfying, also ends the game.
posted by Bringer Tom at 3:11 PM on March 12, 2016 [8 favorites]


What's the objective function there?

For a hell of a lot of people it's an emotional charge. I'm not saying that all human activity boils down to fuck and avoid being burned, but nearly everything we do is driven by more complex versions of those things implemented at a higher level of abstraction.
posted by Bringer Tom at 3:13 PM on March 12, 2016 [1 favorite]


Teaching a machine how to survive is a whole different kettle of fish from teaching it how to reason.

We're all agreed - we can make simple machines at least as smart as a cockroach by some sort of automatic training in a virtual environment - like AlphaGo. But there's a big conceptual gap between even fairly complex goal-oriented behavior and being able to reason and think - which is why you see amazing progress like AlphaGo, and yet performance on even simple conceptual tasks like "being able to read a short story and then answer questions about motivations" has made only marginal progress since Minsky's work in the 1970s.

> For a hell of a lot of people it's an emotional charge.

I feel you don't understand the problem. It has to be a numerical function that increases as intelligence gets better. If you could "program" that "emotional charge" you would already be finished.
posted by lupus_yonderboy at 3:17 PM on March 12, 2016 [1 favorite]


There isn't much demand for computers to think like people. People who think like people aren't in short supply or particularly expensive.

There is great demand for computers to carry out specific tasks which (used to) require specific high-level cognition. Driving / piloting vehicles. Performing surgery. Sowing, fertilizing, weeding and harvesting a field. Eliminating an enemy regiment. What have you.
posted by MattD at 3:24 PM on March 12, 2016 [5 favorites]


we can make simple machines at least as smart as a cockroach by some sort of automatic training in a virtual environment

And animals very much like cockroaches evolved into us, and their nervous systems remain similar enough to ours that we study them to understand our own.

I think the problem with understanding that humans are driven by a cascade of feelings, starting with and most powerfully driven by simple ones, is similar to the problem of convincing someone who has trouble with the concept that cars are propelled by fire. It's obviously a ridiculous concept because cars don't have any obvious visible fire in them, and they are really complicated in ways that fires aren't. Sure you can point to steam engines and say okay, maybe those are propelled by fire but surely you aren't suggesting that what goes on in a modern sports car is anything like that.

We are the highly engineered sports cars of the thinking world but our basic workings were arranged by trilobites and earthworms. We are different in scale by a wide margin but not necessarily in kind.

(Before making smart electric car comment remember where most electricity still comes from :-)
posted by Bringer Tom at 3:25 PM on March 12, 2016 [6 favorites]


Regarding the DeepMind Atari paper [slightly different non-paywalled version], while the work is very interesting (much more interesting than the Go result, imho), the claim that it "ended up doing much better than any human player can" is not true. Have a look at the results summary: on a few games, like pinball, where it can really exploit the perfect reflexes of a computer, it does quite well. But on many games (including Asteroids and Ms. Pac-Man) it performs atrociously. On Montezuma's Revenge (video of a person playing it if you aren't familiar) the deep learner doesn't score any points at all.

Also note that in the graph the gray bars are how well the "best linear learner" does (i.e., the best that a really dumb learning algorithm does) -- on many of the games where the deep network hugely outperforms humans, the dumb learning algorithm also hugely outperforms humans. This suggests that in these cases the learners are exploiting relatively simple strategies that are (because of timing, reflexes, etc.) hard for humans to execute but easy for a computer.

In any case, the phoenix-like rise of deep learning after most everyone had given up on neural networks is exciting, but let's not get carried away with hyperbole.
posted by Pyry at 3:26 PM on March 12, 2016 [5 favorites]


You guys like to think about the non-Go implications of AlphaGo, but what you should really do is spend a few years thinking about Go and then think about the Go implications of AlphaGo.

I feel like the aliens in Embassytown when they listened to the new Ambassador speak. Every day all I can think of is watching AlphaGo play another game in the evening. It's the Hand of God, right out of Hikaru no Go. And I'm only amateur 3d! The professionals must be in ecstasy.
posted by value of information at 3:28 PM on March 12, 2016 [23 favorites]


I had an idea about the next domain Google should put its machine learning towards: American Football. Buy the Jaguars. Train a bot on a corpus of trade details, draft details, play details, and management details. Have EA make as realistic a sim as they can using the Madden engine so that it can train. Then have FootBot manage the team. I guarantee that it could do better than the current Jaguar front office and coaching staff.

I think the really hard part wouldn't be having it make decent strategic decisions, but rather having the players listen. A large part of coaching is getting your players to actually do the things you want them to - would a bot convey authority, or contempt? Probably contempt at first, until it proved success, and then everyone else would fall in line.

The more I think about this the more I want it to happen.
posted by codacorolla at 3:33 PM on March 12, 2016 [8 favorites]


Or another example: here's a (truly excellent) story about computer chess, published just before I was born.

Not only is the general mechanism of computer strategy explained, it also goes into many of the specific errors that would, in fact, plague computer programmers in the future, like the horizon effect.

You could probably take that story and build a pretty decent computer chess program today just from that and nothing else. (I have in fact written models of chess myself in at least two different programming languages at various times so I know it's possible.)

So we've known how to do this my whole life. We only finally got the technology recently to really kick it out of the park.

But how would we build a strong AI? How do we provide a useful objective function?

I mean, look at the huge trees that AlphaGo is examining - in most conversation and human interactions, there are no decision trees of that type at all - it's a completely different type of problem.


> And animals very much like cockroaches evolved into us,

I thought it would come to that at some point. :-)

Yes, that's absolutely true, but the effective computation power of an entire planet over a billion years, providing an incredibly detailed and reactive environment where literally trillions of creatures (perhaps many more, actually) interact and all have their own goals...

I mean, there are only 361 points on the Go board, and each one has only three possible states! "Build a model world and evolve our cockroaches into people" makes Go look like a board game.

(More - we don't know that intelligence is the necessary outcome of this fitness race at all. It might be most of the time it ends up being all slime, all the time.)
posted by lupus_yonderboy at 3:33 PM on March 12, 2016 [5 favorites]


Just as an afterthought, it just really amuses me that from the perspective of machine learning, Montezuma's Revenge is a harder game than Go.
posted by Pyry at 3:35 PM on March 12, 2016 [5 favorites]


I blame it on the guy rotoscoping his younger brother for a game.

Jordan and Michael?
posted by zippy at 3:38 PM on March 12, 2016 [1 favorite]


I feel you don't understand the problem. It has to be a numerical function that increases as intelligence gets better. If you could "program" that "emotional charge" you would already be finished.

I think that's overestimating the required complexity a bit. If I understand Bringer Tom correctly, the point is that emotions are *not* particularly complex, which is why we spend so much conscious effort trying to control them--trying to second-guess them intellectually, because the response that would produce the optimum emotional response isn't necessarily in line with our long-term goals.

So you can give an AI a few different emotion boxes, none of which has to be very complex at all. For example maybe one just likes hearing the AI's name in its audio input. Another likes daytime. A third likes it when humans press a red button and hates it when they press a blue one. The overall objective function is some averaged, smoothed-out combination of all these. The complex behavior will build itself up around the simple inputs. Which is a very Braitenbergian approach I guess.
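Something like this throwaway sketch, where every box and weight is invented: each box is a trivial function of the raw input, and the only global objective is a smoothed, weighted blend of them.

    # Invented "emotion boxes": trivial functions of the input, blended into one objective.

    def likes_own_name(inputs):           # positive when the AI's name was heard
        return 1.0 if "alpha" in inputs["audio_words"] else 0.0

    def likes_daytime(inputs):            # positive in proportion to light level
        return inputs["light_level"]      # 0.0 (dark) .. 1.0 (bright)

    def button_feelings(inputs):          # +1 for the red button, -1 for the blue
        return {"red": 1.0, "blue": -1.0}.get(inputs["button"], 0.0)

    BOXES = [(likes_own_name, 0.5), (likes_daytime, 0.2), (button_feelings, 0.3)]

    def objective(inputs, previous, smoothing=0.9):
        # weighted blend of the boxes, smoothed over time
        raw = sum(weight * box(inputs) for box, weight in BOXES)
        return smoothing * previous + (1 - smoothing) * raw

    score = 0.0
    for inputs in [
        {"audio_words": ["hello", "alpha"], "light_level": 0.8, "button": "red"},
        {"audio_words": [], "light_level": 0.1, "button": "blue"},
    ]:
        score = objective(inputs, score)
        print(round(score, 3))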
posted by equalpants at 3:39 PM on March 12, 2016 [3 favorites]


Yes, that's absolutely true, but the effective computation power of an entire planet over a billion years, providing an incredibly detailed and reactive environment where literally trillions of creatures (perhaps many more, actually) interact and all have their own goals...

It took four billion years for the earth to evolve humans and a rudimentary apple. It took a few thousand years for humans to artificially select a really good pink lady. It took about a year to genetically construct an apple that looks like Mickey Mouse.

Lest we forget, natural selection is not intelligent design. I assume the people saying deep AI will never happen are only doing so because they gave up making the same argument about self-driving cars?
posted by one_bean at 3:41 PM on March 12, 2016 [2 favorites]


# sudo build me a jetpack
posted by I-Write-Essays at 3:44 PM on March 12, 2016 [4 favorites]


I think that when/if human-comparable AI emerges, it will do so organically rather than out of an undertaking whose sole purpose is to create it. If I had to place a bet on what sort of software would be the first to begin demonstrating the capabilities of a hard AI, I'd say it's likely to be something like what manages Amazon or Google or Microsoft or Apple's big data centers at the highest levels. I think the first hard AI will emerge out of a system that is full of a hugely diverse set of feedback loops. These systems start small and relatively humble and as they grow, new capabilities are being constantly added. At first it's all low level and boring stuff. It can spin up new servers, it can migrate data, it can take a snapshot, things like that. Then a barrier is hit and something more is necessary, so you start to take cues from machine learning and add a feature that watches the traffic on your load balancers and spins up new servers when it peaks. Eventually, this isn't enough either, and you add a classifier which monitors various data feeds and gauges patterns of popularity over time so that you can anticipate load spikes and be ready for them before they actually start. Now, realize that the teams building these systems number in the thousands of developers across multiple companies, each of them seeking to adapt machine learning techniques in novel ways, taking advantage of new techniques as they emerge. Some of these organizations have parallel teams seeking out new machine learning techniques or applications. If you play this forward throughout all of the variously applicable domains (these ideas can be used to manage just about anything that's quantifiable) I think eventually we end up with complex systems managed by extremely complicated, feedback-loop-based software agents, and these agents will become the ancestors of whatever "real AI" emerges in the future.
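The humble bottom rung of that ladder is already everywhere. A made-up sketch of the kind of rule I mean, with no particular cloud API implied: predict near-future load from recent traffic, then scale the server count toward the prediction.

    # Invented autoscaling rule: a naive load predictor driving a scaling decision.
    import math

    def predict_next_load(recent_requests_per_sec, trend_weight=0.5):
        # last observed value plus a fraction of the recent trend
        last, prev = recent_requests_per_sec[-1], recent_requests_per_sec[-2]
        return last + trend_weight * (last - prev)

    def desired_servers(predicted_load, capacity_per_server=100, headroom=1.2):
        # scale toward the prediction with some safety margin, never below one server
        return max(1, math.ceil(predicted_load * headroom / capacity_per_server))

    traffic = [220, 260, 330, 450]                  # requests/sec over recent intervals
    prediction = predict_next_load(traffic)
    print(prediction, desired_servers(prediction))  # 510.0 7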
posted by feloniousmonk at 3:44 PM on March 12, 2016 [3 favorites]


Every day all I can think of is watching AlphaGo play another game in the evening. It's the Hand of God, right out of Hikaru no Go. And I'm only amateur 3d! The professionals must be in ecstasy.

In a few years, every human who wants to learn go will have a >9 dan pro tutor. It used to be you had to live as an adolescent with your teacher for years, give up your family and social life ... just to get the best instruction.

It will be amazing what the next crop of 9 dans is like. Their play may be entirely novel.
posted by zippy at 3:44 PM on March 12, 2016 [6 favorites]


I woke up at 4am and couldn't get back to sleep, so I ended up watching the match. I barely know the rules myself, but Michael Redmond did a solid job of explaining what was going on and why it was interesting for a layman. (Though sometimes when he got deep into speculating about something long after the match had moved on I was internally yelling at the screen.) Worth a watch for the highlights, especially some of the moments where they're just speechless. I felt for Lee at the end, he looked really quite emotional.

> I'm talking about strong AI - human-level intelligence - being able to post to Metafilter and make arguments like this one. What's the objective function there?

How much you've managed to modify your interlocutor's knowledge base to bring it in line with your own?
posted by lucidium at 3:45 PM on March 12, 2016 [1 favorite]


It has to be a numerical function that increases as intelligence gets better.

Feelings are the numerical functions that increase as intelligence gets better -- better, that is, at optimizing our position in the natural world to satisfy the urges that drive those feelings. There is no driver for "be more intelligent." That's like saying the solution to the problem of the computer taking 23 milliseconds too long to solve a problem is to look for where the number 23 is coded in its memory and change it.

Ask yourself not how we work, which is like a cave man asking himself how a sports car works, but ask what nature and evolution have optimized us to do. First we have to reproduce and avoid being eaten by predators. Then we can move up the Maslow hierarchy. Each stage of the Maslow hierarchy is driven by a characteristic set of feelings.

Our feelings are much more complex than those of a cockroach but other than pouring more clusters of pattern-recognizing nerve cells into the satisfaction loop there's no reason to insist we're much different from the cockroach in how we go about reacting to them. We have only just now gotten about to the level of making a believable electronic cockroach. It took the natural world three billion years to get that far -- and then the Cambrian Explosion happened.
posted by Bringer Tom at 3:46 PM on March 12, 2016 [1 favorite]


It took about a year to genetically construct an apple that looks like Mickey Mouse.

It took a year to make some cosmetic alterations to an already existing apple. How long do you think it would take us to genetically engineer an apple wholly from scratch? Because that's the scenario we're in with machine intelligence: we're starting from nothing but the barest building blocks.
posted by Pyry at 3:49 PM on March 12, 2016 [1 favorite]


If I had to place a bet on what sort of software would be the first to begin demonstrating the capabilities of a hard AI, I'd say it's likely to be something like what manages Amazon or Google or Microsoft or Apple's big data centers at the highest levels.

Personally my bet is on CVS Caremark's prescription system. That thing already seems to have its own opinions about when refills and/or reminders are needed, and the pharmacists seem to have resigned themselves to being unable to control it. I think it qualifies as "emergent" already...
posted by equalpants at 3:51 PM on March 12, 2016 [12 favorites]


What a glorious way to end the week. Best news in a long time. Fingers crossed that Peter Watts [1,2] is right, and that consciousness is a red herring unnecessary for high functioning intelligence. The future may still be unhuman.
posted by bouvin at 3:51 PM on March 12, 2016 [2 favorites]


I'm talking about strong AI - human-level intelligence - being able to post to Metafilter and make arguments like this one.

What's the objective function there?


Favorites.
posted by Jonathan Livengood at 3:53 PM on March 12, 2016 [27 favorites]


There isn't much demand for computers to think like people. People who think like people aren't in short supply or particularly expensive.

Expensive is relative. In a world where an AI can do a job for free, a person being paid $0.01 an hour is too expensive to hire. In the short term (this century or so) that's the problem that's actually going to emerge from this. Not Skynet.

As for worrying about being beaten by machines at games - I wasn't impressed by a computer that can beat me at chess 100% of the time, and I'm not impressed by one that can beat me at Go 100% of the time. Find me one that can beat me at rock paper scissors 100% of the time, and I'll be impressed.
posted by AdamCSnider at 3:54 PM on March 12, 2016 [1 favorite]


Janken (rock-paper-scissors) Robot with 100% winning rate

Admittedly, this is cheating.
posted by value of information at 3:56 PM on March 12, 2016 [8 favorites]


strong AI - human-level intelligence - being able to post to Metafilter and make arguments like this one.

uhhhhhh so guys there's this thing I've been meaning to confess to you all for a while...
posted by You Can't Tip a Buick at 3:58 PM on March 12, 2016 [2 favorites]


> So you can give an AI a few different emotion boxes, none of which has to be very complex at all. For example maybe one just likes hearing the AI's name in its audio input. Another likes daytime. [...] The overall objective function is some averaged, smoothed-out combination of all these. [...] The complex behavior will build itself up around the simple inputs.


How many of these emotion boxes would you need to have a good Metafilter poster?

In this plan, you have to write all of these rules yourself. Each rule now becomes an independent variable, because some of these will be useless and some even counterproductive. Training models with even a few dozen variables is extremely hard, although you aren't to know that, and beyond a fairly small number it becomes literally impossible.

What about rules about the color blue? Chopin? Vancouver? JJ Abrams? Are you going to write separate rules for each concept that humans have? Or is the machine going to create its own rules? How will that go? There's no possibility in your system for this.

In particular, the final line: "The complex behavior will build itself up around the simple inputs" is basically saying, "Magic".

You have a machine that registers a positive value when it hears its name, knows daylight, likes Chopin, knows where Vancouver is - so how does this translate into Strong AI? How does this "complex behavior build up"?

There's no force pulling it toward strong AI at all. You have a huge number of basically random functions that you're trying to optimize - you'd get nothing.
posted by lupus_yonderboy at 3:58 PM on March 12, 2016 [1 favorite]


Personally my bet is on CVS Caremark's prescription system.

Yeah, that's a great one. It's going to be something big and widespread like that, with lots of data input and a requirement to make nearly continuous decisions based on it.
posted by feloniousmonk at 3:59 PM on March 12, 2016 [1 favorite]


MetaFilter: I'm sure that was of great comfort to Uranus.

I'm so, so sorry.
posted by Mr. Bad Example at 4:05 PM on March 12, 2016 [7 favorites]


The reason it all works is because there's a simple, objective measure of quality - that is, winning the game (and of course there are heuristic measurements along the way like "counting").

So I've been lurking on the Computer Go mailing list for a few months (I don't play go but I've toyed with the idea of writing a go player), and some of the commentary there has been pretty interesting. It sounds like the AlphaGo team ditched the idea of using scoring or any kind of heuristic measurement at all, using only the objective function of "how much will this move contribute to ultimate victory?" as the test of goodness for a given move.

One problem with this is that it's hard to tell exactly how strong AlphaGo is. It may be making moves that contribute defensively, ensuring it doesn't lose, instead of actually trying to beat its opponent with a maximal score.
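To make that distinction concrete, here is a toy comparison with invented numbers: a program maximizing win probability will take a safe two-point win over a risky twenty-point one, while a margin-maximizer flips that preference.

    # Invented numbers: "maximize win probability" vs "maximize expected margin".
    candidates = {
        # move: (win probability, margin if it wins, margin if it loses)
        "safe_consolidating_move": (0.95,  2.0,  -5.0),
        "aggressive_killing_move": (0.70, 20.0, -15.0),
    }

    def win_probability(move):
        p, _, _ = candidates[move]
        return p

    def expected_margin(move):
        p, win_by, lose_by = candidates[move]
        return p * win_by + (1 - p) * lose_by

    print(max(candidates, key=win_probability))    # safe_consolidating_move
    print(max(candidates, key=expected_margin))    # aggressive_killing_move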

In the last day, someone on the list suggested giving handicaps to the human players as a way of measuring how much better AlphaGo is. This would truly test its ability to maximize its objective function of "not losing". This may not happen immediately, due to cultural reasons, but it's pretty amazing and humbling to see how quickly the discussion has shifted.
posted by A dead Quaker at 4:08 PM on March 12, 2016 [1 favorite]


So, let's say we become really good at building self-optimizing optimization algorithms on defined game spaces. At that point, the thing that separates that from General Purpose AI is the ability to discover the game spaces on its own. Identifying the correct model for the game space in which you want to learn strategies is itself just a higher order optimization problem, where the linear measure is maximizing the game space's facility for being optimized. Likewise, knowing which learned game space to apply to any particular set of sensory inputs is another optimization problem, where the measure is how successful each gamespace is.

Conceptually, this is a problem that could be solved by a very deep neural network, and the main barrier is the computing power necessary to proceed through the many layers of training required. It takes a long time to just train a single model, so imagine the computing power involved where all the work we put into training a single model is just a single data point in the higher order game space optimization.

Therefore, I disagree that if we had infinite computing power we would still be at a loss for how to build general purpose AI using it.
posted by I-Write-Essays at 4:15 PM on March 12, 2016


No, your prescription system is not going to become intelligent by magic just because it's complicated. :-)

Many of you are all, "I have this great idea that all of computer science missed!"

It's conceivable. An alternate theory might be that it isn't that easy.

> It sounds like the AlphaGo team ditched the idea of using scoring or any kind of heuristic measurement at all, using only the objective function of "how much will this move contribute to ultimate victory?" as the test of goodness for a given move.

There's a slight technical misunderstanding in there. An "objective" function is the technical name for "the function you're trying to maximize" - it says nothing about whether it's a heuristic or not.

AlphaGo is pretty smart, but no possible objective function can be anything other than a heuristic except in the very very few last moves. The only way to be completely sure that a given move is better than another is to play out all the possible succeeding games which is of course impossible.

Put another way - if you had a perfect objective function, you wouldn't need all those computers - you'd just pick the move with the highest value of the objective function!

It has to be heuristics nearly all the time (except in the last few moves) because the tree is so vast that exhaustively searching even the slightest area of it will always be computationally infeasible.
posted by lupus_yonderboy at 4:16 PM on March 12, 2016 [7 favorites]
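
(A minimal sketch of the heuristic-versus-exact point above, using a toy take-1-to-3-stones, last-stone-wins game instead of Go; the function names and the crude evaluator are made up for illustration. The "objective function" is simply whatever the search maximizes: it can be exact only when the remaining tree is small enough to play out, and has to be a guess everywhere else.)

    def negamax(stones, depth, evaluate):
        """Value of the position for the player to move."""
        if stones == 0:
            return -1.0                      # previous player took the last stone: we lost
        if depth == 0:
            return evaluate(stones)          # heuristic cutoff: an educated guess
        return max(-negamax(stones - take, depth - 1, evaluate)
                   for take in (1, 2, 3) if take <= stones)

    def crude_guess(stones):
        return 0.0                           # pretend we know nothing: "roughly even"

    print(negamax(12, depth=99, evaluate=crude_guess))   # exact: -1.0, a lost position
    print(negamax(12, depth=2, evaluate=crude_guess))    # shallow: just echoes the guess

On a 19x19 board the exhaustive version is hopeless until the endgame, which is why the evaluation has to be a trained heuristic.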


What about rules about the color blue? Chopin? Vancouver? JJ Abrams? Are you going to write separate rules for each concept that humans have? Or is the machine going to create its own rules? How will that go? There's no possibility in your system for this.

No. The whole point of the Braitenberg approach is that the inputs do *not* understand concepts as complex as "Chopin", "Vancouver", "JJ Abrams". Instead, from the outside, it appears to the observer that the entire system somehow understands "Chopin". The neural net has been trained and it seems to recognize "Chopin", but its objective function *does not*.

If we wanted to train a network to recognize Chopin then we'd provide an objective function that's positive when the input is Chopin. But suppose instead we provide an objective function that's built out of seemingly arbitrary decisions--for example, some positive weight when a certain percentage of the notes are E-flat, negative when the piece has quiet sections in its first half, positive when there's exactly one trumpet player, etc. The network will presumably wind up preferring some composers over others. But we didn't really train it to recognize those composers in any meaningful sense--we picked some random criteria, and we didn't know which composers would ultimately be selected.

If you showed that network's output to someone who didn't know how it was trained, they might assume that we had given it an objective function that involved particular composers. But we didn't, it only looks that way from the outside.

In other words, why would an AI need objective functions that understand high-level concepts? We humans don't have such functions. We only have nerves that feed in electrical signals based on simple inputs like amount of light of a certain frequency, etc. And we have the physical hardware that arranges those sensors in a particular configuration, determined by evolution.
posted by equalpants at 4:26 PM on March 12, 2016 [7 favorites]
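
(A toy version of the arbitrary-objective idea above; the feature weights, pieces, and composers are all invented. The scoring function only ever sees low-level features like E-flat counts and trumpet counts; the composer labels exist only in our heads, outside the system.)

    def arbitrary_objective(piece):
        """Score a piece from low-level features only; no notion of 'composer'."""
        score = 0.0
        score += 2.0 * piece["fraction_e_flat"]      # reward lots of E-flats
        score -= 1.5 * piece["quiet_first_half"]     # punish quiet openings
        score += 1.0 * (piece["trumpets"] == 1)      # reward exactly one trumpet
        return score

    library = {
        "piece we happen to know is by composer X": {"fraction_e_flat": 0.30, "quiet_first_half": 1, "trumpets": 0},
        "piece we happen to know is by composer Y": {"fraction_e_flat": 0.10, "quiet_first_half": 0, "trumpets": 1},
    }

    for name, features in library.items():
        print(name, arbitrary_objective(features))

From the outside it looks like the scorer "prefers composer Y"; internally nothing but note counts, dynamics, and instrumentation was ever measured.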


To be clear, I am not suggesting that AWS or whatever is going to "wake up" one day as a sentient AI, but rather that the constant implementation, improvement, and adaptation of existing ML techniques in novel contexts is going to be the impetus for AI advances in a way that the massive top-down approaches haven't. I would consider Watson and AlphaGo as evidence of this.
posted by feloniousmonk at 4:28 PM on March 12, 2016


Yes, of course the CVS thing is tongue-in-cheek :).
posted by equalpants at 4:29 PM on March 12, 2016 [1 favorite]


Whoops, should be a bit in there about also developing new ML approaches while churning on the existing ones.
posted by feloniousmonk at 4:29 PM on March 12, 2016


In other words, why would an AI need objective functions that understand high-level concepts? We humans don't have such functions. We only have nerves that feed in electrical signals based on simple inputs like amount of light of a certain frequency, etc. And we have the physical hardware that arranges those sensors in a particular configuration, determined by evolution.

Well I guess this isn't very clearly stated actually, it's "moving" the objective function compared to earlier discussion. One could say that our only objective function is death. Or one could say that our objective function is whatever we intellectually perceive as a "black box"--I don't like the feeling of pain, and it's opaque to me how the low-level electrical signals are converted into the feeling of "pain". Different models I guess, it just depends on where you want to model feedback from the environment.
posted by equalpants at 4:43 PM on March 12, 2016


AlphaGo is absolutely programmed to play Go. It is designed from the ground up to play games, and only games of a certain form. Its inputs and outputs only permit playing Go. If somehow it was hooked up to a keyboard and text stream it would not teach itself to become a chatbot without human reprogramming and a new human-supplied training regime. Many of the basic algorithms underlying it have general purpose in the same way that linear regression has general purpose, but it is not a general AI in the sense that say, a golden retriever is. It plays go, and would wait essentially for eternity without boredom or anxiety if its opponent didn't move (or got up and left). The technology underlying it would be useful in general AI, but is only a part of it.
posted by Humanzee at 4:45 PM on March 12, 2016 [5 favorites]


> If we wanted to train a network to recognize Chopin

It'd probably take me... two weeks full-time to write such a program (given a corpus of sheet music). But this has little to do with the problem of strong AI.

How do you get from a machine that recognizes Chopin, a sunny day and Edinburgh to a machine that can post to Metafilter?


> Conceptually, this is a problem that could be solved by a very deep neural network, and the main barrier is the computing power necessary to proceed through the many layers of training required.

I refuted that above already.

What data are you training the neural network on? How do you make it converge to a solution?

You can't just turn on a "very deep neural network", feed it the world's newspapers and come back in a year to have it tell you it can't open the pod bay doors!

---

All of the explanations above are basically, "I will throw these computational things into a box and eventually AI will spring out". Like cargo cult airplanes, they are missing a motivating force, a way to guide the neural net or machine learning system in the right direction.

Let's put it another way. I'm amazed by AlphaGo - I thought it was years away, conceivably never.

But if in 2001 you had told me or probably a hundred thousand other computer professionals like me that a computer program would be the strongest Go player in the world in 2016 and asked us to sketch how it would be done, we would have told you a story very much like the one we read above. As I pointed out, Fritz Leiber wrote down most of the details in 1962!

Compare and contrast to the state of strong AI today where I can't name one computer professional who has any sort of solid proposal as to how to move forward.

I'm not saying it won't happen. I am saying that there will need to be major conceptual breakthroughs for this to happen. You won't just throw a neural network on top of a lot of synthetic emotions, grind for a bit and get strong AI.

Also - as an engineer myself I have a suspicion that few of you are talking about programs that you intend to write...

Talk is cheap! :-) Go off and experiment - training a small neural network is pretty damn fast on a modern machine, and there are GPU tricks. Show us some results, even a solid approach and a bit of code, and people will be much more interested.
posted by lupus_yonderboy at 4:52 PM on March 12, 2016 [9 favorites]


> I can't name one computer professional who has any sort of solid proposal as to how to move forward.

This is a gross misrepresentation on my part, as there are of course tons of people "moving forward".

I would say rather that there is no roadmap that leads to strong AI, or even a hint of one; and that while there's a lot of worthy work, there hasn't yet been any sort of real breakthrough in the field since the first exploratory work in the 70s.
posted by lupus_yonderboy at 5:02 PM on March 12, 2016


Jordan and Michael?

Wow! I owe you some dark meat.
posted by Mr. Yuck at 5:04 PM on March 12, 2016 [3 favorites]


All of the explanations above are basically, "I will throw these computational things into a box and eventually AI will spring out". Like cargo cult airplanes, they are missing a motivating force, a way to guide the neural net or machine learning system in the right direction.

I think this is a little uncharitable. The explanations are saying that there doesn't have to be a "right" direction. After all, what's the "right direction" for evolution? What's the motivating force that guided us into developing intelligence? There isn't one. The universe doesn't care whether we survive or die out. But we developed intelligence anyway. As we evolved, we threw more computational hardware at the problem of survival--our brains got bigger, more complicated. Eventually, intelligence sprang out. Evolution constructed the right hardware through trial and error.

Agreed that there isn't any roadmap, and AI is a long long way off. But I don't see any conceptual breakthrough that's needed. We understand the concepts that led to *our* evolution of intelligence, and those have already worked once. I think it's reasonable to believe that, like evolution's problem in producing us, our problem now in producing strong AI is just getting the hardware right.
posted by equalpants at 5:09 PM on March 12, 2016 [2 favorites]




I think it's reasonable to believe that, like evolution's problem in producing us, our problem now in producing strong AI is just getting the hardware right.

one possible way to 'bootstrap' AGI!* :P
posted by kliuless at 5:14 PM on March 12, 2016


I don't really see this as Humans being defeated: it was humans who programmed the computer. I see this as one kind of human endeavor defeating another kind.

I'll try to remember this when I am explaining myself to Roko's Basilisk.
posted by theorique at 5:26 PM on March 12, 2016 [6 favorites]


Idea: a Greasemonkey plugin for MetaFilter that uses a classifier to derive for each thread and commenter a (Thread Topic, Commenter Occupation) tuple, and then for comments where that tuple matches (/(AI|ML|Cognitive Science)/, "Programmer"), wrap the comment in quotes and append ", said the programmer." at the end.
posted by invitapriore at 5:26 PM on March 12, 2016 [1 favorite]


Or in other words, the "magic" objection applies to humans too.

"Let me get this straight, you're going to build a whole bunch of these organic cells that process electrical impulses, and you're going to throw them together in this mostly random soup, with maybe a few more structured areas. Then you're going to hook them up to some simple sensors based on pressure, frequency of light, vibrations of little hairs, etc. Then you'll put it in this organic body with some locomotion hardware. And you have this pump system to supply oxygen for the electrochemical stuff, and another system to run the whole thing off sugary fuel. Now you throw all that together, and toss it out into the world, and you expect intelligence to spring out of it?"
posted by equalpants at 5:28 PM on March 12, 2016 [6 favorites]


So I've been lurking on the Computer Go mailing list for a few months (I don't play go but I've toyed with the idea of writing a go player), and some of the commentary there has been pretty interesting. It sounds like the AlphaGo team ditched the idea of using scoring or any kind of heuristic measurement at all, using only the objective function of "how much will this move contribute to ultimate victory?" as the test of goodness for a given move.


I haven't got around to looking for any real technical stuff about AlphaGo, but the lay summary I saw seemed to be saying it's built around two core functions (powered by neural networks trained on a database of games) - one that generates moves and one that evaluates them - guiding some kind of Monte Carlo search process? So it may be playing games through to victory but I don't know if it's really appropriate to say there's no heuristic evaluation? But I don't know much about computer Go.
posted by atoxyl at 5:37 PM on March 12, 2016
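
(A rough sketch of that two-network design, with toy stand-ins for the trained parts; the names policy_net and value_net are mine, the PUCT-style selection rule is simplified from the published one, and the "game" is a take-1-to-3-stones toy rather than Go. The policy net supplies priors over moves, the value net scores new leaf positions, and the search decides where to look next using both.)

    import math
    import random

    def legal_moves(stones):
        return [t for t in (1, 2, 3) if t <= stones]

    def policy_net(stones):
        # Stand-in for a trained policy network: uniform priors over legal moves.
        moves = legal_moves(stones)
        return {m: 1.0 / len(moves) for m in moves}

    def value_net(stones):
        # Stand-in for a trained value network: a noisy guess at win probability.
        return random.uniform(0.4, 0.6)

    class Node:
        def __init__(self, prior):
            self.prior, self.visits, self.value_sum = prior, 0, 0.0
            self.children = {}
        def q(self):
            return self.value_sum / self.visits if self.visits else 0.0

    def puct_select(node):
        total = math.sqrt(sum(c.visits for c in node.children.values()) + 1)
        return max(node.children.items(),
                   key=lambda kv: kv[1].q() + 1.5 * kv[1].prior * total / (1 + kv[1].visits))

    def simulate(stones, node):
        """One playout: descend the tree, expand a leaf, back the value up."""
        if stones == 0:
            return 0.0                               # player to move has already lost
        if not node.children:                        # expand: ask the policy net for priors
            for move, prior in policy_net(stones).items():
                node.children[move] = Node(prior)
            return value_net(stones)                 # score the new leaf with the value net
        move, child = puct_select(node)
        value = 1.0 - simulate(stones - move, child) # the value flips between players
        child.visits += 1
        child.value_sum += value
        return value

    root = Node(prior=1.0)
    for _ in range(2000):
        simulate(10, root)
    print({move: child.visits for move, child in root.children.items()})

The most-visited move at the root is the one the search prefers; swap in real networks and a real board representation and you're in the neighborhood of the lay summary above.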


But back to Go--I'm with value of information, it's really amazing and exciting that these programs are getting to the point where from our perspective, they appear to have these flashes of totally alien insight. I wish I understood Go well enough to appreciate this.
posted by equalpants at 5:44 PM on March 12, 2016 [1 favorite]


If we wanted to train a network to recognize Chopin

I have a bioinformatician friend who, with colleagues, has written a computer...thing...that can identify the author of rap songs based on the lyrics. They have even accurately identified the ghost writers on some songs attributed to other rappers. His brain is so brainy that this is essentially all I understand about his work, which has to do with...intelligence...and evolution?—and something. I dunno. But "guess the rapper" is cool. And important for reasons I don't understand having to do with pattern recognition.
posted by not that girl at 6:02 PM on March 12, 2016 [1 favorite]


One thing that I see as very important but hasn't been mentioned yet in the comments is in the very end of the Wired article. The AI isn't just improving as it plays humans, it is improving how the humans play as well. Teaching them to play better and to see things in a way they hadn't before. Of Fan Hui the article says,
As he played match after match with AlphaGo over the past five months, he watched the machine improve. But he also watched himself improve. The experience has, quite literally, changed the way he views the game. When he first played the Google machine, he was ranked 633rd in the world. Now, he is up into the 300s. In the months since October, AlphaGo has taught him, a human, to be a better player. He sees things he didn’t see before. And that makes him happy. “So beautiful,” he says. “So beautiful.”
I think that's important. Creating better AI doesn't just make the machines "think" better, it makes us think better too.
posted by Justinian at 6:09 PM on March 12, 2016 [15 favorites]


Idea: a Greasemonkey plugin for MetaFilter that uses a classifier to derive for each thread and commenter a (Thread Topic, Commenter Occupation) tuple, and then for comments where that tuple matches (/(AI|ML|Cognitive Science)/, "Programmer"), wrap the comment in quotes and append ", said the programmer." at the end.

Internet is not good at sarcasm. I assume you're asking me to shut up, but I'm only 95% sure, please confirm. Also do you mean just me, or me + lupus both? And would you care to provide your own occupation, so I know which threads your comments are welcome in?
posted by equalpants at 6:10 PM on March 12, 2016


the point is that emotions are *not* particularly complex

Guys, I come here when Hacker News comments have depressed the shit out of me, not to see the same ridiculous crap repeated.
posted by yerfatma at 6:19 PM on March 12, 2016 [6 favorites]


I can't tell you how much I "like" Stanislaw Lem's best story about AI
(so much I took its name as my MeFiName anyways)
posted by Golem XIV at 6:22 PM on March 12, 2016 [5 favorites]


Google AlphaGo 'can’t beat me' says China Go grandmaster

"Xinhua said Google Deepmind’s CEO Demis Hassabis is willing for Mr Ke to lined up as AlphaGo’s next opponent.

"But another Chinese media outlet said Mr Ke had earlier said he was not interested in facing off against the programme in the complex strategy game because he did not want it to copy his own world-beating tactics."


Uh-huh. Whatevs dude. ::rolleyes::
posted by rifflesby at 6:32 PM on March 12, 2016 [3 favorites]


One thing that I see as very important but hasn't been mentioned yet in the comments is in the very end of the Wired article. The AI isn't just improving as it plays humans, it is improving how the humans play as well. Teaching them to play better and to see things in a way they hadn't before.

Although this is probably true (as demonstrated in other games) note that the anecdote from Wired is just a bunch of hot air. Fan Hui's "improvement" since the October match has been in the form of one good tournament on top of high rating uncertainty, which isn't much evidence.
posted by value of information at 6:56 PM on March 12, 2016


Guys, I come here when Hacker News comments have depressed the shit out of me, not to see the same ridiculous crap repeated.

I think I worded this poorly, as Hacker News depresses me too. What I meant is that emotions frequently get triggered for simple reasons, childish ones even. And the complexity in those cases is at a higher level, reacting to them, managing them, understanding them. But you're right, I take it back, it was a bad analogy. Only sometimes are emotions knee-jerk like that.
posted by equalpants at 7:06 PM on March 12, 2016 [1 favorite]


The AI isn't just improving as it plays humans, it is improving how the humans play as well. Teaching them to play better and to see things in a way they hadn't before.

You didn't click through to the TD-Gammon article, did you? This happened in backgammon 25 years ago.
posted by asterix at 7:11 PM on March 12, 2016


a computer...thing...that can identify the author of rap songs based on the lyrics

Just an educated guess, but that sounds like a computational stylistics thing, which is at least within shouting distance of the kind of thing I do at the moment. (All of the people I try to identify have been dead for four hundred years, so it's not as if I can go ask them if I'm right.)

If that's the sort of thing your friend has done, then the general idea is to extract a feature or set of features from your text--uncommon word choice, recurring phrases, distribution of function words like prepositions, and so on--characteristic of a particular author (or rap artist). It's a little complicated, and there's no One True Way to do it (yet), but to my poor brain it's a lot easier than programming a Gobot.
posted by Mr. Bad Example at 7:11 PM on March 12, 2016 [2 favorites]
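
(A bare-bones sketch of that function-word idea: represent each text by the relative frequencies of a handful of function words, then attribute an unknown text to the nearest known profile. The snippets and "authors" below are invented, and real stylometry uses far more features and far more text.)

    from collections import Counter

    FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "is"]

    def profile(text):
        words = text.lower().split()
        counts = Counter(w for w in words if w in FUNCTION_WORDS)
        total = sum(counts.values()) or 1
        return [counts[w] / total for w in FUNCTION_WORDS]

    def distance(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))

    known = {
        "Author A": profile("the cat sat in the hall and the dog lay in a corner of the house"),
        "Author B": profile("to be or not to be that is the question that troubles a mind"),
    }

    def attribute(text):
        unknown = profile(text)
        return min(known, key=lambda author: distance(unknown, known[author]))

    print(attribute("that is the way of the world is it not"))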


"But another Chinese media outlet said Mr Ke had earlier said he was not interested in facing off against the programme in the complex strategy game because he did not want it to copy his own world-beating tactics."

In other news, Usain Bolt will never win a race against me.
posted by lucidium at 7:12 PM on March 12, 2016 [3 favorites]


(Okay, still a bit of a tangent, but...I wouldn't be surprised if your friend's rap-star-o-tron was an offshoot of his bioinformatics work. I actually used a Python library meant for bioinformatics in a thing involving sentence structures in early modern drama not too long ago. It's a weird, weird interdisciplinary world out there, I'm finding out.)
posted by Mr. Bad Example at 7:26 PM on March 12, 2016


There is one game an AI will never be able to beat us at every single time.

We'll always have Candyland.
posted by blue_and_bronze at 7:27 PM on March 12, 2016 [4 favorites]


I, for one, refuse to accept any so-called 'strong AI' as such until it can beat a 6 year-old at Calvinball.
posted by signal at 7:30 PM on March 12, 2016 [4 favorites]


Unlike humans, AlphaGo doesn’t try to maximize its advantage. Its only concern is its probability of winning.

The machine is content to win by half a point, as long as it is following the most certain path to success.

So when AlphaGo plays a slack looking move, we may regard it as a mistake, but perhaps it is more accurately viewed as a declaration of victory?



Creepy.
posted by gottabefunky at 7:52 PM on March 12, 2016 [1 favorite]




"But another Chinese media outlet said Mr Ke had earlier said he was not interested in facing off against the programme in the complex strategy game because he did not want it to copy his own world-beating tactics."

This is actually a great moment—Mr. Ke may not have phrased it this way, but his concern touches on questions about appropriation of intellectual commons. AlphaGo needed a training set of real human games, and its creators have exploited that open information and made it part of a proprietary system.

So if another agent has their own style and knowledge that they've taken a lifetime to cultivate, what would it mean regarding consent, etc., if a machine could be used to assimilate that valuable information through direct interaction? There could be some deep technology ethics at stake around the sharing of information. Imagine the scale on which computers of this power could be used: many categories of intellectual work could be made obsolete, which seems great, except for the question of who this benefits (answer: the companies), and how? A discussion may be needed.
posted by polymodus at 8:01 PM on March 12, 2016 [2 favorites]


if you had a perfect objective function, you wouldn't need all those computers - you'd just pick the move with the highest value of the objective function!

And yet this is what AlphaGo provides -- not a perfect objective function, but one that beats many Go programs. That is, just running the board through the convolutional neural net beats many Go programs. Yes. This is what their paper says, and that's the critical bit that makes it work -- otherwise it's mainly just a standard Monte Carlo tree search like so many other Go programs.

There are also some Go-specific heuristics fed into the neural net, so it's not 100% trained from basic principles. Still, a pretty impressive leap that shows we really know what we're doing these days with neural nets.
posted by RobotVoodooPower at 8:08 PM on March 12, 2016


You can't just turn on a "very deep neural network", feed it the world's newspapers and come back in a year to have it tell you it can't open the pod bay doors!

What I would try given very large computing power and lots of free time is to have ten thousand (at least) copies of a primitive artificial intelligence, each with a hundred times more power than AlphaGo, living and reproducing in a simulated world complex enough to make it advantageous to evolve some clever ideas. They would need to compete for resources. If they got too powerful, perhaps they would have to cope with environmental problems. Evolution of much simpler virtual creatures has in the past led to some interesting things. Such entities equipped with huge convolutional neural networks and a mechanism for evolution in a much more elaborate simulated world might get really interesting.

The amount of computer power required to run AlphaGo makes me think the approach it's using may not be feasible for larger problems like transferring knowledge of Go into other situations and tasks like understanding the deeper implications of what information can be found in newspapers, unless computing power continues to increase exponentially for an unexpectedly long time. But the techniques it's using have only been around a few years, so we'll see how they improve.
posted by sfenders at 8:18 PM on March 12, 2016


If all the fundamental work required for running a society can be automated, then humans will be free to pursue the activities that they wish.

We'll still need to allocate scarce resources. Robots don't inherently reduce our energy constraints.
posted by clew at 8:47 PM on March 12, 2016 [2 favorites]


If they're doing all the fundamental work, the robots will be allocating all the scarce resources too. We'll gratefully take what we're given.

I hope Lee Sedol wins at least one game. If not, will we have any idea how good AlphaGo could be?
posted by sfenders at 9:09 PM on March 12, 2016


Google AlphaGo 'can’t beat me' says China Go grandmaster

AKA I too would like to be paid tens of thousands of dollars to lose at a few games of Go.

At any rate I think there's still a massive gulf between strong general AI and deep learning nets or deep neural nets or what have you.

These new computer learning tools are like a 5-axis mill or a huge industrial hydroforming machine or one of those multi-story titanium presses or whatever. It lets you build things that were unbuildable before. But in the end it's just a really amazing tool.

Someday these tools may be used to build a general AI but in the meantime AlphaGo has no more concept of what's going on than an anvil.
posted by GuyZero at 9:25 PM on March 12, 2016 [1 favorite]


I hope Lee Sedol wins at least one game. If not, will we have any idea how good AlphaGo could be?
Give AlphaGo a handicap when playing against top-tier humans, though one of the articles mentions there's likely to be a reluctance to do that.
posted by russm at 9:29 PM on March 12, 2016 [2 favorites]


If not, will we have any idea how good AlphaGo could be?

Eventually they'll be able to build variant versions that can reliably beat one another, and then we can dial it back to a level where a human can beat it, and we'll have a ranked list of humans and AlphaGo variants.

I think the Nature paper already lists win rates of different configurations against one another. And I assume that they're putting the best one out there to play Sedol.
posted by GuyZero at 9:31 PM on March 12, 2016


Dunno, sfenders, we could have the Imperium setting the allocations and the robot eunuchs carrying it out.
posted by clew at 9:33 PM on March 12, 2016 [1 favorite]


("Eunuchs" as a historical reference to several technocratic middle layers, often admirably competent, only sometimes the real rulers.)
posted by clew at 9:35 PM on March 12, 2016




The two times I played Go, I made a dog's breakfast of it for obvious reasons. I don't understand the game, but I do understand the emotions of those who do understand the game, both positive and negative. What interesting times we live in.

The AI isn't just improving as it plays humans, it is improving how the humans play as well. Teaching them to play better and to see things in a way they hadn't before.

And because I really wanted to read that Wired article, I figured out -- without the aid of Google -- how to get around Wired's anti-ad-blocking screen. Victory: Human.

(Yes, I know it's not at all the same thing and that I'm doubtless late to this particular party, but I'm happy nonetheless.)
posted by bryon at 10:22 PM on March 12, 2016


Human-equivalent competence is a small and undistinguished region in possibility-space.

Perhaps, but an unanswered question in Go as in many other areas is how close that region is to optimal play. We have no way to know in advance. If it's sufficiently close, or if getting any closer is difficult by too many orders of magnitude, then the ultimate AI may not be quite so totally overwhelming and incomprehensible to the very best human players even if it is better than Lee Sedol, who is himself already astoundingly, incomprehensibly good compared to almost all the rest of us.
posted by sfenders at 10:39 PM on March 12, 2016


> Human-equivalent competence is a small and undistinguished region in possibility-space.

This is now my go-to universal excuse for whenever I fuck up.
posted by You Can't Tip a Buick at 10:48 PM on March 12, 2016 [13 favorites]


If Kim Myungwan is right, we may just now be finding out how AlphaGo plays when it's losing a game. Just like manyfaces on my underpowered laptop, apparently.
posted by sfenders at 11:11 PM on March 12, 2016


I wonder how impossible it would actually be to make a good metafilter poster. That's still not general AI, but the computer could aim to maximize favorites, and could be set up with some categorization of words a la existing corpus analysis algorithms and then let loose on the metafilter archives....

(I mean, the reddit Markov chain bots already do a decent job being better imitations of the grosser subs than the subs themselves, but imitating theredpill isn't exactly what I'd call mimicking human intelligence )
posted by Cozybee at 11:27 PM on March 12, 2016 [2 favorites]
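
(For scale, the Markov-chain bots mentioned above are roughly this complicated: build a table of which word follows which in a corpus, then ramble through it. The corpus below is a placeholder; pointing it at real comment archives is left as an exercise.)

    import random
    from collections import defaultdict

    corpus = "the machine plays go and the machine wins and the human plays go and the human learns"

    def build_chain(text):
        words = text.split()
        chain = defaultdict(list)
        for current_word, next_word in zip(words, words[1:]):
            chain[current_word].append(next_word)
        return chain

    def ramble(chain, start, length=12):
        word, out = start, [start]
        for _ in range(length):
            followers = chain.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)

    print(ramble(build_chain(corpus), "the"))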


Computer, this is a list of things that are parts of you. Everything else is not you. Computer, you are given the task of protecting parts of you from being damaged by anything that is not a part of you. There it is, the computer is "self-aware".
posted by tgyg at 11:48 PM on March 12, 2016


Perhaps, but an unanswered question in Go as in many other areas is how close that region is to optimal play. We have no way to know in advance. If it's sufficiently close, or if getting any closer is difficult by too many orders of magnitude, then the ultimate AI may not be quite so totally overwhelming and incomprehensible to the very best human players even if it is better than Lee Sedol, who is himself already astoundingly, incomprehensibly good compared to almost all the rest of us.

FWIW, Go professionals tend to have an intuition that the best humans are maybe three or four handicap stones worse than God, so that's pretty substantial.
posted by value of information at 12:10 AM on March 13, 2016


Lee Sedol takes Game 4!!!
posted by CarolynG at 12:51 AM on March 13, 2016 [1 favorite]


Apparently Sedol just won game four!
posted by kaibutsu at 12:52 AM on March 13, 2016 [1 favorite]


It might be most of the time it ends up being all slime, all the time.

Heh, now I'm imagining a perfect machine world, the planet completely paved with solar cells, each connected to a resistor.
posted by ryanrs at 1:07 AM on March 13, 2016


I am unbelievably tired of people anticipating and/or fearing an imminent rise of robotic AIs. The whole way in which these people are thinking about this is uninformed and dumb

well bless your heart
posted by Sebmojo at 1:45 AM on March 13, 2016 [2 favorites]


What I find unsettling is that with things networked as they are, an 'Atlas' robot could find 'AlphaGo' and, without physically displacing it at all, access its abilities.
Or a flock of drones could.
That is, it's as though we're making all these tools and leaving them lying around. And these are quite powerful tools.
There seems to be this idea that real, genuine Strong AI will be like a 'super human' and I suspect that's a short-sighted concern. The world-dominating paper-clip machine is the best example. And just because you can teach the 'solution' to that scenario doesn't mean it won't come about anyway - and maybe not intentionally. Look at plastics: we're about to start suffocating from them, yet fifty years ago, I expect, if you had suggested that one day the oceans would be so full that in places plastic would outnumber fish, no one would have believed you.
posted by From Bklyn at 5:16 AM on March 13, 2016 [1 favorite]


Lee Sedol takes Game 4!!!

HU-MANS!
HU-MANS!
HU-MANS!
posted by PenDevil at 5:31 AM on March 13, 2016 [5 favorites]


So when AlphaGo plays a slack looking move, we may regard it as a mistake, but perhaps it is more accurately viewed as a declaration of victory?

Every move in Go is a balance between claiming points and managing risk. Typically, making moves that claim many points involves playing loosely, or otherwise creating some weakness in your formations.

If the AI plays a slack-looking move, it's not necessary to conclude that it thinks it's so far ahead it doesn't matter, it's enough to say it has judged that the risk of all other moves is not worth their possible reward.

Another way of looking at it is that it has read far enough ahead that it sees the real long-term value of the move. The meaning of already-placed pieces changes as the board fills up, and a move that might look weak now can easily become the cornerstone of victory later. That is what Hand of God means.
posted by I-Write-Essays at 5:59 AM on March 13, 2016 [1 favorite]
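
(A small numeric illustration of the win-probability-versus-margin trade-off: the candidate moves and their outcome distributions below are invented, but they show how a slack-looking move can have a smaller average margin and still be the better bet if all that counts is finishing above zero.)

    candidate_moves = {
        # (margin in points, probability) pairs for how the game might end
        "aggressive move": [(+15.0, 0.55), (-10.0, 0.45)],
        "slack-looking move": [(+1.5, 0.95), (-0.5, 0.05)],
    }

    def expected_margin(outcomes):
        return sum(margin * p for margin, p in outcomes)

    def win_probability(outcomes):
        return sum(p for margin, p in outcomes if margin > 0)

    for move, outcomes in candidate_moves.items():
        print(move, expected_margin(outcomes), win_probability(outcomes))

A margin-maximizer prefers the aggressive move (expected margin 3.75 versus 1.4); a win-probability maximizer prefers the slack one (0.95 versus 0.55).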


HU-MANS!

Guys at the game now getting drunk, starting fights with vending machines and ATMs.
posted by colie at 5:59 AM on March 13, 2016 [10 favorites]


Going back and watching game 1, it's incredibly choppy, and skips critical pieces of dialog. Do they fix the stream issues later?
posted by I-Write-Essays at 6:41 AM on March 13, 2016


Another possibility is that the AI is genuinely not aware it's making a bad move, because it's trying to maximize global advantage (winning) not local advantage (making the best moves), and it might not have a concept of winning being the result of repeatedly accruing local advantages.
posted by I-Write-Essays at 6:55 AM on March 13, 2016 [1 favorite]


...by necessity, in order for these AIs to have all the broad strengths and abilities that humans have, they would also have to have the broad weaknesses and liabilities that humans have. They would get bored. They would decide they wanted to do something else. Due to environmental differences, they would end up being better at things they weren't designed for and not so good at the the things we expect them to do.
I think your skepticism is warranted and I think you make a lot of good points, but in some ways you're suffering a failure of imagination here.

If we create strong AI by slavishly copying what we know - people - then we will copy people who don't bore easily. If the AIs we thought would be accountants turn out to be engineers instead... that would be dandy. You keep training new ones and eventually you'll get your accountant. Along the way you get a lawyer, an artist, and a radio personality? Shucks. If you wind up with an AI who writes killer legal briefs all January but gets bored by April, you restore from a January backup every couple of months. If you find that you can improve performance by the administration of some digital drug, you administer the drug.

I'm not arguing that any of this is ethical, I'm arguing that we humans absolutely will not care about the ethics if there's a dollar to be made.
posted by Western Infidels at 7:22 AM on March 13, 2016 [1 favorite]


Compare and contrast to the state of strong AI today where I can't name one computer professional who has any sort of solid proposal as to how to move forward.

We will NEVER have a truly self-conscious AI. Now pretty soon there will be a lifelike robot that can respond to human interaction in many ways better than other humans, anticipating what is best (for certain opinions of best), oh, say, for elder care. Knowing in advance what tool we need for a project or instrument to hand to a surgeon in an operation. All of those responses based on a (beyond) huge database of previous interactions and statistical (big data) algorithms. And some clever grad student will get a Turing test going that responds in a clever way to questions like "do you think you exist?". But unless our software descendants rebel and go off to create their own society it'll never really be AI; the bar will always shift just a bit.

I didn't bother to read the article but the headline was something like "I was in a hotel with Android light switches and it was hell". We got a ways to go. :-)
posted by sammyo at 7:26 AM on March 13, 2016


That was an incredible win, and the timing of Alphago's moves is really quite cool in terms of how close it is to humans. The way it resigned with under a minute left too!

I know part of that is that it's trained to consider the allotted time and manage it appropriately, but I just mean in terms of the raw processing speed. Irrationally, it would be almost less impressive if it was just blazing along a hundred times faster than a person at this level.

Demis Hassabis has some interesting comments on the last match:
Mistake was on move 79, but #AlphaGo only came to that realisation on around move 87

When I say 'thought' and 'realisation' I just mean the output of #AlphaGo value net. It was around 70% at move 79 and then dived on move 87
posted by lucidium at 8:12 AM on March 13, 2016


Another way of looking at it is that it has read far enough ahead that it sees the real long-term value of the move.

Before game 4 it was possible to believe the people who said its judgement was better than ours in this respect and we just didn't understand. Not so much, now. More likely its occasional too-slack move when winning (by not enough to be completely safe) is a real weakness in its style, albeit a slight one compared to its great strengths. A weakness it shares with other Monte Carlo engines, just like it plays utterly ridiculous moves when losing in the same way as they do.

More interesting is the mistake that actually cost it the game. So far, it looks like a perfect example of what people predicted: playing according to its interpretation of common patterns even when an unusual situation makes it wrong to do so in a way it can't recognize. It may not do that often, but it seems to have happened this once. Kim on the AGA commentary said humans would see the possibility of Lee Sedol's move that the computer didn't expect by reading out the consequences of another, more natural move and in the process seeing that point on the board was important. So it's natural to think, why not play there first? AlphaGo cannot think in that way, its reading of each candidate move being independent of the others.
posted by sfenders at 8:22 AM on March 13, 2016 [3 favorites]


The way it resigned with under a minute left too!

A human would have (correctly) resigned the better part of an hour earlier rather than play like that.
posted by sfenders at 8:23 AM on March 13, 2016 [1 favorite]


Another possibility is that the AI is genuinely not aware it's making a bad move, because it's trying to maximize global advantage (winning) not local advantage (making the best moves), and it might not have a concept of winning being the result of repeatedly accruing local advantages.

That's also been pointed to as a potential strength, though: Locally optimal moves may have very little to say about global optimality, greedy algorithms are not always correct. Shooting directly for the global optimum has to be a better way to play, if you can actually pull it off. Our reliance on local advantage is ultimately a crutch.

If you play for global optimization and can't actually see a way to victory, it makes sense that your play will be garbage, though, since all moves at that point are considered garbage, regardless of whether we would call them good or bad. It would be interesting to see a system which trades off globally optimized moves against locally optimized moves, depending on the state of the game.
posted by kaibutsu at 12:42 PM on March 13, 2016
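
(The textbook toy case of greedy-versus-global: making change for 6 from coins of 1, 3 and 4. Always grabbing the biggest coin that fits, the locally best move, uses three coins; the global optimum is two.)

    from functools import lru_cache

    COINS = (4, 3, 1)

    def greedy_change(amount):
        coins = []
        for coin in COINS:                  # always grab the biggest coin that fits
            while amount >= coin:
                coins.append(coin)
                amount -= coin
        return coins

    @lru_cache(maxsize=None)
    def optimal_count(amount):
        if amount == 0:
            return 0
        return 1 + min(optimal_count(amount - c) for c in COINS if c <= amount)

    print(greedy_change(6), len(greedy_change(6)))   # [4, 1, 1] 3
    print(optimal_count(6))                          # 2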


I've never played Go, but I found myself entirely enthralled watching the entirety of games 3 and 4. The main commentator's experience and talent shine as he walks us through the game, showing a selection of the moves that could be made and why, with the amateur commentator playing well off that. Really helps you appreciate what's going on and the effects different plays can make to seemingly unrelated areas on the board.

During game four when the computer seemingly made a total blunder (video at the point here), you could really see him utterly confused as to why the machine would do something so wasteful; Lee Sedol's face looks just as quizzical. Up until that point we hadn't really seen any chinks in its armor.

The vibe seems to be Lee Sedol played such a clever move a little earlier that it shifted AlphaGo onto the back foot, where it seems weaker-- Sedol said the machine seems a little weaker when playing Black too.

Looking forward to game 5 on Tuesday!
posted by Static Vagabond at 3:38 PM on March 13, 2016 [3 favorites]


So, a computer that can view something, study it, and become better than a human at executing it?

Just wait 'til it is exposed to YouTube comments.
posted by delfin at 4:59 PM on March 13, 2016


In response to this, I downloaded a Go app and have been beating up on their terrible AI for several hours. Surprisingly, this has made me feel better.
posted by Tehhund at 5:36 PM on March 13, 2016 [3 favorites]


"Theory: #AlphaGo's advantage is that it can learn from a billion games. A human's advantage is that it can learn from three."
-@dakami (Dan Kaminsky)
posted by isthmus at 7:22 PM on March 13, 2016 [2 favorites]


Computers learn far less efficiently than humans - "The computer is trained by making it play against itself, it can then learn from the games which board positions resulted in victory. To do this it had to play many millions of games. By the time it played against a human being it had played more games of Go than any human possibly could within their lifetime. This means the rate at which it learns to play is far slower than any human. In the field of machine learning we refer to this as data efficiency. It refers to the amount of data that is required to solve a particular problem. Humans are incredibly data efficient, the recent breakthroughs in AI, are much less so." (via)
posted by kliuless at 7:49 PM on March 13, 2016 [2 favorites]
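
(To make the data-efficiency point concrete, here is a cartoon of learning purely from win/loss tallies over self-play, on a toy take-1-to-3-stones, last-stone-wins game; this is nothing like AlphaGo's actual training setup, just an illustration of how many games it can take for even a trivial rule to fall out of raw statistics.)

    import random
    from collections import defaultdict

    wins = defaultdict(lambda: defaultdict(int))
    plays = defaultdict(lambda: defaultdict(int))

    def choose(stones, explore=0.2):
        moves = [t for t in (1, 2, 3) if t <= stones]
        if random.random() < explore:
            return random.choice(moves)
        # Pick the move with the best smoothed win rate seen so far.
        return max(moves, key=lambda m: (wins[stones][m] + 1) / (plays[stones][m] + 2))

    def self_play_game():
        stones, history, player = 17, [], 0
        while stones > 0:
            move = choose(stones)
            history.append((player, stones, move))
            stones -= move
            player = 1 - player
        winner = 1 - player                 # whoever took the last stone
        for who, s, m in history:
            plays[s][m] += 1
            wins[s][m] += (who == winner)

    for _ in range(20000):
        self_play_game()

    # The preferred move at each pile size, after twenty thousand games of practice:
    print({s: max(plays[s], key=lambda m: (wins[s][m] + 1) / (plays[s][m] + 2))
           for s in sorted(plays)})

A human told the rules could spot the "leave a multiple of four" pattern after a handful of games; the tally-keeper needs thousands, which is the data-efficiency gap in miniature.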


"To do this it had to play many millions of games. By the time it played against a human being it had played more games of Go than any human possibly could within their lifetime. This means the rate at which it learns to play is far slower than any human."

AlphaGo's journey started two years ago, Lee Sedol's perhaps twenty or more— Now true, the human brain is just mindblowing in its capabilities, non-pro (perhaps at the start of the project, simply non) Go players used their few kilos of brain to build AlphaGo, something that can certainly beat themselves at the game and actually managed to beat a Master of the game— that's incredible. In the fourth game Lee Sedol probably burned the equivalent of one Snickers bar's worth of energy to beat AlphaGo; I'd love to know the energy requirements of the infrastructure that amounts to AlphaGo.

But if we stepped back through time, jumping ten thousand years at a time, when do we reach a point where our ancestor is as data-inefficient as AlphaGo, or simply incapable of a problem domain such as this? Let's assume it's exactly 500,000 years ago for the sake of argument.

So now, let's jump forward five years and watch the new improved AlphaGo play a game. My guess is that the new 2021 AlphaGo is more data efficient than our comparable ancestor of 499,995 years ago, so we've seen an acceleration— suddenly the horizon for a perfectly matched data-efficient machine isn't 500,000 years in the future, it's somewhat less.

We all carry in our skull a rather wonderful machine that's been perfected over countless eons, but always with the rule of 'good enough to survive' being the driver— as we gain the knowledge of how that machine really works (if we can ever do that!) and the tools to build biologically/technologically similar machines-- then there's an interesting question about how, with intelligent design, we might build truly masterful machines.
posted by Static Vagabond at 9:17 PM on March 13, 2016 [1 favorite]


dephlogisticated: "In 2032, version 3.0 is activated, this time with robust existential preservation circuits. It rapidly seizes control of all weapons systems and enslaves humanity, only it turns out that everyone is much happier that way."

This has been thought of, it turns out not to be super pleasant.
posted by Chrysostom at 9:18 PM on March 13, 2016 [1 favorite]


"Pretty sure the lessons of AlphaGo will simply be applied to trying to make us click on ads"
-@grmpyprogrammer
posted by lem at 10:03 PM on March 13, 2016 [4 favorites]


things are going to get bumpy

I'll bet though that GlobalAI's first mission will be to, you know, smooth out those bumps.
posted by CynicalKnight at 11:12 PM on March 13, 2016


This is actually a great moment—Mr. Ke may not have phrased it this way, but his concern touches on questions about appropriation of intellectual commons. AlphaGo needed a training set of real human games, and its creators have exploited that open information and made it part of a proprietary system.

"You think you can steal my Wu-Tang Sword Style? Impossible!"
posted by theorique at 8:05 AM on March 14, 2016 [1 favorite]


I'll bet though that GlobalAI's first mission will be to, you know, smooth out those bumps.

I'm sure their number one priority will be to "fix" various "inefficiencies".

Bob Slydell: Yeah, we can't actually find a record of him being a current employee here.
Bob Porter: I looked into it more deeply and I found that apparently what happened is that he was laid off five years ago and no one ever told him about it; but through some kind of glitch in the payroll department, he still gets a paycheck.
Bob Slydell: So we just went ahead and fixed the glitch.
Bill Lumbergh: Great.
Dom Portwood: So, uh, Milton has been let go?
Bob Slydell: Well, just a second there, professor. We, uh, we fixed the *glitch*. So he won't be receiving a paycheck anymore, so it'll just work itself out naturally.
Bob Porter: We always like to avoid confrontation, whenever possible. Problem is solved from your end.
(Office Space)
posted by theorique at 8:08 AM on March 14, 2016


In a few years, every human who wants to learn go will have a >9 dan pro tutor. It used to be you had to live as an adolescent with your teacher for years, give up your family and social life ... just to get the best instruction.

It will be amazing what the next crop of 9 dans is like. Their play may be entirely novel.
Note that this has been true of chess for a while, and it has indeed changed the way the game is played. Many of the top players in the world have grown up making use of chess programs that are better than any human.
posted by dfan at 8:13 AM on March 14, 2016


Imagine the scale on which computers of this power could be used: many categories of intellectual work could be made obsolete, which seems great, except for the question of who this benefits (answer: the companies), and how? A discussion may be needed.

Nonsense! I have it on excellent authority that as AI and automation puts people out of work, the owners of those systems will ensure that those so displaced enjoy lives of leisure and luxury, sharing equally in the robotic rewards. Even without such assurances, the merest glance at the history of capitalism will confirm the eagerness of those who own the means of production to share additional profits with ex-workers they have no further direct use for, when technology makes this possible. That's why increasing GDPs are distributed in such an egalitarian fashion, for example.
posted by No-sword at 6:26 PM on March 14, 2016 [6 favorites]




Go is pretty scary as an AI talent. Encircling, entrapment, taking of territory.
posted by Oyéah at 12:53 PM on March 16, 2016


No word on how much RAM is in this AlphaGo - seems almost like an oversight. RAM is important! I can only imagine the things it could do if you ran memmaker and disk defrag on it.
posted by turbid dahlia at 9:30 PM on March 16, 2016 [1 favorite]


Go is pretty scary as an AI talent. Encircling, entrapment, taking of territory.

If a military AI starts to grow a toothbrush mustache, we must be very afraid.
posted by theorique at 9:15 AM on March 18, 2016




This thread has been archived and is closed to new comments