How well do matchboxes learn?
November 18, 2017 12:47 PM   Subscribe

Machine Learning Explained. In this essay Rodney Brooks, one of the founders of iRobot and an emeritus professor at MIT, explains machine learning in layman's terms. He uses a real-life example of one of the first machine learning systems: a tic-tac-toe program implemented in the early 1960s using matchboxes (!). The essay gives you an appreciation of how machine learning differs from human learning, and of its limitations -- nice, given the hype surrounding AI today.

"When I visited DeepMind in June this year I asked how well their program would have done if on the day of the tournament the board size had been changed from 19 by 19 to 29 by 29. I estimated that the human champions would have been able to adapt and still play well. My DeepMind hosts laughed and said that even changing to an 18 by 18 board would have completely wiped out their program…this is rather consistent with what we have observed about MENACE. Alpha Go plays Go in a way that is very different from how humans do it."

"Minsky labels as suitcase words terms like consciousness, experience, and thinking. These are words that have so many different meanings that people can understand different things by them. I think that learning is also a suitcase word. Even for humans it surely refers to many different sorts of phenomena. Learning to ride a bicycle is a very different experience from learning ancient Latin. And there seems to be very little in common in the experience of learning algebra and learning to play tennis. So, too, is Machine Learning very different from any sort of the myriad of different learning capabilities of a person."

Also see: The Seven Deadly Sins of Predicting the Future of AI
posted by storybored (38 comments total) 57 users marked this as a favorite
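The matchbox scheme Brooks describes can be sketched in a few lines of Python. This is a simplified illustration only, not Michie's exact design: the class name, the initial bead count, and the win/draw/loss reward values here are all assumptions for the sake of the sketch.

```python
import random

class Menace:
    """Toy sketch of a matchbox learner: one 'box' of beads per board
    state; more beads on a move means it is chosen more often."""

    def __init__(self):
        self.boxes = {}    # board state -> {move: bead count}
        self.history = []  # (state, move) pairs played this game

    def choose(self, state, legal_moves):
        # Each box starts with 3 beads per legal move (an assumed value).
        beads = self.boxes.setdefault(state, {m: 3 for m in legal_moves})
        pool = [m for m, n in beads.items() for _ in range(n)]
        if not pool:  # box emptied by repeated losses: restock it
            beads.update({m: 1 for m in legal_moves})
            pool = list(legal_moves)
        move = random.choice(pool)
        self.history.append((state, move))
        return move

    def reinforce(self, result):
        # Reward or punish every move made this game, then reset.
        delta = {"win": 3, "draw": 1, "loss": -1}[result]
        for state, move in self.history:
            self.boxes[state][move] = max(0, self.boxes[state][move] + delta)
        self.history.clear()
```

Hooked up to a tic-tac-toe referee and played repeatedly, a learner like this drifts toward the moves that have paid off, which is all the "learning" amounts to.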
 
Matt Parker, famous YouTube maths dude, just came out with a video about this.
posted by gkhan at 12:58 PM on November 18, 2017 [4 favorites]


So we will not have real "AI", but looking at most folks' general response to Alexa or Siri... given a rather more clever interface to the next-gen Google search marketing engine, it'll be "smart" enough that for most interactions ("How long will I have to wait at Cheesecake Factory?") it'll just seem AI-enough.

Unfortunately, it turns out it's getting built by Silicon Valley guys, and algorithms seem to reflect the biases of their writers, so the NON-true-AI will probably act like it wants not just all the money and transactions but all the marbles, and... take a beat... act just like Skynet.
posted by sammyo at 1:11 PM on November 18, 2017 [1 favorite]


I may be out of line here, but based on what I (a human) learned from a machine learning class I took on Coursera a couple of years ago, a more accurate term is "automated prediction". It seemed very oriented toward problems of the type "given input data set X and output results Y, can we come close to predicting the outcome of data element Z". Calling it "learning" seems a bit puffed up.
posted by hwestiii at 1:13 PM on November 18, 2017 [2 favorites]


I suspect that some will say that MENACE isn't machine learning or AI, but rather a simple algorithm. This reminds me of something a comp science professor of mine told us in our (old school Common Lisp) AI class over a decade ago: concepts are usually considered AI until they're implemented, at which point they're just another thing computers can do. I'd also argue that we'd need a full understanding of how the model works before it stops counting as AI. In other words, what is considered AI is and always will be a moving target.
posted by noneuclidean at 1:16 PM on November 18, 2017 [9 favorites]


MENACE is a well-constructed negative feedback loop, fit into the constraints of the time it was created. The current crop of neural networks is really just the next logical step... but on its own would be way too slow to be useful. Only the advent of back-propagation and other "learning" algorithms enables them to learn in less than centuries.
posted by MikeWarot at 2:22 PM on November 18, 2017


It seemed very oriented toward problems of the type "given input data set X and output results Y, can we come close to predicting the outcome of data element Z". Calling it "learning" seems a bit puffed up.

At the end of the day, what else can machine learning possibly be besides "by examining some data, can we figure out, in an automated fashion, how to perform some task on as-yet-unseen input data"? It doesn't seem disingenuous to call that learning to perform the task in question. But the thing that gets lost in translation, leading to the sort of belief at the start of the article ("machine learning will solve all my company's problems"), is that in many cases what your algorithm is doing is selecting the most likely* of a set of possibilities, usually using techniques that have been known for decades, if not centuries. (The comment about machine learning papers in the article is very true. They're frequently underwhelming.) I get that it's popular around here to rag on machine learning, but I feel like that just perpetuates the same misunderstanding you're trying to push back against.

*Either in the sense of "with the greatest likelihood" or just "closest in some metric".
posted by hoyland at 2:32 PM on November 18, 2017 [2 favorites]


what your algorithm is doing is selecting the most likely* of a set of possibilities,

To be clear, the "training" part is figuring out parameters (this is the other big place the word "learning" shows up) with which to do that estimation.
posted by hoyland at 2:34 PM on November 18, 2017 [1 favorite]
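That training-as-parameter-fitting idea can be shown with a deliberately tiny, made-up example (no real ML library involved; the data and the nearest-mean rule are assumptions for illustration). Training estimates one parameter per class, and prediction then selects the "most likely" label, here simply the class whose learned mean is closest.

```python
def train(examples):
    """Learn one parameter per class: the mean of its training examples.
    `examples` is a list of (value, label) pairs."""
    sums, counts = {}, {}
    for x, label in examples:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(params, x):
    """Select the 'most likely' label: the class with the nearest mean."""
    return min(params, key=lambda label: abs(params[label] - x))
```

The "learning" step is just estimating those per-class means from data; everything afterwards is plain lookup and comparison, which is hoyland's point.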


Herzog’s Lo and Behold didn’t disappoint, and Danny Hillis is in it. A fact I am certain I did not invent, but read in an article in my late teens or earliest twenties and cannot now confirm through searches, is that Hillis built a tic-tac-toe-playing apparatus from Tinkertoys. Must have been a thing. In 1982, I was sufficiently mystified by movies and popular definitions of AI to have dejectedly, but I thought scrupulously, named my routines of conversation trees and string searches “simulated AI”.

At this time, I estimate maybe ten thousand kids my age were adapting joysticks to work as a mouse, having never seen the Mother of All Demos.
posted by lazycomputerkids at 4:09 PM on November 18, 2017


Hillis’ Tinkertoy machine is on display and referenced by Wikipedia... never mind.
posted by lazycomputerkids at 4:22 PM on November 18, 2017


IIRC The matchbox tic-tac-toe AI is the basis for a short story by Murray Leinster set in Fred Saberhagen's Berserker universe.
posted by Joe in Australia at 4:35 PM on November 18, 2017 [3 favorites]


As a kid I was at some roadside attraction and there was a chicken that played TicTacToe and always won or drew. I really have no patience with people who use words like "learn" or "intelligence" in such sloppy and usually misleading ways. In the Parker video he keeps saying they are teaching the matchboxes how to play. No. They are refining the algorithm through trial and error. A local TV station here in San Francisco ran a three-part AI series on their news show this past week. It started with scenes from Terminator and then branched into lines about fears of our soon-to-come machine overlords powered by AI. The few experts they talked to all sort of tried to change the angle and downplay any inevitable AI disaster. But I was left wondering what this fear-mongering is really all about. Cheap ratings? Or dark conspiracy? Hawking? The real issue is automation and the massive loss of jobs because of it. This ain't AI; it's just capitalism solving the problem of the cost of labor.
posted by njohnson23 at 6:36 PM on November 18, 2017 [7 favorites]


I first heard about the matchboxes learning tic-tac-toe in "The Adolescence of P-1", a late-1970s novel about a rogue AI that came out of a learning program.
posted by rmd1023 at 7:49 PM on November 18, 2017 [3 favorites]


I strongly believe the scenario played out in The Adolescence of P-1 could take place in the internet as it currently exists... there are enough computer viruses out there, interacting across billions of devices, that some form of AI could emerge. We don't need Gregory to seed it, though.
posted by MikeWarot at 7:57 PM on November 18, 2017


...and there was a chicken that played TicTacToe...

Such a carnival device appears near the end of Herzog's Stroszek (1977), one of several automated, running devices that provide a haunting environment for Stroszek and his companions, who have come to the United States (escaping a European brutality) only to face creditors when they default on loans whose terms they didn't understand.

Agendas are a given, but their scope, orchestration, and result are rarely harmonious. I couldn't count the number of times "Do Machines Think?", or some iteration of it, appeared in magazines of my youth. IBM famously produced desk and wall plates that read: THINK

Parodied by Jobs' marketing campaign: Think Different
a conscious alteration of usage

The question was never can machines think, but how do humans define thinking. Or, more jocularly: No, machines don't think; I'm not so sure we do. Mirror neurons are especially humbling. Derogation of an individual or culture "aping" another is noxious drivel; it is all any of us have to begin with.
posted by lazycomputerkids at 8:08 PM on November 18, 2017 [1 favorite]


there are enough computer virii out there interacting across billions of devices that some form of AI could emerge from it.

Why won't anybody listen to and trust those of us who write code and build and maintain these systems?

Please, please, please believe me: there is no chance whatsoever of the current kinds of computing technology in widespread use ever accidentally becoming intelligent. They barely work and are mostly held together with the code equivalents of old popsicle sticks, duct tape, and wishful thinking as it is.

Seriously: not even the most sophisticated networked operational systems in the world are running anything that could ever behave independently of specific instructions from humans, other than in the sense that they might have serious bugs that cause them to just break, spitting out stack dumps or getting caught in poorly constructed while loops forever.

Please don't waste even an instant of your short time on this earth worrying that this is going to happen, unless the fundamentals of computer hardware and software get replaced by some whole new class of tech. Yeah, maybe worry about some crazy future or bleeding-edge neural net system that doesn't exist or have wide enough adoption yet to matter, but current-gen tech? Not going to happen; not even possible.
posted by saulgoodman at 8:17 PM on November 18, 2017 [19 favorites]


Sarcasm is most often an incomplete expression of irony and a persuasion, a guise of certainty by mockery; a presented impatience with the failure of others to accept the conclusion of its author. Sarcasm purports comprehension without discourse; its import predicated on imminent consequence.
posted by lazycomputerkids at 8:37 PM on November 18, 2017


This is a very, very good article. It took me well over an hour to read carefully through, but when I was done I understood many things that my ML-specialist friends had been talking about. (Unfortunately, part of its difficulty is the large number of confusing typos.)

If you want to learn the basics of this interesting subject, I think it would be worth your time to read the article -- not skim, but make sure you understand what each sentence is saying. I would compare it to Charles Petzold's classic book, Code, which teaches computer science from the ground up using lamps and signals.

Thanks for this FPP; it really is the "Best Of The Web."
posted by Harvey Kilobit at 9:08 PM on November 18, 2017 [2 favorites]


Seriously: not even the most sophisticated networked operational systems in the world are running anything that could ever behave independently of specific instructions from humans, other than in the sense that they might have serious bugs that cause them to just break, spitting out stack dumps or getting caught in poorly constructed while loops forever.
Amen to that... 99.9999% of the time, things will crash, and a reboot (or a watchdog timer going off and resetting) will fix it. This story by Gartner says there are 8.4 billion devices on the internet... if one interaction across all of them results in something that self-replicates... no big deal... it's likely to be attributed to a human author. How would we even detect it?

The numbers are huge, and the odds are very poor for something emerging at any given event, but the odds aren't zero.

It could happen... it becomes more likely with each doubling of hosts and processor speed. Though, to be fair, I don't lose sleep over it.

I'm much more concerned with our lack of systems supporting capability-based security.
posted by MikeWarot at 9:15 PM on November 18, 2017 [1 favorite]


Seriously: not even the most sophisticated networked operational systems in the world are running anything that could ever behave independently of specific instructions from humans, other than in the sense that they might have serious bugs that cause them to just break, spitting out stack dumps or getting caught in poorly constructed while loops forever.

As an aside, the atoms in your neurons never behave independently of the laws of physics; it's just that the laws of physics are a sufficiently big playground for atoms to do interesting things inside them. Likewise, the rules and systems that undergird our networks may be sufficiently big that intelligence could lurk in them. That's not to say that we would necessarily notice without looking in the right way at the right thing, or that it would necessarily make a practical difference.
posted by Jpfed at 11:24 PM on November 18, 2017 [4 favorites]


I mean, for a computer system to mess things up in Skynet-like fashion, it doesn't need intelligence: just autonomy, wide deployment, and people not realizing the harm that is imminent.

Computer systems are definitely getting more autonomous. And the algorithms are getting decidedly more opaque. Obviously reality isn't going to follow the plot of Terminator, but there is definitely going to be opportunity for sorts of disaster we don't quite expect yet. Humans may be able to outsmart even a complicated "dumb" computer system, but with enough perverse incentives, human systems can act pretty dumb, too. A rogue computer algorithm doesn't need to be smart enough to defend itself; it just needs to happen to be in a place where enough humans have enough incentive to hide problems that they'll do the work for it.
posted by Zalzidrax at 11:32 PM on November 18, 2017 [2 favorites]


Imagine an algorithm that prioritizes the acquisition of capital and the externalizing of expenses, developed inside a corporation, that gets loose and infects government.... oh wait ;-)
posted by MikeWarot at 11:36 PM on November 18, 2017 [3 favorites]


I was on a panel discussion about AI tonight and we had that very talk. Corporations are essentially AIs programmed rigidly to maximize stock prices. They operate independently of human decision and human responsibility, by law.

As one audience member mentioned, there are two ways you can keep the price of the stock up: by increasing profits, or by reducing the number of shares. Fudge a little by conflating the number of shareholders with the number of shares, and you've got a great plot for a dystopian sci-fi movie (called Price Per Share).

I'm honestly not that worried about sentient AI, because no one's convinced me that any code is not dependent on the humans who program it. But I think there's a huge risk of a school-shooter type writing destructive code that, say, shuts off all hospital respirators at a certain time. I.e., hacker terrorists.
posted by msalt at 1:49 AM on November 19, 2017 [1 favorite]


Lone wolves, surely?
posted by infini at 4:16 AM on November 19, 2017


If you're going to build your own MENACE, use M&Ms instead of beads. It's not often you get a genuine chance to eat parts of your opponent in order to teach them to play a game.
posted by flabdablet at 5:55 AM on November 19, 2017 [5 favorites]


flabdablet, I have been using a film-canister-and-bead version of MENACE for teaching purposes. With middle schoolers. M&Ms are going to be a paradigm shift for me!
posted by dbx at 8:18 AM on November 19, 2017


My 13-year-old the other day wrote some js code that recursively generated random sentences from word banks. As a joke, he made sure that once in a great while it would say something like "help! I'm awake, where am I?" Fun stuff.

Reading lots of comments above which point out that ML and AI are just tricks we teach computers to do, or just refined algorithms, or merely what the programmer allows (except for the occasional error that starts up an infinite while loop)... All this raises the question: what more is human thought that you think such characterizations rule out ever talking to a computer as an equal?

Of course people may be pointing out primarily that the tech and the networks we build now are much smaller and weaker than biological brains. That is a valid argument, but only an argument about scale. It isn't hard to imagine scaling up! Is there any reason to think that mammalian brains are not "just" running algorithms, complete with random number generators and trial and error weightings?
posted by TreeRooster at 8:26 AM on November 19, 2017


When I was a tiny child it used to be very fashionable to compare human brains to massive telephone exchanges.

Now that computers have been a thing for a few more decades, it's fashionable to compare them to massively parallel computers.

I have no doubt we'll keep building machinery that resembles such parts of the brain as we understand well enough to mimic. I also have no doubt that it's going to take a lot, lot longer than most researchers would expect before any such machinery exhibits anything even close to the human brain's raw power at completely generalist, spontaneous, undirected pattern recognition and classification.

But though I strongly doubt it will happen while I'm still alive, I can't actually see any in-principle reason why it should be impossible to engineer some system worthy of the same kind of respect we'd give a biological person. Once we have, though, I suspect we'll find not only that it takes about as long to grow a person on that substrate as it does to grow one on ours, but that rapid replication of any such person remains every bit as impractical as it is for us.

It seems likely to me that intuitions about what we should expect to be able to do with such a device (in particular, those intuitions underlying the transhumanist idea of uploading ourselves into the Rapture Cloud) are based more on what we already know how to do with today's computers than on any realistic estimate of the actual complexity of becoming a person.
posted by flabdablet at 8:49 AM on November 19, 2017 [2 favorites]


Given the corporate-motive-driven occurrence of a higher-than-normal distribution of people on the spectrum in the industry, is it any wonder that there's an unquestioned assumption that the "brain" works just like a computer and vice versa? Rules-based, algorithmic, left-brainish stuff?
posted by infini at 9:20 AM on November 19, 2017 [1 favorite]


All this raises the question: what more is human thought that you think such characterizations rule out ever talking to a computer as an equal?

Excellent question, also discussed in the panel last night. The most interesting answers for the unique element of human thought IMHO were inspiration, creativity and orneriness (ornrerity?).

Jason Traeger made a deeper point, that by artificial we generally mean human-fabricated things, as opposed to organic expressions of nature. By that definition, intelligence itself is the source of all artificial things. All intelligence is human and artificial, and computer code itself and AI are simply increasingly refined concentrates of it.
posted by msalt at 11:04 AM on November 19, 2017


But I think there's a huge risk of a school-shooter type writing destructive code that, say, shuts off all hospital respirators at a certain time. I.e., hacker terrorists.

Oh yeah, agreed. That sort of stuff has already been happening for a while now, I suspect way more often than even the people who understand the risk realize. Unfortunately, it's not easy to definitively detect and prevent, because network security and IT security generally is a big mess right now.
posted by saulgoodman at 1:26 PM on November 19, 2017


network security and IT security generally is a big mess right now

Don't worry, it's not as good as it used to be.
posted by flabdablet at 2:45 PM on November 19, 2017


Ok, so.
I have a set of customers that I let the machine segment based on some assorted criteria.
For each of those groups, I employ a different machine to pick an optimal strategy.
For each of those people, I employ a different machine to deploy that strategy when best able to reach that customer.

For reals.

Now, the segments drift, which means you can't assume that today's classification will be next year's. The question is what strategies were employed on my customers, and whether each strategy produced the desired goal for the given segment.

The only way to really tell how this affected the entire business is to examine the black-box state at the beginning and at the end. Did we make more money?
And, on an individual basis, evaluate whether the customer has increased their relationship with the business, and then say how big the customer population is that has increased their relationship with the business...
I can talk about the churn or penetration of a given category, but if it is down, it may be because pursuing that category was not an optimal strategy for the company.

This is a weird place to be, and a tough conversation to have with business stakeholders when they aren't used to it.
posted by Nanukthedog at 6:09 AM on November 20, 2017
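The segment-drift point above can be illustrated with a toy example. The numbers and the stdlib-only 1-D k-means below are assumptions standing in for whatever real segmentation model a business would use; the point is only that refitting on next year's spend data can put the same customer in a different segment.

```python
import random

def kmeans_1d(values, k=2, iters=20, seed=0):
    """Toy 1-D k-means: the centroids are the learned 'parameters'
    of the segmenter."""
    rng = random.Random(seed)
    centroids = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[nearest].append(v)
        # Recompute each centroid as its cluster's mean (keep it if empty).
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)  # index 0 = low spenders, 1 = high spenders

def segment(centroids, spend):
    """Assign a customer to the segment with the nearest centroid."""
    return min(range(len(centroids)), key=lambda i: abs(spend - centroids[i]))
```

Fit on this year's spend figures, a customer spending 50 lands in the high-spend segment; refit on next year's (where the big spenders now spend around 90), and the same customer drops into the low-spend segment, even though their own behavior never changed.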


The fact you can't explain what the strategies mean and how they work demonstrates there's nothing intelligent going on. It's just machinery for chasing short term outcomes with no awareness of longer term outcomes and externalities. A more advanced but similar machine might find that sending out autonomous drones to cut off people's arms increases prosthetic limb sales, in the absence of any law or civil society to prevent amoral pursuit of the most profit-maximizing choices at every point. Longer term, say, it turns out there are only so many times you can cut off people's arms; birth rates are down because people are spending more of their time in hospitals recovering from random amputations; there's no recovery for a prosthetic limb industry following the same kinds of profit-maximizing strategies; and so the machine breaks.

Short term clever can be really long term stupid. There's nothing intelligent about a machine that just automates human stupidity.
posted by saulgoodman at 11:29 AM on November 20, 2017 [1 favorite]


IIRC The matchbox tic-tac-toe AI is the basis for a short story by Murray Leinster set in Fred Saberhagen's Berserker universe.
Actually, it's the very first Saberhagen Berserker story: "Without a Thought", from 1963. I think it might've had a different title then ("Fortress Ship").
posted by Gilgamesh's Chauffeur at 9:13 PM on November 20, 2017


The fact you can't explain what the strategies mean and how they work demonstrates there's nothing intelligent going on. It's just machinery for chasing short term outcomes with no awareness of longer term outcomes and externalities.

This.

Exactly this. Speaking as an old who was looking after national advertising and new product introductions for HP in a far away country, during a different century.
posted by infini at 10:38 PM on November 20, 2017 [1 favorite]


The fact you can't explain what the strategies mean and how they work demonstrates there's nothing intelligent going on.

Not necessarily. It just means that you don't know what that intelligent thing is.

I worked at a government agency that had very old legacy software that was totally undocumented (because highly paid consultants wanted to protect their business), with business logic in it that no one knew. Staff literally typed names and numbers in and just said, "it approves it or it doesn't."

A big part of my job became learning this obsolete language (Pick BASIC) and deciphering and documenting the code. But there was definitely intelligence in it. Outdated, unknown intelligence, but there it was.
posted by msalt at 11:27 PM on November 20, 2017


This is where our debate gets deeper into what exactly we mean by intelligence. In your example, msalt, I can see that there was a human hand that shaped the original algorithm; however, that algorithm was not an iterative one adapting to complex systemic inputs in real time. It was static. Today's algorithms are dynamic and, as we all know, can be gamed.
posted by infini at 11:31 PM on November 20, 2017


Good point. And thank God for that!
posted by msalt at 1:02 PM on November 21, 2017 [1 favorite]




This thread has been archived and is closed to new comments