Deep Learning is Neither
September 12, 2019 2:01 PM

If Computers Are So Smart, How Come They Can’t Read? This is an accessible presentation of the inherent lack of understanding in artificial "intelligence." It's amazing how much mileage the field has gotten out of pattern matching and probabilistic models, but the authors argue that it's time for an "entirely new approach."
posted by bbrown (28 comments total) 22 users marked this as a favorite
 
Winter is coming.
posted by drdanger at 2:22 PM on September 12, 2019 [18 favorites]


tired: making massive demands of AI based on what someone said in a TED talk
wired: a basic understanding of the history of AI criticism
inspired: Ctrl+F "genuine"
posted by phooky at 2:26 PM on September 12, 2019 [8 favorites]


If cars are so fast, how come they can't jump over fences?
posted by J.K. Seazer at 2:27 PM on September 12, 2019 [19 favorites]


What's the entirely new approach? I don't disagree that we may need one.

I've got this idea that in order to train something to think and read, reason, and infer like a human, you're going to have to give them the life experience at least up through the eighteen years that human brains get before we're considered adults. You will need sensory inputs, and a breadth of human interaction that you get from living in a human society. And the intelligence you get out of it will be biased towards reasoning in that society. You can't get it from just pointing something at books. Books don't have enough context on their own for something to understand them... sometimes I don't have enough context!

So something like an android that can live among humans. But it'd take a while to teach them, right? As much as an ordinary person... however, once you've got one, you can just make copies. Of course, you'll want a few thousand for variation or you'll risk copying the biases of one of the AIs.

Train enough of them... a couple of thousand maybe. Develop a way of sticking them into a simulation of our world and then you can accelerate their training in a society that approximates our own. That way you don't have to pull them into the real world until you need them.

Of course, you'll have to provide them incentives to put them to work on things that we currently use AIs for. I can't imagine they'd love to be search bots all day if they were trained with human intelligence.
posted by Mister Cheese at 2:30 PM on September 12, 2019 [5 favorites]


Any sufficiently advanced discussion about artificial intelligence is indistinguishable from phenomenology.
posted by gwint at 2:32 PM on September 12, 2019 [24 favorites]


Mister Cheese, you might enjoy the Ted Chiang novella "The Lifecycle of Software Objects." It's reprinted in his latest collection.
posted by gwint at 2:33 PM on September 12, 2019 [10 favorites]


I've wondered about the "educating an AI in the real world" question. Suppose a company with vast computing resources could develop and deploy devices with eyes and ears to a large fraction of the population. Suppose it could map public spaces in fine detail. Suppose it could develop language translation services based on the latest AI research; suppose it even develops a platform and ecosystem for this kind of work. Suppose that an AI will need some agency in the world; then that company might be interested in autonomous vehicles and even (if briefly) acquire Boston Dynamics. Suppose its corporate ethical code was, at root, Asimov's laws of robotics.

Yeah, that'll never happen.
posted by sjswitzer at 3:01 PM on September 12, 2019


We can barely educate our children, but we're worried about educating our robots?
posted by Godspeed.You!Black.Emperor.Penguin at 3:05 PM on September 12, 2019 [3 favorites]


We can barely educate our children, but we're worried about educating our robots?

Yeah, we are. Because those children are going to live in a world where decisions that affect them daily are increasingly going to be made by robots. Not paying attention now is mortgaging their future.
posted by lumpenprole at 3:18 PM on September 12, 2019 [3 favorites]


I think the underlying computer science question is whether or not neuromorphic machines can scale to the kinds of activities that the authors describe. Is AlphaGo an example of logical reasoning built on top of correlations, or just an imitation of it?

After all, humans are made of brain cells, and it seems that for human beings, compositionality actually arises out of a sufficiently large configuration of neurons. On the other hand, maybe an alternative computational model could be much more efficient at this compositional reasoning than the billions or trillions of neurons that natural evolution came up with.
posted by polymodus at 3:33 PM on September 12, 2019 [4 favorites]


That’s because, in the final analysis, statistics are no substitute for real-world understanding. Instead, there is a fundamental mismatch between the kind of statistical computation that powers current AI programs and the cognitive-model construction that would be required for systems to actually comprehend what they are trying to read. - TFA

Something I've wondered about for the past thirty years or so, on and off, is whether the right model of language (and, obliquely, understanding) is a probabilistic recursive (but not particularly deep) grammar. We have probabilistic regular grammars; they are Markov chains. Work on probabilistic CFGs and CSGs does not seem as far along, though I've only checked in on the research occasionally and I do get a lot of search results now.
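To make the first part concrete, here's a toy sketch in Python (my own illustration, not anything from the article): a probabilistic regular grammar really is just a Markov chain over words, estimated here from a tiny made-up corpus.

    import random
    from collections import defaultdict

    # A probabilistic regular grammar as a Markov chain: estimate word-to-word
    # transition probabilities from a (made-up) corpus, then sample from them.
    corpus = "the cat sat on the mat . the dog slept on the mat .".split()

    transitions = defaultdict(list)
    for prev, cur in zip(corpus, corpus[1:]):
        transitions[prev].append(cur)      # repeated successors encode the probabilities

    def sample(start="the", max_len=12):
        out = [start]
        while len(out) < max_len and out[-1] in transitions and out[-1] != ".":
            out.append(random.choice(transitions[out[-1]]))
        return " ".join(out)

    print(sample())   # e.g. "the cat sat on the mat ."

A probabilistic CFG is the same idea with probabilities attached to recursive rewrite rules instead of word-to-word transitions, which is where the recursion comes in.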

It seems clear to me that we learn language in a sort of Bayesian process that continually refines the language model based on experience. Fragments in this grammar can range from idiomatic phrases to unique constructions from other elements (in a limited recursive sense). Ancient AI talked about "frames" to represent concepts in context, and I'm thinking that grammar fragments can be associated with frames and frame networks. Or something like that.

Although the success of neural networks is still limited in scope, they work so well within those specific scopes that I'm pretty confident there's something essentially right in the formulation, which is after all inspired by what we understand of brain physiology. It may not take so very much to connect the missing pieces; it could be only a few small lateral leaps. On the other hand, a lot of our neural nets are inspired by the visual cortex (convolutional neural networks, which do what they do very well!), but there are whole areas of the brain that are much less understood.

We don't really understand how animals think (much less humans) but I suspect that the differences between animal cognition and human cognition might be smaller than we realize. It's just that certain small differences have high multiplier effects. Can we simulate the brain of a pond snail? I bet we could. An octopus would be a lot harder.

What's the essential leap between the intelligence of chimpanzees (who are probably more sophisticated than we realize) and humans? I'd wager that the physiological and computational difference is relatively small but exquisitely leveraged. Discovering that leap, or one like it, might lead to huge improvements in machine intelligence. It could be just around the corner. Or maybe it will hide in the shadows undiscovered for a long time for good reasons or bad (IMO the "Chaostron" lampoon put machine learning back at least a decade; what a shame).

Anyway, while there are clearly some conceptual leaps we need to take, I would not bet money that we are on the wrong track. Not a single cent.
posted by sjswitzer at 3:48 PM on September 12, 2019 [7 favorites]


Mister Cheese: "Develop a way of sticking them into a simulation of our world and then you can accelerate their training in a society that approximates our own"

Hey wait a minute
posted by chavenet at 3:50 PM on September 12, 2019 [7 favorites]


I work in this space and the state of the art has advanced by leaps and bounds in the last 8 months. New transformer-based approaches to building language models are knocking down every previous best result across a range of natural language tasks. Even as I type this I'm sure all the benchmarks will be knocked down again in some soon-to-be published transformer++ approach.

The big downside of these approaches is that they are extremely energy-intensive. We're talking about models with 300M+ parameters, and no doubt someone's currently training a billion-parameter model.
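For a rough sense of the scale, here's a back-of-the-envelope parameter count in PyTorch for a BERT-large-ish encoder stack. The hyperparameters are illustrative, not those of any specific published model:

    import torch.nn as nn

    # Count parameters in a transformer encoder stack with BERT-large-ish
    # hyperparameters (illustrative values, not any particular model).
    d_model, n_heads, n_layers, ffn_dim = 1024, 16, 24, 4096

    layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                       dim_feedforward=ffn_dim)
    encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    n_params = sum(p.numel() for p in encoder.parameters())
    print(f"{n_params / 1e6:.0f}M parameters")   # ~300M, before embeddings and output head

And every one of those parameters gets updated on every training step over a very large corpus, which is where the energy bill comes from.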
posted by simra at 4:15 PM on September 12, 2019 [4 favorites]


this is an odd article because 1) the problems deep learning has with QA tasks of the sort described have been well-known and discussed by practitioners for several years and 2) one of the hottest current research areas is hybridizing deep learning with the symbolic, logic-based systems the authors advocate.
posted by vogon_poet at 4:30 PM on September 12, 2019


It's never been clear to me how much mileage humans have gotten out of pattern matching and probabilistic models.
posted by rhamphorhynchus at 4:43 PM on September 12, 2019 [6 favorites]


fwiw!
How to Build Artificial Intelligence We Can Trust - "Computer systems need to understand time, space and causality. Right now they don't."

also btw :P
A critique of pure learning and what artificial neural networks can learn from animal brains
Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a “genomic bottleneck”. The genomic bottleneck suggests a path toward ANNs capable of rapid learning.
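(A crude way to picture the bottleneck, as a toy NumPy sketch of my own rather than the paper's actual model: store the wiring diagram as two small factors and "decompress" the innate connectivity from them.)

    import numpy as np

    # Toy picture of a "genomic bottleneck": the full wiring diagram has n*n
    # entries, but the "genome" stores only two small rank-k factors from which
    # the innate connectivity is decompressed. Illustration only.
    rng = np.random.default_rng(0)
    n, k = 2_000, 32                          # neurons, bottleneck width

    genome_in = rng.standard_normal((n, k)) / np.sqrt(k)
    genome_out = rng.standard_normal((k, n)) / np.sqrt(k)
    wiring = genome_in @ genome_out           # decompressed n-by-n connectivity

    print("full wiring entries:", n * n)      # 4,000,000
    print("genome entries:", 2 * n * k)       # 128,000, the compressed specification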
posted by kliuless at 5:30 PM on September 12, 2019 [1 favorite]


"I've got this idea that in order to train something to think and read, reason, and infer like a human, you're going to have to give them the life experience at least up through the eighteen years that human brains get before we're considered adults. You will need sensory inputs, and a breadth of human interaction that you get from living in a human society. And the intelligence you get out of it will be biased towards reasoning in that society. You can't get it from just pointing something at books. Books don't have enough context on their own for something to understand them... sometimes I don't have enough context!

So something like an android that can live among humans. But it'd take a while to teach them, right? As much as an ordinary person... however, once you've got one, you can just make copies. Of course, you'll want a few thousand for variation or you'll risk copying the biases of one of the AIs.

Train enough of them... a couple of thousand maybe. Develop a way of sticking them into a simulation of our world and then you can accelerate their training in a society that approximates our own. That way you don't have to pull them into the real world until you need them.

Of course, you'll have to provide them incentives to put them to work on things that we currently use AIs for. I can't imagine they'd love to be search bots all day if they were trained with human intelligence."


So you're basically agreeing that cognition - at least any cognition that's useful in our society - must be embodied (cf. Lakoff). I concur.

Then the question becomes motivation/emotion. As emotion is an essential part of our own cognition - we'd spin our wheels endlessly in logical circles, not actually acting, without it (see Damasio as well as Lakoff) - you'd have to develop human-like emotions in AI as well, from the start of the whole process. (The Trek notion of Data getting an "emotion chip" as an optional afterthought overlooks this.)
posted by Philofacts at 5:45 PM on September 12, 2019 [5 favorites]


vogon_poet: "1) the problems deep learning has with QA tasks of the sort described have been well-known and discussed by practitioners for several years..."

I was thinking of exactly the same thing while reading the article. For reference, the Cyc project is in its 30s and, although complex, is nowhere near the actual language understanding of a functioning human adult.
posted by andycyca at 5:47 PM on September 12, 2019 [1 favorite]


It's never been clear to me how much mileage humans have gotten out of pattern matching and probabilistic models.

Huge amounts of mileage! The last ten years has basically been an admission by the AI community that no, we're not going to build a generalized AI without quantum computing, so let's stop aspiring to building HAL, let's stop trying to teach chess bots to read the state of the board as though it were quantifiable as a number, and let's instead train our agents on monstrous amounts of data in a clearly-defined problem space. This is generally not a sexy proposition; the two places I've worked that use machine learning to do things have used it to (a) write a shitty chat-bot to handle NLP in flight bookings and (b) figure out how to pay our users the bare minimum amount to ensure that they (in aggregate) do what we need them to do when payment is their only motivator. But machine learning is REALLY REALLY good at hyper-specialized domain problems that don't need a ton of context. That's what computers are. They're capable of ingesting a ton of data, building models based on that huge data set, and applying it to extremely-similar problems with a very limited amount of uncertainty.

I defer to subject-matter experts like simra upthread, but my experience as a software person has been that the idea of a general AI is largely dead. You can train a machine to be incredibly smart in a very very limited context, but as soon as you try to turn the ability to, e.g., beat the world's best chess player into the ability to win a game of bridge, you're going to fall flat on your face because the problem spaces are so fundamentally different.
posted by Mayor West at 6:00 PM on September 12, 2019 [6 favorites]


Or, "Why Johnny 5 Can't Read"
posted by biogeo at 9:33 PM on September 12, 2019 [11 favorites]


The last ten years has basically been an admission by the AI community that no, we're not going to build a generalized AI without quantum computing, so let's stop aspiring to building HAL

Yeah, this.

ML as it stands is not about aspiring to AI - it's about doing fucktons of stats calcs better than we could do before - because we have fucktons more transistors than we've ever had before.

You want AI? Then yes, you'll be wanting a new approach. Shame TFA finished at that point, I was just getting interested.
posted by pompomtom at 4:10 AM on September 13, 2019


The big down-side of these approaches is they are extremely energy-intensive.

Just read somewhere that 70% of the food babies eat goes to "powering" the brain. My personal observation of robots/AI is that it's about the inputs: the number of sensors on any robot is in the single digits, while in a person the nerve endings number well into the billions. A baby doesn't just look at a toy; it touches it, puts it in its mouth, smells it, listens to an admonition to take it out....

Give AI a nice warm sensual mouth with at least a few thousand 'sensors' and we'll start to get somewhere.

(be sure to kiss them AIs sweetly so they don't turn around and bite us ;-)
posted by sammyo at 4:24 AM on September 13, 2019 [2 favorites]


I think one of the reasons why people think AI is reaching a dead end is because, traditionally, most AI research has focused upon getting AI to do things. And this is true - when it comes to learning how to accomplish tasks, neural networks tend to be pretty specific learners. If you train a neural network to play Go, that doesn't mean it'll be able to perform well at or even know how to approach playing chess or basketball, so I agree that AI tends to do very specific, narrow things (although there is some promising work in multi-task or zero/few shot learning that seeks to address this).

But one interesting branch that has emerged in AI research in the past few years concerns how these neural networks represent the world around them (or at least the slice of data that has been given to them about the world), in order to have the perceptual information to actually do these tasks. And this is where it gets interesting: even when AIs learn to do very specific things and can't deviate from these things, the way that they learn to represent the world around them often seems to be shockingly general. For example, if you train an AI to classify different natural objects in images (cats, dogs, boats, etc.) with a large enough dataset (see: ImageNet), it actually learns representations that, when you take them out of the neural network and apply them to other tasks, function incredibly well even at very different tasks - researchers have applied these representations to completely different domains like biological and medical images, and shown that they surpass even very carefully engineered measurements taken by humans. We think this happens because, by being exposed to a large number of classes and images, the neural network is also exposed to a large number of different textures, colors, shapes, and concepts - so even though the task is very specific, the neural network ends up developing a very wide visual vocabulary that can be repurposed to solve other problems.
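Here's roughly what that reuse looks like in practice, sketched in PyTorch/torchvision with dummy tensors standing in for a small dataset from some other domain (an illustration only, not any specific study's pipeline):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    from torchvision import models

    # Reuse ImageNet representations in a different domain. Dummy batch below;
    # in practice these would be your own (e.g. biomedical) images and labels.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 3, (8,))            # 3 made-up classes

    backbone = models.resnet18(pretrained=True)   # ImageNet-pretrained features
    backbone.fc = nn.Identity()                   # keep the 512-d representation
    backbone.eval()

    with torch.no_grad():
        feats = backbone(images)                  # (8, 512) generic visual features

    # A small linear probe on frozen features is often surprisingly strong.
    probe = nn.Linear(512, 3)
    loss = F.cross_entropy(probe(feats), labels)
    print(loss.item())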

Where these observations about the generality of representations seem to be going now is self-supervised learning. The underlying intuition is that - well, maybe getting AI to learn how to do things isn't as important as getting them to learn how to perceive things. We've struggled a lot with getting AI to do things more general than what we train it on, but we seem to have an easier time getting the AI to learn how to represent the world around them in a general, reusable format, so why don't we lean into perception as our building block? Now that we've discarded the notion that we actually need the AI to do anything useful, we start looking at training tasks as a way to teach the AI transferable skills, rather than a means to an end. So the way people have started experimenting with this in the domain of self-supervised learning is that they're now getting AIs to solve puzzles and games - things like piecing together jigsaw puzzles, or trying to figure out if an image is upside down or not, or taking a black-and-white image and coloring it. They've shown that by solving these puzzles, the AIs can learn really useful representations of the world that can then be transferred to actually useful tasks, like finding objects in images. I find this really cool, because it gets closer to the way humans seem to learn about the world - developmental psychologist Alison Gopnik is actually actively investigating this field now, because she thinks that there's an analogy between self-supervised learning for AI and how children learn about the world around them through play.
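One of those pretext tasks, predicting how much an image has been rotated, fits in a few lines; the "label" is just the rotation we applied ourselves, so no human annotation is needed (a generic sketch, not any particular paper's code):

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Rotation-prediction pretext task on a dummy unlabeled batch.
    images = torch.randn(8, 3, 224, 224)
    k = torch.randint(0, 4, (8,))                     # 0, 90, 180, 270 degrees
    rotated = torch.stack([torch.rot90(img, int(r), dims=(1, 2))
                           for img, r in zip(images, k)])

    net = models.resnet18(num_classes=4)              # predict which rotation was applied
    loss = F.cross_entropy(net(rotated), k)
    loss.backward()   # training on this alone already yields transferable features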

I've actually been exploring self-supervised learning in my lab, and we recently published a paper about this - we're trying to exploit some of these self-supervised ideas for practical applications in biomedical domains. One of the things we've found out is that, depending on what kind of puzzle you give the neural network, it seems to learn different ways of seeing or representing the world. So for example, we recently published an article about how you can train an AI to understand biology in images by comparing different objects. It turns out the AI learns to represent these objects in a way that's shockingly good for a lot of the biological analyses people are interested in conducting on these images - better than anything else people have come up with in the past by a massive margin. But one of the interesting things we found out is that through the task of comparing objects, the AI learns to focus more on the commonalities between objects, while ignoring the differences between objects (this is what makes it good for biological analyses, because we're often more interested in the former than the latter). But there are other ways of training these AIs - for example, you can get them to summarize what they see in the image. And that leads to a different representation of the objects - now it encodes both similarities and differences (useful or not), for example.
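For a flavor of the "learning by comparing" idea, the generic version is a contrastive objective along these lines (a sketch, not the exact loss from our paper):

    import torch
    import torch.nn.functional as F

    # Contrastive objective: pull embeddings of two views of the same object
    # together, push embeddings of different objects apart.
    def contrastive_loss(z1, z2, temperature=0.1):
        z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
        logits = z1 @ z2.T / temperature       # (N, N) similarity matrix
        targets = torch.arange(z1.size(0))     # matching pairs sit on the diagonal
        return F.cross_entropy(logits, targets)

    z1, z2 = torch.randn(16, 128), torch.randn(16, 128)   # dummy embeddings
    print(contrastive_loss(z1, z2).item())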

I think that these insights make sense - when a child plays, it's not like they focus on one task - they're really interested in developing a wide range of skills. And when we perceive the world around us, we often perceive and highlight certain aspects of information differently depending on what our goals are in the moment - we're more sensitive to different things when we're driving versus just casually walking along the street versus trying to find a watch we dropped. So what I'm seeing is that the research on AI perception and representation of perception is actually driving us there - we're moving closer and closer to figuring out how to get AIs to represent the world and the different ways AIs can see the world - and hopefully in the near future, how to switch between these representations intelligently depending on goals. In other words, it's true that the current state of getting AI to do things is very limited and specific, but I think going back to the fundamental units of perception and representation, and then building on top of that is a promising path forward - it's not like we've reached a dead end at all, in my opinion.
posted by Conspire at 6:52 AM on September 13, 2019 [9 favorites]


Do want Skynet?

Because this is how you get Skynet.
posted by mikelieman at 7:46 AM on September 13, 2019 [1 favorite]


I interviewed at Cyc around 1990 when they were across the street from Stanford. I got the feeling that the job was sort of like being a Talmudic scholar, arguing about the semantics of mundane things in relational and situational contexts. Is that really this when the other thing is over there? It was people trying to formulate “knowledge” into a machine-readable/executable language. This was not Artificial Intelligence, as all the thinking was done by people. The machine was just a database with a snazzy SQL. I went on the interview because, being a weekend epistemologist, I was curious about what they were up to. The idea of encapsulating all of common knowledge struck me, 30 years ago, as being very, even insanely, optimistic. That they are still at it tells me that they are very sure that they are onto something. I’m still not...
posted by njohnson23 at 9:12 AM on September 13, 2019


1. Someone proposes a scheme they claim will result in AI.
2. Critics say that'll never work -- we need an entirely new approach.
3. Someone proposes a different approach.
4. GOTO 2

This has been going on for longer than most of us have been alive...
posted by Zed at 10:33 AM on September 13, 2019 [1 favorite]


I think AI as a discipline doesn't focus enough on theory (example: this article); the industry focus is on applications, which biases research towards a "prove by doing", technology-driven approach to basic science. Take the question of whether neuromorphic computational models are sufficient or not, or equivalent to biology, etc.: there's a mathematical proof out there that answers that question definitively. We already know that the neural network model of computation can be Turing complete. What's needed is for complexity theorists to help come up with a research program for investigating AI models using theorems and proofs. It's great that the majority of the AI community is focused on empirical science, but by somewhat marginalizing a theoretical CS approach it's kind of ignoring its own foundations.
posted by polymodus at 12:12 PM on September 13, 2019


Notes on "Intriguing Properties of Neural Networks", and two other papers (2014) - "in the interest of the historical record" :P
How Can This Be?
  • The networks aren't over-fitting in any obvious sense
    • CV shows they generalize to new instances from the same training pool

  • The paper suggests looking at "instability", i.e., the Lipschitz constant of the \(\phi\) mapping
    • They can only upper-bound this
    • And frankly it's not very persuasive as an answer (Lipschitz constant is a global property not a local one)

  • Speculative thought 1: what is it like to be an autoencoder?
    • Adversarial examples are perceptually indistinguishable for humans but not for the networks
    • \(\therefore\) human perceptual feature space is very different from the network's feature space
    • What do adversarial examples look like for humans? (Possible psych. experiment with Mechanical Turk)

  • Speculative thought 2: "be careful what you wish for"
    • Good generalization to the distribution generating instances
    • This is \(n^{-1}\sum_{i=1}^{n}{\delta(x-x_i)}\), and \(n = 10^7 \ll\) dimension of the input space
    • Big gaps around every point in the support
    • \(\therefore\) training to generalize to this distribution doesn't care about small-scale continuity...
    • IOW, the network is doing exactly what it's designed to do
    • Do we need lots more than \(10^7\) images? How many does a baby see by the time it's a year old, anyway?
    • What if we added absolutely-continuous noise to every image every time it was used? (Not enough, they use Gaussians)
    • They do better when they feed adversarial examples into the training process (natch), but it's not clear that this isn't just shifting the adversarial examples around... (a toy sketch of how such examples are generated follows below)
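(A minimal FGSM-style sketch in PyTorch of how such adversarial examples are generated; toy input, not from the notes above:)

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Nudge the input a tiny step in the direction that increases the loss.
    # Dummy input; in practice this would be a real, correctly classified image.
    net = models.resnet18(pretrained=True).eval()
    x = torch.rand(1, 3, 224, 224, requires_grad=True)
    y = net(x).argmax(dim=1)                        # the model's own prediction

    loss = F.cross_entropy(net(x), y)
    loss.backward()

    eps = 2.0 / 255                                 # imperceptibly small step
    x_adv = (x + eps * x.grad.sign()).clamp(0, 1)
    print(net(x_adv).argmax(dim=1), "vs", y)        # may now be a different label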

posted by kliuless at 1:01 AM on September 14, 2019




This thread has been archived and is closed to new comments