The extended mind surfs uncertainty with predictive processing
April 7, 2018 9:12 PM Subscribe
Where does the mind end and the world begin? Is the mind locked inside its skull, sealed in with skin, or does it expand outward, merging with things and places and other minds that it thinks with? What if there are objects outside—a pen and paper, a phone—that serve the same function as parts of the brain, enabling it to calculate or remember? You might say that those are obviously not part of the mind, because they aren’t in the head, but that would be to beg the question. So are they or aren’t they?
Consider a woman named Inga, who wants to go to the Museum of Modern Art in New York City. She consults her memory, recalls that the museum is on Fifty-third Street, and off she goes. Now consider Otto, an Alzheimer’s patient. Otto carries a notebook with him everywhere, in which he writes down information that he thinks he’ll need. His memory is quite bad now, so he uses the notebook constantly, looking up facts or jotting down new ones. One day, he, too, decides to go to MOMA, and, knowing that his notebook contains the address, he looks it up. […]
Andy Clark, a philosopher and cognitive scientist at the University of Edinburgh, believes that there is no important difference between Inga and Otto, memory and notebook. He believes that the mind extends into the world and is regularly entangled with a whole range of devices. But this isn’t really a factual claim; clearly, you can make a case either way. No, it’s more a way of thinking about what sort of creature a human is. Clark rejects the idea that a person is complete in himself, shut in against the outside, in no need of help. […]
Perhaps because Clark has been working so closely with a neuroscientist, he has moved quite far from where he started in cognitive science in the early nineteen-eighties, taking an interest in A.I. “I was very much on the machine-functionalism side back in those days,” he says. “I thought that mind and intelligence were quite high-level abstract achievements where having the right low-level structures in place didn’t really matter.” Each step he took, from symbolic A.I. to connectionism, from connectionism to embodied cognition, and now to predictive processing, took him farther away from the idea of cognition as a disembodied language and toward thinking of it as fundamentally shaped by the particular structure of its animal body, with its arms and its legs and its neuronal brain. He had come far enough that he had now to confront a question: If cognition was a deeply animal business, then how far could artificial intelligence go?