Time, Space and Causality
September 26, 2019 6:32 AM
The Genius Neuroscientist Who Might Hold the Key to True AI - "Karl Friston's free energy principle might be the most all-encompassing idea since Charles Darwin's theory of natural selection. But to understand it, you need to peer inside the mind of Friston himself." (via)
Friston first became a heroic figure in academia for devising many of the most important tools that have made human brains legible to science. In 1990 he invented statistical parametric mapping, a computational technique that helps—as one neuroscientist put it—“squash and squish” brain images into a consistent shape so that researchers can do apples-to-apples comparisons of activity within different crania. Out of statistical parametric mapping came a corollary called voxel-based morphometry, an imaging technique that was used in one famous study to show that the rear side of the hippocampus of London taxi drivers grew as they learned “the knowledge.”

also btw...
A study published in Science in 2011 used yet a third brain-imaging-analysis software invented by Friston—dynamic causal modeling—to determine if people with severe brain damage were minimally conscious or simply vegetative.
When Friston was elected a Fellow of the Royal Society in 2006, the academy described his impact on studies of the brain as “revolutionary” and said that more than 90 percent of papers published in brain imaging used his methods. Two years ago, the Allen Institute for Artificial Intelligence, a research outfit led by AI pioneer Oren Etzioni, calculated that Friston is the world’s most frequently cited neuroscientist. He has an h-index—a metric used to measure the impact of a researcher’s publications—nearly twice the size of Albert Einstein’s. Last year Clarivate Analytics, which over more than two decades has successfully predicted 46 Nobel Prize winners in the sciences, ranked Friston among the three most likely winners in the physiology or medicine category...
For the past decade or so, Friston has devoted much of his time and effort to developing an idea he calls the free energy principle. (Friston refers to his neuroimaging research as a day job, the way a jazz musician might refer to his shift at the local public library.) With this idea, Friston believes he has identified nothing less than the organizing principle of all life, and all intelligence as well. The question he sets out to answer: “If you are alive, what sorts of behaviors must you show?”
First the bad news: The free energy principle is maddeningly difficult to understand. So difficult, in fact, that entire rooms of very, very smart people have tried and failed to grasp it. A Twitter account with 3,000 followers exists simply to mock its opacity, and nearly every person I spoke with about it, including researchers whose work depends on it, told me they didn’t fully comprehend it.
But often those same people hastened to add that the free energy principle, at its heart, tells a simple story and solves a basic puzzle. The second law of thermodynamics tells us that the universe tends toward entropy, toward dissolution; but living things fiercely resist it. We wake up every morning nearly the same person we were the day before, with clear separations between our cells and organs, and between us and the world without. How? Friston’s free energy principle says that all life, at every scale of organization—from single cells to the human brain, with its billions of neurons—is driven by the same universal imperative, which can be reduced to a mathematical function. To be alive, he says, is to act in ways that reduce the gulf between your expectations and your sensory inputs. Or, in Fristonian terms, it is to minimize free energy.
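The "reduce the gulf between your expectations and your sensory inputs" framing can be made concrete with a toy sketch. This is not Friston's actual formalism (which involves variational inference over generative models); it is a deliberately simplified stand-in in which "free energy" is reduced to a precision-weighted squared prediction error, and the agent updates its belief by gradient descent to shrink it. All names and the learning rate are illustrative assumptions.

```python
def free_energy(belief, observation, precision=1.0):
    # Toy stand-in for free energy: precision-weighted squared prediction error.
    error = observation - belief
    return 0.5 * precision * error ** 2

def minimize(belief, observation, lr=0.1, steps=100):
    # The "perception" route to reducing free energy: revise the belief
    # toward the observation by gradient descent on the error.
    for _ in range(steps):
        grad = -(observation - belief)  # d/d(belief) of 0.5*(obs - belief)^2
        belief -= lr * grad
    return belief

updated = minimize(belief=0.0, observation=1.0)  # belief converges toward 1.0
```

In the full theory the organism can also shrink the same quantity by acting on the world to make its observations match its predictions; this sketch shows only the belief-updating half.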
- Facebook's Latest Purchase Gets Inside Users' Heads—Literally - "The social media company acquires CTRL-Labs, a 'brain-machine-interface' startup that lets users control devices by tapping signals off a wristband."[2,3]
- Inside DeepMind's epic mission to solve science's trickiest problem - "DeepMind's AI has beaten chess grandmasters and Go champions. But founder and CEO Demis Hassabis now has his sights set on bigger, real-world problems that could change lives. First up: protein folding."
- How to Build Artificial Intelligence We Can Trust - "Computer systems need to understand time, space and causality. Right now they don't."
- A critique of pure learning and what artificial neural networks can learn from animal brains - "Artificial neural networks (ANNs) have undergone a revolution, catalyzed by better supervised learning algorithms. However, in stark contrast to young animals (including humans), training such networks requires enormous numbers of labeled examples, leading to the belief that animals must rely instead mainly on unsupervised learning. Here we argue that most animal behavior is not the result of clever learning algorithms—supervised or unsupervised—but is encoded in the genome. Specifically, animals are born with highly structured brain connectivity, which enables them to learn very rapidly. Because the wiring diagram is far too complex to be specified explicitly in the genome, it must be compressed through a 'genomic bottleneck'. The genomic bottleneck suggests a path toward ANNs capable of rapid learning."
- AI That Evolves in the Wild - "Alison Gopnik mentioned how nobody reads past the first sentence of Turing's 1950 paper. Likewise, nobody reads past his 1936 paper to his 1939 'Systems of Logic Based on Ordinals', which is much more interesting. It's about non-deterministic computation: not the universal Turing machine, but the second machine, the one he wrote his Princeton thesis on, the oracle machine, a non-deterministic machine. Already he had realized by then that the deterministic machines were not that interesting; it was the non-deterministic machines that were. Similarly, we talk about the von Neumann architecture, but von Neumann holds only one patent, and that patent is for a non-von Neumann architecture: a neuromorphic computer that can do anything. He explains how it works, because to get a patent you have to show what the invention can do. And nobody reads that patent."
- The Explosive Evolution of Consciousness - "Some philosophers identify consciousness with the complex, reflective, self-conscious experiences that we have when, say, we are sitting in an armchair and thinking about consciousness. As a result, they argue that even babies and animals aren't really conscious. At the other end of the spectrum, some philosophers have argued for 'pan-psychism', the idea that consciousness is everywhere, even in atoms. Recently, however, a number of biologists and philosophers have argued that consciousness was born from a specific event in our evolutionary history: the Cambrian explosion. A new book, 'The Evolution of the Sensitive Soul' by the Israeli biologists Simona Ginsburg and Eva Jablonka, makes an extended case for this idea."
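The "genomic bottleneck" in the abstract above can be sketched in a few lines: a tiny "genome" deterministically grows a much larger, highly structured wiring diagram, so the specification is compressed even though the connectivity is not. The two-integer genome and the modular growth rule here are invented for illustration, not a model of any real developmental program.

```python
def develop(genome, n):
    """Deterministically 'grow' an n-by-n connectivity matrix from a tiny genome.

    The genome is just two integers; the modular rule below is an arbitrary
    toy example of a compact generative rule, not real developmental biology.
    """
    a, b = genome
    return [[1 if (a * i + b * j) % 7 < 3 else 0 for j in range(n)]
            for i in range(n)]

genome = (3, 5)                # 2 parameters in the "genome"...
wiring = develop(genome, 50)   # ...specify 2,500 connection entries
```

The same genome always grows the same wiring, and the genome's size stays constant as the network grows, which is the compression the abstract gestures at.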
In mathematics there’s this deep, old problem called the continuum hypothesis. We have an infinite number of different infinities, but they divide into only two kinds: countable infinities and uncountable infinities. My analogy is the t-shirt table at the end of a conference: there are only extra-small and extra-large shirts, no mediums. The continuum hypothesis (and there is a difference between being true and being provable) has not been proved. It says you will never find a medium-sized infinity; all the infinities belong to one side or the other.[4,5,6]
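Stated in standard set-theoretic notation, the "no medium-sized infinity" claim reads as follows (Gödel and Cohen later showed the hypothesis is independent of the usual ZFC axioms, which is the precise sense in which it "has not been proved"):

```latex
% Continuum hypothesis: there is no set S whose cardinality lies strictly
% between the countable infinity \aleph_0 and the continuum 2^{\aleph_0}.
\neg\,\exists S \;:\; \aleph_0 < |S| < 2^{\aleph_0}
```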
Two very interesting things are happening. On one side you have the uncountable infinities: take a line, and there’s an infinite number of points between any two points; cut off a piece of that line, and it still has an infinite number of points. That, I believe, is analogous to organisms. All organisms do their computing with continuous functions. In nature we use discrete functions for error correction in genetics, but all control systems in nature are analog, and even the smallest analog system has the full power of the continuum.
On the other side, you have the constructible infinities. What’s interesting there is that we’re trying to prove this by doing it. We’re doing our best to create a medium-sized infinity. So, you can say, "Well, it exists. We’ve made it." The current digital universe is growing by 30 trillion transistors per second, and that’s just on the hardware side, so we have this medium-sized infinity, but it still legally belongs to the countable infinities.
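The countable/uncountable divide the quote leans on rests on Cantor's diagonal argument: any list of infinite binary sequences necessarily misses at least one sequence, so the sequences cannot be enumerated. A minimal finite sketch (truncating each sequence to the length of the list):

```python
def diagonal(seqs):
    """Given n binary sequences (each of length >= n), build a sequence
    that differs from the i-th sequence at position i, so it cannot
    equal any sequence in the list."""
    return [1 - seqs[i][i] for i in range(len(seqs))]

listed = [[0, 1, 0],
          [1, 1, 1],
          [0, 0, 0]]
escapee = diagonal(listed)  # differs from row i at column i
```

However long the list, the flipped diagonal escapes it; that is the whole argument that the continuum is strictly bigger than any countable enumeration.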
For around 100 million years, from about 635 to 542 million years ago, the first large multicellular organisms emerged on Earth. Biologists call this period the Garden of Ediacara—a time when, around the globe, a rich variety of strange creatures spent their lives attached to the ocean floor, where they fed, reproduced and died without doing very much in between. There were a few tiny slugs and worms toward the end of this period, but most of the creatures, such as the flat, frond-like, quilted Dickinsonia, were unlike any plants or animals living today.
Then, quite suddenly by geological standards, most of these creatures disappeared. Between 530 and 520 million years ago, they were replaced by a remarkable proliferation of animals who lived quite differently. These animals started to move, to have brains and eyes, to seek out prey and avoid predators. Some of the creatures in the fossil record seem fantastic—like Anomalocaris, a three-foot-long insectlike predator, and Opabinia, with its five eyes and trunk-like proboscis ending in a grasping claw. But they included the ancestors of all current species of animals, from insects, crustaceans and mollusks to the earliest vertebrates, the creatures who eventually turned into us... [8,9,10,11]