The IEEE Spectrum "special report" on The Singularity makes for interesting reading, but I’d like you to try something as you click through it. When you read these essays and interviews, every time you see the word "Singularity," I want you to replace it in your head with the term "Flying Spaghetti Monster."
(My personal favourite right now is "The Flying Spaghetti Monster represents the end of the supremacy of Homo sapiens as the dominant species on planet Earth.")
The Singularity is the last trench of the religious impulse in the technocratic community. The Singularity has been denigrated as "The Rapture For Nerds," and not without cause. It's pretty much indivisible from religious faith: the desire to be saved by something that isn't there (or even the desire to be destroyed by something that isn't there), something that throws off no evidence of ever intending to exist. It's a new faith for people who think they're otherwise much too evolved to believe in the Flying Spaghetti Monster or any other idiot back-brain cult you care to suggest.
Vernor Vinge, the originator of the term, is a scientist and novelist, and occupies an almost unique space. After all, the only other sf writer I can think of who invented a religion that is also a science-fiction fantasy is L. Ron Hubbard.
[We] can certainly conceive of a machine so constructed that it utters words, and even utters words which correspond to bodily actions causing a change in its organs (e.g., if you touch it in one spot it asks you what you want of it, if you touch it in another it cries out that you are hurting it, and so on). But it is not conceivable that such a machine should produce different arrangements of words so as to give an appropriately meaningful answer to what is said in its presence, as the dullest of men can do... [And]... even though such machines might do some things as well as we do them, or perhaps even better, they would inevitably fail in others, which would reveal that they were acting not through understanding, but only from the disposition of their organs. For whereas reason is a universal instrument which can be used in all kinds of situations, these organs need some particular disposition for each particular action; hence it is for all practical purposes impossible for a machine to have enough different organs to make it act in all the contingencies of life in the way in which our reason makes us act. (Cottingham et al. 1985a, 140)
Spectrum: You’ve already expressed your disagreement with many of the ideas associated with the Singularity movement. I’m interested in your thoughts about its sociology. How do you account for its popularity in Silicon Valley?
LeCun: It’s difficult to say. I’m kind of puzzled by that phenomenon. As Neil Gershenfeld has noted, the first part of a sigmoid looks a lot like an exponential. It’s another way of saying that what currently looks like exponential progress is very likely to hit some limit—physical, economical, societal—then go through an inflection point, and then saturate. I’m an optimist, but I’m also a realist.
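Gershenfeld's observation can be checked with a few lines of arithmetic. The sketch below is an illustration added here, not from the interview: it compares the standard logistic (sigmoid) curve against a pure exponential with the same early growth rate. Well before the inflection point the two are nearly indistinguishable, which is exactly why an observer sitting on the early part of the curve cannot tell which one they are on.

```python
import math

def logistic(t):
    """Standard logistic (sigmoid) curve; inflection point at t = 0."""
    return 1.0 / (1.0 + math.exp(-t))

# Far below the inflection point the sigmoid tracks e^t almost exactly;
# approaching the inflection, the two curves diverge sharply.
for t in (-6, -4, -2, 0, 2):
    rel_diff = abs(logistic(t) - math.exp(t)) / math.exp(t)
    print(f"t = {t:+d}   sigmoid = {logistic(t):.5f}   "
          f"e^t = {math.exp(t):.5f}   divergence = {rel_diff:.1%}")
```

At t = -6 the relative divergence is a fraction of a percent; at the inflection point it is already 50 percent. Data gathered from inside the tail simply cannot distinguish "exponential forever" from "about to saturate."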
There are people you'd expect to hype the Singularity, like Ray Kurzweil. He's a futurist. He likes to have this positivist view of the future. He sells a lot of books this way. But he has not contributed anything to the science of AI, as far as I can tell. He has sold products based on technology, some of which were somewhat innovative, but nothing conceptually new. And certainly he has never written papers that taught the world anything about how to make progress in AI.
Spectrum: What do you think he is going to accomplish in his job at Google?
LeCun: Not much has come out so far.
Spectrum: I often notice when I talk to researchers about the Singularity that while privately they are extremely dismissive of it, in public, they’re much more temperate in their remarks. Is that because so many powerful people in Silicon Valley believe it?
LeCun: AI researchers, down in the trenches, have to strike a delicate balance: be optimistic about what you can achieve, but don’t oversell what you can do. Point out how difficult your job is, but don’t make it sound hopeless. You need to be honest with your funders, sponsors, and employers, with your peers and colleagues, with the public, and with yourself. It is difficult when there is a lot of uncertainty about future progress, and when less honest or more self-deluded people make wild claims of future success. That’s why we don’t like hype: it is made by people who are either dishonest or self-deluded, and makes the life of serious and honest scientists considerably more difficult.
When you are in the kind of position that Larry Page, Sergey Brin, Elon Musk, and Mark Zuckerberg are in, you have to prepare for where technology is going in the long run. And you have a huge amount of resources to make the future happen in a way that you think will be good. So inevitably you have to ask yourself those questions: what will technology be like 10, 20, 30 years from now? It leads you to think about questions like the progress of AI, the Singularity, and questions of ethics.
Spectrum: Right. But you yourself have a very clear notion of where computers are going to go, and I don’t think you believe we will be downloading our consciousness into them in 30 years.
LeCun: Not anytime soon.
Spectrum: Or ever.
LeCun: No, you can’t say never; technology is advancing very quickly, at an accelerating pace. But there are things that are worth worrying about today, and there are things that are so far out that we can write science fiction about it, but there’s no reason to worry about it just now.
Spectrum: Here’s another question, this time from Stuart and Hubert Dreyfus, brothers and well-known professors at the University of California, Berkeley: “What do you think of press reports that computers are now robust enough to be able to identify and attack targets on their own, and what do you think about the morality of that?”
LeCun: I don’t think moral questions should be left to scientists alone! There are ethical questions surrounding AI that must be discussed and debated. Eventually, we should establish ethical guidelines as to how AI can and cannot be used. This is not a new problem. Societies have had to deal with ethical questions attached to many powerful technologies, such as nuclear and chemical weapons, nuclear energy, biotechnology, genetic manipulation and cloning, information access. I personally don’t think machines should be able to attack targets without a human making the decision. But again, moral questions such as these should be examined collectively through the democratic/political process.
Spectrum: You often make quite caustic comments about political topics. Do your Facebook handlers worry about that?
LeCun: There are a few things that will push my buttons. One is political decisions that are not based on reality and evidence. I will react any time some important decision is made that is not based on rational decision-making. Smart people can disagree on the best way to solve a problem, but when people disagree on facts that are well established, I think it is very dangerous. That's what I call people on. It just so happens that in this country, the people who are on the side of irrational decisions and religious-based decisions are mostly on the right. But I also call out people on the left, such as those who think GMOs are all evil—only some GMOs are!—or who are against vaccinations or nuclear energy for irrational reasons. I'm a rationalist. I'm also an atheist and a humanist;[*] I'm not afraid of saying that. My idea of morality is to maximize overall human happiness and minimize human suffering over the long term. These are personal opinions that do not implicate my employer. I try to keep a clear separation between my personal opinions—which I post on my personal Facebook timeline—and my professional writing, which I post on my public Facebook page.
The foregoing points at a basic issue with how quickly a scientifically adequate account of human intelligence can be developed. We call this issue the complexity brake. As we go deeper and deeper in our understanding of natural systems, we typically find that we require more and more specialized knowledge to characterize them, and we are forced to continuously expand our scientific theories in more and more complex ways. Understanding the detailed mechanisms of human cognition is a task that is subject to this complexity brake. Just think about what is required to thoroughly understand the human brain at a micro level. The complexity of the brain is simply awesome. Every structure has been precisely shaped by millions of years of evolution to do a particular thing, whatever it might be. It is not like a computer, with billions of identical transistors in regular memory arrays that are controlled by a CPU with a few different elements. In the brain every individual structure and neural circuit has been individually refined by evolution and environmental factors. The closer we look at the brain, the greater the degree of neural variation we find. Understanding the neural structure of the human brain is getting harder as we learn more. Put another way, the more we learn, the more we realize there is to know, and the more we have to go back and revise our earlier understandings.
Just before birth, mice with the human DNA had brains that were noticeably larger — about 12 percent bigger than the brains of mice with the chimp DNA, according to a report in the journal Current Biology.
"We were really excited when we saw the bigger brains," Silver says. Her team now wants to know if the mice will behave differently in adulthood. They're also looking for other bits of uniquely human DNA that affect the brain. "We think this is really the tip of the iceberg," she says.
The particular region of DNA they found to be important is in a part of the genetic code that was once called "junk DNA." This is DNA that doesn't code for proteins, so scientists used to think it served no purpose. These days, researchers believe this kind of DNA probably regulates how genes get turned on and off — but what exactly is happening there is still mysterious.
The series Serial Experiments Lain proposes the interesting idea that such a superhuman hive mind might well appear, but it seems to assume that there would be only one. It ends up as a variation on "the internet wakes up and becomes intelligent," except that it is the humans who use the internet, rather than the computers connected to it, who are the fundamental computing elements of that intelligence.
I think that the spontaneous appearance of hive minds in the internet is a very real phenomenon already, but I don't credit the idea that it would happen exactly once, more or less at a single moment, and that the result would be permanent and omnipresent.
Human hive-minds, in one sense, are a factor in our lives all the time. They adapt and reorganize based on experience and new challenges, and the communication channels within them adapt as well. Humans can create new hive-minds as needed, and any given human may be part of many of them. As a practical matter, all human organizations do this, including corporations and governments.
All communication between humans is much slower than the processing rate inside human minds. As far as human hive minds are concerned, the communication possible in direct teamwork is the fastest, but it doesn't scale. Creating organizations of more than about 30 people requires hierarchization, which decreases communication bandwidth and increases latency. The larger the group, the more constricted the bandwidth compared to potential message traffic, and the greater the latency. All larger hive-mind constructs among humans have been based on communications several orders of magnitude slower and less efficient than even the direct interpersonal communication used in small groups. Geographic distribution usually contributed a few more orders of magnitude of degradation.
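The scaling argument can be made concrete with some back-of-the-envelope arithmetic (the numbers below are illustrative assumptions, not from the original post): in direct teamwork every pair of members needs its own channel, so channels grow quadratically with group size, while a hierarchy caps each person's fan-out but inserts a relay hop, and hence added latency, at every level.

```python
import math

def pairwise_channels(n):
    """Direct teamwork: a dedicated channel between every pair of members."""
    return n * (n - 1) // 2

def hierarchy_levels(n, span=7):
    """Levels in a reporting tree where each node manages `span` people.
    A span of 7 is an assumed, conventional span of control."""
    return max(1, math.ceil(math.log(n, span)))

for n in (5, 30, 300, 30_000):
    print(f"{n:>6} people: {pairwise_channels(n):>11,} direct channels, "
          f"{hierarchy_levels(n)} hierarchy levels")
```

A 30-person group already implies 435 distinct pairwise channels, which is roughly where direct teamwork stops scaling; past that point, every additional hierarchy level is another hop that each message must traverse.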
With the development of the internet it becomes possible for arbitrarily large groups of people who are geographically distributed to spontaneously form hive-minds and to communicate with one another at speeds and latencies approaching those which previously only had been possible in direct teamwork. The internet largely solves the scaling problem involved in direct teamwork, and totally eliminates the effects of geographic distribution of participants. In the "global village" of the internet, everything is right next door.
Hive-minds will compete and contend. Some will cooperate, forming coalitions. Sometimes that will cause them to merge. Some hive-minds will break into pieces, yielding children whose contributing members sort themselves based on their disagreements. And generally they'll be self-organizing, and many will be able to adapt to changing circumstances.
And now we have reached the point where the science/engineering feedback loop has given engineers the tools and technologies to create the internet, the most recent of my four most important inventions in human history. And just as with the other three (spoken language, writing, movable type printing) it will cause a "knee" in human capabilities and behavior. And because of that, a true superhuman "intelligence" may appear during our lifetime.