In Telepathic Society, One Who Can Hide Thoughts Is King
October 14, 2017 12:21 AM

Hugh Howey: How to Build a Self-Conscious Machine - "Unlike the direction most autonomous vehicle research is going—where engineers want to teach their car how to do certain things safely—our team will instead be teaching an array of sensors all over a city grid to watch other cars and guess what they're doing."
That blue Nissan is going to the grocery store because “it is hungry.” That red van is pulling into a gas station because “it needs power.” That car is inebriated. That one can’t see very well. That other one has slow reaction speeds. That one is full of adrenaline.

Thousands and thousands of these needs and anthropomorphic descriptors are built up in a vast library of phrases or indicator lights. If we were building a person-shaped robot, we would do the same by observing people and building a vocabulary for the various actions that humans seem to perform. Sensors would note objects of awareness by scanning eyes (which is what humans and dogs do). They would learn our moods by our facial expressions and body posture (which current systems are already able to do). This library and array of sensors would form our Theory of Mind module. Its purpose is simply to tell stories about the actions of others. The magic would happen when we turn it on itself.
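Reduced to a toy sketch, the module Howey describes might look something like this (every name below is invented for illustration, not taken from his essay or any real system):

```python
# Toy Theory of Mind module: a library mapping observed behaviors to
# anthropomorphic "stories", then pointed at the observer itself.
# All behaviors and stories here are hypothetical.

STORY_LIBRARY = {
    "turns_into_grocery_lot": "it is hungry",
    "pulls_into_gas_station": "it needs power",
    "weaves_between_lanes": "it is inebriated",
    "brakes_late": "it has slow reaction speeds",
}

def narrate(actor: str, behavior: str) -> str:
    """Tell a story about an observed action."""
    story = STORY_LIBRARY.get(behavior, "its motives are unclear")
    return f"{actor} {behavior.replace('_', ' ')} because {story}."

print(narrate("That blue Nissan", "turns_into_grocery_lot"))
# The "magic" step: feed the module the system's own logged action.
print(narrate("This car", "pulls_into_gas_station"))
```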
also btw...
posted by kliuless (26 comments total) 39 users marked this as a favorite
 
"Computing hardware can do anything that a brain could do, but I don't think at this point we're doing what brains do."

I think part of the problem is that we’re not entirely sure what brains do, and that our subjective experience might not be a good metric for how they do it. I think that’s part of what bogged down Good Old Fashioned AI: the subjective experience of a general symbolic thinking framework breaks down quickly when you try to build it. We’re just as likely a bag of specific tricks, one of which is the belief in an unchanging “I” who subjectively experiences and “makes decisions”. The Buddhists have some insight there. That may make sense from an evolutionary perspective, but be very misleading to a researcher. Maybe we’re just a bundle of expert systems. The design of the brain, with specific areas dedicated to, for example, visual processing, may lend some credence to that.
posted by leotrotsky at 3:10 AM on October 14, 2017 [6 favorites]


> Thousands and thousands of these needs and anthropomorphic descriptors are built up in a vast library of phrases or indicator lights. If we were building a person-shaped robot, we would do the same by observing people and building a vocabulary for the various actions that humans seem to perform.

Cars don't fuel up because they're thirsty, they do it (to extend the metaphor) because they anticipate the risk of thirst. In city driving, the car can safely afford to run its tank to nearly-empty regularly because it's never far from fuel; when driving in the American southwest, the car might have to refuel at every gas station because of the danger of being too far away from one once it realizes its need.
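In code, the difference is just an anticipatory margin in the refueling rule; a minimal sketch, with all names and numbers made up:

```python
def should_refuel(range_remaining_km: float,
                  km_to_next_station: float,
                  safety_margin_km: float) -> bool:
    """Refuel when the anticipated risk of running dry is too high,
    not because the tank is 'thirsty' right now."""
    return range_remaining_km < km_to_next_station + safety_margin_km

# City driving: a station is never far, so a small margin suffices.
print(should_refuel(40, 5, 10))     # False: run the tank nearly empty
# American southwest: the next station may be 150 km away.
print(should_refuel(160, 150, 30))  # True: top up at every opportunity
```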

Anthropomorphism becomes a convenient way to frame behaviors in terms of needs and tasks, but it seems severely self-limiting, because that framework demands continuing internal consistency while becoming increasingly elaborate, and all the while it must continue to map the behaviors of the real world with sufficient accuracy to forestall unintended consequences.

It sounds like a contest between poets and architects.
posted by ardgedee at 4:24 AM on October 14, 2017 [2 favorites]


I'll believe AI has arrived when a machine unilaterally decides to spend the afternoon daydreaming and procrastinating with a cold digital brewskie.
posted by parki at 4:41 AM on October 14, 2017 [1 favorite]


How would a thing like, say, artificial fire work? If consciousness is an emergent result of a very particular set of electrochemical processes, analogous to a natural phenomenon like fire (as the pre-Hindu Vedics argued thousands of years ago), then it might not be a phenomenon you could ever, in any real sense, create an artificial version of (at least, I’m stymied when I try to conceive of anything that can behave like fire in the real world, as opposed to inside the context of a fictional, simulated environment that stops short of being reality, but isn’t simply old-fashioned, real fire sparked by the same kinds of non-digital, non-symbolic chemical/physical processes already understood to start fires naturally). If the real natural world is the medium of propagation for the sense of consciousness and self-awareness, in a way analogous to a complex natural phenomenon like fire, I don’t think you could ever expect to produce a completely isomorphic analogue to it in the real world using digital representations. Consciousness involves the use of symbolic representations, but that doesn’t imply consciousness as a natural process is made up of symbolic representations or produced by them.

Try to imagine what “artificial fire” might be. Is there even any sense in which it makes sense to speak of such a thing, other than when we discuss modeling what we know of how real fire behaves in fictional worlds? I’m not convinced you could create an artificial, digital fire that works like real fire in the real world using a computer program, without at some point just hooking in some kind of lighter peripheral that lights a plain old-fashioned, non-artificial fire. If the base conditions required for starting up and persisting consciousness as a complex phenomenon are dependent on a finite number of specific physical and chemical precursors, in just the right proportions and with just the right physical properties, you could never create it without also creating those exact conditions, and there’d be hard limits on the degree to which you could control and shape the form it takes and its behaviors and effects. I’m still skeptical you really could produce the kind of artificial consciousness hard AI takes as inevitable, or that there would be any use or benefit to us humans in doing so.

If we could really produce intelligent, digital consciousness, programming would start to become as much about doing politics with machines as programming them, and what’s the practical benefit for humanity in that anyway? I wouldn’t want to have to debate with my car to get it to take me to the store, would you?
posted by saulgoodman at 5:17 AM on October 14, 2017 [5 favorites]


It’s like imagining roasting real meat over a painting or video image of fire. How could you possibly make that work—bridge the gap between symbolic representation and the physical fact of fire in the real world, without just setting the canvas or display monitor on fire?
posted by saulgoodman at 5:31 AM on October 14, 2017 [2 favorites]


Excellent Saturday morning coffee bunch of links, thanks kliuless!

If this trait [self-consciousness] is so useful, then why aren’t all animals self-conscious?

Well, on the cosmological scale it's been (temporarily?) useful for us in the very, very short term ...
posted by carter at 5:48 AM on October 14, 2017


Did somebody mention self driving cars?
posted by DreamerFi at 6:33 AM on October 14, 2017 [1 favorite]


How could you possibly make that work

The darkest of magics, calling on the screaming souls of thousands of the Damned and imperiling the lives and sanity of everyone within a couple miles' radius. But, as my mom says, "you can't make an omelet without burning a hamlet."

In reality, meat roasted over a painting usually ends up kind of dry, so I don't recommend the experiment.
posted by GenjiandProust at 6:39 AM on October 14, 2017


Slightly less seriously, I think the blind spot of the Futurism industry is that it's pretty much guesses, bullshit, and hucksterism that relies on people not remembering "misses" as often as they remember "hits." Burning alive any Futurist with less than an 80% hit rate after 5 years would probably solve this problem...
posted by GenjiandProust at 6:42 AM on October 14, 2017 [1 favorite]


Something something spark of consciousness, something something internal combustion. Can you simulate emergent kinetic properties?

This is not a brain.
posted by mneekadon at 6:49 AM on October 14, 2017 [1 favorite]


The future is always "five to ten years away".
posted by octothorpe at 7:01 AM on October 14, 2017 [2 favorites]


I don't really understand the cooking with fire issue. We cook with metaphors all the time, from induction stovetops to microwave ovens.
posted by chavenet at 8:28 AM on October 14, 2017 [1 favorite]


Something the main article brought to mind for me is the lead-crime hypothesis. It is not easy to grapple with the idea that for all our supposed free will and personal responsibility, moral structures in society, deterrence through law enforcement, and sweeping changes in technology and the economy...the incidence of small-scale evil around us might have been driven by one chemical.

No one would ever accept that explanation of human behaviour on an individual scale. But similarly, if Skynet suddenly launches the nukes we would say it's because Skynet decided to be evil, not because maybe there was a signed overflow error in the neural net evaluation code that led to random outputs under certain conditions.
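To make the overflow point concrete, here is a hypothetical fragment (not any real system's code) where a fixed-width signed accumulator wraps and silently flips sign:

```python
import numpy as np

# Hypothetical: a decision score accumulated in a 32-bit signed int.
score = np.int32(2_000_000_000)
activation = np.int32(500_000_000)
with np.errstate(over="ignore"):
    score = score + activation   # wraps past 2**31 - 1
print(score)  # -1794967296: a large positive score silently becomes
              # a large negative one, and downstream logic that treats
              # "negative" as a trigger has just "decided to be evil".
```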
posted by allegedly at 9:02 AM on October 14, 2017 [2 favorites]


Great post, lots to review.

If this trait [self-consciousness] is so useful, then why aren’t all animals self-conscious?

(Naively) Questioning this assertion. How is my cat not self-conscious?

A tangent on autonomous vehicles:
The focus on each vehicle being able to drive itself anywhere at any time seems misplaced to me. If you put as much focus on the environment as on the vehicle - paths or sensors in the roadway, vehicle sensors at intersections, central traffic control - then you could have autonomous vehicles today. Obviously you can't wire every foot of every road, but this is much more feasible in large urban areas.
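A minimal sketch of the sort of broadcast an instrumented intersection could provide, so each vehicle needs far less onboard inference (all field names hypothetical):

```python
from dataclasses import dataclass

@dataclass
class IntersectionReport:
    """Hypothetical message broadcast by roadway sensors to nearby cars."""
    intersection_id: str
    signal_state: str            # "red" | "amber" | "green"
    pedestrians_in_crosswalk: int
    cross_traffic_speed_kmh: float

def safe_to_proceed(r: IntersectionReport) -> bool:
    # The car defers to the roadway's own sensors instead of having to
    # infer everything from its cameras.
    return r.signal_state == "green" and r.pedestrians_in_crosswalk == 0

print(safe_to_proceed(IntersectionReport("5th_and_Main", "green", 0, 0.0)))
```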

Container ports are already automated like this, with driverless 'trucks' shifting containers around.

I suspect I know why autonomous vehicle technology is currently so popular: it holds the promise of sustaining the automobile industry and permitting higher vehicle density and efficiency without cities or governments spending a dime on infrastructure or on high-cost mass transit like better trains.
posted by Artful Codger at 9:10 AM on October 14, 2017


That blue Nissan is going to the grocery store because “it is hungry.”

Ah, the 'Basil Fawlty' theory of mind.

Seriously. We're saying that to have conscious states is to attribute conscious states to ourselves? But to know what a conscious state is, we have to have had one. So we have to have already attributed one to ourselves? So before we can attribute conscious states to ourselves, we have to attribute conscious states to ourselves?
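The regress has the shape of a recursive definition with no base case; a throwaway sketch of the same shape (purely illustrative):

```python
# To attribute a conscious state, you must already know what one is,
# which requires having already attributed one... with no base case,
# it never bottoms out.
def attribute_conscious_state_to_self():
    return attribute_conscious_state_to_self()

attribute_conscious_state_to_self()  # raises RecursionError
```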
posted by Segundus at 9:13 AM on October 14, 2017 [1 favorite]


America must be as dominant in the heavens as it is on Earth.

Mike Pence is nearly as stupid and tone-deaf as his boss.
posted by SPrintF at 9:57 AM on October 14, 2017 [2 favorites]


Container ports are already automated like this, with driverless 'trucks' shifting containers around.

Container ports don't have massive numbers of other (non-truck) road users.
posted by asterix at 11:19 AM on October 14, 2017 [1 favorite]


"you can't make an omelet without burning a hamlet." Isn't that more like cooking an omelet over a burning play script. Though, the slings and arrows of outrageous fortune, never prepared some for the meatless omelet. To thy own self, be stew.
posted by Oyéah at 11:25 AM on October 14, 2017 [2 favorites]


Liquid sodium, cooling nuclear reactors. Why does that send chills down my spine, and set off my need for worry beads?
posted by Oyéah at 11:34 AM on October 14, 2017


I suspect I know why autonomous vehicle technology is currently so popular

The reason autonomous cars are so popular is because they will save millions of lives, and free up billions of hours of people’s time, while reducing their capital costs for transportation; and a lot of capitalists understand that when people fully realize this there will be scads of money to be made.

I’ll wager, however, that just as with other advancements in transportation safety, the benefits will not be nearly as fully realized in the US as in other OECD nations.
posted by lastobelus at 2:51 PM on October 14, 2017 [3 favorites]


Seriously.... So before we can attribute conscious states to ourselves, we have to attribute conscious states to ourselves?

The Wikipedia article on emergence gives a reasonably basic explanation; it might help.
posted by lastobelus at 3:00 PM on October 14, 2017


Container ports don't have massive numbers of other (non-truck) road users.

Yes, but my point is that a combination of the current level of vehicle autonomy PLUS data from road sensors is more feasible now than having just autonomous vehicles. And arguably safer.

As a sometime urban cyclist and frequent pedestrian, I dread what kind of hell driverless cars could wreak on the downtown, as far as biking/walking are concerned.

The reason autonomous cars are so popular is because they will save millions of lives, and free up billions of hours of people’s time, while reducing their capital costs for transportation; and a lot of capitalists understand that when people fully realize this there will be scads of money to be made.

I don't see how self-driving cars would free up time, except maybe for the ability to queue up and move like trains... but you could also simply have more and better trains... I also don't think that driverless vehicles can efficiently scale up EXCEPT in dense urban areas... and again these areas would arguably be better served with better public transit.

More and more people in cities are NOT buying cars, so who would really have the money to own all these autonomous cars? ...Companies... and now you are simply talking about privatized public transit.

I know they are coming, and I don't completely hate the idea, but I think that too much hope is being pinned on driverless vehicles, and not enough on the more practical and efficient alternatives. I also don't think that the solution to auto proliferation is MOAR CARZ. Sorta like the gun debate in the US.
posted by Artful Codger at 8:29 AM on October 15, 2017 [1 favorite]


Beyond the general problems with futurist thinking, there are some specific blind spots that occur when people speculate about the future of AI and robotics. Veteran roboticist Rodney Brooks lays out seven of them:

1. Overestimating and underestimating
2. Imagining magic
3. Performance vs. competence
4. Suitcase words
5. Exponentials
6. Hollywood scenarios
7. Speed of deployment

The speed-of-deployment problem is going to bite self-driving cars especially hard. Hofstadter's essay, linked above, is also a sharp elaboration of the performance vs. competence and suitcase word errors.

To tie this to the OP, Howey's essay, which I found ludicrous, falls prey to systematic over- and underestimating errors, particularly in the passages where he glibly assumes we already have off-the-shelf "sensors" that can be coupled with "learning algorithms" to produce first-order TOM descriptions. This is pure vaporware; it doesn't exist as a plausible extrapolation from any existing technology, or even from any well-validated theory. As Brooks points out, it's trivial to see that Google's image-sorting algorithms are not, as Howey claims, an instance of AI: while they might be fine at associating labels with pictures, they haven't got the slightest idea what the content of the pictures is. The programs are remarkable and useful tools. Let's not make them into something magical and unreal.
posted by informavore at 1:08 PM on October 15, 2017 [2 favorites]


Yes, but my point is that a combination of the current level of vehicle autonomy PLUS data from road sensors is more feasible now than having just autonomous vehicles. And arguably safer.

Well, maybe, but each of those sensors becomes a new point of failure, and building redundancy into our infrastructure is even rarer than maintaining the infrastructure we already have....
posted by GenjiandProust at 1:44 PM on October 15, 2017


Q. What kind of futurist are you?
A. If you're not the kind the author considers herself to be, shucks, you sure are awful

I really enjoyed TFA. Mr Howey has a light touch but a very clear way of expressing his thoughts. Thanks for posting.
posted by Sparx at 3:27 PM on October 15, 2017


-Is AI Riding a One-Trick Pony?
-Google's Hinton outlines new AI advance that requires less data
-New Research Aims to Solve the Problem of AI Bias in "Black Box" Algorithms
-When AI becomes too big to fail: "In short, employing AI systems to trade on a black-box basis without corresponding pressure to understand the methods driving their success (or failure) isn't really very different to putting your trust in a 'genius' human trader who can come up with the goods time and time again, in ways you can't really understand, until he of course doesn't... Just because it's new and technically mysterious doesn't mean it doesn't pose the exact same systemic issues the old stuff did."
posted by kliuless at 4:56 AM on November 8, 2017 [1 favorite]

