Are machines
September 4, 2001 3:40 PM   Subscribe

Are machines going to take over our planet? Stephen Hawking sure thinks so. Is he just making a fuss about nothing?
posted by yevge (28 comments total)
 
Yes.
posted by delmoi at 3:49 PM on September 4, 2001


Computers really aren't Hawking's area of expertise, as far as I know. Certainly the quote from him in the article severely oversimplifies the relation between computer speed and intelligence. I'm not saying this isn't an issue (though I have yet to see any real cause for concern) but if it is, it's not one that we should expect Stephen Hawking to have any particularly original insight into. He's brilliant, but he's not an A.I. researcher.
posted by moss at 3:57 PM on September 4, 2001


Does anyone else think that Hawking's been reading a little too much Gibson? (I love the wet-ware comment in that link.)
posted by eyeballkid at 4:01 PM on September 4, 2001


*Sigh*. It's just another "those damn machines are going to kill us in the end, I tell you!" prophecy. The difference is that this time it comes from one of the world's leading scientists, and that fact alone probably makes it bound to cause a bit of an uproar. Which is, of course, totally unjustified, since Hawking, as far as I remember, is not a CS/AI expert.

This article at The Register explains why we are going to survive after all. Favourite quote: "To observe that we can't out-calculate a computer is exactly like observing that we can't outrun a Ferrari. It's true, yes, but it just doesn't matter."
posted by happyh at 4:05 PM on September 4, 2001


"To observe that we can't out-calculate a computer is exactly like observing that we can't outrun a Ferrari. It's true, yes, but it just doesn't matter. "

Friends in Cambridge tell me that you can outrun Hawking's wheelchair, but that doesn't stop him from running you over from behind. That does matter.
posted by holgate at 4:24 PM on September 4, 2001


Pfft - pretty hypocritical coming from a guy with a $55 million exoskeleton.
posted by obiwanwasabi at 4:38 PM on September 4, 2001


machines are taking over stephen hawking.
posted by quonsar at 4:42 PM on September 4, 2001


cold kickin' it luddite style
posted by machaus at 5:01 PM on September 4, 2001


I'm curious: What do you who are critical of Hawking's contentions come up with when you extrapolate, even conservatively, increases in machine capability over, say, the next 500 years?
posted by rushmc at 5:21 PM on September 4, 2001


rushmc - although I'm certainly not going to write off Prof Hawking's thoughts, I think what some people might be talking about is the simple fact that the human brain/human intelligence cannot really be measured in MHz.

Read Ray Kurzweil's "Age of Spiritual Machines" for an argument otherwise, but do note that early on in the book he starts using the phrase 'computational ability of the human brain' in comparison to processing power - that always struck me as a pretty big jump as a comparative device. We don't even completely understand the mechanics of the human brain today, and all 'AI' projects I've read about recently seem to be operating under the assumption that if you feed the 'AI' enough important facts, trivia and 'common sense' (i.e., cats and dogs are both small furry mammals kept as pets, but nevertheless there are no seeing-eye cats) the AI will become as intelligent as a person.
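
To make that concrete, here's roughly what that approach boils down to - a toy sketch, everything in it invented by me, nothing like Cyc's actual machinery:

    # Toy sketch of the "feed it enough facts" approach (all names made up).
    # Real projects use far richer logic; this just shows the flavor.
    facts = {
        ("cat", "is_a"): "small furry mammal kept as a pet",
        ("dog", "is_a"): "small furry mammal kept as a pet",
        ("dog", "can_be"): "a seeing-eye animal",
    }

    # The 'common sense' lives in hand-entered exceptions, not understanding:
    exceptions = {("cat", "can_be"): "nothing - there are no seeing-eye cats"}

    def query(subject, relation):
        """Answer from stored facts; fall flat outside them."""
        if (subject, relation) in exceptions:
            return exceptions[(subject, relation)]
        return facts.get((subject, relation), "no idea")

    print(query("dog", "can_be"))    # "a seeing-eye animal"
    print(query("cat", "can_be"))    # the hand-coded exception
    print(query("cat", "dreams_of")) # "no idea" - nothing generalizes

Every answer it will ever give had to be typed in by a person first; that's the assumption I find shaky.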

All this naysaying aside, I have no doubt that we will see AI in our lifetime, so never mind, I guess.
posted by GriffX at 5:56 PM on September 4, 2001


Now that I think about it, I've already seen A.I., but I felt it completely fell apart in the final act. Forget what I said before.
posted by GriffX at 5:57 PM on September 4, 2001


I'm curious: What do you who are critical of Hawking's contentions come up with when you extrapolate, even conservatively, increases in machine capability over, say, the next 500 years?

I'm not sure... This, maybe? Just kidding. In my opinion, trying to guess what will happen in the next 500 (or 5000, or 50, or 5) years is a bit pointless, and that's what makes Hawking's comments also pointless. There is simply not enough scientific evidence to (even remotely) prove his "theory", so I choose not to accept it.
posted by happyh at 6:25 PM on September 4, 2001


What do you who are critical of Hawking's contentions come up with when you extrapolate, even conservatively, increases in machine capability over, say, the next 500 years?

This is a good question, because it does seem as though AI might well become possible given enough computing power. But I have two reasons to think that increases in computing power alone aren't enough to be sure we'll develop AI.

First: it's not yet clear what the upper limit to the speed of a computer is. We'll almost certainly develop computers a thousand times faster than the ones we have today, but it's not clear whether (or how soon) we'll develop computers a billion or a trillion times faster.

Second, and most importantly: today's software would not show general intelligence at any speed. That is, some innovation is required to build a working AI, even given unlimited computing power. Many cognitive scientists offer reasons to think that this innovation is possible, but it can't be taken for granted.
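
A toy illustration of that second point (the patterns below are my own invention): a canned pattern-matcher gives the same shallow answers no matter how fast the hardware running it gets.

    # ELIZA-style responder: run it a millionfold faster and it still
    # has exactly the same (lack of) understanding.
    import re

    RULES = [
        (re.compile(r"\bI am (.+)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bI feel (.+)", re.I), "How long have you felt {0}?"),
    ]

    def respond(line):
        for pattern, template in RULES:
            match = pattern.search(line)
            if match:
                return template.format(match.group(1))
        return "Tell me more."  # the eternal fallback

    print(respond("I am worried about machines"))
    # -> "Why do you say you are worried about machines?"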
posted by moss at 6:46 PM on September 4, 2001


Apples can still drive oranges to extinction.
posted by rushmc at 7:37 PM on September 4, 2001


The Matrix Has You.
posted by kingmissile at 8:33 PM on September 4, 2001


"We must develop as quickly as possible technologies that make possible a direct connection between brain and computer, so that artificial brains contribute to human intelligence rather than opposing it," Hawking said.

This seems particularly stupid to me - if we're worried about machines taking over the world, do we really want to make ourselves that much more dependent on them by, you know, implanting them in our brains?
posted by Sinner at 8:44 PM on September 4, 2001


Hawking is losing his mind. I think that's a very real possibility.
posted by kindall at 9:02 PM on September 4, 2001


There is AI and there is mere processing power. AI may require a lot of processing power to be effective, but the real challenge is getting the 'I' into the AI. The fundamentals of reasoning are pretty elusive things to program. You could do it by brute force, but then every input has to be checked for validity, and even if you could be sure all the data was correct, there is no accounting for every nuance of a personality. If the goal is not to recreate human intelligence with all its emotional baggage and quirks, then AI seems more like a specialized intelligence.

Ray Kurzweil likes to paint a nice picture of what machines are capable of. It gets down to that basic question of whether the building blocks of a sentient intelligence need to be biological, and, if they don't, what is so important about being human?

Hawking makes it out to be biological vs. mechanical. But I think they are so closely entwined already that it's not a relevant factor. We use machines all the time, and I guess there is some sort of residual guilt and fear factor that believes they will one day use us. I think that on that day there won't be a defining 'them' or 'us', just bits and pieces that make up a whole being.

Skallas, the 'brute force' link is about Cyc.
posted by john at 9:28 PM on September 4, 2001


I think that whole, tiny article had very few quotes from Hawking and just kind of rambled on about them to their logical conclusion. Hawking says one thing and everyone freaks out. I think it could very easily be a misconstrued quote, amplified by alarmism on the part of the journalist.
posted by Kafkaesque at 9:29 PM on September 4, 2001


Saying that advances in processing power will eventually lead to superhuman AI is a little like saying that if we discover enough oil, we'll have cars that go a trillion miles per hour.

On the other hand, computers have already taken over. The fact that humans already modify their behavior to satisfy computer models (credit ratings, for example) shows that the machines have already won.
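
A made-up example of what I mean (the weights are my invention, not any real bureau's formula): once people know the formula, they manage the inputs instead of their actual finances.

    # Toy credit-scoring formula; the weights are pure invention.
    def credit_score(on_time_ratio, utilization, account_age_years):
        return (400
                + 300 * on_time_ratio             # pay on time
                - 200 * utilization               # keep balances low *on paper*
                + 10 * min(account_age_years, 15))

    # Same person, two strategies; the second just games the inputs.
    print(credit_score(0.95, 0.60, 4))  # lives normally
    print(credit_score(0.95, 0.05, 4))  # shuffles debt just before reporting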
posted by electro at 9:57 PM on September 4, 2001


Electro...

Modifying behavior to get around computer models of credit ratings is no different than if the calculations were done by hand.

On the other hand, if you believe that the brain is a 'blank slate' when born (aside from having the ability to receive data from the nerves, of course), as I do, you have to make the further assumption that computers will one day be able to serve fully as artificial intelligences. The concept of being self-aware is an interesting one, though. I don't believe in a soul, but I'm always intrigued by how being self-aware comes about, which in my opinion is the strongest argument one could make for being spiritual, even though I'm completely unspiritual. Even if a computer could have a logical conversation with you, and *learn*, that's still a long way from being self-aware.

I think applications such as a database that can answer questions posed logically and negotiate intelligently with other machines and humans are definitely possible, will come in the next 10 to 20 years, and will totally change how the world operates. Imagine a factory that didn't need workers at all, because a computer and its robotic arms would be able to completely self-diagnose as well as any 'intelligent' worker could. However, I'm not sure if a computer will *ever* be self-aware. I'm defining self-aware as wondering. Will a computer ever, without being programmed or being told other people's views on the subject, wonder what the stars look like? Because nearly everyone has thought about that at some time. I dunno.
posted by Kevs at 10:23 PM on September 4, 2001


I wonder what Marvin Minsky thinks about black holes?
posted by prodigal at 11:09 PM on September 4, 2001



Modifying behavior to get around computer models of credit ratings is no different than if the calculations were done by hand.

Not quite. If the calculations are done by hand, the person doing them can say, "hey, this doesn't make any sense; maybe the model is wrong."

The problem is computers don't have Judgement. People like to pretend that they do, so they can avoid making decisions themselves, but this is a lie.
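
To stretch my made-up scoring example from above: the formula will happily grade inputs that no human clerk would let past their desk.

    # Same invented formula as before; a person doing this by hand
    # would stop and question the inputs. The formula just computes.
    def credit_score(on_time_ratio, utilization, account_age_years):
        return (400 + 300 * on_time_ratio - 200 * utilization
                + 10 * min(account_age_years, 15))

    print(credit_score(1.0, -5.0, 200))  # absurd inputs, confident output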

The self-awareness issue is, IMHO, a red herring. What's important to me is whether the machine understands the consequences of its decisions. HAL in 2001 (hmm, why does superhuman AI always end up being 30 years in the future?) knew he was being shut down because he murdered the other astronauts. That, to me, seems many orders of magnitude harder than self-awareness.
posted by electro at 11:24 PM on September 4, 2001


The title of the article didn't mention Hawking by name, so when I found my way there through Yahoo messenger, I thought it was going to be about what some unknown physicist had to say on the subject.

I can't say I'm particularly scared. Brilliant scientists have a tendency to go wacky later on. Linus Pauling went overboard on the vitamin C thing. He later recanted, but the precedent is there. IIRC, Pauling's research was DNA, about as related to vitamin-C-as-cure-all as Hawking's work in astrophysics and the origin of the universe is to software, theory of mind, and intentionality.

Or maybe all the hi-tech gear to help him deal with the ALS has finally gotten to him.
posted by chiheisen at 12:49 AM on September 5, 2001


Apples can still drive oranges to extinction.

How I hate them.
posted by rodii at 5:25 AM on September 5, 2001


I never understood what all these alarmist "machines are going to take over"-type people (in this case, I think it's the reporter more than Mr. Hawking) were so upset about. I mean, what's the big deal about evolution changing vectors? It's amazing and wonderful that we can help birth an entirely new species, not scary. If that species eventually takes over, that's no different from our taking over from apes. Humans are an okay design, but by no means perfect, and I for one would like to think that evolution can do a lot better.

Besides, the whole "we need to enhance the human race to compete"-argument is bogus. We're already enhancing our capabilities with PDAs and cellphones, and we should keep enhancing them as new tools become available - AIs or not!
posted by kvan at 3:25 AM on September 6, 2001


it's no less believable than an "all-knowing and invisible guy who lives in the sky and watches over us."
posted by Satapher at 6:22 AM on September 6, 2001


I agree with both of you, kvan and Satapher.
posted by rushmc at 6:56 AM on September 6, 2001




This thread has been archived and is closed to new comments