Catch you later.
September 15, 2009 9:43 PM

Two AI Pioneers. Two Bizarre Suicides. Wired's David Kushner examines the work of two young, competitive AI researchers, and the eerie circumstances of their deaths.
posted by knave (47 comments total) 15 users marked this as a favorite
 
Interesting article, but I could have sworn this made it on the blue already.
posted by itchylick at 9:49 PM on September 15, 2009


What Really Happened?

Duh: ROBOTS
posted by Sys Rq at 9:53 PM on September 15, 2009


I can't find it either, but I am 99% sure I read a very similar article within the past three years, possibly in MIT Technology Review. I'm not sure if it was posted to the blue or not, though.
posted by Alterscape at 10:02 PM on September 15, 2009


It is interesting that MeFites contributed some tidbits to Singh's Commonsense project, back in 2001.
posted by knave at 10:06 PM on September 15, 2009


AI is modern-day alchemy. Driving great minds to insanity and death at a research department near you!
posted by Avenger at 10:07 PM on September 15, 2009


Interesting article that I (at least) hadn't seen before. Thanks for posting it to the blue!
posted by Chionophilia at 10:29 PM on September 15, 2009


Two years ago.
posted by ten pounds of inedita at 10:31 PM on September 15, 2009


I was reading the Joel on Software "?off" (off-topic) forum the day Chris McKinstry posted his suicide thread. As I've said before, I bet a number of people there regret their jokes and snarky encouragement. He had been a longstanding poster and had a flair for theatrics, but assuming a suicide threat is just a cry for attention is one of those things that's only true until it isn't.

This Wired article says he was diagnosed as bipolar. That's interesting to me. This was a guy who, productively or not, spent a great deal of his life thinking about how cognition and intelligence work. It seems to me that you can't use the DSM-IV on someone like that. The field of psychiatry is nowhere near the level of maturity at which it would be able to isolate physiological issues from psychological ones, and this is a person whose academic background completely undermined any kind of traditional psychological diagnosis.

I could turn this comment into cliche-ridden tripe about the price of genius and whatnot, but that's bullshit. "Genius" isn't necessarily a useful word to describe people like McKinstry, whose ideas were often borderline-Timecube, and "price" is just a cop-out we use to protect ourselves from the realization that this was a human being, one of us, and we didn't know how to save him.

I won't sugar-coat it because it's exactly the kind of equivocation he would've hated: I didn't much like Chris when he was alive, but he didn't deserve to die so young and I offer up a much-belated "." in his honor.

(and another . for Singh, whose suicide sounds a lot more traditional but no less tragic)
posted by Riki tiki at 10:31 PM on September 15, 2009 [15 favorites]


Mindpixel was a clearly not-quite-sane member of Kuro5hin. You can read about him here.
posted by Dmenet at 10:31 PM on September 15, 2009


Maybe this is a better link.
posted by Dmenet at 10:33 PM on September 15, 2009


In ninth grade, he had created his own sound digitizer and taught it to play a song

Oh Wired... seriously?

Could you try putting the words in an order that makes sense?
posted by pompomtom at 10:33 PM on September 15, 2009 [1 favorite]


It's not quite eerie. One was a kind of nutty guy with a history of suicidal impulses and mental disorder. The other suffered from chronic pain. They both have clear-ish reasons for killing themselves. I was a little confused by Singh's story: it seems like he gets injured and then kills himself a few months later? For someone with no history of mental disorder, wouldn't he have sought treatment of some kind? I'm sure this is about 2% of the whole story, but.

Anyway: .. for folks who try to break the mold and attack problems in new ways.
posted by GilloD at 10:44 PM on September 15, 2009


this was a human being, one of us, and we didn't know how to save him.

Well, the line about how he had unsuccessfully tried to institutionalize himself in 1990 suggests some ways we could have tried a little harder, I think. As for Singh, well, I don't even know what to think about that one, as it seems so out of the blue for someone without any history of mental illness to react that way to his injury, but as was said above, we've got about 2% of the story.
posted by Navelgazer at 11:16 PM on September 15, 2009


I found both AI and machine vision to be colossally disappointing. Both were a collection of nifty heuristics that were intensely dull to study.

The heuristics were interesting, but I was ultimately disappointed enough to abandon it as a field of graduate study, because no area seemed to have progressed beyond applying those heuristics to toy problems.

Regarding McKinstry's first suicide attempt: Don't gun stores separate the firing pin from the display model, just to avoid what he did (pull out a bullet and load the firearm)? When I was in the Canadian army (reserves, so we're talking about a brick armory in the middle of the city, here), the weapons were in a locked room, in padlocked racks, but the bolts were in a safe.

Who was McKinstry on ?off? I read it for a while, and noted that some pretty extreme personalities hung out there.
posted by fatbird at 11:28 PM on September 15, 2009 [1 favorite]


fatbird: he used his real name.
posted by Riki tiki at 11:44 PM on September 15, 2009


This article, in a way, is a wonderful parable about what makes building an intelligent machine, or even setting the bar for what an intelligent machine should be able to do, hard. We have two people whose lives intersect at a few novel points: they were both deeply interested in AI, and shared a common approach to hard AI for some period of their lives. Eventually they both committed suicide and in similar ways. So we have three data points that differ significantly from the "average" life. The result is that we find the moments of synchronicity in these trajectories creepy. Many of us apparently feel that there is a deeper generalization we should be able to draw from the story (judging by comments in this thread and the fact that Kushner felt it appropriate to draw the parallel with this article).

Ironically, the ability to correlate these two lives, and to try to draw predictive power from this story and apply it to people in general, is what is lacking in the propositional database each individual considered. This ability to look at two situations, encode them to capture the most interesting or important features, and apply them to a new situation, is the part of artificial intelligence where it stops being a science or engineering discipline and starts being an art. I hope that one day we can construct a reasoning system that will be able to understand these two databases of propositions as such and generalize as we would; I hope that this work contributes to the foundation for demonstrating actual artificial intelligence.
posted by agent at 12:05 AM on September 16, 2009 [5 favorites]


Don't gun stores separate the firing pin from the display model, just to avoid what he did (pull out a bullet and load the firearm)?

The small gun store I worked at did not do this.
posted by maxwelton at 12:31 AM on September 16, 2009


They succeeded and created hyperintelligent minds which, by definition, would not accept neocon talking points. So the field of AI was deemed a failure and the program had to be 'erased' completely.

Or not.
posted by jfrancis at 12:35 AM on September 16, 2009


I found both AI and machine vision to be colossally disappointing. Both were a collection of nifty heuristics that were intensely dull to study.
I agree. However, in the field of computer vision there has been some interesting stuff in recent years, not that well known AFAIK, like scale space theory and image descriptors. An intelligent and quite beautiful way of mathematically encoding what vision is.

This is my field of work actually, but it is my field of work because I find it interesting, not the other way around. Promise.
posted by krilli at 2:05 AM on September 16, 2009


There are two visions of AI. In one vision, there is a romantic quest to create human-like or even super-human cognition. In the other vision, you come up with neat tricks to fool the blindspots in ordinary human cognition.

There's also the one where you just train programs to solve doable real world problems that humans aren't very good at.
posted by kersplunk at 5:36 AM on September 16, 2009 [3 favorites]


That's interesting to me. This was a guy who, productively or not, spent a great deal of his life thinking about how cognition and intelligence work. It seems to me that you can't use the DSM-IV on someone like that. The field of psychiatry is nowhere near the level of maturity at which it would be able to isolate physiological issues from psychological ones, and this is a person whose academic background completely undermined any kind of traditional psychological diagnosis.

You misunderstand the distinction between psychiatry and psychology, and the role intelligence plays in them, which by the way underlies all of AI research and why it fails. A true artificial intelligence should be capable of bipolar disorder, depression, sociopathy, etc. Because the brain that gives you your intelligence is the same brain that regulates your emotions and behavior. They aren't separate or even separable. He may have studied intelligence and cognition his whole life, but he appears never to have applied that intelligence to his emotional state to realize that his unfortunate childhood led to the problems and emotions he could not control in adulthood.

Furthermore, the purpose of psychiatry is not to isolate the physiological issues from the psychological ones. It is to treat the psychological ones as if they have physiological origins. The point is that all of psychology is rooted in physiology. Everything that occurs in your mind transpires in the brain.

The psychological point of view is that there is some set of behaviors that we are going to call bipolar. Anyone, including a brilliant scientist, who expresses this set of behaviors, and for whom it interferes with the ability to live their life (e.g. sustain relationships, meet obligations and responsibilities), is bipolar.

The psychiatrist arrives at the same conclusions, and then reasons as follows: The patient is bipolar. We know that there is this chemical compound called X that in studies has been shown to mitigate the bipolar behaviors in such a way that the patient can once again lead a functional life. We don't necessarily know why the compound works (we know what it does from a biochemical view, i.e. what receptors it acts on, but we don't know why that affects bipolar behavior), but we know that for a statistically significant number of people in the same situation, it does work. So the shrink prescribes the drug.

Can you mitigate the effects with the talking cure, therapy? Maybe. I hate comparing the brain to a computer, because it in no way operates like one, but people are comfortable with such comparisons, so I will make one. Imagine you have a wonky hard drive. Drive accesses are often unreliable, but not fatally so. Psychology takes the approach of applying a software patch: very carefully access the drive ten times instead of once and double-check the data coming off of it, so that we can trust that what has come out of the drive reflects what is on it. Psychiatry takes the approach of replacing a bunch of the drive's parts in the hope that one of the replaced parts is the broken one.
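To make the software-patch half of that analogy concrete, here's a minimal Python sketch of "read ten times and trust the majority" (read_sector is a made-up stand-in for the wonky drive, not any real disk API):

    import random
    from collections import Counter

    def read_sector(sector):
        """Hypothetical wonky-drive read: usually right, occasionally garbage."""
        good = b"what is actually on the disk"
        if random.random() < 0.2:  # roughly one read in five comes back corrupted
            return bytes(random.randrange(256) for _ in range(len(good)))
        return good

    def reliable_read(sector, attempts=10):
        """The software patch: don't touch the hardware, just read many times
        and believe whatever answer a clear majority of reads agrees on."""
        tally = Counter(read_sector(sector) for _ in range(attempts))
        value, count = tally.most_common(1)[0]
        if count <= attempts // 2:
            raise IOError("no majority among redundant reads")
        return value

    print(reliable_read(42))

The psychiatry branch of the analogy would be the other function: swap out drive parts until the symptom disappears, without ever learning which part was actually broken.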

It goes without saying that the psychological/psychotherapy approach takes a tremendous amount of time.

To say that someone's background undermines any attempt at psychological diagnosis is arrogant in the sense that it implies that it is possible to be too smart for the test. You can't beat a psychological test. The fact that you try to game it is itself a well-contemplated response to the test. Your attempts to cheat, or to give dishonest answers, are in fact honest answers, because they give us information about your emotional state, your self-image, your need to be viewed as whatever type of person you are skewing the answers towards. Why do you feel the need to game the test? What are you worried the results will show? Etc.

In my opinion, psychology and psychiatry will both flounder in statistical-regression study hell until an MRI machine is developed that can resolve down to a single neuron in real-time. Only then will we be truly able to understand (a) how most brains actually operate, cognitively and emotionally, and more importantly (b) how different brains operate differently given the same stimulus.
posted by Pastabagel at 6:37 AM on September 16, 2009 [3 favorites]


We have two people whose lives intersect at a few novel points: they were both deeply interested in AI, and shared a common approach to hard AI for some period of their lives. Eventually they both committed suicide and in similar ways.

Conclusion: AI research kills.

Maybe we need to start a "War on AI" to get real progress in the field?
posted by happyroach at 7:05 AM on September 16, 2009


As for Singh, well, I don't even know what to think about that one, as it seems so out of the blue for someone without any history of mental illness to react that way to his injury, but as was said above, we've got about 2% of the story.
posted by Navelgazer at 2:16 AM on September 16


Can we be honest with ourselves here? I was the typical "geek" kid growing up. I programmed computers, loved science, math, and science fiction, was socially awkward, etc. I think a lot of people could describe themselves the same way.

But I was not a happy kid. I was programming computers alone because I couldn't make emotional connections with other kids. It was easier to play with Legos for hours on end because I was in my own safe bubble. I was unhappy because my parents were domineering, emotionally detached, strict, abusive, etc. Every other geek has a different but similar story. Maybe the parents had a nasty divorce. They drank. They hit. That conditioned me to a certain set of behaviors that made me a target of bullies and other kids in general. It made me the butt of jokes and invisible to girls. It was miserable until I got to do homework or play with Legos, D&D, and program on my computer.

See, it isn't normal for kids to want to be alone for hours on end. They gravitate to that if other things aren't safe to them. So when I read this:
Singh's lifelong friend Rajiv Rawat describes an idyllic geek childhood full of Legos, D&D, and Star Trek. One of his favorite films was 2001: A Space Odyssey — Singh was fascinated by the idea of HAL 9000, the artificial intelligence that thought and acted in ways its creators had not predicted.

I see it as a coded message. "idyllic geek childhood full of Legos, D&D, and Star Trek"? Escape, escape, escape. What is he escaping from?

I don't think I'm out of line in suggesting that a lot of us grew up the same way as Wired describes Singh's childhood. It stands to reason that having an idyllic geek childhood meant he was depressed and miserable when not doing something escapist.

If we are being honest, "the idyllic geek childhood" is a symptom of an unhealthy environment, and that is almost always the root cause of every mental illness.
posted by Pastabagel at 7:06 AM on September 16, 2009 [8 favorites]


What they both also had in common were known risk factors for suicide: for McKinstry, bipolar disorder, and for Singh, seemingly hopeless chronic pain.
posted by availablelight at 7:06 AM on September 16, 2009 [1 favorite]


I like the way your argument sounds, Pastabagel, but I had a loving, supportive childhood and I still prefer to be alone much of the time. Some of us are more solitary creatures by nature.
posted by misha at 7:35 AM on September 16, 2009 [2 favorites]


I went to grad school with Push 1996-1999. He was a very kind person, thoughtful and calm in discussion. And very driven, very sure he was working on the right research path.

MIT is not a very healthy place for people's psychology. It's so focused on intellectual achievement and competitiveness that I knew a lot of students who were quite miserable there. Not salad-days miserable, working hard on math papers until 4am and forgetting to eat, but miserable in your soul, convinced that you were no good for anything because you didn't get the top grade in your class. The school has a disturbing number of suicides, and a completely inadequate mental health / counseling program, at least when I was there. The problem seems most acute for undergrads, people too young to have perspective. I've got no idea what happened with Push; I'd lost touch with him years before, but it saddens me that someone so smart would choose to leave life early.
posted by Nelson at 7:42 AM on September 16, 2009 [3 favorites]


Benefiting from what Pastabagel said: what struck me as clear from the 7th paragraph in was that "this is a boy who was never able to develop his emotions" (or, if you want to put it in modern terms, his "emotional intelligence"). In that 7th paragraph: "At 4, he had asked his mother to sew a sleeping bag for his toy robot so it wouldn't get cold. "Robots have feelings," he insisted. Despite growing up poor with a single mom[...]".

In other words, a lonely four-year-old's way of saying "Mom, I have feelings that need to be nurtured too, but I don't dare say so outright because you're already so overworked." (I admit I'm making assumptions, but I think it's pretty likely given the "poor and single" description.)
posted by fraula at 7:46 AM on September 16, 2009


You misunderstand the distinction between psychiatry and psychology, and the role intelligence plays in them, which by the way underlies all of AI research and why it fails. A true artificial intelligence should be capable of bipolar disorder, depression, sociopathy, etc. Because the brain that gives you your intelligence is the same brain that regulates your emotions and behavior. They aren't separate or even separable.

The brain is highly compartmentalized, with emotions rooted in the frontal lobes. People can lack emotion but still be intelligent, such as those with frontal lobotomies who don't end up as vegetables - which would seem to indicate that intelligence and emotions are separable.

I've never seen a good argument to indicate you can't have AI without emotions.
posted by Bort at 7:46 AM on September 16, 2009 [1 favorite]


If we are being honest, "the idyllic geek childhood" is a symptom of an unhealthy environment, and that is almost always the root cause of every mental illness.

That's quite an assertion - do you have any data to back this up?
posted by bigmusic at 7:51 AM on September 16, 2009 [5 favorites]


The brain is highly compartmentalized, with emotions rooted in the frontal lobes. People can lack emotion but still be intelligent, such as those with frontal lobotomies who don't end up as vegetables - which would seem to indicate that intelligence and emotions are separable.... I've never seen a good argument to indicate you can't have AI without emotions.
posted by Bort at 10:46 AM on September 16


A report from the National Institute of Mental Health says a gene variant that reduces dopamine activity in the prefrontal cortex is related to poorer performance and inefficient functioning of that brain region during working memory tasks... This Wikipedia article makes clear that the functioning and purpose of the frontal lobes are not as clear-cut and understood as you suggest.

I've never seen an example of any intelligence without emotions. Dogs, cats, and dolphins have emotions. They are all intelligent to some degree. Cockroaches do not have emotion, and they are also not intelligent, unless we are redefining intelligence.

The reality is that we do not know how dog intelligence works. We have a model that is gross and imperfect.

That's quite an assertion - do you have any data to back this up?
posted by bigmusic at 10:51 AM on September 16


Funny you should mention that. Most of the dopamine receptors are in the frontal lobe, which doesn't become fully formed or mature until late adolescence. Dopamine (and its receptors) is involved in the mood-regulation functions of the brain. In other words, the fact that the emotional apparatus of the brain is still being formed throughout childhood means that the emotional environment of your childhood can negatively affect how the brain develops.
posted by Pastabagel at 8:20 AM on September 16, 2009 [1 favorite]


At the very least, though, we can all agree that the mind is a seven-dimensional hypersurface, right?
posted by rusty at 8:23 AM on September 16, 2009 [4 favorites]


Descartes' Error makes the point that emotion is a form of thought, and that people with a flat emotional affect are not super intelligent, but actually brain damaged.
posted by jfrancis at 8:30 AM on September 16, 2009


Funny you should mention that.

I'm not sure how that correlates with "the idyllic geek childhood" is a symptom of an unhealthy environment. I am truly interested in what research you have read that would show that such a childhood environment is unhealthy.
posted by bigmusic at 8:53 AM on September 16, 2009


I am truly interested in what research you have read that would show that such a childhood environment is unhealthy.

He's not saying the "geek" environment is unhealthy. He's postulating that (as you quoted) it's a symptom of an unhealthy environment (which I think is a stretch and based on anecdotal evidence).
posted by Bort at 9:07 AM on September 16, 2009


Pastabagel: "You misunderstand the distinction between psychiatry and psychology, and the role intelligence plays in them, which by the way underlies all of AI research and why it fails..."

"In my opinion, psychology and psychiatry will both flounder in statistical-regression study hell until an MRI machine is developed that can resolve down to a single neuron in real-time. Only then will we be truly able to understand (a) how most brains actually operate, cognitively and emotionally, and more importantly (b) how different brains operate differently given the same stimulus."

You misunderstood me. I'm nowhere near an expert, but I know that psychology is theoretically just the manifestation of our physiological (psychiatric) reactions. But as you acknowledge later in the same comment, the field of psychiatry is a fairly blunt instrument when it comes to actually addressing "emotions" and "behavior" because those patterns often rely on so many (often rapidly-changing) physiological stimuli that we're unable to model them biologically at this time.

Our experiences and knowledge (however you want to represent them) are among those stimuli, and you haven't convinced me that our psychological diagnoses are game-proof. It's one thing to assert (as you have) that they are useful tools even for intelligent patients, but that wasn't the point I was making. This wasn't about a generically intelligent patient; it's about a patient whose specific background was in a field that might inform his emotional state. The model we have for "bipolar disorder" does not incorporate the possible consequences of thinking about intelligence for a living because 99.9% of people don't do that.

Let me give you a simpler example. Assume you had a psychologist suffering from a dependency on antidepressants (but who was not himself depressed). You don't think, by virtue of his professional background, he could go in and "game" the diagnosis? He knows all the symptoms of depression, he knows what they look like; he could simply go in, do a little acting, and get his drugs even though his real illness (the drug dependency) went unidentified. It's not because he was "intelligent"; it's about the specific nature of his intelligence.

Being an AI researcher is obviously not that extreme, but I do believe that thinking about how intelligence and emotions work introduces a major variable that our psychological diagnoses fail to recognize. I think a competent psychiatrist or psychologist would be very wary about simply consulting a symptom checklist under the circumstances. Maybe that happened for McKinstry, but a bipolar diagnosis seemed like the best option they had. Alternatively, maybe he just got a nine-to-five therapist who didn't really contemplate the effects of his academic background.
posted by Riki tiki at 9:20 AM on September 16, 2009


I've never seen an example of any intelligence without emotions. Dogs, cats, and dolphins have emotions. They are all intelligent to some degree. Cockroaches do not have emotion, and they are also not intelligent, unless we are redefining intelligence.

I agree that higher (in terms of how we define intelligence) life forms have (at least a degree of) emotion. However, this could be due to their evolutionary history of common ancestry, as opposed to emotion being required for intelligence. I'm of the opinion that emotion isn't required for intelligence, though I'm open to the idea and don't see it as black and white as I may have indicated.

Do you believe biology is a prerequisite for intelligence, or inseparable from intelligence, since you presumably have never seen a non-biological intelligence?

I also think that saying that cats, dogs, dolphins and people are intelligent while cockroaches are not is a gross simplification. They are all intelligent to varying degrees on an intelligence continuum - I don't think intelligence is a binary property, that you either have it or don't. I would argue that a cockroach is intelligent, though not near the level of the other animals mentioned.
posted by Bort at 9:20 AM on September 16, 2009


I was a little confused by Singh's story: it seems like he gets injured and then kills himself a few months later? For someone with no history of mental disorder, wouldn't he have sought treatment of some kind

A number of pain medications can cause (or exacerbate existing) depression in some of the people who use them.
posted by Sidhedevil at 9:23 AM on September 16, 2009 [1 favorite]


Bort: "The brain is highly compartmentalized with emotions rooted in the frontal lobes. People can lack emotion but still be intelligent, such as those with frontal lobotomies that don't end up vegetables - which would seem to indicate that intelligence and emotions are separable."

Separable perhaps, but rather more problematically so than many give credit to. Here's an example of a study (pdf link) on people with damage to some of the brain regions that correlate with the processing of emotional signals. Abstract-quote:
Most theories of choice assume that decisions derive from an assessment of the future outcomes of various options and alternatives through some type of cost-benefit analyses. The influence of emotions on decision-making is largely ignored. The studies of decision-making in neurological patients who can no longer process emotional information normally suggest that people make judgments not only by evaluating the consequences and their probability of occurring, but also and even sometimes primarily at a gut or emotional level. Lesions of the ventromedial (which includes the orbitofrontal) sector of the prefrontal cortex interfere with the normal processing of "somatic" or emotional signals, while sparing most basic cognitive functions. Such damage leads to impairments in the decision-making process, which seriously compromise the quality of decisions in daily life.

Basically, without emotional involvement, decision-making is pretty much fubar. ("Decision-making goes fubar" was, I'm pretty sure, a rejected alternate title.) Of course, that's human brains and human intelligence; one can certainly argue that it's not inherent to intelligence minus the human. But for people, while brain function tends to sort itself (with a range of plasticity and flexibility) into compartments, those compartments can't be pried apart from each other as neatly as the term suggests.

Unrelated: on the topic of the unhappy-"idyllic"-geek-childhood, and the suffering that can ripple forward from such, I always flash to the refrain of Jonathan Coulton's "The Future Soon":
'Cause it's gonna be the future soon
And I won't always be this way
When all the things that make me weak and strange get engineered away
posted by Drastic at 9:48 AM on September 16, 2009


See, that post of mine right above shows that with a bit more emotional involvement, I could've made much better decisions about the amount of time to spend previewing mangled and unclosed html tags. QED!
posted by Drastic at 9:49 AM on September 16, 2009


Everything that occurs in your mind transpires in the brain.

I wonder if McKinstry believed this. He clearly believed he would upload "himself" into a computer someday. There's a very odd sort of dualism lurking behind the idea that a human consciousness is "just bits" that can be transferred (not copied, transferred). The universe being the odd place that it is and consciousness being one of the things we don't particularly understand, I'm not the kind to rule out odd things, but it seems to me dualism doesn't play very nicely with modern science.

If we are being honest, "the idyllic geek childhood" is a symptom of an unhealthy environment, and that is almost always the root cause of every mental illness.

I'll just toss in my anecdote: my family has its problems, but my parents took pretty good care of their kids. I was never assaulted either at home or, particularly, at school (well, a few scuffles there, but nothing big). I *was* sometimes still isolated because I wasn't particularly good at sports and didn't care much about them; I was sometimes picked on; I was maybe a bit more tenderhearted than a lot of kids and a bit thin-skinned; and I loved books and was reading 5 or 6 grades above my level most of the time I was in school. But from this vantage point, that isolation looks like it was a consequence of who I was rather than outside circumstances.

And while all of that isolation seemed hard at the time... honestly, I'm not sure I had it worse than anybody else, and probably a lot better than some. I lived in a neighborhood *full* of families with young kids; some of them were decent friends, and some of them were even also interested in technology. My relationship with my siblings wasn't perfect (in particular I fought a lot with my next younger brother) but we played together.
posted by weston at 10:02 AM on September 16, 2009


Emotions are neurochemically induced regional modifiers on signal transmission in the brain. The abstract neural network counterpart would be drawing a big circle around a block of your neural network and adding +1 (or whatever) to the weighting values for all the nodes in the circle, or blocking them from sending an impulse whatsoever.
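If it helps to see it, here's a toy numpy sketch of that circle-drawing, using a multiplicative gain rather than the additive +1 (same idea); the layer size and the choice of "region" are invented purely for illustration, not a model of any real brain circuit:

    import numpy as np

    rng = np.random.default_rng(0)
    weights = rng.normal(size=(8, 8))  # toy fully-connected layer, 8 units in and out
    x = rng.normal(size=8)

    def regional_modifier(w, region, gain):
        # Scale all the connections feeding one contiguous block of units:
        # gain > 1 cranks the whole "circle" up, while gain = 0 blocks those
        # nodes from emitting anything at all.
        w_mod = w.copy()
        w_mod[region, :] *= gain
        return w_mod

    baseline = weights @ x
    excited = regional_modifier(weights, slice(0, 4), 1.5) @ x   # region turned up
    silenced = regional_modifier(weights, slice(0, 4), 0.0) @ x  # region blocked
    print(baseline, excited, silenced, sep="\n")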

However, regional modifiers only have outwardly appreciable results for creatures with complex, familiar behavioral patterns. Dogs, cats, etc. And the human tendency for anthropomorphizing makes any sort of empiricism here nearly impossible.

Let me throw out a definition for artificial intelligence. When, in a conversation about stock prices, I can suddenly throw out the following question: "Okay, so say there's this girl Mary, and she walks two blocks north, three blocks east, and then two blocks south, enters a building and goes up three flights of stairs - where is she relative to her starting position?" and get the correct answer, I'll believe we have a sentient AI on our hands. Why? Because the above requires the other party in the conversation to understand that we have suddenly shifted to abstract questions that require creating a mental model of a simple, artificial universe for the purpose of answering those questions, and to be capable of performing that task.
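(For what it's worth, that little universe has a definite answer, and the arithmetic is trivial once you've decided to build the model, which is exactly the point. A few lines of Python making one version of that throwaway model explicit:)

    # Mary's walk in a disposable coordinate model: east/north/up = +x/+y/+z,
    # one block or one flight of stairs per unit.
    moves = [(0, 2, 0),   # two blocks north
             (3, 0, 0),   # three blocks east
             (0, -2, 0),  # two blocks south
             (0, 0, 3)]   # up three flights of stairs
    x, y, z = (sum(axis) for axis in zip(*moves))
    print(x, y, z)  # -> 3 0 3: three blocks east and three flights up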

When your intelligence recognizes contextually that it's supposed to be creating blank-slate abstract universes for problem-solving, and can create an appropriate abstract universe and solve a problem within it, that's the point at which I'm prepared to call it functionally sentient.

In our Comp Sci/Psych program at RPI (before I dropped out, so take this with a grain of salt) it was often posited that consciousness grew out of the social behavior of deception - lying requires you to model the mental state of other individuals in abstract and manipulate it in order to take advantage of their real world behavior. Human consciousness was thought to be a natural iteration of this ability - modeling out entire miniature abstract universes in which to conduct general problem-solving. I still think there's some accuracy to this view, and I've yet to see ANYTHING, ever, in AI research that shows a glimmer of being able to detect the need to do this on the fly from conversational context and then solve the larger problem of actually performing it. These "lookup tables" of common-sense propositions are great and all for correctly setting the parameters of that abstract universe or for just conducting more casual conversations, but that's all they're good for: without that fundamental abstract-modeling capability, they'll never be more than lookup tables.

(this is all off the cuff, I'm swamped with work and probably won't bother to defend any of it today, but I'm happy to read any feedback later tonight)
posted by Ryvar at 10:08 AM on September 16, 2009 [2 favorites]


This wasn't about a generically intelligent patient; it's about a patient whose specific background was in a field that might inform his emotional state. The model we have for "bipolar disorder" does not incorporate the possible consequences of thinking about intelligence for a living because 99.9% of people don't do that.

I'm not sure what you mean. Are you suggesting that merely thinking about intelligence for as long as he has would alter his emotional state in a way that looks like bipolar disorder but isn't? Or are you saying that his thinking about intelligence (which has nothing whatsoever to do with any of the DSM criteria for bipolar) gives him the ability to fool the doctor into thinking he's fine?

If so, maybe he can fool the doctor. But this desire in and of itself is a psychological problem, because it reflects a lack of insight (i.e. the lack of an ability to understand oneself objectively), some acknowledgment of a problem on some level, and an inability to cope with it or process it and move forward, because he is refusing help.

A few months ago, someone posted the Rorschach inkblots to Wikipedia, and it set off a firestorm of debate about whether that would render the test useless. But it doesn't. If you use those inkblots to "cheat" on the test, your immediate thought should be "Why am I bothering to cheat on this test? Why am I worried about what an honest examination would reveal?" The failure to ask this question is akin to denial - I know I have a problem, but if I can convince the doctor I don't, then I can go on pretending I don't.

Furthermore, many of the symptoms in the DSM are about the degree of a problem. Notice how many symptoms must be present "nearly every day". Notice also that it recommends consulting the patient's family and friends.

But maybe I'm misunderstanding you, so if you could clarify, it would help.
posted by Pastabagel at 10:57 AM on September 16, 2009


Emotions are neurochemically induced regional modifiers on signal transmission in the brain.

Ryvar, I'm not terribly familiar with questions about real intelligence; I'd be thrilled if you could drop in some references if you get the chance.

When your intelligence recognizes contextually that it's supposed to be creating blank-slate abstract universes for problem-solving, and can create an appropriate abstract universe and solve a problem within it, that's the point at which I'm prepared to call it functionally sentient.

One particularly interesting result in machine learning is that a tabula rasa learner is undesirable. Without any kind of bias underlying generalization, the only thing that a learner can do when making a decision with respect to a new observation is check whether it matches something else it has seen exactly. In other words, there can be no generalization without biases. What this means to me is that we need to accept that our tools are always going to be biased, and move on to methods that can pick and choose among those tools the one that has the most appropriate bias for a situation.
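As a toy illustration of that result (nothing deep, just the bare mechanics, and not a model of any particular learner): compare a rote memorizer, which has no bias at all, with a one-nearest-neighbor rule whose single bias is "nearby inputs share a label":

    # Both learners see the same four labeled points on a line.
    train = [(-2.0, "neg"), (-1.0, "neg"), (1.0, "pos"), (2.0, "pos")]

    def rote_predict(x):
        # Tabula rasa: no bias, so only exact matches can be answered.
        for xi, yi in train:
            if xi == x:
                return yi
        return None  # every genuinely new input is unanswerable

    def nn_predict(x):
        # One bias: assume nearby inputs share a label (1-nearest-neighbor).
        return min(train, key=lambda pair: abs(pair[0] - x))[1]

    for x in (1.0, 1.5, -0.3):
        print(x, rote_predict(x), nn_predict(x))
    # 1.0 -> both answer "pos"; on 1.5 and -0.3 the rote learner says None,
    # while the biased learner generalizes to "pos" and "neg" respectively.

The bias is what lets the second learner say anything at all about points it has never seen; whether that bias is appropriate for a given situation is the separate problem of picking the right tool, as above.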

This isn't an attack on your position. All I'm saying is that the tools underlying our problem-solving techniques are certainly biased in some way. But we are able to operate on (apparently) a few different layers of reasoning -- spatially, linguistically, algebraically, and who knows what else (and how different they actually are) -- and our reasoning within each appears biased to some extent. However, in most situations we are able to quickly choose which method of modeling and solving a problem is sufficient to find a solution that is good enough.

The questions are: what primitives do we have that allow us to reason like this, how do we acquire them, and how do we choose the most appropriate for a given situation?
posted by agent at 11:14 AM on September 16, 2009


Emotions are neurochemically induced regional modifiers on signal transmission in the brain.

That is a bold assertion. As an extremely interested layman (believe it or not I have one or two emotional problems) I have read stacks and stacks of books by psychiatrists, psychologists, neurobiologists, neurologists, &c. I do not believe anybody has a convincing explanation of what an emotion is.

All of this is really a fascinating discussion however and I am going to toss in a couple little items that I have found which amaze me and may interest others.

1.) As far as the structure of human emotional experience goes (e.g. Ekman's six global emotions--fear, anger, surprise, etc.), by far the best thing I have seen is Kohonen's emotion map. Kohonen constructed his map within the context of doing AI research.

2.) By far the most comprehensive vocabulary list of emotional and feeling words I have ever seen is in an appendix to a Carol Queen fetish manual.

As far as understanding the human brain goes, we seem to be about where the western hemisphere geographers were in 1530.
posted by bukvich at 12:14 PM on September 16, 2009


Pastabagel: "maybe he can fool the doctor. But this desire in and of itself is a psychological problem"

Granted. But if he fools the doctor, then by definition the diagnostic criteria are faulty in his particular situation. That's all I'm saying. It doesn't have to go so far as a conscious desire to game the diagnosis... that was just a useful example to expose how knowledge of the system can undermine the system.

I can easily imagine how spending your life thinking about the inner workings of intelligence and consciousness might reshape your psyche in such a way that some unidentified psychological trait could be mistaken for the symptoms of clinical bipolar disorder.

You mention the Rorschach blots. Yes, "cheating" by knowing the blots in advance is probably itself symptomatic of a larger problem, but the Rorschach test itself does not have the means to tell the difference between a "normal" patient and someone who cheated. You'd have to use some other form of analysis to discern one from the other.

That's what I'm saying about McKinstry's bipolar diagnosis. I don't know that his background would have undermined the criteria for bipolar disorder, but I'd consider it a major confounding variable. I doubt the diagnostic criteria for bipolar disorder have the means to tell the difference between a legitimate bipolar patient and someone who simply exhibits the same symptoms, and I don't know that we have another form of analysis to discern one from the other. That's what I mean by "we didn't know how to save him".
posted by Riki tiki at 12:40 PM on September 16, 2009 [1 favorite]


Just a note, Singh's suicide is not that outlandish, in fact, it very probably comes from Derek Humphry's Final Exit.

Also, Kuro5hin is still up!?! Gee, those days before metafilter...
posted by Laotic at 12:50 PM on September 16, 2009


Tenuously related, but here's a poignant memorial page for Martin Richard Friedmann, an MIT Media Lab student and web pioneer who committed suicide in 1995.

Here's his home page.
posted by zippy at 4:35 PM on September 16, 2009

