Becoming an uber-Spock? Or becoming the love-child of Spock and Kirk?
January 14, 2016 8:55 PM   Subscribe

 
Animals are so much more amazing than computers in every way. Rationality in its current form is an amoeba in the kingdom of thought.
posted by chaz at 9:18 PM on January 14, 2016 [7 favorites]


Just for the record, Spock is not the archetype that CFAR is aiming for. From the article:
When I first spoke to Galef, she told me that, while the group tends to attract analytical thinkers, a purely logical approach to problem-solving is not the goal. “A lot of people think that rationality means acting like Spock and ignoring things like intuition and emotion,” she said. “But we’ve found that that approach doesn’t actually work.” Instead, she said, the aim was to bring the emotional, instinctive parts of the brain (dubbed “System One” by Kahneman) into harmony with the more intellectual, goal-setting parts of the brain (“System Two”).
Here's a transcript of a speech by Galef deconstructing the stereotype of the “straw Vulcan”.

I’m not personally into CFAR—their workshops still seem too vulnerable to self-help’s usual “magical-thinking standards”, with scientific jargon replacing New Age jargon—but at least that’s a weakness they’re aware of.
posted by Rangi at 9:20 PM on January 14, 2016 [2 favorites]


acting like Spock and ignoring things like intuition and emotion

I guess no one actually watches the show anymore?
posted by RogerB at 9:22 PM on January 14, 2016 [29 favorites]


I eagerly await an article describing how Fabio rescued someone from this group.
posted by mmoncur at 9:22 PM on January 14, 2016 [44 favorites]


How do you feel?
posted by RobotVoodooPower at 9:38 PM on January 14, 2016 [2 favorites]


Yeesh, how am I not surprised that this Scientology 2.0 is intimately tied with Eliezer Yudkowsky and the Less Wrong cult?

Where's Roko's basilisk when you need it?
posted by Sangermaine at 9:46 PM on January 14, 2016 [12 favorites]


I mean: you're not helping! Why is that, Leon?
posted by GuyZero at 9:46 PM on January 14, 2016 [7 favorites]


Where's Roko's basilisk when you need it?

What if the super-intelligence is only brought about by buying cereal? What if it's actually GENERAL MILLS' BASILISK?
posted by GuyZero at 9:47 PM on January 14, 2016 [7 favorites]


Yeahhhh, this is sounding pretty culty. There's the bit about Yudkowsky, who's been on the blue before:
People tend to hear about the group from co-workers (usually at tech companies) or through a blog called LessWrong, associated with the artificial-intelligence researcher Eliezer Yudkowsky, who is also the author of the popular fan-fiction novel ‘‘Harry Potter and the Methods of Rationality.’’ (Yudkowsky founded the Machine Intelligence Research Institute (MIRI), which provided the original funding for CFAR; the two groups share an office space in Berkeley.) Yudkowsky is a controversial figure. Mostly self-taught — he left school after eighth grade — he has written openly about polyamory and blogged at length about the threat of a civilization-ending A.I.
And then there's the description of the cloistered atmosphere of the "workshop" itself:
As it turns out, I wasn’t the only one to find the workshop disorienting. One afternoon, I sat on the front steps with Richard Hua, a programmer at Microsoft who was also new to CFAR. Since the workshop began, Hua told me, he had sensed ‘‘a lot of interesting manipulation going on.’’

‘‘There’s something about being in there that feels hypnotic to me,’’ he added. ‘‘I wouldn’t say it’s a social pressure, exactly, but you kind of feel obliged to think like the people around you.’’ Another woman, who recently left her software job in Portland, Ore., to volunteer with CFAR, said her commitment to rationality had already led to difficulties with her family and friends. (When she mentioned this, Smith proposed that she make new friends — ones from the rationalist community.)
I was thinking from the FPP description that this was just a rebranding of neurolinguistic programming, but the above comparison to Scientology may be closer to the truth.
posted by Halloween Jack at 9:48 PM on January 14, 2016 [6 favorites]


Why does Snow Crash come to mind?
posted by boilermonster at 9:52 PM on January 14, 2016 [3 favorites]


It's good to see they address the popular idea that rationality and emotion are at odds.

However, it makes me sad to see mental health research getting framed as just another lifehack for better productivity. It reminds me of prosperity gospel or The Secret, in that they take something originally meant for wellness or deep fulfillment and reduce it to material gain. Psychology shouldn't just be limited to treating people with DSM codes, but I wonder how much you can get out of positive psychology if you just think it's a trick to become a 10x developer.
posted by mccarty.tim at 10:07 PM on January 14, 2016 [7 favorites]


To be clear, I get that psychology and medicine in general can be viewed as a way to make unwell people more productive. But I just read an Oxford Very Short Introduction about Carl Jung, so I might be biased the other way for a bit. It makes me sad to see a field founded around better understanding the mind, the core of our strange and brief existence, boiled down to this.
posted by mccarty.tim at 10:13 PM on January 14, 2016 [2 favorites]


Most self-help appeals to us because it promises real change without much real effort, a sort of fad diet for the psyche. (‘‘The Four-Hour Workweek,’’ ‘‘The Life-Changing Magic of Tidying Up.’’) By the magical-thinking standards of the industry, then, CFAR’s focus on science and on tiresome levels of practice can seem almost radical.

This reminds me of the weaponized, corrupted Buddhism cult they portrayed on Community.
posted by bleep at 10:13 PM on January 14, 2016 [3 favorites]


I'm about a third of the way through and I'm honestly having trouble figuring out if this is satire.
posted by davejh at 10:15 PM on January 14, 2016 [2 favorites]


It's also kind of scary how much of these statements I don't disagree with.
---
‘‘One thing that primates tend to do is to make up stories for why something we believe must be true,’’ Salamon told me. ‘‘It’s very rare that we genuinely evaluate the evidence for our beliefs.’’

System One wasn’t something to be overcome, she said, but a wise adviser, capable of sensing problems that our conscious minds hadn’t yet registered. It also played a key role in motivation.

But when he used aversion factoring — like goal factoring, but focused on what makes you avoid an unpleasant but important task — he made a surprising discovery. While visualizing how he would feel about applying for jobs if there were no chance of rejection, he realized that he still found the task aversive. In the end, he determined that his reluctance was rooted in a fear not of rejection but of making a bad career choice

What makes CFAR novel is its effort to use those same principles to fix personal problems: to break frustrating habits, recognize self-defeating cycles and relentlessly interrogate our own wishful inclinations and avoidant instincts.

So a lot of what we do is just trying to make people more aware of those predictions and to question whether they’re actually accurate.’’
-----

Jeez I would totally fall for this cult. Why can't we ever have nice things?
posted by bleep at 10:35 PM on January 14, 2016 [4 favorites]


I'm not sure how much people who believe "CFAR is like a scary cult" intersect with people who believe "CFAR is teaching a bunch of baloney things." However, suppose you believed that CFAR has reasonable ideas about how to help people think about stuff. How would you try to create CFAR in such a way that wasn't a scary weird cult?

(I spend time with a lot of CFAR/LW people, since they are a kind of person I like a lot, and attended a workshop a while ago. The CFAR stuff seems like a moderately useful way of thinking for me.)
posted by value of information at 10:46 PM on January 14, 2016 [6 favorites]


Jeez I would totally fall for this cult. Why can't we ever have nice things?

What resemblance do you see to a "scary weird cult" apart from "they're self-help-y and some members believe weird things"? Nobody's slipping weird religions in with the self-help, or manipulating people to abandon their former friends and families, or locking them up in compounds, or preparing poisoned Kool-Aid. So if the only cult-like aspect is the weird beliefs, which you say you largely agree with, then where's the problem?
posted by Rangi at 10:53 PM on January 14, 2016 [2 favorites]


How would you try to create CFAR in such a way that wasn't a scary weird cult?
The thing about nobody leaving the house, and the group chants until everyone was vibrating: incredibly creepy and unnecessary. The fact that the founders have a bunch of weird principles, creepy ulterior motives, and strange beliefs: creepy and unnecessary. Incredibly expensive sessions: creepy and unnecessary.
posted by bleep at 10:55 PM on January 14, 2016 [10 favorites]


Those things aren't leading anywhere good.
posted by bleep at 10:56 PM on January 14, 2016 [2 favorites]


How would you try to create CFAR in such a way that wasn't a scary weird cult?

I'd not try to build a business out of charging people $3,900 for 4 days sharing a twin mattress on a kitchen floor.

Virtually every story about Silicon Valley and the emergent cultures that spill out of it seems to have some aspect of magical thinking: practice this, believe this, work this, live like this, and you'll get your pony. Throw millions at four 20-year-old college dropouts who want to disrupt a century-old industry with an app; work 90-hour weeks for equity; pay thousands for a workshop telling you that you're capable of much more if you just try to be you, harder.

The reason the valley has a noxious stink around it is that wherever magical thinkers gather, so do the grifters. Yudkowsky is Hubbard 2.0.
posted by fatbird at 11:54 PM on January 14, 2016 [19 favorites]


he has written openly about polyamory

*shocked*
posted by Segundus at 12:05 AM on January 15, 2016 [3 favorites]


Hubbard strikes me as fully aware that a lot of what he was peddling was hucksterspooge.

Yudkowsky's stuff has a religious fervor to it, but he strikes me as a true believer who sees the Methods of Rationality and Less Wrong material as genuinely important: at least the light by which he's going to make his own life better, and moreover the contribution for which he will be appreciated and valued by humanity. That and helping us all achieve transhumanist immortality.
posted by namespan at 12:09 AM on January 15, 2016


Jeez I would totally fall for this cult. Why can't we ever have nice things?

I can give you the answer to this, as well as any other questions you may have, but first you must give me all your stuff.
posted by Alvy Ampersand at 12:12 AM on January 15, 2016 [11 favorites]


Oh geez, that chanting is pretty embarrassing.

Interestingly, at least some of the things that are cultish are intentionally so, in a characteristically transparent way. People think that social bonding is a force that they want to use to motivate themselves to do the stuff they wanted to do. A non-CFAR example that involves 75% of the same people with 75% of the same goals is the Bay Area humanist winter solstice celebration -- it's basically taking a shot at harnessing ritual in a non-religious way to get people fired up about the community and common goals, etc. There are a bunch of LW posts about this. Your call on whether this is a great plan or a disaster waiting to happen.
posted by value of information at 12:13 AM on January 15, 2016 [2 favorites]


Why does Snow Crash come to mind?

When doesn’t it?
posted by bongo_x at 12:17 AM on January 15, 2016 [1 favorite]


Virtually every story about Silicon Valley and the emergent cultures that spill out of it seems to have some aspect of magical thinking: practice this, believe this, work this, live like this, and you'll get your pony.

Virtually every PRESS story about Silicon Valley and San Francisco tends to sell better when the reporter uses the human-interest angle of "my, these people are so weird".

The Less Wrong and Singularity connection is pretty out-there, but CFAR itself is actually doing pretty solid work (I read rationality literature constantly, like at least 5 behavioral economics/social psychology/rationality books a month, and I can't get through much of Less Wrong's website because it is fucking unreadable even to a pretty rationality-literate reader. The writing style is just horrible, but that's not a CFAR issue).

I think getting away from jargon would be wise when working on applied rationality self-help projects, but academics in social psychology/behavioral economics also tend to invent jargon to create a shorthand for concepts that the study of irrationality is uncovering.

I'd suggest ignoring how weird Yudkowsky is and focusing on Julia Galef if you're trying to evaluate this stuff. She does a fantastic podcast about science, rationality, and philosophy of science (Rationally Speaking, run by NYC Skeptics, formerly cohosted by CUNY philosophy professor and author Massimo Pigliucci). From the hundred+ episodes I've listened to, she just comes across as a good science communicator who does a great job of explaining rationality research to the general public.

My main problem with CFAR is the cost of the workshops (I've been to one free 'test run' of a couple of modules and a couple of talks at the California skeptic convention), but as I think I've seen them describe the organization, the workshops fund a significant part of the year-round salaries and running of the org.

Having been involved in a bunch of small nonprofits that run themselves on grant funding and get into some weird organizational psychology because of the constant need to justify themselves for grant funding, I'm not sure it's a bad thing to attempt to self-fund a nonprofit this way, but I wish it were cheaper. I sure can't go to a workshop (I'm a carpenter, not a tech-salaried person) and I'm obsessed with trying to bring the lessons of social psychology/behavioral economics/decision science to my life, so I'd probably have done a workshop if it weren't $4kish.
posted by girl Mark at 12:39 AM on January 15, 2016 [5 favorites]


But when he used aversion factoring — like goal factoring, but focused on what makes you avoid an unpleasant but important task — he made a surprising discovery. While visualizing how he would feel about applying for jobs if there were no chance of rejection, he realized that he still found the task aversive. In the end, he determined that his reluctance was rooted in a fear not of rejection but of making a bad career choice


...This just sounds like psychodynamics dressed up in a weird sort of project management/LW style jargon. Which is kinda my reaction to everything described in the article.
posted by PMdixon at 1:00 AM on January 15, 2016 [4 favorites]


I feel kind of sad about the movement's slide into self-help. I read Less Wrong, the one-time center of internet rationalism, from the beginning, and Overcoming Bias before that. For teenage me, these were pretty happening places, with smart people and fascinating ideas, especially Yudkowsky's. But eventually Yudkowsky stopped posting, Less Wrong got to be too self-indulgent and self-helpy, and I drifted away (having also sort of outgrown it).

(Somewhat better offshoots were growing in the meantime: the Machine Intelligence Research Institute, the effective altruist movement, the rationalist blogosphere (e.g. slatestarcodex, which metafilter loves to hate). And AI risk has finally gone mainstream.)

But - like much self-help? - a certain set of people seem to get a lot of value out of the LW/CFAR ideas. This includes a close friend of mine. I now find a lot of these ideas really annoying, like experiments finding "cognitive biases" that are really just the researcher not accounting for some aspect of the situation. But on a personal level, I try not to be too judgemental about things that are helping people.

As an aside, I feel like there's some similarity here with the birth of the self-help movement out of the ashes of 60s counterculture (this is my basic understanding from having watched Century of the Self). You start off wanting to fix the world, but end up trying to fix people instead.
posted by gold-in-green at 1:27 AM on January 15, 2016 [2 favorites]


I started picking out some batshit insane bits, but I wound up quoting nearly the entire article. So instead, I'll just leave my favorite part here.
Smith was home-schooled and raised by ‘‘immortalist’’ parents. (Immortalists believe that one of humanity’s most-pressing needs is to figure out how to overcome death.) Smith, who goes by Valentine, described his father as a former ‘‘Ayn Randian objectivist’’ who believed in telepathy and named his son after the protagonist in Robert Heinlein’s science-fiction classic ‘‘Stranger in a Strange Land.’’ (In Heinlein’s book, Valentine Michael Smith is raised by Martians but returns to Earth to found a controversial cult.)
Seriously, all the former-jugglers, people who claim to have made and lost millions of dollars in bitcoins when they were 18, startup founders taking off their clothes and asking to be touched, submerging their hands into chicken curry, referring to human beings as primates when discussing psychology, saying things like, "I will allow you to disengage," and "I want us to conquer death," staying packed in a house and sleeping on mattresses on the kitchen floor for four days straight...

Batshit. Insane.
posted by davejh at 3:38 AM on January 15, 2016 [14 favorites]


So two things.

1. While reading this I immediately thought of Guardians of the Galaxy, where Rocket stands up and says, "There, now we're all standing. What a bunch of jackasses." I'm not entirely sure why this came to mind, but it has something to do with the fact that they all had to be standing for the authenticity of the plan to be real, and the mocking acknowledgement that indeed, standing had no actual benefit.

2. I know this is going to lead to some form of monetized disruption that is going to swindle the poor. I just hope that someone at least monetizes this by filming it in a reality-TV-style house full of paper-millionaire Silicon Valley types doing stupid things like wearing signs that say "touch me" while their shirts are off. There is a level of vanity that I find so... palpable... in someone who has to stand outside and look in the window to feel 'excluded' that it makes me want to take their picture and watch them while slowly ripping it up, in some grotesque Fellini-film style.
posted by Nanukthedog at 4:26 AM on January 15, 2016 [1 favorite]


Ohhhh, I get it: it's all run by the creator of Harry Potter and the Methods of Rationality. Dude has found a niche market of applying "rigorous" methods to systems of belief or nature that are completely fabricated from the mind of one person, and then getting results back with more digits of precision than any of the inputs.

This guy is going to make a killing in Silicon Valley. As is oft-repeated: the people who get rich in San Francisco aren't the ones mining for gold, they're the ones selling shovels.
posted by Mayor West at 4:35 AM on January 15, 2016 [8 favorites]


Calling Yudkowsky an AI researcher is like calling a virginal erotica writer sexually experienced. Every time you see him described as "researching" or "studying" something, replace that with "speculating about" and you've just moved your understanding a bit closer to reality.
posted by Pope Guilty at 4:58 AM on January 15, 2016 [20 favorites]


What if the super-intelligence is only brought about by buying cereal?

This is a PKD story waiting to be written.

...

I'm not joking, write it.

...

No, seriously, that was a command. DO IT!
posted by aramaic at 6:06 AM on January 15, 2016


This is so so so so so Scientology 2.0. Right down to the insistence that anyone who doesn't agree hasn't "studied" the (voluminous) works of the founder enough to criticize.
posted by overeducated_alligator at 6:15 AM on January 15, 2016 [2 favorites]


If you do not like shortened URLs and you want to know what you are clicking on, here are the referenced New York Times article's title, author, and URL:

The Happiness Code
A new approach to self-improvement is taking off in Silicon Valley: cold, hard rationality.
By JENNIFER KAHN JAN. 14, 2016


The article is a trend piece. They didn't include any rationalist solstice celebrations, which would have been timely: these evangelical atheists have a big meetup at Christmas and they call it a solstice celebration, where they talk about how smart they are and the horrors of having 4 billion people active on the internet. (Which, to be fair, is pretty horrible, but they like to use logical fallacy tables, and they don't see what is More Wrong about that?)

The best part was at the end: Jennifer Kahn is a contributing writer for the magazine. She teaches in the magazine program at the Graduate School of Journalism at the University of California, Berkeley, and was a Ferris professor of journalism at Princeton University in 2015.

WHAT THE FUCK IS A MAGAZINE PROGRAM AND WHEN DID THAT GET STARTED?
posted by bukvich at 6:21 AM on January 15, 2016


Where's Roko's basilisk when you need it?

I always think of this conceit as "Rothko's basilisk" and then I imagine a basilisk which is simply colored, yet luminous and deeply moving.

the above comparison to Scientology may be closer to the truth.

One-third EST/Landmark Forum, one-third neurolinguistic programming/RAW wackiness, one-third pure grift, if you ask me.

You know, I look around these days and I see religious cults, political cults, philosophical cults, business cults. I think the 21st century will be the age of cults. (As the lady said, "In the new century I think we will all be insane.")
posted by octobersurprise at 6:28 AM on January 15, 2016 [4 favorites]


One-third EST/Landmark Forum, one-third neurolinguistic programming/RAW wackiness, one-third pure grift, if you ask me.

but landmark forum is already one-half pure grift
posted by murphy slaw at 7:21 AM on January 15, 2016 [1 favorite]


The thing about nobody leaving the house, and the group chants until everyone was vibrating: incredibly creepy and unnecessary.

College theater. This kind of thing is scary from the outside (heck, it can be overpowering from the inside) but it's a kind of universal impulse that goes way, way back culturally, and people bond this way, and it's worth respecting. I'm not saying it isn't a powerful experience, or that it can't be harmful, just that, taken individually, most of these "weird" behaviors discussed in the article are, if not completely innocent, at least not really crazy.


My main objection to the whole "blending" of "rationality" (Kahneman's System 2) and "emotion" (Kahneman's System 1) that this program seems to espouse, and that behavioral economics talks around, is that the emotional stuff is actually the _basis_ for all behavior. Why do we do anything? Why do we bother eating? Because of a biological drive. Why do we want to help others, invent cool things, or (as some of the participants aspire) conquer death? Fundamental drives based on nothing, or even, if you are traditionally religious, spiritual drives.
posted by amtho at 7:37 AM on January 15, 2016 [3 favorites]


What strikes me about this is how radically individualistic it is, and how the tech-bro has become the symbolic ideal American.

You can't lifehack your way out of, say, needing to care for a child with a serious disability. You can't lifehack your way out of racial or gender discrimination. You can't lifehack your way out of chronic illness. While you can do some things that can ameliorate those things, they are fundamentally structural/social problems that need to be addressed collectively by the provision of services and by regulatory process. But it's much easier to talk about how if you just get your brain right, then you'll get your PhD and make a million dollars, because then you don't have to talk about how unequal our society is.

And it's much easier to set up the 18-to-30ish single techbro as the symbolic American - of course what is most important is optimizing your "potential" so you can work seventy hours a week and make lots of money, because what else could you possibly have to do with your life? What else could be as valuable as doing that, either to you or to society? Who is the most worthwhile person? The young single man who creates lots of market value without even needing to consider personal relationships or social responsibilities.

Also, the fetishization of the gym/sports/"training" and so on. Look, exercise is great, I go to the gym a lot myself and value my newish core strength. But this return to "what do I need to do to be a better person? Go to the gym! Spend my lunch hour learning a martial art!" business? That's pretty fetishy, and it fits right in with the macho, individualistic values of our current culture. (And with the militarization/police-ization/physical-strength fetish of contemporary culture too.) No one is saying "what do I need to do to be a better person? I need to find the willpower to carry my half of my relationship and help my aging parents", just "what I need to do is find the willpower to do martial arts every day at lunch".

That is, this is creepy to me not because it's a weird therapeutic modality - lots of weird therapeutic modalities work, like weird diets work, if you're ready for a change and the new modality shakes you out of old bad habits - but because it's so selfish and idealizes such a bad symbolic human*.

*Obviously there aren't really that many ruthless, Mr. Universe-type martial artist, seventy hour week soylent-drinking kickboxing Ubering, washio-ing, optimizing, personal-relationship-less value creators out there; it's just that this is a new ideal for how people should be.
posted by Frowner at 7:44 AM on January 15, 2016 [35 favorites]


Disclaimer: I knew Julia Galef (still am Facebook friends with her) through the New York City skeptical community, which I have dropped the fuck out of. (Partial explanation for why I never got around to rejoining when I realized I'd been away for a couple of years.) I know she's an exceptionally intelligent woman and I respect her, especially given the climate of bullshit she undoubtedly deals with by being a woman in the skeptical/rationalist community.

This is one of those cases where I really wish it was easier to separate the community from the message. Anything associated with MIRI and LW has such a large hurdle to get over in my head. The magical thinking that the largest problem facing the world is one that only the ubernerds can solve, and that it is in fact more relevant than the things known to actually threaten the current human civilizational structure (climate change, systematic inequality that drives both mass migration and war, among others), always makes me discount any claim to rationality that I get from these groups. That, combined with the highly privileged, insular community and inherent evangelism described, makes me very wary.

That said, the message itself sounds very good. It sounds like a DIY set of CBT actions. Working to understand one's brain is always a good thing, and if it can help people actually do what they want, that is generally a good thing. I just wish it wasn't going towards this group of assholes.

Right before hitting post, I thought about what a program like this would look like if directed towards a group of young, poor, black men. What steps can they take using these techniques that will not simply be thwarted by the system we live in? I suppose that you could follow them into establishing a separate system, although generally to do that one needs to look towards extra-legal methods. (I'm thinking Stringer Bell in The Wire here.)

On preview, Frowner just said what I was aiming for in that last paragraph better than I.
posted by Hactar at 7:48 AM on January 15, 2016 [7 favorites]


(Also the little sidebar about how most problems can be solved by an individual based on five minutes of thought - that's, like, the worst engineer's disease thing I've ever seen. No, actually that's not true - because the world is not in fact your diorama to rearrange. Other people have interiority. History exists. Infrastructure exists.

One might usefully examine the results of Le Corbusier's architectural philosophy as a comparison point - lots of very clever stuff and some really quite attractive architecture gone substantially to waste because of arrogance and disdain for the weight of what already exists.)
posted by Frowner at 7:54 AM on January 15, 2016 [18 favorites]


At the same time, it is also considering whether it would be "higher impact" to focus on teaching rationality to a small group of influential people, like policy makers, scientists and tech titans.

Because goodness knows what scientists need is to be lectured on rationality by people who spend a lot of time talking about cryonics and the dire threat of global destruction by malevolent artificial intelligence.
posted by hydropsyche at 8:08 AM on January 15, 2016 [17 favorites]


If reasoning and rationality evolved to win arguments rather than discover truth, what does that say about a cult trying to elevate it to the level of supreme virtue?

I bet these types reason their way to being shitty tippers when they go out for an evening. I mean, not all Bay Area techies think 10% is a gratuity (Because here in America, that's a Fuck-You).

But start talking about brain-hacking and personal optimization etc & I automatically see a credit card receipt where the final total comes to an even $$ amount with .00 cents... and the math on the gratuity line comes to 9%.

Kirk would'a tipped like a baller.
posted by Pirate-Bartender-Zombie-Monkey at 8:17 AM on January 15, 2016 [3 favorites]


Fascinating. I was startled to see the name Anna Salamon in this article; I went through the International Baccalaureate (IB) program with her in high school. Very very bright, excellent in science, mathematics, and really all our classes. It's a bit disappointing to see this kind of evo-psych mind-as-computer nonsense is where she spends her time these days. Logic and rationality are good practices to apply in certain situations, but the mind is inherently irrational and driven by emotion, and anyone who claims to be a 'rationalist' is generally full of bullshit.

The article mentions folks from Lumosity as attending CFAR's seminars. Lumosity is on the receiving end of the FTC's hammer for peddling bogus claims about the 'brain training' it sells. CFAR might want to learn from Lumosity's mistakes.
posted by Existential Dread at 9:26 AM on January 15, 2016 [1 favorite]


Logic and rationality are good practices to apply in certain situations, but the mind is inherently irrational and driven by emotion, and anyone who claims to be a 'rationalist' is generally full of bullshit.

It's like the old Taoist idea that things of a given category are always necessarily composed from other things that aren't in the same category. So rational ideas are built up out of components like emotion and cultural/social conventions that aren't themselves rational, in the same way a bicycle is built from parts that aren't all little bicycles. Rational things can always be deconstructed into irrational things at some other level. That doesn't mean rationality doesn't have its uses, when governed by some benign nonsense like intuition or human feeling. But rationality doesn't work as a totalizing idea any more than any other idea does, though it's easy to see the temptation to put rationality on a pedestal.
posted by saulgoodman at 9:50 AM on January 15, 2016 [4 favorites]


This is all the more surprising given that the workshops, which cost $3,900 per person, are run like a college-dorm cram session.

I applaud this business model. All they have to do is rent space and pay presenters. Lot of room for profit margin there. Does anybody know what the presenters' take is?
posted by bukvich at 9:50 AM on January 15, 2016


This is like a nativistic movement of the second kind, where you don't believe in a big wind that will blow the invaders away, or shirts that will repel bullets, but instead do things like build church-like buildings and sit around in pews in hopes that doing so will somehow prevent them from driving you off your land and killing you.

But sorry people, conditioning yourselves to act like cartoon versions of robots isn't going to help, it's only going to render the process of replacing you all the more seamless.
posted by jamjam at 9:55 AM on January 15, 2016 [2 favorites]


Ironically, one of the biggest cognitive problems that techbros have is overthinking, for which this program will be of no use at all.
posted by murphy slaw at 10:06 AM on January 15, 2016 [2 favorites]


But sorry people, conditioning yourselves to act like cartoon versions of robots isn't going to help, it's only going to render the process of replacing you all the more seamless.

That is really an excellent insight! It's so obvious when you think of it - all this lifehacking/"training"/how-can-I-drink-soylent-so-I-have-more-time-to-code is really just mimicking a sort of imagined android. The robot is the model!

It reminds me a bit of something that's somewhere in Lewis Mumford about how regimented, clock-like scheduling of the day (using bells, and particularly in religious communities) introduced the idea of clock-following and timing activities carefully before actual clocks were widely available. You have to get people ready for something before it can be effectively introduced. So I guess we're getting ready for our robot overlords, or for a [longed-for] abandonment of the organic.

Actually, it seems like willed depersonalization to me. Not the psychiatric kind, but the idea that you can sort of install a program in your brain that you can then turn to instead of relying on your feelings, immediate impressions, so-called personal preferences, etc. The ideal thing is that a stimulus will happen (you walk through the door) and "you" will respond without conscious volition (drink water or put on your gym clothes). Like your consciousness is carried along for the ride, except when you're hacking your consciousness in order to improve your programming.

There's a bit in the otherwise terrible CS Lewis novel That Hideous Strength about what must be the fifties version of a lifehacker/rationalist (the Frost character - if you've read the novel) who aspires basically to this, only with a little more "feelings are bad".
posted by Frowner at 10:24 AM on January 15, 2016 [6 favorites]


"Calling Yudkowsky an AI researcher is like calling a virginal erotica writer sexually experienced. Every time you see him described as "researching" or "studying" something, replace that with "speculating about" and you've just moved your understanding a bit closer to reality."

I think this is misleading. Yes Yudkowsky's ideas about AI risk are very speculative, but nobody really has anything to go on other than speculation. AI risk is about systems vastly different from what's called AI today. Nobody knows how they'll work. All we have are some worrying thought experiments about the possibility that an AI could become superintelligent very quickly, gain power very quickly, and have goals misaligned with ours. It's not nearly as pressing or certain as e.g. climate change, but to me it's still something to be concerned about. Talking about atom bombs right after the discovery of quantum physics would have been pure speculation too, but it might have been helpful.


Also to be clear Yudkowsky is not running the workshops this article talks about. Though he does propagate the same set of ideas.
posted by gold-in-green at 11:05 AM on January 15, 2016


Aren't instinct and feelings the same as programs though? In the course of thinking about this comment, I have been straightening my house. Without conscious input I've picked up my gloves to put in my coat pocket, and then picked up trash to put in the wastebasket, all without being consciously aware of what I was doing besides when I chose to pay attention.

I mean it's just set dressing, whether you call that a 'robotic program' or a 'feeling and personal preference'. And it's possible that the word program has a different emotional resonance for some people than others and that makes it easier to think in those ways.

I think, too, that these particular people have a cultural value of output and productivity, which isn't inherently logical or rational; you have to have axioms or values to start from. To geek out a bit, Vulcans clearly had a set of values that they used their logic to work towards: pacifism, collectivism, a desire for learning. Otherwise they would sit around doing nothing. Not having a set of values or axioms to start from would be illogical, or at least alogical, because then you'd have nothing.
posted by Zalzidrax at 11:19 AM on January 15, 2016


We've been looking at the death of spirituality and religion for ages now, and suddenly people are alarmed that there's some step in the opposite direction? And that this new pseudo-religion has adopted the trappings of the ambient culture, i.e. one focused on Reason and Science!? This all seems par for the course.

Even in Silicon Valley, the place that accelerated our societal atomization through technology, people seek to escape atomization by creating new communities. Spiritualism has long served as an efficient core for new communities. I don't see what's so surprising about any of this.

You don't want people to join cults, then offer them a more meaningful mainstream community that they're content to remain in.
posted by Apocryphon at 11:30 AM on January 15, 2016 [1 favorite]


Talking about atom bombs right after the discovery of quantum physics would have been pure speculation too, but it might have been helpful.

Sure, if the people talking were quantum physicists. Yudkowsky is like the definition of Just Some Guy.
posted by murphy slaw at 11:32 AM on January 15, 2016 [7 favorites]


Great timing for this article to be released the same day that this piece, "Beware of the Silicon Valley cult," was on the front page of Hacker News. As far as the potential for abuse and being bossed around by charismatic madmen go, startups themselves are more dangerous than this group of would-be Mentats.
posted by Apocryphon at 11:36 AM on January 15, 2016 [1 favorite]


You don't want people to join cults, then offer them a more meaningful mainstream community that they're content to remain in.

brb, de-alienating late capitalist postmodernity so I can earn the right to critique its symptoms
posted by RogerB at 11:37 AM on January 15, 2016 [11 favorites]


Actually, it seems like willed depersonalization to me. Not the psychiatric kind, but the idea that you can sort of install a program in your brain that you can then turn to instead of relying on your feelings, immediate impressions, so-called personal preferences, etc. The ideal thing is that a stimulus will happen (you walk through the door) and "you" will respond without conscious volition (drink water or put on your gym clothes). Like your consciousness is carried along for the ride, except when you're hacking your consciousness in order to improve your programming.

This really irks me. Not what you've said but what these people have done to some great research and useful theory into how the brain processes information and how things like habits are formed, those unthinking or less thinking actions people end up doing automatically. We wouldn't be able to function without this happening. Our brains do react to stimuli in an unconscious manner all of the time. Anyone who has ever been driving and suddenly found themselves miles from where they last remember has experienced these processes in action.

It can be really useful to understand these processes if there are things that you want to change, or at least to figure out why it might be difficult to do something different from what is usual.

These people seem to have packaged some very useful things and tools into this bigger sort of philosophy and worldview which is just 'eww'. Techniques like TAPs or similar aren't some sort of rational revelation or 'hack'. It's just cognitive science that even the most 'irrational' person could make use of if they felt like it.
posted by Jalliah at 11:44 AM on January 15, 2016 [1 favorite]


Techniques like TAPs or similar aren't some sort of rational revelation or 'hack'.

Yeah, it's more the ideology about the thing than the thing itself. "What semi-conscious beliefs hold me back from doing something I'd like to do?" and "Do I really want to do this, or do I keep avoiding it because I don't want to do it for reasonable reasons" and "I should try to create the habit of drinking a glass of water with dinner" are not on the face of them bad formulations.
posted by Frowner at 11:47 AM on January 15, 2016 [1 favorite]


I will say, though, that going to an intense workshop for a weekend can be fun and transformative, to be honest. I pushed myself to do this "how to run workshops and train people" thing once, even though I was really dreading being in a large group all weekend, and not only was it much more fun than I thought but the totality of the experience was almost like a vacation - I had to focus on what was in front of me so much that I completely disconnected from work stressors and life worries.

I am a huge introvert and would not want to do this every weekend, but I would be absolutely willing to take an intense, on-site workshop again if it were something I was really interested in.
posted by Frowner at 11:49 AM on January 15, 2016 [1 favorite]


Techniques like TAPs or similar aren't some sort of rational revelation or 'hack'. It's just cognitive science that even the most 'irrational' person could make use of if they felt like it.

The jargon barrier must be substantial here -- to me "hack" fits comfortably in the connotation of "quirky thing you do to solve a problem" and "rational" fits comfortably in the connotation of "using your conscious reasoning to come up with the solution."
posted by value of information at 12:02 PM on January 15, 2016 [1 favorite]


Fun fact: Anna Salamon is one of those people who paid five dollars to post one comment (so far) on metafilter. It is here.
posted by bukvich at 12:13 PM on January 15, 2016


I'm just going to register that this thread is really, really gross.

I have limited exposure to CFAR, MIRI, LessWrong, etc. I've read and commented on a few online pieces but never met any of the principal actors in real life nor attended any of their functions nor given them any money. But from what I've seen, they are doing something basically indistinguishable from contemporary, empirically-informed analytic philosophy of a certain stripe. For example, what I've seen from them on decision theory is as serious and interesting as recent work published by Spohn or Joyce. And they've had world-class philosophers (like Easwaran and Briggs) associated with their projects, as well.

Again, based on my limited experience with people from LessWrong, almost all of the criticisms in this thread are based on ridiculous caricatures, straw-men, and simple misunderstandings. It seems to me that most of you aren't even trying to be charitable or to take the thing seriously. And it is more than a little surprising that those of you who have had real-life contact with some of the principals involved (Galef and Salamon) and who remember them as intelligent, thoughtful people aren't taking even a second to question whether your initial opinion of the project they're involved in might have been mistaken. Rather than saying, "I was startled to see the name Anna Salamon in this article ... It's a bit disappointing to see this kind of evo-psych mind-as-computer nonsense is where she spends her time these days," why not question whether there is something you're missing about CFAR or whether the reporter is being fair?

I don't expect much from metafilter when it comes to philosophy and philosophy-related things. We do philosophy very badly here in my experience. But this is really disappointing.

Thanks, JiffyQ, for posting an interesting article.
posted by Jonathan Livengood at 12:28 PM on January 15, 2016 [8 favorites]


The parts I liked about the article were the eschaton and the messiah.

Yudkowsky is known for proclaiming the imminence of the A.I. apocalypse (‘‘I wouldn’t be surprised if tomorrow was the Final Dawn, the last sunrise before the earth and sun are reshaped into computing elements’’) and his own role as savior (‘‘I think my efforts could spell the difference between life and death for most of humanity’’).
posted by bukvich at 1:07 PM on January 15, 2016 [1 favorite]


MeFi is also very bad at Silicon Valley, but understandably so.
posted by Apocryphon at 1:07 PM on January 15, 2016 [1 favorite]


The parts I liked about the article were the eschaton and the messiah.

Those are both characteristic of a man doing important work in a field he is an expert in and not an anime-addled autodidactic bullshit artist whose brand of bullshit happens to appeal to the vanity of a particular sort of nerd.
posted by Pope Guilty at 1:44 PM on January 15, 2016 [6 favorites]


Jonathan, all my respect for MIRI vanished when they attempted to present themselves as a viable target for effective altruism. While I have some issues with EA, the idea that speculating about AI is on a par with distributing malaria mosquito nets disgusts me.

As I said in my comment, the program itself sounds useful, as it introduces a level of introspection and CBT training which is useful to many. And if one wants to work in a community promoting rationalism, it is hard to find a group outside of the LW community or the skeptical community. I detailed my issues with the skeptical community earlier, and I think the overlap of MIRI/singularitarians with LW severely damages LW and its ideas. If they are willing to support non-rational ideas such as cryonics and the belief that the largest threat humanity currently faces is an "unfriendly" AI (Eclipse Phase is a wonderful role-playing game, not Cassandra prophesying doom), then there are obvious unexamined blind spots present.

So it's a shame that this program is being attended by pseudo-rationalists. It looks like it could be useful if generalized. And the quote from the tech-bro who concluded that he had not been aggressive enough clearly showed that he was not understanding a large part of what was presented; if this sort of program leads him to create another "disrupt the economy by ignoring labor/business/real estate laws" app company, I feel that this program could end up as a net negative for American society.

I wish this program were being offered to a group that wasn't the usual group of assholes. I think it would make a great part of a health curriculum in high school (eliminating the jargon, but the ideas are solid). So yes, I still respect the people who are running this. Just not so much the people attending.
posted by Hactar at 1:51 PM on January 15, 2016 [3 favorites]


I'm just going to register that this thread is really, really gross.
Jonathan Livengood

With respect, your attitude seems to come from your admitted "limited exposure" to the "CFAR, MIRI, LessWrong, etc."

What many are reacting to, and which your post conveniently ignores, are more the community and attitudes of these groups than any individual attempts on their part at philosophy. The cult of personality that's been built up around Yudkowsky, the bizarre almost religious obsession with future AI god/demons, the libertarian politics presented as inevitable conclusions of rationality.

These aren't "ridiculous caricatures, straw-men, and simple misunderstandings", they're fundamental aspects of these communities, and it's somewhat dishonest to decry people's reactions without addressing them.

Perhaps you could enlighten us on the serious philosophy that is Roko's basilisk?

As many other posters here have said, this new movement has some decent ideas wrapped up in some bizarre nonsense and is heavily tainted by the LW/MIRI crowd it arose in.
posted by Sangermaine at 2:19 PM on January 15, 2016 [5 favorites]


These aren't "ridiculous caricatures, straw-men, and simple misunderstandings", they're fundamental aspects of these communities, and it's somewhat dishonest to decry people's reactions without addressing them.

I suggest that you are filtering for things that are fun to read about.

Imagine that you liked thinking about football and you had a friend who was a college football player, so it was fun to talk to him about what it was like and how his team was doing. Furthermore, imagine that one of the football players on his team is really good and might have a pro career. Sometimes you post about one of their games on Facebook. This is approximately how much most rationalist types are obsessed with MIRI and Eliezer.
posted by value of information at 2:36 PM on January 15, 2016


I just deleted a couple of paragraphs on why I feel such disdain for LW and its ilk, because that's totally not the point here. I did want to say that Metafilter has done philosophical threads quite well in the past. It's just that, on any topic, we tend to bog down on the lived-experiences side of it, which we frequently seem to feel receives too little attention.

I get that it can be frustrating when you want an abstract discussion, but I also feel like that focus is a big part of Metafilter's character, which is obviously pretty good on the whole.
posted by fatbird at 2:46 PM on January 15, 2016 [3 favorites]


This is approximately how much most rationalist types are obsessed with MIRI and Eliezer.

But this is exactly what I mean about dishonesty.

Yudkowsky isn't just some guy who happens to be associated with "rationality". He was the co-founder of Overcoming Bias, the founder of Less Wrong, and co-founder of the Singularity Institute for Artificial Intelligence (SIAI) which became the Machine Intelligence Research Institute (MIRI), from which CFAR spun off.

Yudkowsky and his circle have been at the center of the online rationalist/skeptic community for years, and it's disingenuous to wave that away. They've deeply affected and shaped the community's norms, focus, and behavior.

You simply can't separate things like CFAR from Yudkowsky and his crowd, because they were so instrumental in its creation and development, and in the creation and development of the entire rationalist community. Or at least, you can't act surprised that people have the reactions that they do.
posted by Sangermaine at 2:47 PM on January 15, 2016 [2 favorites]


But from what I've seen, they are doing something basically indistinguishable from contemporary, empirically-informed analytic philosophy of a certain stripe.

Yudkowsky banned any discussion or mention of Roko's Basilisk because it was scaring the shit out of people. How many analytic philosophers do you know who are actually viscerally afraid of a future transcendent AI simulating and torturing them to the point of discussing optimal suicide methods to reduce the risk of simulation? This isn't Yudkowsky's position, mind you, but something about the LessWrong-osphere seems to actively help people argue themselves into a totally irrational fear.
posted by BungaDunga at 3:14 PM on January 15, 2016 [5 favorites]


Yudkowsky and his circle have been at the center of the online rationalist/skeptic community for years, and it's disingenuous to wave that away. They've deeply affected and shaped the community's norms, focus, and behavior.

You simply can't separate things like CFAR from Yudkowsky and his crowd, because they were so instrumental in its creation and development, and in the creation and development of the entire rationalist community. Or at least, you can't act surprised that people have the reactions that they do.


Sure, EY is at or near the center of the community, but that's a lot different from a "cult of personality", etc.

I don't mean to act surprised. I get that if CFAR spun off from MIRI, and MIRI was founded by Eliezer, and Eliezer is a polarizing figure, and that's most of what people know about it, then people's general opinions about Eliezer (& MIRI, et al.) are going to transfer. I'm just pointing out that in fact, this is just an opinion which is transferring and doesn't really have a lot of stuff to do with the object level of what CFAR is up to.

(On the other hand, if you're really turned off by HPMOR, that actually probably does indicate that you would be turned off by CFAR content.)
posted by value of information at 3:31 PM on January 15, 2016


I like to call these folks who worship reason the "hyper-rationalists". It's amusing to me that one of their tenets is the pursuit of everlasting life, just like so many religions that came before them.
posted by macrael at 4:05 PM on January 15, 2016 [1 favorite]


I think it's that sort of stuff--the utopianism--that gives it such a cult-like vibe. The pitch for the ideas isn't "here are some useful conceptual tools for managing certain quirks in how our brains work," it's "hey, we're going to fundamentally remake the whole world with a handful of ideas!" Seems sort of like the philosophical equivalent of clickbait. But I have to concede, I'm basing that impression on only very minimal exposure to these ideas and a shallow understanding of how these communities work. Not my cup of tea, I can tell, but it's not clear if this stuff is actively harmful.
posted by saulgoodman at 6:05 PM on January 15, 2016 [2 favorites]


Also the getting impressionable people into a small room, isolating them, and whipping them into a mindless frenzy.
posted by bleep at 6:13 PM on January 15, 2016 [2 favorites]


Oh yeah. That, too, now that you mention it.
posted by saulgoodman at 6:21 PM on January 15, 2016 [1 favorite]


That Roko's Basilisk thing is...wow. It's not just rationalists who are so rational that, just like many religious people, they seek personal immortality; it's rationalists who are so rational that they have managed to convince themselves of the reality of a sort of vengeful, malevolent God/Devil fusion and the inevitable reality of hell and eternal torment. And then turned this into a reason to give all their money to MIRI. So they've sort of self-culted - ordinary humans actually need a sketchy guru; rationalists can get the cult experience without any sort of leadership.
posted by Frowner at 8:22 PM on January 15, 2016


I mean it's far from the first time that atheist hyper-rationalists have done crazy things in history, sooo
posted by Apocryphon at 8:39 PM on January 15, 2016


Maybe we'd all get along better in this thread if we all chant "1-2-3 Victory!" until we're vibrating.
posted by davejh at 9:15 PM on January 15, 2016 [1 favorite]


BungaDunga: it is both dickish and plain inaccurate to tar a community by linking to random posts on their forum. It's like if I linked to this post to imply that the MeFi community is obsessed with murdering people in isolated situations.

Also: RationalWiki has a long-term rivalry with LessWrong and is no more a reliable source on it than Conservapedia is on Bernie Sanders.
posted by drethelin at 11:01 PM on January 15, 2016


Seems like a consensus is clear here, but I want to throw in with Jonathan Livengood. I work with folks who have done the CFAR program and come back with healthy and useful practical tools for addressing issues they'd struggled with in their life. The extent to which this thread is thrashing in sinister fever dreams is confusing to me.

Lots of self-help is a bunch of sensible advice packaged in a way that makes it something that a particular person can grapple with. Not everyone wants the same package. This particular package appeals to some people. Those people might not be like you, and yes they're paying a lot to get that, but it's not, like, taking over the world.

As was pointed out upthread, this is a trend piece. Trend pieces at the Times are written to get audiences like us up in arms about the insane thing that a few hundred people somewhere are doing, so we all get to feel good about how mind-bogglingly dumb it is and be pleased about the awful future it portends. But this just doesn't deserve that much attention or beanplating. If some rich people in SF (a city in which I do admittedly reside, and so am naturally intellectually and ethically compromised) want to spend $4k to learn things you already know, in a way that you think is pretentious and stupid, from people you think are cult leaders, who cares? This is a massively privileged audience that is capable of taking care of itself and for whom this is not an irresponsible sum of money.
posted by heresiarch at 11:11 PM on January 15, 2016


This kind of reminds me of the PUA community: there's a kernel of useful and good information on being a better version of yourself (e.g., being more confident and some stuff about self-improvement), but it's wrapped in a lot of harmful beliefs supposedly backed up by science, logic, and psychology.
posted by FJT at 11:30 PM on January 15, 2016


I know we're winding down, but I am also upset by how uncharitable and quick-to-judge this thread has been, and I would like to make an apologia for rationalists and some of their ideas, especially around AI (the self-improvement stuff is not really my cup of tea).

For better and worse, rationalists treat ideas more seriously and earnestly than most people (for me this is their defining feature). And deeply woven into today's rationalist movement is the attempt to take seriously several very weird ideas, including the possibility of superintelligence, and the possibility that (in the future) minds could be simulated on a computer. If you take superintelligence seriously, then you have to worry about the risk from a superintelligence whose goals are misaligned with those of humanity; the book Superintelligence by Nick Bostrom, a philosophy professor at Oxford, is supposed to be a good discussion of all this (he cites Yudkowsky extensively). Sorry I don't know a good short online introduction (there are a lot of bad ones though).

If you take simulation seriously, you have to worry about the possibility that you yourself are being simulated, perhaps even by a malevolent AI. Trying to tease out all the consequences leads you to some very weird and interesting scenarios: "Newcomb's problem", "counterfactual mugging", and, indeed, Roko's basilisk. There isn't widespread agreement on how to deal with these cases; a few people take them very seriously, ergo the Roko's basilisk scare.
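
(In case Newcomb's problem is unfamiliar, here's a toy expected-value sketch; the dollar figures are the usual hypothetical ones and the code is just my own illustration, not anything from Bostrom or LessWrong:)

```python
# Newcomb's problem as a toy expected-value calculation (illustrative numbers only).
# A predictor puts $1,000,000 in an opaque box only if it predicts you'll take just
# that box; a transparent box always contains $1,000. Take one box, or both?

def expected_payoff(one_box: bool, predictor_accuracy: float) -> float:
    big, small = 1_000_000, 1_000
    if one_box:
        # You get the big prize only when the predictor correctly foresaw one-boxing.
        return predictor_accuracy * big
    # Two-boxing: you always get the small box, plus the big one if the predictor erred.
    return small + (1 - predictor_accuracy) * big

for accuracy in (0.5, 0.9, 0.99):
    print(accuracy, expected_payoff(True, accuracy), expected_payoff(False, accuracy))

# With a reliable predictor, one-boxing wins in expectation, even though once the
# boxes are filled, taking both can never leave you worse off. That tension is
# what the competing decision theories argue about.
```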

I do agree that discussions of superintelligence risk look a lot like eschatology. I think this is in part because when we talk about superintelligence, we're dealing with big unknowns, so it's nice to keep our thought experiments as simple as possible: what if our AI was so smart that it could take over the world if it wanted to? what could happen if we had "benevolent" or "malevolent" AI? what could cause an AI to act "malevolently"? (it might not take much). If and when superintelligence emerges in the real world, its abilities and goals and moral status will probably be much more complicated than in any of these thought experiments (it might not even be dangerous at all). But they're still useful and illuminating.

And yes, these ideas can appeal to a nerd's sense of importance: your work on friendly AI might save the world! I personally think collective action is just as important: because AI is so risky, we need good institutions in charge of its development. And we really need to avoid an AI arms race, because that's a great way for one side to overreach and accidentally build an unfriendly AI that gets out of control.

Anyway, my point is that even though these ideas sound weird, they're being pursued by smart and earnest people who care about the future of the world. Even if you disagree, you shouldn't dismiss them out of hand, and you shouldn't assume that they're charlatans.
posted by gold-in-green at 11:19 AM on January 16, 2016 [1 favorite]


Anyway, my point is that even though these ideas sound weird, they're being pursued by smart and earnest people who care about the future of the world.

The Scholastics arguing about how many angels can dance on the head of a pin were smart and earnest people.

Reason can only get you so far if you start from flawed premises. Singulatarians are making untestable claims about strong AI (because nobody knows how to make a strong AI, or even what "intelligence" means) and then spinning out wild scenarios about possible futures based on pure supposition.

These ideas are interesting thought experiments and make for cracking good science fiction but if you have earnestly convinced yourself that the most imminent existential threat to humanity is rampant AI, you are engaging in scholastic theology, not science.
posted by murphy slaw at 12:00 PM on January 16, 2016 [8 favorites]


Okay, now I say this with sympathy because I too am an anxious person; and I say it also knowing that only a small percentage of self-described rationalists worry about the specific Roko's Basilisk scenario with any depth, but here is my question:

Since you're basically imagining a strong AI that wants to simulate and torture humans in order to [by means of complicated things] encourage now-humans to work hard to bring it into existence, can't you equally well imagine a future AI that wants to simulate and torture humans for a variety of other reasons? What if it's a bird-watching AI like the drone in Iain Banks's Player of Games and decides to simulate and torture everyone who didn't work hard to enact bird protection laws? What if it knows it will need more of a particular mineral for its inscrutable purposes and decides to simulate and torture humans in order to persuade current humans to slow down on the mining? It seems like once you start saying "we can't predict what this kind of AI would be like, and we're worried that it will simulate and torture people", how can you even begin to predict what an AI would simulate and torture people for?

Also, frankly, if you are serenely untroubled by people suffering and dying all over this great big world right now, I'm not sure why you're so worried about simulated humans in the future being tortured. It's not as though we need an AI to cause immense human suffering.

Or heck, maybe the AI will simulate and torture everyone for fun - if it basically has infinite resources, maybe that's just one of the many things it will do to pass the time away.

Again, I sympathize with this type of anxiety because I have been tormented by all kinds of worries all my life - but I'd always viewed the amount of time I spent on most of them as...wait for it...irrational.
posted by Frowner at 3:08 PM on January 16, 2016 [1 favorite]


My big fear is that the temperature and sea level are rising and the pH of the ocean is decreasing and pretty soon life on earth is going to suck for a lot more people. Oh, sorry, that's not a fear, that's an actual thing that is happening right now. I feel bad for people who think the worst thing that can happen is malevolent AI. The worst thing that can happen is that your entire island disappears into the ocean. The worst thing that can happen is that it is too hot in your country for the entire summer and you have brownouts and people dying of heat stroke and dehydration. The worst thing that can happen is that the ocean food web collapses and the billions of people who rely on seafood for protein are malnourished. The worst thing that can happen is that we can't grow crops anymore in the parts of the world with the most fertile soil.

The worst thing that can happen is that you bring children into a world that they can at best barely survive in and you have to face them and acknowledge that you knew this was going to happen and you did nothing because you thought there were more important things to worry about.
posted by hydropsyche at 3:17 PM on January 16, 2016 [1 favorite]


Overprivileged white male nerds gotta worry about something, so naturally they worry about the thing they understand the most: software.

Habitat destruction? Eh, that's a poor brown person problem. I am fortunate enough to worry about the software I write running amok and killing everyone! Well, everyone except the poor brown people -- they'll already be dead, I suppose.
posted by aramaic at 5:00 PM on January 16, 2016 [3 favorites]


Nobody knows how to make strong AI yet. But we're moving in that direction. There are powerful economic forces pushing us towards it (or at least toward some kinds of AI): many companies and governments would find it very useful and are funding research into it. Evolution figured out how to make a (fairly) general intelligence - us - so it seems like we should probably be able to make one too.

And, no, we don't have a good handle on the concept of intelligence yet either, but there are some things we can say about it. It's partly specific and partly general. We have more of it than chickens, who have more of it than amoebas. Humans are unlikely to be the high point of intelligence out of all possible minds; we're just the first species evolution came up with that happened to be tool-using, social, and intelligent enough to take over the planet.

Now, there may be something super-special about our biological brains that lets them do something impossible to replicate in silicon. Or it may be that there are intractable problems with the design of minds that make it impossible to do much better than a brain. But it would be irresponsible to count on either of those being true. In the event that strong AI is possible and within human reach in the near future, we'll need all the friendly AI research we can get.

Also, nobody is saying we shouldn't work on climate change. What a strawman! Our troubled world has room for lots of people working on lots of problems; people should work on what they feel strongly about and what they think matches their abilities. Friendly AI might at most snatch up a few researchers or a few millions of philanthropic dollars that would otherwise have gone to climate change: chump change, on the scale of what we need for climate change. But mostly these causes are not even in conflict with each other. They're both competing against a vast sea of apathy and ignorance.
posted by gold-in-green at 6:07 PM on January 16, 2016


Here is a hand.
posted by PMdixon at 6:41 PM on January 16, 2016 [2 favorites]


What if it knows it will need more of a particular mineral for its inscrutable purposes and decides to simulate and torture humans in order to persuade current humans to slow down on the mining?

The retroactive causality of the AI only works to induce you to do what you know it wants. You believe it'll come into being and recreate you, and that virtual you and current you are essentially identical, so anything the AI does to virtual you is being done to future you, so threats to virtual you are as effective as threats to tomorrow you. But you don't know what it needs you to do, so you can't do it--retroactive causality fails.

But there's one thing you do know: it will come. You don't just believe this, it's basically the price of admission to the singulatarian club. So the one action you can safely, knowingly carry out because you know it's the right thing to do, is facilitate its existence (or fail to do so and delay its arrival). Since you know what you need to do, torturing virtual you can be an effective coercion, so the AI will do it because coercing you to do the right thing is itself the right thing to do, resulting in a net positive. Once you perceive that, you're trapped, petrified by the basilisk. You have to do something because you'll have to explain yourself one day when you stand before the data bus of judgement.
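
(Toy numbers, if it helps to see why the trap only closes on believers; everything below is my own made-up illustration, not anything from LW:)

```python
# A back-of-the-envelope sketch of the basilisk's coercion logic. It only goes
# through if you grant the premises: the AI is certain to exist, a simulated copy
# of you counts as you, and the AI follows through. All values are made up.

P_AI_EXISTS = 1.0               # the "price of admission": its arrival is taken as given
COST_OF_HELPING = 10            # effort/money spent accelerating its arrival
TORTURE_DISUTILITY = 1_000_000  # harm to (simulated) you for knowingly refusing

def expected_utility(helped: bool) -> float:
    if helped:
        return -COST_OF_HELPING
    # Refusing while knowing what the AI wants is what triggers the threat.
    return -P_AI_EXISTS * TORTURE_DISUTILITY

print(expected_utility(True), expected_utility(False))
# Under these premises, helping "wins", which is the whole trap. Drop any premise
# (set P_AI_EXISTS well below 1, or deny that the simulation is you) and the
# coercion evaporates.
```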

To be fair to Yudkowsky, his response was "this is stupid, especially since it's not awareness of the basilisk, it's believing in the basilisk: you're only doomed if you believe you're doomed by it. Stop worrying about it, and you're immune again."
posted by fatbird at 10:40 PM on January 16, 2016


... and that virtual you and current you are essentially identical, so anything the AI does to virtual you is being done to future you ...

This is very similar to the handwave that's often made to justify killing the original in the Transporter experiment. I've never understood why this is not mysticism.
posted by lodurr at 7:14 AM on January 17, 2016 [3 favorites]


Not to mention that if the malevolent AI can reverse-engineer your consciousness perfectly from traces available in the far future, it can effectively reverse entropy to a degree that would make Yahweh jealous.
posted by murphy slaw at 8:24 AM on January 17, 2016 [4 favorites]


But there's one thing you do know: it will come. You don't just believe this, it's basically the price of admission to the singulatarian club. So the one action you can safely, knowingly carry out because you know it's the right thing to do, is facilitate its existence (or fail to do so and delay its arrival). Since you know what you need to do, torturing virtual you can be an effective coercion, so the AI will do it because coercing you to do the right thing is itself the right thing to do, resulting in a net positive.

But what I don't get is, doesn't this work out the same if I believe in the bird-watching AI? And there really isn't a stronger reason to believe in one than the other?
posted by Frowner at 8:31 AM on January 17, 2016


Not to mention that if the malevolent AI can reverse-engineer your consciousness perfectly from traces available in the far future, it can effectively reverse entropy to a degree that would make Yahweh jealous.

Yeah I would take the claim that the LW peeps take ideas seriously more seriously if they didn't all seem deeply ignorant of the profound limitations that Goedel, Turing, Shannon and Wittgenstein place on knowability and computability of various facts and propositions.
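
(To make the Turing part concrete: the textbook diagonal argument against a halting oracle fits in a few lines. This is just the standard sketch with made-up names, nothing LW-specific.)

```python
# Sketch of Turing's argument that no program can decide, for every program and
# input, whether that program halts. `halts` is the impossible ingredient; it is
# never a real, total, correct function.

def halts(program, argument) -> bool:
    """Pretend this correctly decides whether program(argument) ever finishes."""
    raise NotImplementedError  # placeholder: no such function can exist

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about the program run on itself.
    if halts(program, program):
        while True:
            pass  # loop forever
    return  # halt immediately

# Ask whether troublemaker(troublemaker) halts. If the oracle says yes, it loops
# forever; if the oracle says no, it halts. Either way the oracle is wrong, so no
# such oracle exists: one of the hard limits on computability gestured at above.
```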
posted by PMdixon at 8:35 AM on January 17, 2016 [7 favorites]


Yeah, it really seems like they don't get incompleteness. It's like they missed a crucial memo.
posted by saulgoodman at 9:36 AM on January 17, 2016 [2 favorites]


...I just realized they're almost exactly like the antagonists in The Three Body Problem.
posted by PMdixon at 10:23 AM on January 17, 2016


I've never understood why this is not mysticism.

It is. It's been observed before that LW/singulatarians have basically recreated God, heaven, hell, and original sin. Roko's basilisk is a clever articulation of the problem of faith in the ultra-rationalist context: if you really believe it will happen in this particular way, then you know what to do and the consequences for not doing it.

doesn't this work out the same if I believe in the bird-watching AI?

Well, yes, but the bird-watching faction doesn't have Yudkowsky and Kurzweil as its prophets. If your belief in the bird-watching AI is that certain, then the basilisk traps you too. Where it doesn't work is the "inscrutable" part. The AI can't effectively coerce you through future torture to do what you don't believe needs to be done. It's irrational to fear a Hell that comes from failure to know what you can't know.

But when you know what to do--because you read Kurzweil's book, or the bible--you don't have an excuse. It's rational to fear eternal punishment for sins you know you are committing; and fearing punishment is what makes future punishment effective. That's the petard on which Roko hoisted LWers: they either accept that they're sinning, or eschew the rationality that coughed up the basilisk in the first place.
posted by fatbird at 10:29 AM on January 17, 2016


I've never understood why this is not mysticism.

Have you read Derek Parfit's paper Personal Identity? It's actually a pretty awesome reductionist view of the self in light of a bunch of possible transporter accidents.
posted by fatbird at 10:33 AM on January 17, 2016 [1 favorite]


fatbird: no, but many many moons ago I read Reasons & Persons, and large chunks of that were devoted to explicating a reductionist view of self, with some transporter accidents. I don't know which came first, that or the paper you describe.
posted by lodurr at 10:32 AM on January 19, 2016




This thread has been archived and is closed to new comments