Some people familiar with the LessWrong memeplex have suffered serious psychological distress after contemplating basilisk-like ideas ... The notion is taken sufficiently seriously by some LessWrong posters that they try to work out how to erase evidence of themselves so a future AI can't reconstruct a copy of them to torture. Yudkowsky considers that the basilisk would not work, but will not explain why, because he does not consider open discussion of acausal trade with possible superintelligences to be provably safe.

If this is the first time you've heard of Roko's Basilisk, this post may unfortunately have put (a perfect future simulation of) you in danger of eternal torture by a Friendly Artificial Intelligence.