Before I move away from the topic of Rationalism and EA, I want to talk about Roko’s Basilisk, because WTF else am I supposed to do with this useless knowledge that I have.
From sci-fi, a “basilisk” is an idea or image that exploits flaws in the human mind to cause a fatal reaction. Roko’s Basilisk was proposed by a user named Roko to the LessWrong (LW) community in 2010. The idea is that a benevolent AI from the future could coerce you into doing the right thing (build a benevolent AI, obv) by threatening to simulate you and torture the simulation. It’s a sort of transhumanist Pascal’s Wager.
Roko’s Basilisk sounds absurd to the typical person, and at this point it’s basically a meme used to mock LW, or tech geeks more broadly. But it’s not clear how seriously it was actually taken within LW. One thing we do know is that Eliezer Yudkowsky, then the leader of LW, banned all discussion of the subject.
What makes Roko’s Basilisk sound so strange is that it rests on at least four premises that are nearly unique to the LW community and unfamiliar to almost anyone else. Just explaining Roko’s Basilisk properly requires an amusing tour of multiple ideas the LW community hath wrought.