Newcomb’s paradox is a philosophical thought experiment. There is an entity called Omega, who can predict your choices. Omega presents you with two boxes; you may open one or both boxes and take whatever you find. The first box contains $1k, guaranteed. The second box contains $1M if and only if Omega predicted that you would leave the first box alone. So the dilemma is between “one-boxing” (taking only the second box, and with it the $1M) and “two-boxing” (taking both boxes and finding a total of $1k, since Omega will have predicted this and left the second box empty).
When I put it that way, it seems obvious: $1M is more than $1k, so you should open only one box. The two-boxer argument is that Omega has already decided whether the second box contains $1M or not. Whatever’s in that box is a constant, so taking both boxes always nets you exactly $1k more than taking one; it’s only rational to grab the free $1k. Omega may have chosen to arbitrarily punish players who behave rationally, but what’s done is done, so you might as well collect the $1k consolation prize.
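To make both arguments concrete, here is a minimal sketch in Python. The dollar amounts are the standard ones; the `accuracy` parameter (how often Omega predicts correctly) is my own addition for illustration, with 1.0 modeling a perfect predictor.

```python
SMALL = 1_000        # always in the first (transparent) box
BIG = 1_000_000      # in the second box iff Omega predicts one-boxing

def expected_payoff(one_box: bool, accuracy: float) -> float:
    """Expected winnings for a player whose choice Omega predicts
    correctly with probability `accuracy`."""
    if one_box:
        # The second box is filled only when Omega correctly predicts one-boxing.
        return accuracy * BIG
    else:
        # The player takes SMALL for sure, plus BIG only if Omega
        # wrongly predicted one-boxing.
        return SMALL + (1 - accuracy) * BIG

if __name__ == "__main__":
    for accuracy in (1.0, 0.99, 0.5):
        print(f"accuracy={accuracy}: "
              f"one-box={expected_payoff(True, accuracy):>12,.0f}  "
              f"two-box={expected_payoff(False, accuracy):>12,.0f}")
```

Against a perfect (or even 99%-accurate) predictor, one-boxing wins by a huge margin in expectation; yet for any fixed contents of the second box, two-boxing is always exactly $1k better, which is the two-boxer’s dominance argument in miniature.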
Do we care about Newcomb’s paradox?
Newcomb’s paradox has received a great deal of discussion from Rationalists, i.e. the community popularized by Eliezer Yudkowsky. That’s how I know about the paradox. But I’m an outsider, and it appears to me that Rationalists stared at this paradox for so long that they went mad. Yudkowsky is a dedicated one-boxer, and has attempted to construct elaborate theories to justify that choice. Some of these ideas were crucial in the construction of Roko’s Basilisk.
I believe the reason Yudkowsky and others are so obsessed with Newcomb’s paradox is that they’re transhumanists. They believe the future will contain a super powerful AI. To most people, Omega sounds fantastical: how can any entity make perfect predictions about our actions? But to a transhumanist, a super powerful AI could easily step into the role of Omega. Additionally, we can think about what happens when an AI steps into the role of the player. If the AI is deterministic, then of course we can predict what it will choose, in principle by just running its code. So Yudkowsky’s interest is in ensuring that an AI will choose correctly in this situation.
But for the rest of us folks who aren’t transhumanists, does Newcomb’s paradox make sense? Is this a problem we even need to think about?

