Newcomb’s paradox is the name usually given to the following problem. You are playing a game against another player, often called Omega, who claims to be omniscient; in particular, Omega claims to be able to predict how you will play in the game. Assume that Omega has convinced you in some way that it is, if not omniscient, at least remarkably accurate: for example, perhaps it has accurately predicted your behavior many times in the past.
Omega places before you two opaque boxes. Box A, it informs you, contains $1,000. Box B, it informs you, contains either $1,000,000 or nothing. You must decide whether to take only Box B or to take both Box A and Box B, with the following caveat: Omega filled Box B with $1,000,000 if and only if it predicted that you would take only Box B.
What do you do?
(If you haven’t heard this problem before, please take a minute to decide on an option before continuing.)
The paradox is that there appear to be two reasonable arguments about which option to take, but unfortunately the two arguments support opposite conclusions.
The two-box argument is that you should clearly take both boxes. You take Box B either way, so the only decision you’re making is whether to also take Box A. No matter what Omega did before offering the boxes to you, Box A is guaranteed to contain $1,000, so taking it is guaranteed to make you $1,000 richer.
The one-box argument is that you should clearly take only Box B. By hypothesis, if you take only Box B, Omega will predict that and will fill Box B, so you get $1,000,000; if you take both boxes, Omega will predict that and won’t fill Box B, so you only get $1,000.
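The two arguments can be made concrete with a little arithmetic. Here is a minimal sketch of the payoff table implied by the setup above (the dollar amounts come from the problem statement; the function name is ours):

```python
# Payoff table for Newcomb's problem, using the amounts from the setup above.
def payoff(choice: str, prediction: str) -> int:
    """Total winnings given the player's choice and Omega's prediction.

    choice, prediction: "one-box" or "two-box".
    Box B holds $1,000,000 iff Omega predicted one-boxing;
    Box A always holds $1,000.
    """
    box_b = 1_000_000 if prediction == "one-box" else 0
    box_a = 1_000
    return box_b if choice == "one-box" else box_a + box_b

# The two-box argument holds Omega's prediction fixed and compares choices:
# whatever Omega predicted, two-boxing is worth exactly $1,000 more.
for p in ("one-box", "two-box"):
    assert payoff("two-box", p) - payoff("one-box", p) == 1_000

# The one-box argument assumes the prediction matches the choice:
assert payoff("one-box", "one-box") == 1_000_000
assert payoff("two-box", "two-box") == 1_000
```

Both arguments are reading off rows of the same table; they disagree about which comparison is the relevant one.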
The two-boxer might respond to the one-boxer as follows: “it sounds like you think a decision you make in the present, at the moment Omega offers you the boxes, will affect what Omega did in the past, at the moment Omega filled the boxes. That’s absurd.”
The one-boxer might respond to the two-boxer as follows: “it sounds like you think you can just make decisions without Omega predicting them. But by hypothesis it can predict them. That’s absurd.”
Now what do you do?
(Again, please take a minute to reassess your original choice before continuing.)
The von Neumann-Morgenstern theorem
Let’s avoid the above question entirely by asking some other questions instead. For example, a question one might want to ask after having thought about Newcomb’s paradox for a bit is “in general, how should I think about the process of making decisions?” This is the subject of decision theory, which is roughly about decisions in the same sense that game theory is about games. The things that make decisions in decision theory are abstractions that we will refer to as agents. Agents have some preferences about the world and are making decisions in an attempt to satisfy their preferences.
One model of preferences is as follows: there is a set of (mutually exclusive) outcomes, and we will model preferences by a binary relation $\preceq$ on outcomes describing pairs of outcomes $(x, y)$ such that the agent weakly prefers $y$ to $x$. This means either that in a decision between the two the agent would pick $y$ over $x$ (the agent strictly prefers $y$ to $x$; we write this as $x \prec y$) or that the agent is indifferent between them. The weak preference relation $\preceq$ should be a total preorder; that is, it should satisfy the following axioms:

Reflexivity: $x \preceq x$. (The agent is indifferent between an outcome and itself.)

Transitivity: If $x \preceq y$ and $y \preceq z$, then $x \preceq z$. (The agent’s preferences are transitive.)

Totality: Either $x \preceq y$ or $y \preceq x$. (The agent has a preference about every pair of outcomes.)

If $x \preceq y$ and $y \preceq x$ then this means that the agent is indifferent between the two outcomes; we write this as $x \sim y$. The axioms above imply that indifference is an equivalence relation.
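A small sketch may help make the axioms concrete. Here a weak preference relation is induced by a numeric utility score over a toy outcome set (the outcomes and scores are hypothetical examples, not anything fixed by the text); any relation defined this way automatically satisfies all three axioms, and we can check them by brute force:

```python
from itertools import product

# A toy outcome set and a utility score inducing a weak preference relation.
# (Hypothetical example: nothing in the text fixes these outcomes or scores.)
outcomes = ["rain", "snow", "sun", "clouds"]
utility = {"rain": 0, "snow": 0, "sun": 2, "clouds": 1}

def weakly_prefers(x: str, y: str) -> bool:
    """The agent weakly prefers y to x (i.e. x ≼ y in the text's notation)."""
    return utility[x] <= utility[y]

# Check the total preorder axioms by brute force over all outcome tuples.
for x in outcomes:
    assert weakly_prefers(x, x)                          # reflexivity
for x, y, z in product(outcomes, repeat=3):
    if weakly_prefers(x, y) and weakly_prefers(y, z):
        assert weakly_prefers(x, z)                      # transitivity
for x, y in product(outcomes, repeat=2):
    assert weakly_prefers(x, y) or weakly_prefers(y, x)  # totality

def indifferent(x: str, y: str) -> bool:
    """x ~ y: weak preference in both directions."""
    return weakly_prefers(x, y) and weakly_prefers(y, x)

# Indifference can hold between distinct outcomes with equal utility:
assert indifferent("rain", "snow")
```

Note that indifference between distinct outcomes (here "rain" and "snow") is exactly what distinguishes a total preorder from a total order.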
The strong assumptions here are transitivity and totality. One reason to