Why ain't causal decision theorists rich? Some speculations

Note added on 24 June 2018: This is an old post which no longer reflects my views. It likely contains mistakes.

I.

CDT is underspecified

Standard causal decision theory is underspecified. It needs a theory of counterfactual reasoning. Usually we don’t realise that there is more than one possible way to reason counterfactually in a situation. I illustrate this fact using the simple case Game, described below, where a CDT agent using bad counterfactuals loses money.

But before that I need to set up some formalisms. Suppose we are given the set of possible actions an agent could take in a situation. The agent will in fact choose only one of those actions. What would have happened under each of the other possible actions? We can think of the answer to this question as a list of counterfactuals.

Let’s call such a list K of counterfactuals a “causal situation” (Arntzenius 2008). The list will have n elements when there are n possible actions. Start by figuring out what all the possible lists of counterfactuals K are. They form a set P which we can call the “causal situation partition”. Once you have determined P, then for each possible K, figure out what the expected utility:

$$E[U_K(A)] = \sum_{j} Cr(O_j \mid A \land K) \cdot U(O_j)$$

of each of your $n$ acts $\{A_1,A_2,...,A_n\}$ is. (Where the $O_j$ are the outcomes, $U(\cdot)$ is the utility function and $Cr(\cdot)$ is the credence function.) Then, take the average of these expected utilities, weighted by your credence in each causal situation:

$$E[U(A)] = \sum_{K} Cr(K) \cdot E[U_K(A)]$$

Perform the act with the highest $E[U(A)]$.
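
To make the two-step procedure concrete, here is a minimal Python sketch. This is my own illustration, not anything from Arntzenius; the names `cr_K`, `cr_outcome`, and `utility` are hypothetical encodings of the credence and utility functions above.

```python
def eu_given_K(act, K, cr_outcome, utility):
    """Inner expectation: E[U_K(A)] = sum_j Cr(O_j | A and K) * U(O_j)."""
    return sum(p * utility[o] for o, p in cr_outcome[(act, K)].items())

def expected_utility(act, partition, cr_K, cr_outcome, utility):
    """Outer expectation: E[U(A)] = sum_K Cr(K) * E[U_K(A)]."""
    return sum(cr_K[K] * eu_given_K(act, K, cr_outcome, utility) for K in partition)

def best_act(acts, partition, cr_K, cr_outcome, utility):
    """Pick the act with the highest E[U(A)]."""
    return max(acts, key=lambda A: expected_utility(A, partition, cr_K, cr_outcome, utility))
```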

What will turn out to be crucial for our purposes is that there is more than one causal situation partition P one can consistently use. So it’s not just a matter of figuring out “the” possible Ks that form “the” P. We also need to choose a P among a variety of possible Ps.

In other words, there is the following hierarchy:

  • Choose a causal situation partition P out of the set of possible partitions $\{P_1,P_2,...,P_k\}$ (the causal situation superpartition?).
  • This partition defines a list of possible causal situations: $P = \{K_1,K_2,...,K_j\}$.
  • Each causal situation K defines a list of counterfactuals of length n: $K = \{C_1,C_2,...,C_n\}$. Where each counterfactual $C_i$ is of the form “$A_i \mathbin{\square\!\mathord\to} X$”. You have a credence distribution over Ks.

Game

Now let’s consider the following case, Game, also from Arntzenius (2008):

Harry is going to bet on the outcome of a Yankees versus Red Sox game. Harry’s credence that the Yankees will win is 0.9. He is offered the following two bets, of which he must pick one:

(1) A bet on the Yankees: Harry wins $1 if the Yankees win, loses $2 if the Red Sox win

(2) A bet on the Red Sox: Harry wins $2 if the Red Sox win, loses $1 if the Yankees win.

What are the possible Ps? According to Arntzenius, they are:

P1: Yankees Win, Red Sox Win

P2: I win my bet, I lose my bet

To make this very explicit using the language I describe above, we can write that the set of causal situation partitions (the “superpartition”) is $\{P_b, P_w\}$. (I use $P_b$ for “baseball” and $P_w$ for “win/lose”.)

Let’s first deal with the baseball partition: $P_b = \{K_y, K_s\}$. (I use $K_y$ for “Yankees win” and $K_s$ for “Sox win”.)

$K_y = \{C_1, C_2\}$
$C_1 = \text{Harry bets on the Yankees} \mathbin{\square\!\mathord\to} \text{Harry +1\$}$
$C_2 = \text{Harry bets on the Sox} \mathbin{\square\!\mathord\to} \text{Harry -1\$}$

$K_s = \{C_3, C_4\}$
$C_3 = \text{Harry bets on the Yankees} \mathbin{\square\!\mathord\to} \text{Harry -2\$}$
$C_4 = \text{Harry bets on the Sox} \mathbin{\square\!\mathord\to} \text{Harry +2\$}$

And Harry has the following credences1:

$Cr(K_y) = 0.9$
$Cr(K_s) = 0.1$

When using this partition and the procedure described above, Harry finds that the expected value of betting on the Yankees is 70c, whereas the expected value of betting on the Sox is -70c, so he bets on the Yankees. This is the desired result.
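
Plugging the baseball partition into the sketch above reproduces these numbers (a hypothetical encoding, with utilities equal to dollars won):

```python
acts = ["bet on Yankees", "bet on Sox"]
partition = ["K_y", "K_s"]              # Yankees win / Sox win
cr_K = {"K_y": 0.9, "K_s": 0.1}
# Given the act and the causal situation, the payoff is certain, so each
# conditional credence distribution is concentrated on a single outcome.
cr_outcome = {
    ("bet on Yankees", "K_y"): {"+$1": 1.0},
    ("bet on Yankees", "K_s"): {"-$2": 1.0},
    ("bet on Sox",     "K_y"): {"-$1": 1.0},
    ("bet on Sox",     "K_s"): {"+$2": 1.0},
}
utility = {"+$1": 1, "+$2": 2, "-$1": -1, "-$2": -2}

for A in acts:
    print(A, expected_utility(A, partition, cr_K, cr_outcome, utility))
# bet on Yankees  0.7   (i.e. 70c)
# bet on Sox     -0.7
```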

And now for the win/lose partition: $P_w = \{K_w, K_l\}$. (I use $K_w$ for “Harry wins his bet” and $K_l$ for “Harry loses his bet”.)

$K_w = \{C_5, C_6\}$
$C_5 = \text{Harry bets on the Yankees} \mathbin{\square\!\mathord\to} \text{Harry +1\$}$
$C_6 = \text{Harry bets on the Sox} \mathbin{\square\!\mathord\to} \text{Harry +2\$}$

$K_l = \{C_7, C_8\}$
$C_7 = \text{Harry bets on the Yankees} \mathbin{\square\!\mathord\to} \text{Harry -2\$}$
$C_8 = \text{Harry bets on the Sox} \mathbin{\square\!\mathord\to} \text{Harry -1\$}$

What are Harry’s credences in $K_w$ and $K_l$? It turns out that it doesn’t matter. Arntzenius writes: “no matter what $Cr(K_w)$ and $Cr(K_l)$ are, the expected utility of betting on the Sox is always higher”.
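
A quick check of this claim: write $p$ for $Cr(K_w)$, so that $Cr(K_l) = 1 - p$. Then, under $P_w$:

$$E[U(\text{bet on the Yankees})] = p \cdot 1 + (1-p) \cdot (-2) = 3p - 2$$
$$E[U(\text{bet on the Sox})] = p \cdot 2 + (1-p) \cdot (-1) = 3p - 1$$

Betting on the Sox comes out exactly one dollar ahead for every value of $p$.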

So Harry should bet on the Sox regardless of his credences. But the Yankees win 90% of the time, so once Harry has placed his bet, he will correctly infer that $Cr(K_l) = 0.9$. Harry will lose 70c in expectation, and he can foresee that this will be so! It’s because he is using a bad partition.

Predictor

Now consider the case Predictor, which is identical to Game except for the fact that:

[…] on each occasion before Harry chooses which bet to place, a perfect predictor of his choices and of the outcomes of the game announces to him whether he will win his bet or lose it.

Arntzenius crafts this thought experiment as a case where, purportedly:

  • An evidential decision theorist predictably loses money.2
  • A causal decision theorist using the baseball partition predictably wins money.

I’ll leave both of these claims undefended for now, taking them for granted.

I’ll also skip over the crucial question of how one is supposed to systematically determine which partition is the “correct” one, since Arntzenius provides an answer3 that is long and technical, and I believe correct.

What is the point of proposing Predictor? We know that EDT does predictably better than CDT in Newcomb. Predictor is a case where CDT does predictably better than EDT, provided that it uses the appropriate partition. But we already knew this from more mundane cases like Smoking lesion (Egan 2007).

II.

The value of WAYR arguments

Arntzenius’ view appears to be that “Why ain’cha rich?”-style arguments (henceforth WAYRs) give us no reason to choose any decision theory over another. There is one sense in which I agree, but I think it has nothing to do with Predictor, and, more importantly, that this is not an argument for being poor, but instead a problem for decision theory as currently conducted.

One way to think of decision theory is as a conceptual analysis of the word “rational”, i.e. a theory of rationality. Some causal decision theorists say that in Newcomb, rational people predictably lose money. But this, they say, is not an argument against CDT, for in Newcomb, the riches were reserved for the irrational: “if someone is very good at predicting behavior and rewards predicted irrationality richly, then irrationality will be richly rewarded” (Gibbard and Harper 1978).

This line of reasoning appears particularly compelling in Arntzenius’ Insane Newcomb:

Consider again a Newcomb situation. Now suppose that the situation is that one makes a choice after one has seen the contents of the boxes, but that the predictor still rewards people who, insanely, choose only box A even after they have seen the contents of the boxes. What will happen? Evidential decision theorists and causal decision theorists will always see nothing in box A and will always pick both boxes. Insane people will see $10 in box A and $1 in box B and pick box A only. So insane people will end up richer than causal decision theorists and evidential decision theorists, and all hands can foresee that insanity will be rewarded. This hardly seems an argument that insane people are more rational than either of them are.

But, others will reply: “The reason I am interested in decision theory is so that I can get rich, not so that I can satisfy some platonic notion of rationality. If I were actually facing that case, I’d rather be insane than rational.”

What is happening? The disputants are using the word “rational” in different ways. When language goes on holiday to the strange land of Newcomb, the word “rational” loses its everyday usefulness. This shows the limits of conceptual analysis.

Instead, we should use different words depending on what we are interested in. For instance, I am interested in getting rich, so I could say that act-decision theory is the theory that tells me how to get rich if I find myself in a particular situation and am not bound by any decision rule. Rule-decision theory would be the theory that tells you which rules are best for getting rich. Inspired by Ord (2009), we could even define global decision theory as the theory which, for any X, tells you which X will make you the most money.

Which X to use will depend on the context. Specifically, you should use the X which you can choose, or causally intervene on. If you are choosing a decision rule, for example by programming an AI, you should use rule-decision theory. (If you want to think of “choosing a rule for the AI” as an act, act-decision theory will tell you to choose the rule that rule-decision theory identifies. That’s a mere verbal issue.) If you are choosing an act, such as deciding whether to smoke, you should use act-decision theory.

Kenny Easwaran has similar thoughts:

Perhaps there just is a notion of rational action, and a notion of rational character, and they disagree with each other. That the rational character is being the sort of person that would one-box, but the rational action is two-boxing, and it’s just a shame that the rational, virtuous character doesn’t give rise to the rational action. I think that this is a thought that we might be led to by thinking about rationality in terms of what are the effects of these various types of intervention that we can have. […]

I think one way to think about this is […] trying to understand causation through what they call these causal graphs. They say if you consider all the possible things that might have effects on each other, then we can draw an arrow from anything to the things that it directly affects. Then they say, well, we can fill in these arrows by doing enough controlled experiments on the world, we can fill in the probabilities behind all these arrows. And we can understand how one of these variables, as we might call it, contributes causally to another, by changing the probabilities of these outcomes.

The only way, they say, that we can understand these probabilities, is when we can do controlled experiments. When we can sort of break the causal structure and intervene on some things. This is what scientists are trying to do when they do controlled experiments. They say, “If you want to know if smoking causes cancer, well, the first thing you can do is look at smokers and look at whether they have cancer and look at non-smokers and look at whether they have cancer.” But then you’re still susceptible to the issues that Fisher was worrying about. What you should actually do if you wanted to figure out whether smoking causes cancer, is not observe smokers and observe non-smokers, but take a bunch of people, break whatever causes would have made them smoke or made them not smoke, and you either force some people to smoke or force some people not to smoke.

Obviously this experiment would never get ethical approval, but if you can do that – if you can break the causal arrows coming in, and just intervene on this variable and force some people to be smokers and force others to not be smokers, and then look at the probabilities – then we can understand what are the downstream effects of smoking.

In some sense, these causal graphs only make sense to the extent that we can break certain arrows, intervene on certain variables and observe downstream effects. Then, I think, in all these Newcomb type problems, it looks like there’s several different levels at which one might imagine intervening. You can intervene on your act. You can say, imagine a person who’s just like you, who had the same character as you, going into the Newcomb puzzle. Now imagine that we’re able to, from the outside, break the effect of that psychology and just force this person to take the one box or take the two boxes. In this case, forcing them to take the two boxes, regardless of what sort of person they were like, will make them better off. So that’s a sense in which two-boxing is the rational action.

Whereas if we’re intervening at the level of choosing what the character of this person is before they even go into the tent, then at that level the thing that leaves them better off is breaking any effects of their history, and making them the sort of person who’s a one-boxer at this point. If we can imagine having this sort of radical intervention, then we can see, at different levels, different things are rational.

To what extent we human beings can intervene at the level of our acts, or at the level of our rules, is, I suspect, an empirically and philosophically deep issue. But I would be delighted to be proven wrong about that.

A problem for any decision theory?

I think using these distinctions can resolve much of the confusion about WAYRs in Newcomb and analogous cases. But Insane Newcomb hints at a more fundamental problem. Both EDT and CDT can be made vulnerable to a WAYR, for example in Insane Newcomb.

Moreover, any decision theory can be made vulnerable to WAYRs. Imagine the following generalised Newcomb problem.

The predictor has a thousand boxes, some transparent and some opaque, and the opaque boxes have arbitrary amounts of money in them. Suppose you use decision theory X, which, conditional on your credences, determines a certain pattern of box-taking (e.g. take box 1, leave boxes 2 and 4, take boxes 3 and 5, etc). The predictor announces that if he has predicted that you will take boxes in this pattern, he has put $0 in all opaque boxes, while otherwise he has put $1000 in each opaque box.

This case has the consequence that X-decision theorists will end up poor. Since X can be anything, a sufficiently powerful predictor can punish the user of any decision theory. Newcomb is a special case where the predictor punishes causal decision theorists.
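
To make the construction explicit, here is a schematic sketch (entirely my own, with hypothetical names): treat decision theory X as a black-box function from the agent’s credences to a box-taking pattern, and let a perfect predictor use that same function to decide how to fill the opaque boxes.

```python
def opaque_winnings(theory_X, credences, opaque_boxes):
    """For any black-box theory_X (credences -> frozenset of box indices taken),
    the predictor can guarantee the agent gets nothing from the opaque boxes:
    it puts $0 in every opaque box exactly when the predicted pattern will be
    used, and $1000 in each otherwise."""
    predicted = theory_X(credences)  # perfect prediction of the agent's pattern
    taken = theory_X(credences)      # the X-decision theorist then does the same thing
    fill = 0 if taken == predicted else 1000
    return sum(fill for box in taken if box in opaque_boxes)

# Whatever theory_X is, the prediction always matches, so opaque_winnings(...) == 0.
```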

So I’m inclined to say that there exists no decision theory which will make you rich in all cases. So we need to be pragmatic and choose the decision theory that works best given the cases we expect to face. But this just means using the meta-decision theory that tells you to do that.

  1. This isn’t fully rigorous, since Ks are lists of (counterfactual) propositions, so you can’t have a credence in a K. What I mean by $Cr(K_y) = 0.9$ is that Harry has credence 0.9 in every C in K, and (importantly) he also has credence 0.9 in their conjunction $C_1 \land C_2$. But I drop this formalism in the body of the post, which I feel already suffers from an excess of pedantry as it stands! 

  2. This is denied by Ahmed and Price (2012), but I ultimately don’t find their objection convincing. 

  3. See section 6, “Good and Bad Partitions”. Importantly, this account fails to identify any adequate partition in Newcomb, so the established conclusion that causal decision theorists tend to lose money in Newcomb still holds. 

June 28, 2017