Self-locating beliefs vs loss of discriminating power
sleeping beauty
Adam Elga opens his 2000 paper “Self-locating belief and the Sleeping Beauty problem” with:
In addition to being uncertain about what the world is like, one can also be uncertain about one’s own spatial or temporal location in the world. My aim is to pose a problem arising from the interaction between these two sorts of uncertainty, solve the problem, and draw two lessons from the solution.
His answer to the sleeping beauty problem is 1/3. But this violates conditionalisation and reflection. His diagnosis is that this has to do with the self-locating nature of the beliefs:
The answer is that you have gone from a situation in which you count your own temporal location as irrelevant to the truth of H, to one in which you count your own temporal location as relevant to the truth of H. […] [W]hen you are awakened on Monday, you count your current temporal location as relevant to the truth of H: your credence in H, conditional on its being Monday, is 1/2, but your credence in H, conditional on its being Tuesday, is 0. On Monday, your unconditional credence in H differs from 1/2 because it is a weighted average of these two conditional credences — that is, a weighted average of 1/2 and 0.
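Spelled out with Elga’s numbers (the three possible awakenings, heads-Monday, tails-Monday, and tails-Tuesday, each get credence 1/3, so “it is Monday” gets credence 2/3), the weighted average comes to:

$$
P(H) = P(H \mid \text{Mon})\,P(\text{Mon}) + P(H \mid \text{Tue})\,P(\text{Tue}) = \tfrac{1}{2}\cdot\tfrac{2}{3} + 0\cdot\tfrac{1}{3} = \tfrac{1}{3}.
$$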
But Arntzenius (2003) shows that the problem has nothing to do with the self-locating nature of the beliefs and everything to do with the loss of discriminating power of experiences.
Strict conditionalization of one’s degrees of belief upon proposition X can be pictured in the following manner. One’s degrees of belief are a function on the set of possibilities that one entertains. Since this function satisfies the axioms of probability theory it is normalized: it integrates (over all possibilities) to one. Conditionalizing such a function on proposition X then amounts to the following: the function is set to zero over those possibilities that are inconsistent with X, while the remaining nonzero part of the function is boosted (by the same factor) everywhere so that it integrates to one once again. Thus, without being too rigorous about it, it is clear that conditionalization can only serve to “narrow down” one’s degree of belief distribution (one really learns by conditionalization). In particular a degree of belief distribution that becomes more “spread out” as time passes cannot be developing by conditionalization, and a degree of belief distribution that exactly retains its shape, but is shifted as a whole over the space of possibilities, cannot be developing by conditionalization.
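As a minimal sketch of this picture (illustrative code, not anything from the paper), strict conditionalization over a discrete set of possibilities looks like this:

```python
def conditionalize(credences, consistent_with_X):
    """Strict conditionalization on X: zero out the possibilities inconsistent
    with X, then boost the rest by a common factor so they sum to one again."""
    updated = {w: (p if consistent_with_X(w) else 0.0) for w, p in credences.items()}
    total = sum(updated.values())
    if total == 0:
        raise ValueError("X had credence zero; cannot conditionalize on it.")
    return {w: p / total for w, p in updated.items()}

# Four equally likely possibilities; then we learn the true one is in {"a", "b"}.
prior = {"a": 0.25, "b": 0.25, "c": 0.25, "d": 0.25}
posterior = conditionalize(prior, lambda w: w in {"a", "b"})
# posterior == {"a": 0.5, "b": 0.5, "c": 0.0, "d": 0.0}
```

The point to notice is that the set of possibilities carrying nonzero credence can only shrink under this operation, which is why a distribution that spreads out, or that keeps its shape but slides across the possibilities, cannot have evolved by conditionalization.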
So we need to distinguish problems with spreading from problems with shifting.
shifting
Self-locating beliefs undergo shifting in a perfectly straightforward manner which has nothing to do with sleeping beauty type cases:
suppose that one is constantly looking at a clock one knows to be perfect. […] At any given moment one’s degrees of belief […] will be entirely concentrated on one temporal location, namely, the one that corresponds to the clock reading that one is then seeing. And that of course means that the location where one’s degree of belief distribution is concentrated is constantly moving.
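A toy illustration (my own, not Arntzenius’s): model the clock-watcher’s self-locating credences as a point mass on the currently displayed time. The distribution keeps its shape and simply slides along, which zeroing-out-and-renormalising can never produce.

```python
def clock_watcher_credences(clock_reading, times):
    """All self-locating credence sits on the time the perfect clock now shows."""
    return {t: (1.0 if t == clock_reading else 0.0) for t in times}

times = ["12:00", "12:01", "12:02"]
for now in times:
    print(now, clock_watcher_credences(now, times))
# 12:00 {'12:00': 1.0, '12:01': 0.0, '12:02': 0.0}
# 12:01 {'12:00': 0.0, '12:01': 1.0, '12:02': 0.0}
# 12:02 {'12:00': 0.0, '12:01': 0.0, '12:02': 1.0}
```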
spreading
Beliefs can undergo spreading when the situation is such that there is a loss of discriminating power of experiences over time. In Shangri-La1,
there are two distinct possible experiential paths that end up in the same experiential state. That is to say, the traveler’s experiences earlier on determine whether possibility A is the case (Path by the Mountain), or whether possibility B is the case (Path by the Ocean). But because of the memory replacement that occurs if possibility B is the case, those different experiential paths merge into the same experience, so that that experience is not sufficient to tell which path was taken. Our traveler therefore has an unfortunate loss of information, due to the loss of the discriminating power of his experience.
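Put in terms of total evidence (a schematic recap; the full story is in the footnote): while you travel by the Mountains your experiences rule out the Sea path, but once you arrive they no longer do, since you would have had the very same apparent memories either way:

$$
P(\text{heads} \mid E_{\text{travelling}}) = 1,
\qquad
P(\text{heads} \mid E_{\text{arrived}}) = \tfrac{1}{2}.
$$

So the credence distribution, having been concentrated on one path, spreads back out over both.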
The same thing is happening in sleeping beauty, contra Elga:
In the case of Sleeping Beauty, the possibility of memory erasure ensures that the self-locating degrees of belief of Sleeping Beauty, even on Monday when she has suffered no memory erasure, become spread out over two days.
Elga just happened to choose an example in which self-locating beliefs are “counted as relevant to the truth of H”, and that coincidence is what caused the confusion.
implication for bayesianism
The lesson from Arntzenius (2003) is that conditionalisation, understood as erasing and re-normalising, is not a necessary condition of rationality.
It’s a mistake to identify bayesian rationality with conditionalisation. The key maxim, the one implied by Bayes’ theorem, is: ‘At each time, apportion your credences to your evidence’.
We can think of this as having an ‘original’ or ‘ur-prior’ credence distribution. At each time, you should update that ur-prior based on your total evidence E. E can come to contain less information (you “lose evidence”) in cases of forgetting or loss of discriminating power. When you lose evidence, your credence distribution undergoes spreading.
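A minimal sketch of that maxim (illustrative names only): hold the ur-prior fixed and, at each time, condition it on whatever your total evidence then is. When the evidence gets coarser, the resulting credences spread back out.

```python
def credence_given_total_evidence(ur_prior, total_evidence):
    """Condition a fixed ur-prior on one's current *total* evidence.
    `total_evidence` is the set of possibilities compatible with everything
    one currently has to go on."""
    kept = {w: p for w, p in ur_prior.items() if w in total_evidence}
    normaliser = sum(kept.values())
    return {w: p / normaliser for w, p in kept.items()}

# Shangri-La as a toy example: two possibilities with ur-prior 1/2 each.
ur_prior = {"mountains": 0.5, "sea": 0.5}

# While travelling by the Mountains, your experiences rule out the Sea path:
credence_given_total_evidence(ur_prior, {"mountains"})          # {'mountains': 1.0}

# After arrival your apparent memories fit both paths, so the evidence is
# coarser and the credences spread back out:
credence_given_total_evidence(ur_prior, {"mountains", "sea"})   # {'mountains': 0.5, 'sea': 0.5}
```

Each later credence is computed afresh from the ur-prior, rather than by conditionalizing the previous credences, so nothing prevents the distribution from spreading when evidence is lost.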
constraints on ur-priors?
What norms constrain the ur-priors of rational agents? One possibility is the following. Think of possible worlds. Within each possible world, there are many experience-moments: one for each observer location and each time. When one is uncertain about one’s own spatial or temporal location, one is uncertain about which experience-moment one finds oneself in within a possible world.
So one possible set of constraints is:
- In accordance with the principal principle, your credence in each possible world should be equal to the objective chance of that world.
- In accordance with a principle of indifference, you should apportion your credence in a world equally among all its possible experience-moments (see the sketch below).
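As a sketch of how these constraints might be applied to Sleeping Beauty (my illustration of the proposal, with an assumed carving of the experience-moments): give each world credence equal to its chance, split that credence evenly over the world’s two day-moments, and then update the ur-prior on Beauty’s total evidence that she is awake in the experiment.

```python
# Ur-prior over (world, experience-moment) pairs, built from the two constraints.
chances = {"heads": 0.5, "tails": 0.5}
moments = {
    "heads": ["Mon-awake", "Tue-asleep"],   # awakened only on Monday if heads
    "tails": ["Mon-awake", "Tue-awake"],    # awakened on both days if tails
}
ur_prior = {
    (world, m): chance / len(moments[world])
    for world, chance in chances.items()
    for m in moments[world]
}

# Beauty's total evidence on waking: she is one of the awake experience-moments.
awake = {wm: p for wm, p in ur_prior.items() if wm[1].endswith("-awake")}
normaliser = sum(awake.values())
credences = {wm: p / normaliser for wm, p in awake.items()}
# Each of ('heads', 'Mon-awake'), ('tails', 'Mon-awake'), ('tails', 'Tue-awake')
# ends up with credence 1/3, so on this carving the credence in heads is 1/3.
```

On this particular carving the constraints recover Elga’s 1/3, but that depends on counting a Tuesday moment in the heads world; other ways of individuating the moments would give different numbers.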
“Every now and then, the guardians to Shangri La will allow a mere mortal to enter that hallowed ground. You have been chosen because you are a fan of the Los Angeles Clippers. But there is an ancient law about entry into Shangri La: you are only allowed to enter, if, once you have entered, you no longer know by what path you entered. Together with the guardians, you have devised a plan that satisfies this law. There are two paths to Shangri La, the Path by the Mountains, and the Path by the Sea. A fair coin will be tossed by the guardians to determine which path you will take: if heads you go by the Mountains, if tails you go by the Sea. If you go by the Mountains, nothing strange will happen: while traveling you will see the glorious Mountains, and even after you enter Shangri La, you will forever retain your memories of that Magnificent Journey. If you go by the Sea, you will revel in the Beauty of the Misty Ocean. But, just as you enter Shangri La, your memory of this Beauteous Journey will be erased and be replaced by a memory of the Journey by the Mountains. Suppose that in fact you travel by the Mountains. How will your degrees of belief develop? Before you set out your degree of belief in heads will be 1/2. Then, as you travel along the Mountains and you gaze upon them, your degree of belief in heads will be one. But then, once you have arrived, you will revert to having degree of belief 1/2 in heads. For you will know that you would have had the memories that you have either way, and hence you know that the only relevant information that you have is that the coin was fair. This seems a bizarre development of degrees of belief. For as you are traveling along the Mountains, you know that your degree of belief in heads is going to go down from one to 1/2. You do not have the least inclination to trust those future degrees of belief. Those future degrees of belief will not arise because you will acquire any evidence, at least not in any straightforward sense of “acquiring evidence.” Nonetheless, you think you will behave in a fully rational manner when you acquire those future degrees of belief. Moreover, you know that the development of your memories will be completely normal. It is only because something strange would have happened to your memories had the coin landed tails that you are compelled to change your degree of belief to 1/2 when that counterfactual possibility would have occurred.” ↩