# Anthropic and Decision-Theoretical Dutch Books

## Introduction #

Here’s a fun fact—some combinations of anthropic assumptions and decision theories expose you to Dutch books.
(1)My apologies to readers who aren’t familiar with the basics of the anthropics and decision theory debates; I won’t be explaining them here. For an introduction to causal and evidential decision theory, I recommend chapter nine of Martin Peterson’s textbook, and for a (dated) intro to anthropics, I recommend Nick Bostrom’s *Anthropic Bias*.
If you are a causal decision theorist who makes the self-sampling assumption or an evidential decision theorist who makes the self-indication assumption, you can be enticed into accepting a portfolio of bets that leads to a sure loss.

I find this fact surprising for two reasons. The first is that it wasn’t intuitively obvious to me that your anthropic assumptions should interact at all with your decision theory. I first learned about these two topics in totally separate contexts, and it was never pointed out to me that there could be a connection between them. Anthropics feels like it’s asking an epistemological question—*What should I believe in light of the fact that I exist?*—whereas decision theory is asking a pragmatic question—*How should I act in light of the fact that my decision-making algorithm is correlated with features of the environment?* Of course beliefs are for actions, and any epistemic principle worth arguing over has to change your behavior somehow, but it takes some imagination to think of a case where both your decision theory and your anthropics matter.

It’s also surprising *which* decision theories and anthropic assumptions lead to Dutch books. I don’t have comprehensive data
(2)The 2020 PhilPapers survey polled professional philosophers on the Newcomb problem and the Sleeping Beauty problem, but frustratingly, they don’t report the correlation between EDT and SIA. I can’t think of a single good reason why the survey shouldn’t report a correlation coefficient for *every* pair of questions. If anyone reading this happens to know David Chalmers, can you pressure him to fix this on the survey’s next iteration?
on the relative popularity of different combinations among people who think about these topics, but my anecdotal impression is that EDT and SIA are positively correlated. I’ve met only a couple of two-boxing thirders and one-boxing halfers, and in a DIY, `$N=10$` mini survey I recently ran on some s-risk researchers, 70% endorsed both one-boxing and thirding.

My guess is that this correlation (to the extent that it’s real) is explained by how deeply one cares about theoretical simplicity. People commonly reject SSA out of suspicion of the arbitrary, magical-seeming reference classes the assumption posits. As Joe Carlsmith wants to know, what even are these reference classes, and how on earth are we supposed to determine which observers they contain? Can there really be a deep metaphysical fact about whether you “could have been” an ant, or a posthuman, or a Boltzmann brain? If you care about simplicity as a theoretical virtue, you are quite likely to answer no and turn to SIA, which admittedly isn’t entirely innocent of arbitrary commitments either. SIA still requires you to believe there are facts about who counts as an observer, but it at least seems cleaner than non-minimal reference class SSA, less of a kludge. I think a similar reaction strikes people of a Humean bent when they first enter the EDT/CDT debate. EDT says you should decide what to do by conditioning on each of your possible actions and picking the one with the highest EV. Then CDT proposes that you instead rig up an elaborate theory of counterfactual conditionals and K-partitions, all to respect some hazy intuition about causality being important. How is this anything more than a pointless superstition?
(3)An all-time great philosophical burn from Ahmed, *Evidence, Decision, and Causality*.
Isn’t evidence just evidence, regardless of whether we can tell a special kind of story about how events are *causally* related? I suspect that the same kind of trust in Ockham’s Razor is often at work when evidentialists slice causality out of decision theory and when thirders slice reference classes out of anthropics.
(4)Another potential confounder is expected value fanaticism/scope-sensitivity. Even if you think EDT is probably wrong, you might wager that you should follow it anyway, because if it’s right, your actions matter more. You can make a similar wager argument for SIA, which implies an update toward larger worlds, thus raising the stakes of your actions.

It may come as a surprise, then, that the two most popular decision-theoretical/anthropic pairs are also the ones that fall prey to Dutch books. The rest of this article will look at these Dutch books in some detail and offer some thoughts on how seriously we should take them. (5)The arguments presented in the next two sections are adapted from Briggs’s “Putting a Value on Beauty.”

## A Dutch book against CDT halfers #

*Maybe the bookie looks like one of these fellows. (Jan Lievens’s* Card Players*, courtesy of the Leiden Collection)*

Suppose you’re a CDT agent who subscribes to the SSA, and the Sleeping Beauty experiment is run on you. A clever bookie offers you the following bet on Sunday, before you are put to sleep for the first time

`\[\text{Sunday}\quad \begin{cases}-\$ 13 &\text{on heads} \\ \$16 &\text{on tails.}\end{cases}\]`

Every time you awaken on a weekday, the bookie offers you this bet

`\[\text{Monday or Tuesday}\quad \begin{cases} \$11 &\text{on heads} \\ -\$ 9 &\text{on tails.}\end{cases}\]`

Notice that the bookie can execute on his strategy without using any information you don’t have. When you wake up, you don’t know whether it’s Monday or Tuesday, but the bookie doesn’t need to know either; he’ll offer you the same bet no matter which day it is. I emphasize this because a Dutch book argument doesn’t prove much if it relies on the bookie being better informed than his victim. (6)See Hitchcock’s “Beauty and the Bets” §6 for another argument against Dutch books with asymmetric information. “If the bookie can achieve his certain gain only by exploiting information that is unavailable to the agent, then the Dutch Book reflects an evaluation of the system of bets that is not the agent’s own.” It’s trivially true that a clairvoyant bookie can trick any non-clairvoyant gambler into accepting a Dutch book, no matter how scrupulously that gambler obeys the laws of probability. In our case, though, the bookie isn’t merely taking advantage of your ignorance. If he manages to sell you an unfavorable portfolio of bets, you have to blame it on a glitch in your reasoning, not on the sleeping drugs.

Assuming that you’re risk neutral and that your Sunday and weekday person-moments belong to separate reference classes, everyone agrees that you should take the bet on Sunday. You’ve been offered a bet on a fair coin at better-than-even odds, and there aren’t (yet) any observer selection effects or weird acausal correlations to worry about, so accepting is a no-brainer. The point where your decision theory starts to matter is when you wake up and are offered the weekday bet. EDT refuses because it realizes that if it accepts now and the toss comes up tails, it’s bound to make the same bet twice. CDT ignores this effect. A causal decision theorist treats its gambling decisions on all previous and subsequent days as fixed, just another part of the environment which it can’t control.

I feel some amount of intuitive sympathy, in this case, for CDT. First imagine that the toss came up tails, and it’s Monday. By design, there is no way for you to causally influence the decision that you’ll make tomorrow. The memory-wiping drug will erase any intentions you form to accept or decline tomorrow’s bet. The experimenters have carefully given you no means of leaving a message for yourself tomorrow. Now imagine that the toss came up tails, and it’s Tuesday. This time your decision on Monday can’t be changed because it’s *in the past*. “What’s done cannot be undone,” and the bet that you agreed to yesterday can’t depend on the choices you make today. The upshot is that in either case, it feels natural to treat your behavior on all other awakenings as fixed.

But although I’m somewhat sympathetic to CDT here, it clearly produces the wrong betting behavior. If the coin lands heads, you lose $13 on the Sunday gamble, and win back only $11 on the Monday gamble. If it lands tails instead, you win $16 on the Sunday gamble, but lose $18 total on your Monday and Tuesday gambles. The bookie is guaranteed a $2 profit at your expense no matter what happens with the coin toss.
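The bookie’s guaranteed profit is easy to verify mechanically. Here’s a minimal Python sketch (the payoffs are just the numbers from the two bets above; on heads you gamble once on a weekday, on tails twice):

```python
# Gambler's payoffs from the two bets, by coin outcome.
sunday = {"heads": -13, "tails": 16}
weekday = {"heads": 11, "tails": -9}

# Heads: one awakening (Monday). Tails: two (Monday and Tuesday).
awakenings = {"heads": 1, "tails": 2}

for outcome in ("heads", "tails"):
    total = sunday[outcome] + awakenings[outcome] * weekday[outcome]
    print(outcome, total)  # -2 in both cases: a sure $2 loss
```

Either branch sums to −$2, so the bookie profits no matter how the coin lands.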

Notice that *both* our anthropic assumption *and* our decision theory were essential to making this Dutch book work. If the bookie tries to run the same Dutch book against an *EDT* halfer, he fails because they refuse the weekday gamble. For them, the expected value of accepting is

`\[\text{P}(\text{heads}) \text{V}(\text{heads}) + \text{P}(\text{tails}) \left(2\text{V}(\text{tails})\right)\\[3pt] = \frac12 (\$11)+ \frac12 (-\$18) <\$0,\]`

so they recognize it as a bad bet. Similarly, a CDT *thirder* will refuse the weekday bet because they calculate the EV as

`\[\text{P}(\text{heads}) \text{V}(\text{heads}) + \text{P}(\text{tails}) \text{V}(\text{tails}) \\[3pt] = \frac13 (\$11)+ \frac23 (-\$9) <\$0.\]`
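All four verdicts on this weekday bet can be tabulated in a few lines. A minimal sketch, assuming risk neutrality; the `stakes` multiplier is my shorthand for EDT counting the tails payoff twice (the same bet is made on both awakenings) while CDT holds the other awakening’s bet fixed and counts it once:

```python
from fractions import Fraction

# Weekday bet from this Dutch book: gambler's payoff by coin outcome.
payoff = {"heads": 11, "tails": -9}

# Halfers assign P(tails) = 1/2; thirders assign P(tails) = 2/3.
credence = {"halfer": Fraction(1, 2), "thirder": Fraction(2, 3)}

# EDT double-counts the tails payoff; CDT counts it once.
stakes = {"EDT": 2, "CDT": 1}

for theory, k in stakes.items():
    for view, p_tails in credence.items():
        ev = (1 - p_tails) * payoff["heads"] + p_tails * k * payoff["tails"]
        print(theory, view, ev, "accepts" if ev > 0 else "declines")
```

Only the CDT halfer computes a positive EV, which is exactly why they alone walk into the book.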

## A Dutch book against EDT thirders #

Now suppose you’re an EDT agent who makes the SIA, and we run the same experiment again. This time the bookie offers you the following bet on Sunday
`\[\text{Sunday}\quad \begin{cases} \$ 17 &\text{on heads} \\ -\$14 &\text{on tails.}\end{cases}\]`

Then when you wake up, he offers you this bet
`\[\text{Monday or Tuesday}\quad \begin{cases} -\$19 &\text{on heads} \\ \$ 6 &\text{on tails.}\end{cases}\]`

As before, nobody disputes that you should accept the bet on Sunday, but only EDT thirders think the weekday bet is favorable. They’re willing to accept a 1/3 chance of betting wrong and losing $19 on heads in exchange for a 2/3 chance of betting right twice and winning $12 on tails. As far as they’re concerned, this is a positive EV bet.

The issue is that in applying both EDT and SIA, you are in some sense overcorrecting for the asymmetry between heads and tails worlds. EDT points out that in tails worlds, your gambles effectively have higher stakes because you will make them repeatedly. SIA points out that you are more likely to be in a tails world than in a heads world because tails worlds contain more of your person-moments. Both of these observations are correct, but something goes wrong when we combine them, as can be seen by calculating your *ex ante* expected winnings. If the coin lands heads, you win $17 on your Sunday bet, but lose $19 on your Monday bet. If it lands tails, you lose $14 on your Sunday bet, and only make back $12 in total on your Monday and Tuesday bets. Once again, the bookie has extracted a sure profit of $2 from you.

We can avoid this Dutch book if we revise either one of our decision theory or our anthropic assumption. An EDT *halfer* refuses the weekday gamble because they think the expected value of taking it is
`\[\text{P}(\text{heads}) \text{V}(\text{heads}) + \text{P}(\text{tails}) \left(2\text{V}(\text{tails})\right)\\[3pt] = \frac12 (-\$19)+ \frac12 (\$12) <\$0,\]`

and a *CDT* thirder passes because
`\[\text{P}(\text{heads}) \text{V}(\text{heads}) + \text{P}(\text{tails}) \text{V}(\text{tails}) \\[3pt] = \frac13 (-\$19)+ \frac23 (\$6) <\$0.\]`
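Running the same four-way comparison on the second weekday bet flips the result. Again a minimal sketch, with the `stakes` multiplier encoding EDT’s double-counting of the tails payoff versus CDT’s treating the other awakening’s bet as fixed:

```python
from fractions import Fraction

# Weekday bet from the second Dutch book: gambler's payoff by coin outcome.
payoff = {"heads": -19, "tails": 6}

credence = {"halfer": Fraction(1, 2), "thirder": Fraction(2, 3)}
stakes = {"EDT": 2, "CDT": 1}

for theory, k in stakes.items():
    for view, p_tails in credence.items():
        ev = (1 - p_tails) * payoff["heads"] + p_tails * k * payoff["tails"]
        print(theory, view, ev, "accepts" if ev > 0 else "declines")
```

This time only the EDT thirder computes a positive EV, so only they take the bait.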

As it turns out, these decision-theoretic/anthropic combinations don’t just avoid the *specific* Dutch books presented in this section and the last. One can prove that CDT agents who make the SIA and EDT agents who make the minimal reference class SSA are immune to *all* Dutch books.
(7)See Oesterheld and Conitzer’s paper on games of imperfect recall.

## Are the Dutch books decisive? #

Some combinations of anthropic assumptions and decision theories make you vulnerable to a Dutch book, but does it follow that these combinations are forbidden or irrational? I don’t think so.

The problem is not with Dutch book arguments in general, as I think synchronic Dutch books can be quite compelling. “Synchronic” here means that all the bets required to assure the gambler’s loss can be placed in a single sitting, without any need for a time delay. The classical Dutch books against bettors who violate the probability axioms are synchronic in this sense, but the Dutch books against agents who fail to conditionalize are not, and neither are the Dutch books against sleeping beauties outlined above.

Diachronic Dutch book arguments are open to at least two strong objections that don’t apply to the synchronic arguments. For one, agents that can see the sure loss coming and bind themselves to a policy *ex ante* avoid the Dutch book altogether. Suppose, for specificity, that you’re an EDT thirder, and you’ve just been offered the Sunday bet. You know that on all your subsequent awakenings, you’ll be inclined to take the weekday bets, and you foresee that the result will be a guaranteed loss. To avoid this outcome, you accept the Sunday bet but precommit to refusing all of the weekday bets, thus making an expected $1.50 profit off of the bookie.
(8)Of course, if the bookie knows you’re the kind of agent who reasons this way and can bind yourself, he won’t offer you the Sunday bet in the first place.
So just as the Newcomb problem only counts against CDT agents who lack the ability to self-modify before the prediction is made, the Dutch book arguments presented above only apply to irresolute agents who can’t adhere to their own plans.

Another objection is that the coherence requirements that the Dutch book arguments are trying to draw out simply don’t apply to your various person-moments. You on Sunday and you on a weekday are different agents, so it’s not a problem if your beliefs about the coin toss don’t harmonize. Yes, your person-moments’ decisions can affect each other’s welfare. The bet that you make on Monday can either increase or decrease your bankroll on Wednesday. But there’s no general requirement that your beliefs be coherent with those of all other agents whose welfares are tethered to yours.

Here’s an analogous situation. Imagine that you and your spouse hold all of your assets in common, but you have different beliefs about the bias of a coin. You believe that the coin is fair, but your spouse believes that it’s three times as likely to come up tails as it is to come up heads. A cunning bookie can exploit this pattern of beliefs by offering you this bet
`\[\text{You}\quad \begin{cases} \$ 2 &\text{on heads} \\ -\$2 &\text{on tails}\end{cases}\]`

and your spouse this bet
`\[\text{Spouse}\quad \begin{cases} -\$ 3 &\text{on heads} \\ \$1 &\text{on tails.}\end{cases}\]`

Each of you regards the bet you’re offered as fair, so you accept, guaranteeing that the bookie will win $1 from your joint bank account.
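A quick check confirms both halves of the setup: each spouse’s bet is exactly fair by that spouse’s own credences, and yet the household loses $1 on either outcome. A minimal sketch:

```python
from fractions import Fraction

# Your credence: fair coin. Spouse's: tails three times as likely as heads.
p_tails_you = Fraction(1, 2)
p_tails_spouse = Fraction(3, 4)

your_bet = {"heads": 2, "tails": -2}
spouse_bet = {"heads": -3, "tails": 1}

# Each bet has zero expected value by its taker's own lights...
ev_you = (1 - p_tails_you) * your_bet["heads"] + p_tails_you * your_bet["tails"]
ev_spouse = (1 - p_tails_spouse) * spouse_bet["heads"] + p_tails_spouse * spouse_bet["tails"]
print(ev_you, ev_spouse)  # 0 0

# ...but the couple's combined payoff is -$1 on either outcome.
for outcome in ("heads", "tails"):
    print(outcome, your_bet[outcome] + spouse_bet[outcome])
```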
(9)This example is borrowed from David Christensen’s “Clever Bookies” §IV.
Does this show that your combined epistemic state as a couple is somehow incoherent, or that one or both of you are rationally compelled to adjust your credences? I don’t think so. You and your spouse are separate agents, and although it may be prudent for you to coordinate your gambling behavior, there’s no *epistemic* requirement to do so. It would be very strange, after all, if the norms of rationality that apply to married people were different from those that apply to single people.

If collectives of agents are not epistemically required to avoid Dutch books, it’s hard to see why collectives of person-moments should be required to avoid them either.

*Thanks to Julian Schulz for inspiring me to write this article, to Sylvester Kollin for helpful insights, and to Caspar Oesterheld for very generous feedback. All errors are mine.*

*Decision theory and anthropics are tricky, so I hold all the opinions expressed above lightly. If you know of an argument I’ve overlooked, please tell me about it.*