In this post, I examine an odd consequence of "playing chicken with the universe", as used in proof-based UDT. Let's say that our agent uses PA, and that it has a provability oracle, so that if it doesn't find a proof, there really isn't one. In this case, one way of looking at UDT is to say that it treats the models of PA as impossible possible worlds: UDT thinks that taking action $a$ leads to utility $u$ iff the universe program $U()$ returns $u$ in all models in which the agent program $A()$ returns $a$. The chicken step ensures that for every action $a$, there is at least one model in which $A()$ returns $a$. But how? Well, even though in the "real world" $A() \neq a$, the sentence $A() \neq a$ isn't provable---that is, $\mathrm{PA} \nvdash A() \neq a$---but there are other models $M$ such that $M \vDash \Box(A() \neq a)$, and in these models, the chicken step can make $A()$ output $a$.
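To make this concrete, here is a minimal sketch of proof-based UDT with an explicit chicken step, in Python. The oracle `provable`, the sentence strings, and the parameter names are illustrative placeholders of mine rather than notation from this or earlier posts; `provable` stands for the provability oracle assumed above.

```python
def udt_action(actions, utilities_best_first, provable, default_action):
    """Sketch of proof-based UDT with an explicit chicken step.

    `provable(sentence)` stands in for the PA-provability oracle assumed in
    the text; the sentence strings below are informal placeholders, not a
    real encoding of arithmetic.
    """
    # Chicken step: if PA proves that we don't take action a, take a anyway.
    # This ensures that for every action there is at least one model of PA
    # in which the agent takes that action (assuming PA is consistent).
    for a in actions:
        if provable(f"A() != {a}"):
            return a
    # Main step: find the best utility u such that "A() = a implies U() = u"
    # is provable for some action a, and take that action.
    for u in utilities_best_first:
        for a in actions:
            if provable(f"A() = {a} -> U() = {u}"):
                return a
    # No proof found for any pair: fall back on a default action.
    return default_action
```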

In general, the only "impossible possible worlds" in which $A() = a$ are models according to which it is provable that $A() \neq a$. In this post, I show that this odd way of constructing the counterfactual "what would happen if I did $a$" can cause problems for modal UDT and the corresponding notion of third-party counterfactuals.


To simplify things, I'll consider modal UDT in this post. (The fact that our definition of modal UDT doesn't have an explicit chicken-playing step will make some later arguments a little subtler than they otherwise would have been, but it's not really a problem since it still does something very similar implicitly.)

I'll consider a version of the 5-and-10 problem, which has two actions (1 and 2) and three outcomes (1, 2 and 3, ordered from best to worst by the convention I've adopted in my posts on modal UDT). If the agent takes action 1, it always gets outcome 2. If the agent takes action 2, it gets the optimal outcome 1, unless it's provable (in PA) that it doesn't take action 2, in which case it gets the worst outcome, 3.
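In other words, writing $\vdash$ for provability in PA (this is just a restatement of the previous paragraph):

$$\text{action }1 \;\longmapsto\; \text{outcome }2; \qquad \text{action }2 \;\longmapsto\; \begin{cases} \text{outcome }1 & \text{if } \nvdash \text{"the agent doesn't take action 2"}, \\ \text{outcome }3 & \text{if } \vdash \text{"the agent doesn't take action 2"}. \end{cases}$$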

Of course, in the real world, if it's provable that the agent doesn't take a particular action, then it really doesn't take that action. Hence, if the agent takes action 2 in the real world, it will receive the optimal outcome 1, so intuitively, it's best to take action 2.

But modal UDT, as we'll see, takes action 1 instead. This is self-consistent, because in the only "impossible possible worlds" in which it takes action 2, it does so because of (implicitly) playing chicken; that is, it only takes action 2 because, in these worlds, it's "provable" that it doesn't take action 2, implying that it receives the worst possible outcome, 3, in all of these worlds.


We model the above set-up as a modal decision problem, that is, as three formulas $P_1$, $P_2$ and $P_3$, such that $P_i$ is true if the agent obtains outcome $i$. Here, $a_1$ and $a_2$ are propositional variables; $a_i$ is interpreted as meaning that the agent takes action $i$. Thus, we can define $P_1$, $P_2$ and $P_3$ as follows:

$$P_1 \;:\leftrightarrow\; a_2 \wedge \neg\Box\neg a_2, \qquad P_2 \;:\leftrightarrow\; a_1, \qquad P_3 \;:\leftrightarrow\; a_2 \wedge \Box\neg a_2.$$

Here's a different possible reading. You may know the diamond operator, $\Diamond\varphi$, which is an abbreviation for $\neg\Box\neg\varphi$. We have $\Box\varphi$ if $\varphi$ is provable, that is, iff it is true in all models of PA; we have $\Diamond\varphi$ if it isn't provable that not $\varphi$, i.e., if there is some model of PA in which $\varphi$ is true. (The relation between $\Box$ and $\Diamond$ is like the relation between $\forall$ and $\exists$---we have $\Box\varphi$ iff for all models $M$, $M \vDash \varphi$, and we have $\Diamond\varphi$, i.e. $\neg\Box\neg\varphi$, if it's not the case that for all models $M$, not $M \vDash \varphi$, which is equivalent to saying that there exists a model $M$ such that $M \vDash \varphi$.) So the box can be read as "true in all models", whereas the diamond can be read as "true in some models" or "possible". Using the diamond operator, we can rewrite the decision problem as

$$P_1 \;:\leftrightarrow\; a_2 \wedge \Diamond a_2, \qquad P_2 \;:\leftrightarrow\; a_1, \qquad P_3 \;:\leftrightarrow\; a_2 \wedge \neg\Diamond a_2.$$

So the agent gets the good outcome 1 if it chooses action 2 and it "was possible" for it to choose action 2, whereas it gets the bad outcome 3 if it chooses action 2 but it "wasn't possible" for it to choose this action.

For example, let's consider the agent which just takes action 2 without any further reasoning, which we represent by substituting the pair $(\bot, \top)$ for the pair of variables $(a_1, a_2)$. By substitution, $P_1$ is equivalent to $\top \wedge \neg\Box\neg\top$, which is equivalent to $\neg\Box\bot$, which states that PA is consistent ("no contradiction is provable"). Since PA is in fact consistent, it follows that "in the real world", the agent obtains the best possible outcome, 1.
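For completeness, here is the same substitution carried out for all three outcome formulas (a routine computation from the definitions above):

$$P_1(\bot,\top) \;\leftrightarrow\; \top \wedge \neg\Box\neg\top \;\leftrightarrow\; \neg\Box\bot, \qquad P_2(\bot,\top) \;\leftrightarrow\; \bot, \qquad P_3(\bot,\top) \;\leftrightarrow\; \top \wedge \Box\neg\top \;\leftrightarrow\; \Box\bot.$$

So this agent receives the bad outcome 3 only in the case that PA is inconsistent.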


It turns out that modal UDT, on the other hand, will choose action 1 and receive a suboptimal payoff. Moreover, an analogous problem trips up the variants of modal UDT that use a proof system stronger than PA, as described in An optimality result for modal UDT. This means, of course, that the assumptions of the optimality theorem aren't satisfied; this isn't too surprising, because the decision problem refers to the agent's action inside boxes (see Patrick's recent post for a much simpler example of non-optimality when the decision problem refers to the agent inside boxes).

Let's write $A_1$ and $A_2$, or $\vec A$ for short, for the two formulas describing the action of modal UDT on our decision problem, and let's write $U_1$, $U_2$ and $U_3$ for the formulas describing the outcome it obtains; that is, $U_i$ is equivalent to $P_i(A_1, A_2)$. (Technically, this means that $\vec A$ is the fixed point of modal UDT with the decision problem $(P_1, P_2, P_3)$; refer to the optimality post and the references therein for technical details.)

I claim that $\mathrm{GL} \vdash A_2 \to \Box\neg A_2$; that is: we can show that if UDT takes action 2, then it's provable that it doesn't take action 2. This, of course, implies that $\mathrm{GL} \vdash A_2 \to U_3$, since $U_3$ is equivalent to $A_2 \wedge \Box\neg A_2$; UDT concludes that taking action 2 will lead to the worst possible outcome, 3. We can also think of this in terms of possible worlds: In all models of PA (all "impossible possible worlds", the way that modal UDT evaluates counterfactuals) in which our agent takes action 2, i.e., in all models satisfying $A_2$, we also have $U_3$.

All of the above implies that "in the real world", UDT must take action 1, i.e., $A_1$ is true. The most direct way to conclude this, once I've proven my claim that $\mathrm{GL} \vdash A_2 \to \Box\neg A_2$, is a quick proof by contradiction: Suppose that the agent took action 2, i.e., that $A_2$ were true. Then by my claim, it would be provable that it doesn't take action 2, i.e., $\Box\neg A_2$ would be true. But since PA is sound and $A_2$ is true by assumption, $\neg A_2$ can't be provable; contradiction.


I'll now proceed to prove my claim. I guess you may not be surprised if I tell you I'll do so by Löb's theorem; that is, I'll actually prove that

$$\Box(A_2 \to \Box\neg A_2) \;\to\; (A_2 \to \Box\neg A_2)$$

is a theorem of GL; the claim $\mathrm{GL} \vdash A_2 \to \Box\neg A_2$ then follows by Löb's theorem.

This may look like an opaque collection of boxes, but actually there's some interpretation. First of all, $\Box(A_2 \to \Box\neg A_2)$ implies $\Box(A_2 \to U_3)$, by taking the argument I made in the previous section and carrying it out inside the boxes: If we have $A_2$ and $A_2 \to \Box\neg A_2$, then we have $A_2 \wedge \Box\neg A_2$, which is the same as $P_3(A_1, A_2)$, which is equivalent to $U_3$.
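Schematically, then, the Löb premise follows by chaining two GL-provable implications; the first is the one just described, and the rest of this section establishes the second:

$$\mathrm{GL} \vdash \Box(A_2 \to \Box\neg A_2) \to \Box(A_2 \to U_3), \qquad \mathrm{GL} \vdash \Box(A_2 \to U_3) \to (A_2 \to \Box\neg A_2).$$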

So it's enough if we can prove $\mathrm{GL} \vdash \Box(A_2 \to U_3) \to (A_2 \to \Box\neg A_2)$, or equivalently, $\mathrm{GL} \vdash \big(\Box(A_2 \to U_3) \wedge A_2\big) \to \Box\neg A_2$. Now consider the way modal UDT operates (I'll give a small code sketch of this loop right after the list):

  • For all outcomes $j$ from 1 to 3:
    • For all actions $i$ from 1 to 2:
      • If you can prove $A_i \to U_j$, return $i$.
  • If you're still here, return a default action.
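As promised, here is the loop above as a small Python sketch, specialized to our two actions and three outcomes. The oracle `provable_GL(i, j)`, standing for "GL proves $A_i \to U_j$", and the default action are placeholders of mine; an actual implementation would use a decision procedure for GL and the fixed-point construction rather than a black-box oracle. Note that, unlike the sketch near the top of the post, there is no explicit chicken step.

```python
def modal_udt_action(provable_GL, default_action=1):
    """Sketch of modal UDT's search for the 2-action, 3-outcome problem above.

    `provable_GL(i, j)` is a placeholder oracle answering whether GL proves
    A_i -> U_j for this problem's fixed point; it is not implemented here.
    """
    for j in (1, 2, 3):      # outcomes, ordered from best to worst
        for i in (1, 2):     # actions
            if provable_GL(i, j):
                return i     # first action with the best provable guarantee
    # For this problem the loop never falls through: GL proves A_1 -> U_2,
    # so the search stops at (j, i) = (2, 1) at the latest.
    return default_action
```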

Consider, furthermore, that we clearly have $\mathrm{GL} \vdash \Box(A_1 \to U_2)$ (by definition, we have $\mathrm{GL} \vdash U_2 \leftrightarrow A_1$, which yields $\mathrm{GL} \vdash A_1 \to U_2$ and hence $\mathrm{GL} \vdash \Box(A_1 \to U_2)$). Thus, the search will stop at the pair $(j,i) = (2,1)$ (outcome 2, action 1) at the latest, and by using this fact, GL can show that the only way $A_2$ could be true is if we have $\Box(A_1 \to U_1)$ or $\Box(A_2 \to U_1)$. But if either of these is true, and we also have $\Box(A_2 \to U_3)$, then it follows that $\Box\neg A_2$ (since $U_1$, $U_2$ and $U_3$ are provably mutually exclusive, i.e., only one of them can be true; and in the first case, $\Box(A_1 \to U_1)$ provably makes the search return action 1, which provably contradicts $A_2$).
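For the case $\Box(A_2 \to U_1)$, the step to $\Box\neg A_2$ is ordinary reasoning under the box: inside the box, $A_2$ would give both $U_1$ and $U_3$, which cannot both hold. In symbols,

$$\mathrm{GL} \;\vdash\; \Box(A_2 \to U_1) \wedge \Box(A_2 \to U_3) \wedge \Box\neg(U_1 \wedge U_3) \;\to\; \Box\neg A_2.$$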

This proves $\mathrm{GL} \vdash \big(\Box(A_2 \to U_3) \wedge A_2\big) \to \Box\neg A_2$, which concludes the proof of my claim.


But is this really so bad? Sure, modal UDT performs suboptimally on this decision problem, but we already know that every agent behaves badly on its "evil" decision problems, for example, and these problems are a lot simpler to reason about than what I've discussed here.

Moreover, the proof in this post only works for modal UDT using PA as its proof system, not for variants using proof systems stronger than PA. (You could define an analogous decision problem for these stronger proof systems, but without changing the decision problem, the proof only works for PA-based UDT.) Now, the optimality result for modal UDT deals with the "evil" decision problems by saying that for every provably extensional decision problem, there's some $n$ such that modal UDT using $\mathrm{PA}+m$, $m \ge n$, performs optimally on this decision problem. This result probably doesn't apply to the decision problem I've discussed in this post, because that problem is probably not provably extensional (though I haven't actually tried to prove this); but we might still say: why care if PA-based UDT fails on some specific decision problem? We already knew that you need to use stronger proof systems sometimes.

The real reason to consider this problem becomes clear when we consider the logical third-person counterfactuals corresponding to modal UDT.


First of all, note that the optimality picture looks quite different when using logical counterfactuals, which ask "what would have happened if this agent had chosen a different action", rather than the physical counterfactuals from the optimality result, which ask "what would happen if you replaced the agent by a different agent".

Consider the "evil" decision problems: Fix a two-action agent $\vec B$. The corresponding evil decision problem compares the action the agent facing it takes against the action taken by $\vec B$; if they're different, it rewards the agent, and if they're the same, it punishes it. Clearly, every agent gets punished on its own evil decision problem, even though there's a very simple agent which gets rewarded on the same problem.
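As a concrete rendering in the modal notation used above (the notation here is mine, not from the earlier posts): take outcome 1 to be the reward and outcome 2 the punishment, and let $B_1$, $B_2$ be closed formulas expressing which action $\vec B$ takes; then the evil decision problem for $\vec B$ can be written as

$$P^{\vec B}_1 \;:\leftrightarrow\; (a_1 \wedge B_2) \vee (a_2 \wedge B_1), \qquad P^{\vec B}_2 \;:\leftrightarrow\; (a_1 \wedge B_1) \vee (a_2 \wedge B_2).$$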

But that's judging the agent through physical counterfactuals. When we consider logical counterfactuals, we don't compare the agent against other agents; rather, we ask whether this agent could have done better by taking a different action. From that perspective, the evil problems don't seem so problematic: It's intuitively clear, and our proof-based third-person counterfactuals agree, that if an agent returned a different action, it would still be punished by its evil decision problem.


It's true that we can't expect PA-based UDT to be optimal, even from the perspective of logical third-party counterfactuals. After all, consider a decision problem that rewards action 1 iff PA is consistent, and rewards action 2 otherwise. In this case, modal UDT will not be able to prove anything of the form $A_i \to U_j$, and will end up taking its default action, which could end up being the bad action (here, action 2, since PA is in fact consistent).
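One way to write down such a problem in the same notation (again, this particular rendering is mine), with $Q_1$ the rewarded outcome and $Q_2$ the unrewarded one:

$$Q_1 \;:\leftrightarrow\; (a_1 \wedge \neg\Box\bot) \vee (a_2 \wedge \Box\bot), \qquad Q_2 \;:\leftrightarrow\; (a_1 \wedge \Box\bot) \vee (a_2 \wedge \neg\Box\bot).$$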

However, from the perspective of proof-based third-party counterfactuals, this is acceptable. In the language of my previous post, we have "ambiguous counterfactuals": we can't figure out what outcome each action leads to, and so we throw up our hands and make peace with the fact that our agent may not behave optimally in any sense. I showed in the previous post that modal UDT is "optimal" in a certain sense whenever a universe is "fully informative", in the sense that for every action $i$ there's a unique outcome $j$ for which it can prove $A_i \to U_j$, or at least "sufficiently informative", which is a slight weakening of this condition. We only have reason to expect modal UDT to behave well when it's in a sufficiently informative universe; the decision problem that rewards action 1 iff PA is consistent doesn't satisfy this condition (if we use PA as our proof system), and so we don't expect PA-based modal UDT to do well on it in the first place. The fact that the decision problem fails to be sufficiently informative alerts us to the fact that we shouldn't expect optimality.

But now consider the decision problem discussed in this post! This decision problem is fully informative---we obviously have $\mathrm{GL} \vdash A_1 \to U_2$; we have $\mathrm{GL} \nvdash A_1 \to U_1$ (and likewise $\mathrm{GL} \nvdash A_1 \to U_3$), because we've shown that the agent actually takes action 1 (i.e., $A_1$ is true while $U_1$ and $U_3$ are false), and PA is sound; we've shown $\mathrm{GL} \vdash A_2 \to U_3$; and we have $\mathrm{GL} \nvdash A_2 \to U_1$, because otherwise, $A_2 \to U_1$ would be provable and our agent would take action 2. By the proof in the previous post, this implies that modal UDT performs "optimally", according to the notion of optimality defined there---that is, it acts optimally if you believe that taking action 2 leads to the bad outcome 3, as examining provability in PA would lead you to believe. So unlike the problem discussed in the previous paragraph, in this case the third-party counterfactuals don't alert us to the fact that something is going wrong, and instead lead us to expect optimality---and yet we get arguably suboptimal behavior.

So the problem I want to point out in this post is that both modal UDT and the proof-based third-party counterfactuals defined in my previous post use a problematic notion of counterfactuals, related to the implicit chicken-playing step. In this case, it leads them to believe that taking action 2 leads to the bad outcome 3, and to believe that they know what they're talking about (we're in the fully informative case, where for every action we seem to know exactly what outcome it would lead to), even though intuitively, it seems like taking action 2 should be thought of as leading to the optimal outcome 1.
