On a larger scale than shopping for potato chips, Xan posted an interesting discussion of the preference-conflict problem that arises when a person has multiple selves:
But right now I just want to make a simple remark. As soon as there is any disagreement whatsoever between different instances of "your" preferences, what's to say which is the "true" or "right" set of preferences? Six years in advance you want to abstain from the beer and chips. In six years, you don't. What does it mean to say the advanced preferences are the "real" ones? In some sense there are simply many agents with different preferences in conflict with each other, doing what they can to get their way. Now perhaps most agents agree that You-2017 should not go for the chips, while You-2017 disagrees, in which case are we really making a comment about social optimality when we say You-2017 is in "error"? And if social optimality is on the table, by all means, please tell me: what weights are we using? How is it related to the discounting that's already taking place? It is so very nice if preferences start out consistent, because then everyone agrees on everything...but as soon as there's disagreement, there's a big discussion to be had. (More to say, another time.)

Back at the potato chip aisle, CurrentMe doesn't know what mood FutureMe will be in when reaching for the bag of chips. A further complication to CurrentMe's shopping decision is that there are multiple FutureMes to appease: there's the FutureMe who delights in eating the chips, and there's the FutureMe who has to go to the gym to work off the calories the first FutureMe ingested.
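One standard way to formalize the disagreement Xan describes is quasi-hyperbolic ("beta-delta") discounting, where each dated self applies an extra penalty to everything that isn't immediate. Here's a minimal sketch; the payoffs and parameters are invented purely for illustration, not taken from Xan's post:

```python
# Time-inconsistent preferences under quasi-hyperbolic (beta-delta) discounting.
# All numbers below are made up for illustration.

BETA = 0.5    # present bias: extra discount on anything that isn't "now"
DELTA = 0.95  # standard per-year discount factor

def discounted_utility(utils_by_year):
    """Value of a utility stream {years_from_now: utility} as seen from now."""
    total = 0.0
    for t, u in utils_by_year.items():
        weight = 1.0 if t == 0 else BETA * DELTA ** t
        total += weight * u
    return total

# Eating the chips gives +10 when eaten, then -15 (gym, regret) a year later.
# Six years in advance, both payoffs sit in the future, so DELTA dominates
# and abstaining looks better:
plan_ahead = discounted_utility({6: 10, 7: -15})     # about -1.56: abstain

# At the moment of choice, the +10 is "now" and escapes the BETA penalty,
# while the -15 is still a year off:
in_the_moment = discounted_utility({0: 10, 1: -15})  # about +2.88: eat the chips

print(plan_ahead, in_the_moment)
```

The two selves disagree not because either one made an arithmetic mistake, but because the beta penalty lands on different periods depending on when you ask, which is exactly why "which preferences are the real ones?" has no free answer.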
There's an extensive form game in there somewhere, but for now, CurrentMe just bought both bags of chips. I think I'll leave it to my FutureMes to duke it out.
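Since I brought it up: here's roughly what that extensive form game might look like, with completely made-up payoffs, solved by backward induction. CurrentMe moves first, the chip-loving FutureMe moves second, and gym-bound FutureMe just absorbs the bill:

```python
# A toy extensive-form chip standoff, solved by backward induction.
# Payoffs are (CurrentMe, EaterMe, GymMe) and entirely invented.

PAYOFFS = {
    ("skip",):         (0, 0, 0),    # no chips in the house
    ("buy", "resist"): (-1, -2, 0),  # wasted money, frustrated EaterMe
    ("buy", "eat"):    (-1, 5, -4),  # EaterMe delights, GymMe pays
}

def eater_move():
    # At stage 2, EaterMe compares only its own payoff (index 1).
    return max(["eat", "resist"], key=lambda a: PAYOFFS[("buy", a)][1])

def currentme_move():
    # CurrentMe anticipates EaterMe's best response, then compares its own payoff.
    anticipated = PAYOFFS[("buy", eater_move())][0]
    return "buy" if anticipated > PAYOFFS[("skip",)][0] else "skip"

print(eater_move())      # "eat": EaterMe always goes for the chips
print(currentme_move())  # "skip": anticipating that, CurrentMe shouldn't buy
```

With these invented numbers, backward induction says CurrentMe should never have bought the chips in the first place. No doubt the FutureMes will bring that up while they're duking it out.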