# On the (pseudo?)-paradox of “fair” games.

### Fair Games and 50% Win Chances

I’ll take it as an assumption in the rest of this article that a fair game is one where each player has a 50% chance of winning. We sometimes call such a situation a “good match,” or say the game will be good, if we believe such a state of affairs prevails. We tend to view the opposite condition negatively: one player has a huge advantage over the other, so the probability of that player losing is very low (and the probability of the other player losing is correspondingly high).

These considerations aren’t limited to two-player competitive games. If we are playing a single-player game, digital or otherwise interactive, we call that game “fair” when we have about a 50% chance of winning. We would call a game where our chance of winning is ~1% unfair or badly designed, and one where our chance of winning is ~99% boring or badly designed.

### A Pseudo-contradiction

At first glance this seems to imply a contradictory attitude, one illustrated by recalling that we also call a coin flip “fair” when there is a 50% chance of the coin landing on either face. If the purpose of a game is to determine which player is the better player, how can it be that we also want the outcome of the game to be as random as possible (such that, in a good match, each player has a 50% chance of winning)? It would appear that good games have random outcomes, and that seems to contradict their apparent purpose of measuring how well a player plays.

(NB: The account is a little harder to render in the case of single-player interactive systems. Still, it seems paradoxical that a player would engage with a system with the intent of winning when the outcome could equivalently be determined by the toss of a coin.)

### Resolution

I don’t think this is a genuine paradox, of course: when we say a game is fair, we are not saying that the outcome is random, but that it depends, sensitively, on which player makes the better sequence of moves in response to the other. Why sensitively? When two players are closely matched, so that the win probability for either is 50%, the outcome of the game should depend very sensitively on how well each player actually plays. In particular, close matches come down to one or two critical mistakes or strokes of brilliance that tip the scales in one direction.

(This is particularly true because of another property of games (approximate reversibility) which I believe games must also have, but which I don’t discuss here.)

So it isn’t really surprising that we can resolve this merely apparent contradiction about games. But the resolution points us towards another important argument:

### Implications about Randomness

Because the outcome of a good game should depend sensitively on the moves of the player, the randomness present in a good game should be minimal or absent entirely. Why? Because if the outcome of a game depends sensitively on the moves the player makes, then it must also depend sensitively on random influences on the game state. For the outcome to depend sensitively on a move implies that each move a player makes is carefully tuned to the game state, which the player has correctly appreciated in order to choose the right move. But if the game state changes randomly, then a good move can be turned into a bad one by a random change in the game state.

(It is possible to imagine random changes to the game state which don’t change the quality of moves. But if this is the case, then these changes to the game state are _extraneous_ to playing the game and may as well be removed).
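The sensitivity argument can be illustrated with a toy Monte Carlo model (a hypothetical sketch of my own, not drawn from any particular game): each player’s performance is their skill plus a random influence on the game, and the stronger player wins when their total is higher. As the noise grows, the better player’s win probability collapses toward 50%, i.e. the outcome stops measuring the quality of play.

```python
import random

def win_prob(skill_gap, noise, trials=100_000, rng=random):
    """Toy model: player A plays `skill_gap` better than player B, but
    each player's effective performance is perturbed by a uniform random
    influence of magnitude `noise`. A wins when skill plus luck exceeds
    B's luck. Returns A's estimated win probability."""
    wins = 0
    for _ in range(trials):
        a = skill_gap + rng.uniform(-noise, noise)
        b = rng.uniform(-noise, noise)
        if a > b:
            wins += 1
    return wins / trials

# With little noise the better player nearly always wins; with heavy
# noise the same skill gap yields something close to a coin flip.
print(win_prob(1.0, 0.1))   # near 1.0
print(win_prob(1.0, 10.0))  # near 0.5
```

The point of the sketch is only the limiting behaviour: the more the game state is randomly perturbed, the less the win condition depends on who played better.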

### Conclusion

To restate the argument:

1. We believe games should be fair, which is to say that a given player should have a 50% chance of winning.
2. This is because we want games to be sensitive tests of the quality of play of the given player, where the outcome depends sensitively on moves. We don’t want the game itself to actually be random, in the sense that the outcome is extraneous to the game itself.
3. Random elements (which are necessarily extraneous to the game in their origin) reduce the sensitivity of the win condition to the specific moves made by a player.
4. Hence, good games should have minimal random elements.

This argument puts game designers in a difficult position. Designers of multiplayer games must make sure that the game’s rules don’t advantage particular players, or must add an appropriate handicap if they do. This turns out to be difficult. In chess, for instance, white appears to have a slightly better-than-50% win chance, although the precise probability is unknown. For a new game without a long history of play, it will typically be very hard to determine whether such a bias exists and what size it might be.
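Whether such a bias exists can at least be estimated from playtest records. A minimal sketch, assuming nothing beyond a count of wins for the first player: a normal-approximation confidence interval on the observed win rate tells us whether 50% is still plausible.

```python
import math

def win_rate_interval(wins, games, z=1.96):
    """95% normal-approximation confidence interval for a win rate."""
    p = wins / games
    half = z * math.sqrt(p * (1 - p) / games)
    return p - half, p + half

# Hypothetical playtest record: the first player won 540 of 1000 games.
lo, hi = win_rate_interval(540, 1000)
print(f"{lo:.3f} to {hi:.3f}")  # interval excludes 0.5, suggesting a real bias
```

With only a few dozen games the interval would straddle 0.5 for the same observed rate, which is exactly why small playtest histories make bias hard to detect.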

With the rise of computers and single-player strategy games, a different set of design concerns manifests. The temptation in single-player game design is to use random elements to provide variety for a gameplay system which may lack the strategic depth furnished by the presence of a second rational player. It is hard to imagine a deterministic single-player game, with the same initial conditions each play, that could stand up to repeated play.

I think the way forward here is to randomize the initial conditions of any such game, subject to the constraint that a given initial condition preserves the 50% win rate (established perhaps by artificial intelligence play, or some other way of characterizing the win chance), and then to make play from that point forward completely deterministic.
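One way that scheme could be sketched, assuming some win-rate oracle exists (the `estimate_win_rate` callback below stands in for AI self-play from a candidate starting position; its name and the tolerance are my own hypothetical choices): rejection-sample seeds until enough land near a 50% estimated win rate, and derive every subsequent in-game event deterministically from the accepted seed.

```python
import random

def fair_seeds(estimate_win_rate, n_seeds=10, tolerance=0.05, rng=random):
    """Keep only initial conditions whose estimated win rate is within
    `tolerance` of 50%. Play from an accepted seed would then be fully
    deterministic: the seed fixes the starting position, and no further
    randomness enters the game."""
    accepted = []
    while len(accepted) < n_seeds:
        seed = rng.getrandbits(32)  # candidate initial condition
        if abs(estimate_win_rate(seed) - 0.5) <= tolerance:
            accepted.append(seed)
    return accepted
```

The expensive part is the oracle itself, so in practice one would cache its estimates and tune `tolerance` against how much variation in difficulty players will tolerate.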