# Philosophy of Strategy Game Design (an attempt)

I don’t get to do a lot of game development these days (now that I am a dad and have a full time job). But I still think about game design a fair bit in my spare moments. Arguably, The Death of the Corpse Wizard is a strategy game and I enjoy talking about strategy game design in particular with the Keith Burgun Games community. There are lots of ways of talking about this subject (and I might even believe that at a fundamental level, one can’t make a good game of any kind via a reductive strategy) but I, personally, find my thinking is influenced by two sources: philosophy and physics.

In particular, Bernard Suits’ book “The Grasshopper” has left a lasting impression on me, both as a kind of literary work and as an organized and systematic attempt to define what games are. I think this kind of philosophical approach can be useful for understanding even specific sorts of games, like strategy games, and I’d like to sketch an approach to the problem in that style here.

First, let me recapitulate some of Suits’ basic ideas. He defines a game as “The voluntary pursuit of a goal by less than efficient means.” This is a compact definition and thus requires some exposition. His frequent example is golf: the goal in golf is to put a ball in a hole. When we play golf we do not pursue this goal by the most efficient means available (say, walking over and dropping the ball in by hand). We intentionally pursue it by the less than efficient means of swinging a stick at the ball, as many times as necessary, until it lands in the hole. It seems obvious that golf only constitutes a game if we undertake it voluntarily. I have more to say on this point, but I think it’s reasonable to suggest that while we may go through the motions of a game with a gun to our heads, we can hardly be said to be “playing.”

This is not much remarked upon in The Grasshopper, but I think there is a reasonable implication in Suits’ definition: a game is an undertaking which is pursued for its own sake. This is plausible if we step out of the game and watch a person play: if a person voluntarily pursues a goal by less than efficient means, it must be because the less than efficient means of pursuit are themselves the object of the behavior. After Suits, I believe it is fair to provide the following description of leisure: any activity undertaken for its own end. Thus, games are naturally leisure. We undertake the pursuit of the goal for the sake of the pursuit rather than for some external purpose.

(This helps us understand the requirement that the undertaking be voluntary: if we were coerced by violence to play the game, we would be undertaking the activity as a means of avoiding violence, not for its own sake).

Can we understand how to design better games by considering this frame?

### Strategy and Strategy Games

By the above, we might suggest that when someone plays a strategy game, their goal is not to satisfy the win condition of the game. For instance, in Chess, the win condition is that the opponent’s King is checkmated. But a player who simply rearranges the pieces when their opponent isn’t looking isn’t playing Chess, though they are pursuing the goal of Chess. To want to play Chess is to wish to reach that goal by a highly restrictive set of less than efficient means. (Suits uses the term “lusory goal” to capture this ancillary character of the in-game goal.)

Can we put a finer point on the true goal, then? Yes – the purpose is to play. If we restrict ourselves to more specific sorts of games, we can give more specific answers.

When we play strategy games our goal is to strategize. When we design strategy games our goal is to furnish a context in which the player can strategize.

Thus, to understand our job as a game designer we need only understand what it is to strategize. Simple stuff first: to strategize is to construct a strategy. What is a strategy? I’ll provide my definition here, though it doesn’t differ much from the ordinary one:

A strategy is an efficient, robust plan.

A plan is an algorithm which takes you from some starting state to a final, desired state. A recipe for chocolate chip cookies is a plan, but it isn’t a strategy, because it is not robust: if you find you don’t have 2 cups of flour on hand, the recipe has nothing to say about the situation. Your lack of flour is a condition for which the plan has no contingency. Robustness is a probabilistic notion: a robust plan succeeds at reaching the goal frequently when you apply it over and over again in varying situations.
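This probabilistic reading of robustness can be sketched in a few lines of code. Everything below is invented for illustration (the pantry, the two “plans,” and the success test are not from the text): robustness is just the empirical success rate of a plan over many sampled situations.

```python
import random

def robustness(plan, make_situation, goal_reached, trials=1000):
    """Estimate how often a plan reaches the goal across varying situations."""
    successes = 0
    for _ in range(trials):
        situation = make_situation()   # sample a varying starting state
        outcome = plan(situation)      # apply the plan (an algorithm over states)
        if goal_reached(outcome):
            successes += 1
    return successes / trials

# A brittle "recipe" plan: it only works when the expected flour is on hand.
brittle = lambda stock: "cookies" if stock["flour"] >= 2 else "nothing"

# A more robust plan: it has a contingency (substitute oats) when flour runs short.
robust = lambda stock: "cookies" if stock["flour"] >= 2 or stock["oats"] >= 2 else "nothing"

pantry = lambda: {"flour": random.randint(0, 3), "oats": random.randint(0, 3)}
done = lambda outcome: outcome == "cookies"

print(robustness(brittle, pantry, done))  # succeeds roughly half the time
print(robustness(robust, pantry, done))   # succeeds more often
```

The two plans reach the same goal; they differ only in how many of the varying situations they survive, which is exactly the sense of robustness above.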

An exhaustive search of the state space of the traveling salesman problem is a plan as well. But it isn’t a strategy (or it is a very, very poor one) because it isn’t efficient. Efficiency relates to the fact that there are limits on our ability to make decisions (most of the time this limit is concretely understood in terms of time, but it might also be something like ability – we simply can’t exhaustively search the state space of Go, for example). Most generally, humans have a limited ability to exert themselves towards any end. Thus, we seek to marshal our efforts by virtue of efficient plans. This is particularly true in competitive games – if a strategy is strenuous to apply, chances are you will eventually fail to do it, at least partially.
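The traveling salesman contrast can be made concrete. The sketch below (all names are my own, invented for illustration) pits an exhaustive search, which inspects all (n - 1)! closed tours, against a cheap nearest-neighbor heuristic that makes only n greedy choices:

```python
import itertools
import math
import random

def tour_length(points, order):
    """Total length of a closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exhaustive(points):
    """A plan, but a very poor strategy: examine all (n - 1)! closed tours."""
    n = len(points)
    rest = min(itertools.permutations(range(1, n)),
               key=lambda perm: tour_length(points, (0,) + perm))
    return (0,) + rest

def nearest_neighbor(points):
    """A far more efficient (if imperfect) strategy: greedily visit the closest city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(8)]
# Exhaustive search inspects 7! = 5040 tours; the greedy plan makes 7 choices.
```

Both are plans in the sense above; only the second spends its (limited) decision-making budget sensibly, at the cost of occasionally missing the best tour.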

### Generating Insight

We can already get some juice out of this definition, as strategy game designers. Our games must have one or more goals (so that the player can strategize towards them). But that isn’t enough – the game must have one or more sources of variability (I’m purposefully avoiding the word randomness here). In a system without uncertainty of some kind, a plan cannot be robust because there are not varying situations over which we can test it. We might also say the robustness of plans in such a system is trivial or degenerate – all successful plans have an equal probability of succeeding: 1. Without variation in play, the player can only ever improve the efficiency of a given plan and in those circumstances they are engaged in a different activity: algorithm design. This may be leisure in some circumstances, but it isn’t strategy generation.

What about the notion of “efficiency”? First, let’s eliminate a possible source of confusion. By efficient, I don’t mean that the plan itself arrives at the goal in some limited number of turns or some other unit. Such lusory efficiency is probably a desirable property of a strategy, but I mean something different by “efficiency” here. What I mean is that the process by which the current game state is transformed into the next action is efficient. That is, it makes good use of the player’s limited cognitive resources. This corresponds to the intuition that a good strategy doesn’t have a lot of fiddly bits, that it abstracts the true degrees of freedom in the game into effective degrees of freedom.

A trivial example: suppose we fire a virtual cannon and we want to know where the ball will land. The worst possible strategy is to memorize the table relating angle and powder volume to the final displacement. A better strategy is to understand Newton’s laws and energy conservation, which profoundly limits the amount of information you need on hand to predict the final state of the cannon ball. Firing a cannon allows this kind of simple strategy formation because the apparent degrees of freedom are redundant in specific ways that you can learn.
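As a rough sketch of the cannon example (assuming, for simplicity, ideal projectile motion on flat ground with no drag, and treating powder volume as fixing the muzzle speed), the entire lookup table collapses into one formula:

```python
import math

def landing_distance(speed, angle_deg, g=9.81):
    """Range of an ideal projectile on flat ground: v^2 * sin(2*theta) / g."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / g

# Two apparent inputs (angle, muzzle speed) collapse into a single formula;
# for any fixed speed, 45 degrees maximizes the distance.
```

Memorizing the table costs one entry per (angle, powder) pair; understanding the dynamics costs one short function. That ratio is what “efficient” means here.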

Thus, if you want to design strategy games you need to present the player or players with apparent degrees of freedom which contain much simpler dynamics that they can learn. The true dynamics of the game should emerge from the basic rules. These true dynamics might only be approximate, they might only apply in certain circumstances which the player also learns to identify. But the key idea is that the player needs a system which is not just complex, but which is complex in a specific way that allows approximations to be valid in some domains.

Space is a perfect example (which explains why it appears as a game component in so many games). In some fundamental sense, in a real time game, for instance, to predict everything in advance you need to track each object and figure out its update rule on each time step. But many objects move in straight lines at a constant velocity and thus can easily be projected ahead in time. What other sorts of mechanical contrivances have this property?
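The point about constant-velocity objects fits in a line or two of code (a hypothetical helper, not taken from any particular engine): instead of stepping the simulation tick by tick, the player’s effective theory projects the final position directly.

```python
def project(position, velocity, dt):
    """Project a constant-velocity object ahead by dt in a single step,
    rather than updating it tick by tick."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

# Five time units ahead of (0, 0) at velocity (2, 1):
print(project((0.0, 0.0), (2.0, 1.0), 5.0))  # (10.0, 5.0)
```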

### Calculation

So far we’ve just recapitulated standard game design advice. Can we generate some novel insights?

It’s more or less standard lore that games should not be calculation heavy. I’d argue this general advice is malformed, and that the above definition produces a deeper insight: in a good strategy game, calculation should eventually yield to approximation. Systems should be designed such that calculation reveals one or more effective theories that apply in a limited set of circumstances. The effective theories can’t be “at the surface,” because then they would be trivial – it’s not a strategy if you are certain which effective theory to deploy upon initial contact with the game system. Experience and laborious thought should be required to transform knowledge of the basic rules into a suite of personal effective theories, along with heuristics facilitating choice among them. This activity of generalizing knowledge of game state, and of choosing appropriate generalizations given the knowledge you have, is precisely the activity of strategy formation.

This is why, for instance, adding a timer to a game to prevent calculation doesn’t solve the problem in many games. The true problem is that there are no effective theories embedded in the low level game rules, not that players have too long to calculate. Because a strategy is necessarily (or, by definition, if you prefer) efficient, merely adding a time limit doesn’t make people strategize. It just cuts off calculation. On the other hand, if there are accessible, effective theories, then players will naturally gravitate to them because of their efficiency. Don’t add timers: adjust the basic rules to make strategy more efficient than calculation.

### “Fun”

Another insight generated by this strategy is that fun appears nowhere in the definition of a strategy game. Leisure is a much more expansive notion than “fun” and, I’d argue, we can’t really understand what strategy games are, in particular, if we restrict ourselves to those activities which are merely fun. The pleasure of learning a strategy game involves, in part, struggle, precisely because the true effective theories upon which we should base efficient and robust planning are obscured, in part, by the surface rules of the game.

### Goals

This definition of strategy gaming doesn’t say much about goals – a game may have one or more goals as long as they are not degenerate (e.g., as long as one goal is not so obviously easier that it makes the others irrelevant). These goals may be boolean or score based (though for reasons I won’t elaborate upon here, I think boolean goals are better).

### Recapitulation

A strategy game is a context for strategization, the production of strategies. A strategy is an efficient and robust plan. In order for a plan to be robust, it has to withstand unanticipated changes, and thus a strategy game must involve one or more types of uncertainty over which plans can be evaluated. In order to be efficient, a plan has to abstract over details of the game state – it has to free the player from managing all the minutia of the game state in favor of one or more appropriate, high level, conceptions of the game. Players naturally want efficient plans because they are less strenuous to apply and thus provide a natural advantage. Both efficiency and robustness imply a variety of conditions on the system in which the game functions: it must be variable and it must admit summary representations.

My hope with this approach is to highlight the fundamental features of strategy games rather than their superficial elements.

# Mathematics as a single player, evergreen strategy game.

I spend a fair amount of time on the Keith Burgun Games Discord, which is a community built up around Keith Burgun’s game design theory work. He’s interested, I would say, in designing so-called evergreen strategy games in the vein of Go or Chess. That is, games which facilitate long term engagement. He is also interested in single player strategy games.

My sense is that these two goals compete pretty strongly with one another. Without providing a full account, my sense is that evergreen strategy games like Go and Chess are evergreen almost entirely due to the fact that they are multiplayer games. The addition of a human opponent, in my view, radically changes the game design landscape. As such, single player game design is a different beast. This might account for why single player strategy games seem to fall short of evergreen character, where they exist at all.

How might we account for these differences? The basic argument is this: all a multiplayer strategy game must do is provide a large enough state space between the two players that, in the presence of intelligent play, there is enough richness for a conversation, and a culture of conversation, to arise. I understand multiplayer, competitive strategy games in at least the following way: in such games each player wants to reach a goal while preventing the other player from reaching the same or a similar goal. To do so they must construct and execute a strategy (which encompasses, for our purposes, both a strategy toward the goal and a counterstrategy against the other player). The player naturally wishes to conceal their strategy from their competitor, but each move they make necessarily communicates information about their strategy. The vital tension of the game comes from the fact that it forces the competitors into a conversation where each utterance is the locus of two competing but necessary goals: to embody the player’s strategy and to reveal as little about it as possible.

From this point of view the rules of a multiplayer game can be quite “dumb.” They do not, alone, provide the strategic richness. They only need to give a sufficiently rich vocabulary of moves to facilitate the conversation. One way of seeing this is to consider that the number of possible games of Go is vastly larger than the number of games of Go human players are likely to play. Go furnishes a large state space, much of which is unexplored. The players of Go furnish the constraints which make the game live.

Single player games, even in the era of the computer, which can enforce a large number of rules, struggle to meet the level of richness of multiplayer games exactly for the same reason computers cannot pass the Turing test. A computer alone cannot furnish a culture or a conversation.

(At this point you may raise the point that computers can play Go and Chess. This is true. But they cannot play like a person. In a way, the fact that AlphaGo plays in ways which surprise expert players of Go demonstrates my point. Playing AlphaGo is a bit like playing against a space alien who comes from an alternative Go tradition. Interpreting a move that AlphaGo makes is challenging because it isn’t part of the evolved culture of Go. In a sense, its moves are in a different language or dialect.)

Terrence Deacon argues, in Incomplete Nature (a very useful book whose fundamental point perhaps fails to land), that we can make useful progress understanding phenomena in terms of constraint rather than in terms of construction. For instance, we can nail down what a game of Go is as much by describing what doesn’t occur during a game as by describing what does. Another way to appreciate this point is to recognize that we can play Go with orange and blue glass beads as well as we can play it with shell and slate pieces: the precise material construction of the pieces and the board doesn’t matter to the game. The question I want to pose from this point of view is: where do the operating constraints in a game of Go come from?

I think I’ve made a clear argument by this point that the constraints which define any given game of Go come from the players rather than the rules of Go. The rules of Go merely create a context of constraint which forces the players to interact. By creating a context where each move necessarily (partially) communicates the (hopefully concealed) intent of each player, Go creates a space where someone can be said to have a style of play. Where two players can even be said to have a style. Even a community can be understood as having a style. Play, then, is more like a literary tradition than it is like a fully rational analytical process exactly by virtue of the fact that in the presence of such a large true state space of games, play stays near a much smaller, often intuitively or practically understood, effective state space.

Single player games operate in a similar way. Either the single player or a computer enforces some rules, but the rules themselves imply (typically) a much larger true state space than the state space explored by human players. The difference is, of course, that the player is competing against a much simpler counter-constrainer. In most single player, computer hosted, strategy games the counter-constraining forces are typically a small number of very simple agents pursuing a bunch of distinct goals. If you think of each move of a game as being an utterance in a dialog, as is the case in a two player game, then, in a single player game, the player is doing worse than having a conversation with themselves: they are speaking to no one, though the game engine might be attempting to provide an illusion of conversation. Providing the illusion of culture and conversation is the grand challenge of single player strategy game design.

(Interesting note: from this point of view, games have hardly evolved from the simple (and arguably deeply unsatisfying) text-interpreters of text adventure games.)

Believe it or not, all that was front matter for the following observation which I find myself returning to over and over: Mathematics is perhaps the best example of a single player, evergreen, strategy game-like institution.

Mathematics can plausibly be described as a game. The lusory goal of a mathematical exercise is typically to construct a particular sentence in a formal language using the less than efficient means provided by the rules of that formal system. In other words, you could just write out the sentence, but you don’t let yourself do so. You force yourself, using only the formal rules of your system and your axioms, to find a way to construct the sentence. As in real games, the number of possible rewrites you can make using the formal system is much, much larger than the number you’re actually interested in. In a real sense, the mathematician is doing the heavy lifting when it comes to the practical character of a formal system. Indeed, the community of mathematicians is doing the lifting. They develop an evolving culture of proof strategy which constrains the typical manipulation of symbols profoundly. In this way, the practice of mathematics is much like the play of multiplayer strategy games. There are probably many, many ways to prove a given theorem, assuming it is provable, but exactly because the space of proof is so large and because humans are so limited in comparison to it, style evolves as a necessity. It helps us prune probably ineffective strategies.

What insights are there here for us, as game designers? It seems to be a maxim, over at the Keith Burgun discord, that we ought not to let the player design the game. Often this comes up in places where players are given agency over goals. We might find that players adopt restrictions on their play to intentionally increase difficulty. Or they might design arbitrary goals, like playing without losing any health or while restricted to a subset of the board. If we were to build an analogy to mathematics, it would be as if we specially designated a class of mathematicians to identify target theorems and then handed them to a distinct set of mathematicians (forbidden to invent their own theorems) to prove. But it is precisely the freedom of mathematicians to invent their own rules and goals that makes mathematics so much like an evergreen game. To use the language of constraint, mathematicians are able to play against themselves. They build the rules of the game and then they constrain the space of play by playing. Having the freedom to choose goals and means, they can ensure that play remains stimulating even in the absence of an opponent.

In contrast, players of single player, computer hosted strategy games, forced to pursue only the goals the designer wants, are left to grapple with systems which inevitably offer insufficiently rich constraints. Designers who forbid themselves from considering player-selected goals (and even player modification of rules) are restricting themselves from considering design questions like “What sort of rule sets facilitate interesting goal choices?” Such limitations make their games as dead as the computers which host them. Not entirely dead, but pretty lifeless.

# The Ethics of Game Design

In the next week or so, I’ll be on the Dinofarm Games Community Podcast talking about the ethics of game design. My baby is just one week old, though! So I might not have been as coherent there as I wanted to be. As such, I thought I’d collect a few notes here while they were still in my head.

As a preamble: there are lots of ethical implications of games that I don’t discuss here. Particularly social ones: since games often depict social and cultural situations (like novels, plays or television shows) similar ethical concerns operate for games as for those artifacts. Here I’m specifically interested in those special ethical questions associated with games as interactive systems.

The question I’m interested in is: “What are the ethical obligations of a game designer, particularly to the player?” In a way, this is an old question in a new disguise, recognizable as such since the answer tends to dichotomize in a familiar way: is the game designer supposed to give the player what they want or is she supposed to give the player that which is good for them?

Let’s eliminate some low hanging fruit: if we design a game which is addictive, in the literal sense, I think most people will agree that we’ve committed an ethical lapse. There are a few folks out there with unusual or extreme moral views who would argue that even a game with bona fide addictive qualities isn’t morally problematic, but to them I simply say we’re operating with a different set of assumptions. However, the following analysis should hopefully illuminate exactly why we consider addictive games problematic, as well as outline a few other areas where games’ ethical impact is important.

I think the most obvious place to start with this kind of analysis is to ask whether games are leisure activities, recreation, or whether they provide a practical value. By leisure activity I mean any activity which we perform purely for pleasure; by recreation, an activity performed without an immediate practical goal but which somehow improves or restores our capacity to act on practical goals; and by practical value, something which immediately provides for a concrete requirement of living.

It’s a little unclear where games fall in this rubric. It is easiest to imagine that games are purely leisure activities. This fits the blurb provided by the Wikipedia article and also dovetails, broadly, with my understanding of games in public rhetoric. Categorizing games as purely leisure activities seems to justify a non-philosophical attitude about them: what is the point of worrying about the implications of that which is, at a fundamental level, merely a toy¹?

Point number one is that even toys, which have no practical purpose but to provide fun, are subject to some broad ethical constraints. It isn’t implausible to imagine that we could implant an electrode directly into a person’s brain such that the application of a small current to that electrode would produce, without any intervening activity, the sensation of fun. We could then give the person a button connected to that electrode and allow them to push it. This is technically an interactive system, perhaps even a highly degenerate game. It is certainly providing the player with the experience of fun, directly. However, it’s likely that a person so equipped would forego important practical tasks in favor of directly stimulating the experience of fun. If we gradually add elements between button presses and the reward or between the electrodes and the reward circuitry, we can gradually transform this game into any interactive system we could imagine. Clearly, at some point, the game might lose the property that it overwhelms the player’s desire to perform practical tasks. That line is the line between ethical and non-ethical game design.

In other words, game designers subscribing to the leisure theory of games are still obligated, perhaps counter-intuitively, to make their games sufficiently unfun that they don’t interfere with the player’s practical goals.

That leaves two interpretations of game value to consider: the recreational and the practical.

Of these, the idea of the game as recreation may be closest to what is often discussed on the Dinofarm Discord channel. It’s also frequently the narrative used to justify non-practical games. You’ve likely heard or even used the argument that digital games can improve hand-eye coordination or problem solving skills. This interpretation rests on there existing an operational analogy between the skills required to play a game and those required to perform practical tasks. There is a lot of literature on whether such a link exists and what form or forms it takes.

If no such link exists we can rubbish this entire interpretation of games, so it’s more interesting to imagine the opposite (as at least sometimes seems to be the case). When a link exists, the value proposition for a game is: this game provides, as a side effect of play, a practical benefit. Why the phrase “as a side effect of play”? Because, if the purpose of the game is to provide the practical benefit, then we must always compare our game against some non-game activity which might provide more of that same benefit for an equivalent effort.

To choose a particularly morally dubious example, we might find that playing Doom improves firing range scores for soldiers. But shouldn’t we compare that to time spent simply practicing on the firing range? Without some further argumentative viscera, this line of thinking seems to lead directly to the conclusion that if games are recreation, we might always or nearly always find some non-game activity which provides a better “bang” for our buck.

Elaborating on this line of argument reveals what the shape of the missing viscera might be. Why is it plausible that we could find some non-game activity that works as well or better than any given game at meeting a practical end? Because games must devote some of their time and structure to fun and, as such, seem to be less dense in their ability to meet a concrete practical goal. In Doom, for instance, there are a variety of mechanics in the game which make it an exciting experience which don’t have anything to do with the target fixation behavior we are using to justify our game.

But we can make an argument of the following form: a purely practical activity which results in the improvement of a skill requires an amount of effort. That effort might be eased by sweetening the activity with some fun elements, converting it to a game, allowing less effort for a similar gain of skill.

On this interpretation the ethical obligation of the game designer is to ensure that whatever skill they purport to hone with their game is developed for less effort than the direct approach. If they fail to meet this criterion, then they fail to provide the justification for their game.

The final interpretation we need to consider is that games themselves provide a direct, practical, benefit. I think this is a degenerate version of the above interpretation. It turns out to be difficult to find examples of this kind of game, but they do exist. Consider Fold.it, a game where player activity helps resolve otherwise computationally expensive protein folding calculations.

In this kind of game the developer has a few ethical obligations. The first is to make sure that the fun the game provides is sufficient compensation for the work the player has done, or to otherwise make sure the player’s play is given with informed consent. For instance, if we design a game that gives players fun in exchange for solving traveling salesman problems which, for some reason, we are given a cash reward for solving, a good argument can be made that, unless the game is exceptionally fun, we’re exploiting our player base. If the game were really so fun as to justify playing on its own terms, why wouldn’t we simply be playing it ourselves?

Game designers of this sort also need to make sure that there isn’t a more efficient means to the practical end. Since the whole purpose of the game is to reach a particular end, if we discover a more efficient way to get there, the game is no longer useful.

I think there is probably much more to say on this subject but I had a baby a week ago and three hours of sleep last night, so I think I will float this out there in hopes of spurring some discussion.

#### The Dinofarm Community Interpretation

At the end of the podcast we decided on a very specific definition of games (from an ethical standpoint). We (myself and users Hopenager and Redless) decided games could be described as a kind of leisure whose purpose is to produce the feeling of pleasure associated with learning. Since this is a leisure interpretation, we aren’t concerned directly with practical value, which I think is square with the way we typically think of games. However, as a leisure interpretation, we need a theory of how games operate in the context of the player’s larger goals.

Let’s sketch one. What circumstances transpire in a person’s life where they have the desire for the pleasure associated with learning but are unable to pursue that desire in productive terms? One possibility is fatigue: after working on productive activities, a person might have an excess of interest in the experience of learning but a deficit of energy to pursue those productive activities. In that situation, a game can satisfy the specific desire with a lower investment of energy (which could mean here literal energy or just lower stress levels – games, since they aren’t practical, are typically less stressful than similar real world situations).

Once the game is completed, the desire ought to be satisfied but not stimulated, allowing the player to rest and then pursue practical goals again.

Again, there are probably other possible ways of situating ethical games in this interpretation, but I think this is a compelling one: games should satisfy, but not stimulate, the desire to learn, and only in those situations where that desire might not be more productively used, as in the case of mental exhaustion or the need to avoid stress.

Games shouldn’t have a “loop” which intends to capture the player’s attention permanently. Indeed, I think ethical games should be designed to give up the attention of the player fairly easily, so they don’t distract from practical goals.

And them’s my thoughts on the ethics of game design.

¹: Note that there is a loose correspondence between our rubric and The Forms. Toys, roughly, seem to be objects of leisure; puzzles and contests are arguably recreation; and games are, potentially at least, objects of real practical value. Maybe this interpretation of games is the one underlying “gamification” enthusiasts.

# Goals, Anti-Goals and Multi-player Games

In this article I will try to address Keith Burgun’s assertion that games should have a single goal and his analysis of certain kinds of goals as trivial or pathological. I will try to demonstrate that multi-player games either reduce to single player games or necessitate multiple goals, some of which are necessarily the sorts of goals which Burgun dismisses as trivial. I’ll try to make the case that such goals are useful ideas for game designers as well as being necessary components of non-trivial multi-player games.

(Note: I find Keith Burgun’s game design work very useful. If you are interested in game design and have the money, I suggest subscribing to his Patreon.)

# Notes on Burgun’s Analytical Frame

## The Forms

Keith Burgun is a game design philosopher focused on strategy games, which he calls simply games. He divides the world of interactive systems into four useful forms:

1. toys – an interactive system without goals. Discovery is the primary value of toys.
2. puzzles – a bare interactive system plus a goal. Solving is the primary value of puzzles.
3. contests – a toy plus a goal, all meant to measure performance.
4. games – a toy, plus a goal, plus obfuscation of game state. The primary value is in synthesizing decision making heuristics to account for the obfuscation of the game state.

A good, brief video introduction to the forms is available here. Burgun believes a good way to construct a game is to identify a core mechanism, which is a combination of a core action, a core purpose, and a goal. The action and purpose together point towards the goal. The goal, in turn, gives meaning to the actions the player can take and the states of the interactive system.

## On Goals

More should be said on goals, which appear in many of the above definitions. Burgun has a podcast which serves as a good long form explication of many of his ideas. There is an entire episode on goals here. The discussion of goals begins around the fifteen minute mark.

Here Burgun provides a related definition of games: contests of decision making. Goals are prominent in this discussion: the goal gives meaning to actions in the game state.

Burgun raises a critique of games which feature notions of second place. He groups such goals into a category of non-binary goals and gives us an example to clarify the discussion: goals of the form “get the highest score.”

His analysis of why this goal is poor is that it seems to imply a few strange things:

1. The player always gets the highest score they are capable of because the universe is deterministic.
2. These goals imply that the game becomes vague after the previous high score is beaten, since the goal is met and yet the game continues.

The first applies to any interactive system at all, so isn’t a very powerful argument, as I understand it. Take a game with the rules of Tetris except that the board is initialized with a set of blocks already on the board. The player receives a deterministic sequence of blocks and must clear the already present blocks, at which point the game ends. This goal is not of the form “find the highest score” or “survive the longest” but the game’s outcome is already determined by the state of the universe at the beginning of the game. From this analysis we can conclude that if (1) constitutes a downside to the construction of a goal, it doesn’t apply uniquely to “high score” style goals.

(2) is more subtle. While it is true that in the form suggested, these rules leave the player without guidelines after the goal is met, I believe that in many cases a simple rephrasing of the goal in question resolves this problem. Take the goal:

G: Given the rules of Tetris, play for the highest score.

Since Tetris rewards you for clearing more lines at once and since Tetris ends when a block becomes fixed to the board but touches the top of the screen, we can rephrase this goal as:

G': Do not let the blocks reach the top of the screen.

This goal is augmented by secondary goals which enhance play: certain ways of moving away from the negative goal G' are more rewarding than others. Call this secondary goal g: clear lines in the largest groups possible. Call G' and goals like it “anti-goals.”

This terminology implies the definition.

If a goal is a particular game state towards which the player tries to move, an anti-goal is a particular state which the player is trying to avoid. Usually anti-goals are of the form “Do not allow X to occur,” where X is related to a (potentially open-ended) goal.

Goals of the “high score” or “survive” variety are (or may be) anti-goals in disguise. Rephrased properly, they can be conceived of in anti-goal language. Of course there are good anti-goals and bad ones, just as there are good goals and bad goals. However, I would argue that the same criterion applies to both types of goals: a good (anti-)goal is just one which gives meaning to the actions a person is presented with over an interactive system.

# Multi-Player Games and Anti-Goals

I believe anti-goals can be useful in game design, even in the single-player case. In another essay I may try to make the argument that anti-goals must be augmented with mechanics which tend to move the player towards the anti-goal, against which pressure players must do all the sorts of complex decision making that produces value for players.

However, there is a more direct way of demonstrating that anti-goals are unavoidable aspects of games, at least when games are multi-player. This argument also demonstrates that games with multiple goals are in a sense inevitable, at least in the case of multi-player games. First let me describe what I conceive of as a multi-player game.

multi-player game: A game where players interact via an interactive system in order to reach a goal which can only be attained by a single player.

The critical distinction I want to make is that a multi-player game is not just two or more people engaged in separate contests of decision making. If there are no actions mediating the interaction of players via the game state, then what is really going on is that many players are playing many distinct games. A true multi-player game must allow players to interact (via actions).

In a multi-player game, players are working towards a win state we can call G. However, in the context of the mechanics which allow interaction, they are also playing against a (set of) anti-goals {A}, one for each player besides themselves. These goals are of the form “Prevent player X from reaching goal G.” Hence, anti-goals are critical ingredients of successful multi-player game design and are therefore useful ideas for game designers. It follows that for a game to really be multi-player, there must be actions associated with each anti-goal in {A}.

An argument we might make at this point is that if players are playing for {A} and not explicitly for G then our game is not well designed (for instance, it isn’t elegant or minimal). But I believe any multi-player game where a player can pursue G and not concern herself with {A}, even in the presence of game actions which allow interaction, is a set of single player games in disguise. If we follow our urge to make G the true goal for all players at the expense of {A} then we may as well remove the actions which intermediate between players and then we may as well be designing a single player game whose goal is G.

So, if we admit that multi-player games are worth designing, then we also admit that at least a family of anti-goals are worth considering. Note that we must explicitly design the actions which allow the pursuit of {A} in order to design the game. Ideally these will be related and work in accord with the actions which facilitate G but they cannot be identical to those mechanics without our game collapsing to the single player case. We must consider {A} actions as a separate (though ideally related) design space.

# Summary

I’ve tried to demonstrate that in multi-player games especially, anti-goals, which are goals of the form “Avoid some game state,” are necessary, distinct goal forms worth considering by game designers. The argument depends on demonstrating that a multi-player game must contain such anti-goals or collapse into a single-player game played by multiple people who are otherwise disconnected.

In a broader context, the idea here is to get a foot in the door for anti-goals as rules which can still do the work of a goal, which is to give meaning to choices and actions in an interactive system. An open question is whether such anti-goals are useful for single player games, whether they are useful but only in conjunction with game-terminating goals, or whether, though useful, we can always find a related normal goal which is superior from a design point of view. Hopefully, this essay provides a good jumping off point for those discussions.

# Quick, Probably Naive Thoughts about Turing Machines and Random Numbers

Here is a fact which is still blowing my mind, albeit quietly, from the horizon.

Turing Machines, the formalism which we use to describe computation, do not, strictly speaking, cover computational processes which have access to random values. When we wish to reason about such processes, people typically imagine a Turing Machine with two tapes: one which takes on the typical role, and another which contains an infinite string of random numbers which the machine can peel off one at a time.

I know what you are all thinking: can’t I just write a random number generator, put it someplace on my Turing machine’s tape, and use that? Sure, but those numbers aren’t really random, particularly in the sense that a dedicated attacker with access to the output of your Turing machine can, in principle, detect the difference between your machine and one with bona fide random numbers. And, in fact, the question of whether there exists a random number generator which uses only polynomial time and space, such that a polynomial time and space algorithm is unable to detect whether the numbers derive from a real random process or from the algorithm, is still open.
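A minimal sketch of this point, assuming Python’s standard `random` module (a Mersenne Twister, not a true random source): the whole “random” tape is a deterministic function of the seed, so anyone who knows the algorithm and the seed can reproduce it exactly.

```python
import random

def pseudorandom_tape(seed, n):
    """A deterministic 'random' tape: fully determined by the seed."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(n)]

# An observer who knows the algorithm and the seed can reproduce
# the whole tape -- these are not random numbers in the strict sense.
assert pseudorandom_tape(42, 5) == pseudorandom_tape(42, 5)
assert pseudorandom_tape(42, 5) != pseudorandom_tape(43, 5)
```

A true random tape admits no such reconstruction, which is exactly what a distinguishing attacker exploits.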

All that is really an aside. What is truly, profoundly surprising to me is this: a machine which has access to random numbers seems to be more powerful than one without them. In what sense? There are algorithms which are not practical on a normal Turing machine but which become eminently practical on a Turing machine with a random tape, as long as we are able to accept a vanishingly small probability that the result is wrong. These are algorithms about which we can even do delta/epsilon style reasoning: that is, we can make the probability of error as small as we like by the expedient of repeating the computation with new random numbers and (for instance) using the results as votes to determine the “correct answer.” This expedient does not really modify the big-O complexity of the algorithms.
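A classic illustration of this trade (a standard textbook example, not one from the discussion above) is Freivalds’ algorithm, which checks a claimed matrix product with O(n²) work per trial instead of the O(n³) of recomputing the product; repeating the trial drives the one-sided error below any threshold we like. A sketch in Python:

```python
import random

def freivalds(A, B, C, trials=64):
    """Probabilistically check whether A x B == C.

    Each trial multiplies by a random 0/1 vector r and compares
    A(Br) with Cr -- O(n^2) work instead of the O(n^3) of a full
    matrix multiplication. A wrong C is caught with probability
    >= 1/2 per trial, so the error shrinks to <= 2**-trials.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # definitely A x B != C
    return True            # almost certainly A x B == C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C_good = [[19, 22], [43, 50]]
C_bad  = [[19, 22], [43, 51]]
assert freivalds(A, B, C_good)
assert not freivalds(A, B, C_bad)
```

Each repetition is the “vote” described above: the answer is only ever wrong in one direction, and the chance of being fooled halves with every extra trial.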

Buridan’s Ass is a paradox in which a hungry donkey sits between two identical bales of hay and dies of hunger, unable to choose which to eat on account of their equal size. There is a strange sort of analogy here: if the Ass has a source of random numbers, he can pick one randomly and survive. It is almost as if deterministic, finitist mathematics, in its crystalline precision, encounters and wastes energy on lots of tiny Ass’s Dilemmas which put certain sorts of results practically out of reach; but if we fuzz it up with random numbers, suddenly it is liberated to find much more truth than it could before. At least that is my paltry intuitive understanding.
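The analogy can be made almost embarrassingly literal. A toy sketch in Python (the function names are my own, purely illustrative): a deterministic rule that demands a strict maximum starves on a tie, while a randomized tie-break always eats.

```python
import random

bales = {"left": 10.0, "right": 10.0}   # two identical bales of hay

def deterministic_ass(bales):
    """Eat the strictly biggest bale; starve on a tie."""
    best = max(bales.values())
    winners = [name for name, size in bales.items() if size == best]
    return winners[0] if len(winners) == 1 else None   # None = starvation

def randomized_ass(bales, rng=random):
    """Break ties with a coin flip and always eat something."""
    best = max(bales.values())
    winners = [name for name, size in bales.items() if size == best]
    return rng.choice(winners)

assert deterministic_ass(bales) is None            # the donkey starves
assert randomized_ass(bales) in ("left", "right")  # the donkey eats
```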

# Notes on Quantum Computing Since Democritus, Chapter 1

For a long time, I’ve been interested in the sorts of questions exemplified by the following example:

Suppose we are Isaac Newton or Gottfried Leibniz. We have at our disposal two sources of inspiration: data, collected by intrepid observers like Tycho Brahe, and something like theory, in the form of artifacts like Kepler’s Laws, Galileo’s pre-Newtonian laws of motion (for it was he who first suggested that objects in motion retain that motion unless acted upon), and a smattering of Aristotelian and post-Aristotelian intuitions about motion (for instance, John Philoponus’ notion that, in addition to the rules of motion described by Aristotle, one object could impart on another a transient impetus). You also have tables and towers and balls you can roll on them or drop from them. You can perform your own experiments.

The question, then, is how do you synthesize something like Newton’s Laws? Jokes about Newton’s extra-scientific interests aside, this is alchemy indeed, and an alchemy which the training most physicists receive (or at least the training I received) does not address.

Newton’s Laws are generally dropped on the first-year physics student (perhaps after working with statics for a while) fully formed:

 First law: When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.
 Second law: The vector sum of the external forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.
 Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.

(this formulation borrowed from Wikipedia)

The laws are stated here in terms of a lot of subsidiary ideas: inertial reference frames, forces, mass. Neglecting the reference to mathematical structures (vector sums), this is a lot to digest, and it is hard to imagine Newton just pulling these laws from thin air. It took the species about 2000 years to figure it out (if you measure from Zeno to Newton, since Newton’s work is in some sense a practical rejoinder to the paradoxes of that pre-Socratic philosopher), so it cannot be, as some of my colleagues have suggested, so easy to figure out.
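Once the laws are in hand, of course, using them is mechanical. A minimal sketch of the second law in Python: Euler-integrating a constant force on a mass starting from rest recovers v = (F/m)t, exactly the kind of crank-turning the synthesis problem above does not touch.

```python
def simulate_constant_force(force, mass, dt, steps):
    """Euler-integrate F = m a for a constant force, starting from rest.

    Returns the final velocity. For a constant force the acceleration
    is constant, so the result should match (force / mass) * (dt * steps).
    """
    a = force / mass          # second law: a = F / m
    v = 0.0
    for _ in range(steps):
        v += a * dt           # velocity accumulates acceleration each step
    return v

# 10 N on a 2 kg mass for 1 s -> v = a * t = 5 m/s
v = simulate_constant_force(force=10.0, mass=2.0, dt=0.001, steps=1000)
assert abs(v - 5.0) < 1e-6
```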

A doctorate in physics takes (including the typical four year undergraduate degree in math, physics or engineering) about ten years. Most of what is learned in such a program is pragmatic theory: how to take a problem statement or something even more vague, identify the correct theoretical approach from a dictionary of possibilities, and then to “turn the crank.” It is unusual (or it was unusual for me) for a teacher to spend time posing more philosophical questions. Why, for instance, does a specific expression called the “Action,” when minimized over all possible paths of a particle, find a physical path? I’ve had a lot of physicist friends dismiss my curiosity about this subject, but I’m not the only one interested (e.g., the introductory chapter of Lanczos’ “The Variational Principles of Mechanics”).

What I am getting to here, believe it or not, is that I think physicists are over-prepared to work problems and under-prepared to do the synthetic work of building new theoretical approaches to existing unsolved problems. I enjoy the freedom of having fallen from the Ivory Tower, and I aim to enjoy that freedom in 2016 by revisiting my education from a perspective which allows me to stop and ask “why” more frequently and with more intensity.

Enter Scott Aaronson’s “Quantum Computing Since Democritus,” a book whose title immediately piqued my interest, combining, as it does, the name of a pre-Socratic philosopher (whose questions form the basis, in my opinion, for so much modern physics) with the most modern and pragmatic of contemporary subjects in physics. Aaronson’s project seems to accomplish exactly what I want as an armchair physicist: stopping to think about what our theories really mean.

To keep myself honest, I’ll be periodically writing about the chapters of this book – I’m a bit rusty mathematically and so writing about the work will encourage me to get concrete where needed.

# Atoms and the Void

Atoms and the Void is a short chapter which basically asks us to think a bit about what quantum mechanics means. Aaronson describes Quantum Mechanics in the following way:

Here’s the thing: for any isolated region of the universe that you want to consider, quantum mechanics describes the evolution in time of the state of that region, which we represent as a linear combination – a superposition – of all the possible configurations of elementary particles in that region. So, this is a bizarre picture of reality, where a given particle is not here, not there, but in a sort of weighted sum over all the places it could be. But it works. As we all know, it does pretty well at describing the “atoms and the void” that Democritus talked about.

The needs of an introductory chapter, I guess, prevent him from describing how peculiar this description is: for one thing, there is never a truly isolated region of the universe (or at least, not one we are interested in, I hope obviously). But he goes on to meditate on this anyway by asking us to think about how we interpret measurement where quantum mechanics is concerned. He dichotomizes interpretations of quantum mechanics by where they fall on the question of putting oneself in coherent superposition.
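The “linear combination of configurations” picture can be made concrete in a few lines. A toy sketch, treating a single qubit as a pair of complex amplitudes evolved by a norm-preserving linear map (I use the Hadamard transform as an illustrative choice; it is not discussed in the chapter itself):

```python
import math

# A qubit state is a weighted sum (superposition) of its two
# configurations |0> and |1>: amplitudes (alpha, beta) with
# |alpha|^2 + |beta|^2 == 1.
def norm(state):
    return sum(abs(amp) ** 2 for amp in state)

def hadamard(state):
    """One step of linear evolution: the Hadamard transform."""
    alpha, beta = state
    s = 1 / math.sqrt(2)
    return (s * (alpha + beta), s * (alpha - beta))

state = (1.0, 0.0)            # definitely |0>
state = hadamard(state)       # now an equal-weight superposition
assert abs(norm(state) - 1.0) < 1e-12    # evolution preserves total weight
assert abs(state[0] - 1 / math.sqrt(2)) < 1e-12
```

The point of the sketch is just Aaronson’s: the state is not “here or there” but a weighted sum over both configurations, and the dynamics act linearly on those weights.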

Happily, he doesn’t try to claim that any particular set of experiments can definitely disambiguate different interpretations of quantum mechanics. Instead he suggests that by thinking specifically of Quantum Computing, which he implies gets most directly at some of the issues raised by debates over interpretation, we might learn something interesting.

This tantalizes us to move to chapter 2.