Category Archives: philosophy

Mathematics as a single player, evergreen strategy game.

I spend a fair amount of time on the Keith Burgun Games Discord, which is a community built up around Keith Burgun’s game design theory work. He’s interested, I would say, in designing so-called evergreen strategy games in the vein of Go or Chess. That is, games which facilitate long term engagement. He is also interested in single player strategy games.

My sense is that these two goals compete pretty strongly with one another. Without providing a full account, I suspect that evergreen strategy games like Go and Chess are evergreen almost entirely because they are multiplayer games. The addition of a human opponent, in my view, radically changes the game design landscape. As such, single player game design is a different beast. This might account for why single player strategy games seem to fall short of evergreen character, where they exist at all.

How might we account for these differences? The basic argument is this: all a multiplayer strategy game must do is provide a large enough state space between the two players that, in the presence of intelligent play, there is enough richness for a conversation, and a culture of conversation, to arise. I understand multiplayer, competitive strategy games in at least the following way: in such games each player wants to reach a goal while preventing the other player from reaching the same or a similar goal. To do so they must construct and execute a strategy (which encompasses, for our purposes, both a strategy toward the goal and a counterstrategy against the other player). The player naturally wishes to conceal their strategy from their competitor, but each move they make necessarily communicates information about their strategy. The vital tension of the game comes from the fact that it forces the competitors into a conversation where each utterance is the locus of two competing but necessary goals: to embody the player's strategy and to reveal as little about it as possible.

From this point of view the rules of a multiplayer game can be quite “dumb.” They do not, alone, provide the strategic richness. They only need to give a sufficiently rich vocabulary of moves to facilitate the conversation. One way of seeing this is to consider that the number of possible games of Go is vastly larger than the number of games of Go human players are likely to play. Go furnishes a large state space, much of which is unexplored. The players of Go furnish the constraints which make the game live.

Single player games, even in the era of the computer, which can enforce a large number of rules, struggle to match the richness of multiplayer games for exactly the same reason that computers cannot pass the Turing test. A computer alone cannot furnish a culture or a conversation.

(At this point you may object that computers can play Go and Chess. This is true. But they cannot play like a person. In a way, the fact that AlphaGo plays in ways which surprise expert players of Go demonstrates my point. Playing AlphaGo is a bit like playing against a space alien who comes from an alternative Go tradition. Interpreting a move that AlphaGo makes is challenging because it isn’t part of the evolved culture of Go. In a sense, its moves are in a different language or dialect.)

Terrence Deacon argues, in Incomplete Nature (a very useful book whose fundamental point perhaps fails to land), that we can make useful progress understanding phenomena in terms of constraint rather than in terms of construction. For instance, we can nail down what a game of Go is as much by describing what doesn’t occur during a game as by describing what does. Another way to appreciate this point is to recognize that we can play Go with orange and blue glass beads as well as we can play it with shell and slate pieces: the precise material construction of the pieces and the board doesn’t matter to the game. The question I want to pose from this point of view is: where do the operating constraints in a game of Go come from?

I think I’ve made a clear argument by this point that the constraints which define any given game of Go come from the players rather than from the rules of Go. The rules of Go merely create a context of constraint which forces the players to interact. By creating a context where each move necessarily (partially) communicates the (hopefully concealed) intent of each player, Go creates a space where someone can be said to have a style of play, where two players together can be said to have a style, and where even a community can be understood as having a style. Play, then, is more like a literary tradition than a fully rational analytical process, exactly because, in the presence of such a large true state space of games, play stays near a much smaller, often intuitively or practically understood, effective state space.

Single player games operate in a similar way. Either the single player or a computer enforces some rules, but the rules themselves imply (typically) a much larger true state space than the state space explored by human players. The difference is, of course, that the player is competing against a much simpler counter-constrainer. In most single player, computer hosted, strategy games the counter-constraining forces are typically a small number of very simple agents pursuing a bunch of distinct goals. If you think of each move of a game as being an utterance in a dialog, as is the case in a two player game, then, in a single player game, the player is doing worse than having a conversation with themselves: they are speaking to no one, though the game engine might be attempting to provide an illusion of conversation. Providing the illusion of culture and conversation is the grand challenge of single player strategy game design.

(Interesting note: from this point of view, games have hardly evolved from the simple (and arguably deeply unsatisfying) text-interpreters of text adventure games.)

Believe it or not, all that was front matter for the following observation which I find myself returning to over and over: Mathematics is perhaps the best example of a single player, evergreen, strategy game-like institution.

Mathematics can plausibly be described as a game. The lusory goal of a mathematical exercise is typically to construct a particular sentence in a formal language using the less than efficient means provided by the rules of that formal system. In other words, you could just write out the sentence, but you don’t let yourself do so. You force yourself, using only the formal rules of your system and your axioms, to find a way to construct the sentence. As in real games, the number of possible rewrites you can make using the formal system is much, much larger than the ones you’re actually interested in. In a real sense, the mathematician is doing the heavy lifting when it comes to the practical character of a formal system. Indeed, the community of mathematicians is doing the lifting. They develop an evolving culture of proof strategy which constrains the typical manipulation of symbols profoundly. In this way, the practice of mathematics is much like the play of multiplayer strategy games. There are probably many, many ways to prove a given theorem, assuming it is provable, but exactly because the space of proof is so large and because humans are so limited in comparison to it, style evolves as a necessity. It helps us prune probably ineffective strategies.
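
To make that "heavy lifting" concrete, here is a minimal sketch in Python of an invented string-rewriting game. The symbols and rules below are made up purely for the example, not taken from any real formal system; the shape of the activity is the point. The lusory goal is to derive a target sentence from an axiom using only the rewrite rules, and even in this toy case the space of legal sentences is larger than the handful any single derivation visits:

```python
# A toy "formal system": the only legal moves are the rewrite rules, and the
# goal is to derive a target sentence from an axiom using nothing else.
from collections import deque

RULES = [
    ("A", "AB"),   # any A may have a B appended after it
    ("B", "BB"),   # any B may be doubled
    ("ABB", "C"),  # the substring ABB may be collapsed to C
]

def neighbors(s):
    """All sentences reachable from s by one application of one rule."""
    for lhs, rhs in RULES:
        start = 0
        while (i := s.find(lhs, start)) != -1:
            yield s[:i] + rhs + s[i + len(lhs):]
            start = i + 1

def derive(axiom, target, max_len=12):
    """Breadth-first search for a derivation of `target` from `axiom`."""
    frontier, seen = deque([(axiom, [axiom])]), {axiom}
    while frontier:
        s, path = frontier.popleft()
        if s == target:
            return path
        for t in neighbors(s):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                frontier.append((t, path + [t]))
    return None

def reachable(axiom, max_len=12):
    """Every sentence derivable from the axiom, up to a length cap."""
    frontier, seen = deque([axiom]), {axiom}
    while frontier:
        s = frontier.popleft()
        for t in neighbors(s):
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                frontier.append(t)
    return seen

print(derive("A", "C"))     # ['A', 'AB', 'ABB', 'C'] -- one short derivation
print(len(reachable("A")))  # the derivable sentences outnumber the ones that proof visits
```

A working mathematician, of course, never searches blindly like this; the point of the analogy is that style and proof strategy are exactly the constraints that keep the search tractable.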

What insights are there here for us, as game designers? It seems to be a maxim, over at the Keith Burgun Discord, that we ought not to let the player design the game. Often this comes up in places where players are given agency over goals. We might find that players adopt restrictions on their play to intentionally increase difficulty. Or they might design arbitrary goals, like playing without losing any health or playing restricted to a subset of the board. If we were to build an analogy to mathematics, it would be as if we specially designated a class of mathematicians to identify target theorems and then handed them to a distinct set of mathematicians (forbidden to invent their own) to prove. But it is precisely the freedom of mathematicians to invent their own rules and goals that makes mathematics so much like an evergreen game. To use the language of constraint, mathematicians are able to play against themselves. They build the rules of the game and then constrain the space of play by playing. Having the freedom to choose goals and means, they can ensure that play remains stimulating even in the absence of an opponent.

In contrast, players of single player, computer-hosted strategy games, forced to pursue only the goals the designer wants, are left to grapple with systems which inevitably offer insufficiently rich constraints. Designers who forbid themselves from considering player-selected goals (and even player modification of rules) are cutting themselves off from design questions like “What sorts of rule sets facilitate interesting goal choices?” Such limitations make their games as dead as the computers which host them. Not entirely dead, but pretty lifeless.

The Ethics of Game Design

In the next week or so, I’ll be on the Dinofarm Games Community Podcast talking about the ethics of game design. My baby is just one week old, though! So I might not have been as coherent there as I wanted to be. As such, I thought I’d collect a few notes here while they were still in my head.

As a preamble: there are lots of ethical implications of games that I don’t discuss here. Particularly social ones: since games often depict social and cultural situations (like novels, plays or television shows) similar ethical concerns operate for games as for those artifacts. Here I’m specifically interested in those special ethical questions associated with games as interactive systems.

The question I’m interested in is: “What are the ethical obligations of a game designer, particularly to the player?” In a way, this is an old question in a new disguise, recognizable as such since the answer tends to dichotomize in a familiar way: is the game designer supposed to give the player what they want or is she supposed to give the player that which is good for them?

Let’s eliminate some low hanging fruit: if we design a game which is addictive, in the literal sense, I think most people will agree that we’ve committed an ethical lapse. There are a few folks out there with unusual or extreme moral views who would argue that even a game with bona fide addictive qualities isn’t morally problematic, but to them I simply say we’re operating with a different set of assumptions. However, the following analysis should hopefully illuminate exactly why we consider addictive games problematic, as well as outline a few other areas where games’ ethical impact is important.

I think the most obvious place to start with this kind of analysis is to ask whether games are leisure activities, recreation, or providers of practical value. By leisure activity I mean any activity which we perform purely for pleasure; by recreation, an activity performed without an immediate practical goal but which somehow improves or restores our capacity to act on practical goals; and by practical value, something which immediately provides for a concrete requirement of living.

It’s a little unclear where games fall in this rubric. It is easiest to imagine that games are purely leisure activities. This fits the blurb provided by the Wikipedia article and also dovetails, broadly, with my understanding of games in public rhetoric. Categorizing games as purely leisure activities seems to justify a non-philosophical attitude about them: what is the point of worrying about the implications of that which is, at a fundamental level, merely a toy¹?

Point number one is that even toys, which have no practical purpose but to provide fun, are subject to some broad ethical constraints. It isn’t implausible to imagine that we could implant an electrode directly into a person’s brain such that the application of a small current to that electrode would produce, without any intervening activity, the sensation of fun. We could then give the person a button connected to that electrode and allow them to push it. This is technically an interactive system, perhaps even a highly degenerate game. It certainly provides the player with the experience of fun, directly. However, it’s likely that a person so equipped would forego important practical tasks in favor of directly stimulating the experience of fun. If we gradually add elements between the button presses and the reward, or between the electrode and the reward circuitry, we can gradually transform this game into any interactive system we could imagine. Clearly, at some point, the game might lose the property of overwhelming the player’s desire to perform practical tasks. That line is the line between ethical and non-ethical game design.

In other words, game designers subscribing to the leisure theory of games are still obligated, perhaps counter-intuitively, to make their games sufficiently unfun that they don’t interfere with the player’s practical goals.

That leaves two further interpretations of game value: the recreational and the practical.

Of these, the idea of the game as recreation may be closest to what is often discussed on the Dinofarm Discord channel. It’s also frequently the narrative used to justify non-practical games. You’ve likely heard or even used the argument that digital games can improve hand-eye coordination or problem solving skills. This interpretation rests on there being an operational analogy between the skills required to play a game and those required to perform practical tasks. There is a lot of literature on whether such a link exists and what form or forms it takes.

If no such link exists we can rubbish this entire interpretation of games, so it’s more interesting to imagine the opposite (as at least sometimes seems to be the case). When a link exists, the value proposition for a game is: this game provides, as a side effect of play, a practical benefit. Why the phrase “as a side effect of play”? Because, if the purpose of the game is to provide the practical benefit, then we must always compare our game against practical activities which might provide more of that same benefit for an equivalent effort.

To choose a particularly morally dubious example, we might find that playing Doom improves firing range scores for soldiers. But shouldn’t we compare that to time spent simply practicing on the firing range? Without some further argumentative viscera, this line of thinking seems to lead directly to the conclusion that if games are recreation, we might always or nearly always find some non-game activity which provides a better “bang” for our buck.

Elaborating on this line of argument reveals what the shape of the missing viscera might be. Why is it plausible that we could find some non-game activity that works as well or better than any given game at meeting a practical end? Because games must devote some of their time and structure to fun and, as such, seem to be less dense in their ability to meet a concrete practical goal. In Doom, for instance, there are a variety of mechanics in the game which make it an exciting experience which don’t have anything to do with the target fixation behavior we are using to justify our game.

But we can make an argument of the following form: a purely practical activity which results in the improvement of a skill requires a certain amount of effort. That effort might be eased by sweetening the activity with some fun elements, converting it into a game and allowing less effort for a similar gain of skill.

On this interpretation, the ethical obligation of the game designer is to ensure that whatever skill they purport to hone with their game is developed for less effort than the direct approach would require. If they fail to meet this criterion, then they fail to provide the justification for their game.

The final interpretation we need to consider is that games themselves provide a direct, practical, benefit. I think this is a degenerate version of the above interpretation. It turns out to be difficult to find examples of this kind of game, but they do exist. Consider Fold.it, a game where player activity helps resolve otherwise computationally expensive protein folding calculations.

In this kind of game the developer has a few ethical obligations. The first is to make sure that the fun the game provides is sufficient compensation for the work the player has done, or to otherwise make sure the player’s play is given with informed consent. For instance, if we design a game that gives players fun while solving traveling salesperson problems which, for some reason, we are given a cash reward for solving, a good argument can be made that, unless the game is exceptionally fun, we’re exploiting our player base. If the game were really so fun as to justify playing on its own terms, why wouldn’t we simply be playing it ourselves?

Game designers of this sort also need to make sure that there isn’t a more efficient means to the practical end. Since the whole purpose of the game is to reach a particular end, if we discover a more efficient way to get there, the game is no longer useful.

I think there is probably much more to say on this subject but I had a baby a week ago and three hours of sleep last night, so I think I will float this out there in hopes of spurring some discussion.

The Dinofarm Community Interpretation

At the end of the podcast we decided on a very specific definition of games (from an ethical standpoint). We (myself and users Hopenager and Redless) decided games could be described as a kind of leisure whose purpose is to produce the feeling of pleasure associated with learning. Since this is a leisure interpretation, we aren’t concerned directly with practical value, which I think squares with the way we typically think of games. However, since it is a leisure interpretation, we need a theory of how games operate in the context of the player’s larger goals.

Let’s sketch one. What circumstances transpire in a person’s life where they have the desire for the pleasure associated with learning but are unable to pursue that desire in productive terms? One possibility is fatigue: after working on productive activities, a person might have an excess of interest in the experience of learning but a deficit of energy to pursue those productive activities. In that situation, a game can satisfy the specific desire with a lower investment of energy (which could mean here literal energy or just lower stress levels – games, since they aren’t practical, are typically less stressful than similar real world situations).

Once the game is completed, the desire ought to be satisfied but not stimulated, allowing the player to rest and then pursue practical goals again.

Again, there are probably other possible ways of situating ethical games within this interpretation, but I think this is a compelling one: games should satisfy, but not stimulate, the desire to learn, and only in those situations where that desire might not be more productively used, as in the case of mental exhaustion or the need to avoid stress.

Games shouldn’t have a “loop” which intends to capture the player’s attention permanently. Indeed, I think ethical games should be designed to give up the attention of the player fairly easily, so they don’t distract from practical goals.

And them’s my thoughts on the ethics of game design.

¹: Note that there is a loose correspondence between our rubric and The Forms. Toys, roughly, seem to be objects of leisure, puzzles and contests are arguably recreation, and games are, potentially at least, objects of real practical value. Maybe this interpretation of games is the one underlying “gamification” enthusiasm.

Goals, Anti-Goals and Multi-player Games

In this article I will try to address Keith Burgun‘s assertion that games should have a single goal and his analysis of certain kinds of goals as trivial or pathological. I will try to demonstrate that multi-player games either reduce to single player games or necessitate multiple goals, some of which are necessarily the sorts of goals which Burgun dismisses as trivial. I’ll try to make the case that such goals are useful ideas for game designers as well as being necessary components of non-trivial multi-player games.

(Note: I find Keith Burgun’s game design work very useful. If you are interested in game design and have the money, I suggest subscribing to his Patreon.)

Notes on Burgun’s Analytical Frame

The Forms

Keith Burgun is a game design philosopher focused on strategy games, which he calls simply games. He divides the world of interactive systems into four useful forms:

  1. toys – an interactive system without goals. Discovery is the primary value of toys.
  2. puzzles – a bare interactive system plus a goal. Solving is the primary value of the puzzle.
  3. contests – a toy plus a goal, all meant to measure performance.
  4. games – a toy, plus a goal, plus obfuscation of game state. The primary value is in synthesizing decision-making heuristics to account for the obfuscation of the game state.

A good, brief, video introduction to the forms is available here. Burgun believes a good way to construct a game is to identify a core mechanism, which is a combination of a core action, a core purpose, and a goal. The action and purpose point together towards the goal. The goal, in turn, gives meaning to the actions the player can take and the states of the interactive system.

On Goals

More should be said on goals, which appear in many of the above definitions. Burgun has a podcast which serves as a good long form explication of many of his ideas. There is an entire episode on goals here. The discussion of goals begins around the fifteen minute mark.

Here Burgun provides a related definition of games: contests of decision making. Goals are prominent in this discussion: the goal gives meaning to actions in the game state.

Burgun raises a critique of games which feature notions of second place. He groups such goals into a category of non-binary goals and gives us an example to clarify the discussion: goals of the form “get the highest score.”

His analysis of why this is a poor goal is that it seems to imply a few strange things:

  1. The player always gets the highest score they are capable of because the universe is deterministic.
  2. These goals imply that the game becomes vague after the previous high score is beaten, since the goal is met and yet the game continues.

The first applies to any interactive system at all, so it isn’t a very powerful argument, as I understand it. Take a game with the rules of Tetris, except that the board is initialized with a set of blocks already in place. The player receives a deterministic sequence of blocks and must clear the already present blocks, at which point the game ends. This goal is not of the form “find the highest score” or “survive the longest,” but the game’s outcome is already determined by the state of the universe at the beginning of the game. From this analysis we can conclude that if (1) constitutes a downside to the construction of a goal, it doesn’t apply uniquely to “high score” style goals.

(2) is more subtle. While it is true that in the form suggested, these rules leave the player without guidelines after the goal is met, I believe that in many cases a simple rephrasing of the goal in question resolves this problem. Take the goal:

G: Given the rules of Tetris, play for the highest score.

Since Tetris rewards you for clearing more lines at once and since Tetris ends when a block becomes fixed to the board but touches the top of the screen, we can rephrase this goal as:

G': Do not let the blocks reach the top of the screen.

This goal is augmented by secondary goals which enhance play: certain ways of moving away from the negative goal G' are more rewarding than others. Call this secondary goal g: clear lines in the largest groups possible. Call G' and goals like it “anti-goals.”

This terminology implies the definition.

If a goal is a particular game state towards which the player tries to move, an anti-goal is a particular state which the player is trying to avoid. Usually anti-goals are of the form “Do not allow X to occur,” where X is related to a (potentially open ended) goal.
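
One way to make the distinction concrete is to treat both kinds of goal as predicates over game states, as in the sketch below. The GameState fields are invented purely for illustration and are not meant as a real Tetris model:

```python
# A sketch of goal vs. anti-goal, using the Tetris-flavored example above.
# The GameState fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class GameState:
    stack_height: int           # current height of the fixed blocks
    board_height: int           # height of the playfield
    lines_cleared_at_once: int  # size of the most recent clear (0 if none)

def goal_reached(state: GameState) -> bool:
    """An ordinary goal: a state the player tries to reach (e.g. clear four lines at once)."""
    return state.lines_cleared_at_once >= 4

def anti_goal_hit(state: GameState) -> bool:
    """An anti-goal (G'): a state the player tries never to reach."""
    return state.stack_height >= state.board_height

# Play against the anti-goal means steering away from states where
# anti_goal_hit becomes true, while the secondary goal g (clear lines in the
# largest groups possible) rewards particular ways of doing that steering.
```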

Goals of the “high score” or “survive” variety are (or may be) anti-goals in disguise. Rephrased properly, they can be conceived of in anti-goal language. Of course there are good anti-goals and bad ones, just as there are good goals and bad goals. However, I would argue that the same criterion applies to both types of goal: a good (anti-)goal is one which gives meaning to the actions a person is presented with over an interactive system.

Multi-Player Games and Anti-Goals

I believe anti-goals can be useful in game design, even in the single player case. In another essay I may try to make the argument that anti-goals must be augmented with mechanics which tend to push the player towards the anti-goal, since it is in working against that pressure that players do the sort of complex decision making which produces value for them.

However, there is a more direct way of demonstrating that anti-goals are unavoidable aspects of games, at least when games are multi-player. This argument also demonstrates that games with multiple goals are in a sense inevitable, at least in the case of multi-player games. First let me describe what I conceive of as a multi-player game.

multi-player game: A game where players interact via an interactive system in order to reach a goal which can only be attained by a single player.

The critical distinction I want to make is that a multi-player game is not just two or more people engaged in separate contests of decision making. If there are no actions mediating the interaction of players via the game state, then what is really going on is that many players are playing many distinct games. A true multi-player game must allow players to interact (via actions).

In a multi-player game, players are working towards a win state we can call G. However, in the context of the mechanics which allow interaction, they are also playing against a set of anti-goals {A}, one for each player besides themselves. These goals are of the form “Prevent player X from reaching goal G.” Hence, anti-goals are critical ingredients of successful multi-player game design and are therefore useful ideas for game designers. Indeed, for a game to really be multi-player, there must be actions associated with each anti-goal in {A}.

An argument we might make at this point is that if players are playing for {A} and not explicitly for G then our game is not well designed (for instance, it isn’t elegant or minimal). But I believe any multi-player game where a player can pursue G and not concern herself with {A}, even in the presence of game actions which allow interaction, is a set of single player games in disguise. If we follow our urge to make G the true goal for all players at the expense of {A}, then we may as well remove the actions which mediate between players, at which point we may as well be designing a single player game whose goal is G.

So, if we admit that multi-player games are worth designing, then we also admit that at least a family of anti-goals are worth considering. Note that we must explicitly design the actions which allow the pursuit of {A} in order to design the game. Ideally these will be related and work in accord with the actions which facilitate G but they cannot be identical to those mechanics without our game collapsing to the single player case. We must consider {A} actions as a separate (though ideally related) design space.

Summary

I’ve tried to demonstrate that in multi-player games especially, anti-goals, which are goals of the form “Avoid some game state,” are necessary, distinct goal forms worth considering by game designers. The argument depends on demonstrating that a multi-player game must contain such anti-goals or collapse into a set of single player games played by multiple people but otherwise disconnected.

In a broader context, the idea here is to get a foot in the door for anti-goals as rules which can still do the work of a goal, which is to give meaning to choices and actions in an interactive system. An open question is whether such anti-goals are useful for single player games, whether they are useful but only in conjunction with game-terminating goals, or whether, though useful, we can always find a related normal goal which is superior from a design point of view. Hopefully, this essay provides a good jumping off point for those discussions.


Quick, Probably Naive Thoughts about Turing Machines and Random Numbers

Here is a fact which is still blowing my mind, albeit quietly, from the horizon.

Turing Machines, the formalism which we use to describe computation, do not, strictly speaking, cover computational processes which have access to random values. When we wish to reason about such machines, people typically imagine a Turing Machine with two tapes: one which takes on the typical role and another which contains an infinite string of random numbers which the machine can peel off one at a time.
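
Here is a minimal sketch, in Python, of that two-tape picture. The transition function is a stand-in, and `random.getrandbits` merely simulates reading the next cell of the second, random tape:

```python
# A cartoon of the two-tape formulation: an ordinary work tape plus a second,
# infinite tape of random bits that the machine peels off one at a time.
import random

class ProbabilisticTM:
    def __init__(self, input_string):
        self.work_tape = list(input_string) or ["_"]  # the ordinary tape
        self.head = 0
        self.state = "start"

    def next_random_bit(self):
        # Stands in for reading (and advancing past) one cell of the random tape.
        return random.getrandbits(1)

    def step(self, transition):
        # transition: (state, symbol, random_bit) -> (new_state, write_symbol, move)
        symbol = self.work_tape[self.head]
        bit = self.next_random_bit()
        self.state, self.work_tape[self.head], move = transition(self.state, symbol, bit)
        self.head = max(0, self.head + move)   # sketch: clamp at the left edge
        if self.head == len(self.work_tape):
            self.work_tape.append("_")         # grow the work tape with blanks
```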

I know what you are all thinking: can’t I just write a random number generator, put its output someplace on my Turing machine’s tape, and use that? Sure, but those numbers aren’t really random, in the sense that a dedicated attacker, having access to the output of your Turing machine, can in principle detect the difference between your machine and one with bona fide random numbers. And, in fact, the question of whether there exists a random number generator which uses only polynomial time and space, and whose output no polynomial time and space algorithm can distinguish from a real random process, is still open.
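
The trouble with the “just put a generator on the tape” move is determinism. As a small illustration (using one common set of linear congruential generator constants; nothing here is meant as a serious cryptographic claim), the whole stream is fixed by the seed, so anyone who knows the algorithm and the seed can reproduce it exactly, and hence distinguish it from true randomness:

```python
# A pseudorandom sequence written onto the tape is fully determined by its
# seed. The constants below are one common LCG parameter choice.
def lcg(seed, n, a=1103515245, c=12345, m=2**31):
    out, x = [], seed
    for _ in range(n):
        x = (a * x + c) % m
        out.append(x)
    return out

assert lcg(42, 5) == lcg(42, 5)   # same seed, same "random" numbers, every time
```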

All that is really an aside. What is truly, profoundly surprising to me is this: a machine which has access to random numbers seems to be more powerful than one without them. In what sense? There are algorithms which are not practical on a normal Turing machine but which become eminently practical on a Turing machine with a random tape, as long as we are able to accept a vanishingly small probability that the result is wrong. These are algorithms about which we can even do delta/epsilon style reasoning: that is, we can make the probability of error as small as we like by the expedient of repeating the computation with new random numbers and (for instance) using the results as votes to determine the “correct answer.” This expedient does not really modify the big-O complexity of the algorithms.
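
A standard concrete example is randomized primality testing. The sketch below uses the Miller-Rabin test; its error is one-sided (a “composite” verdict is always correct), so simple repetition, without even needing the voting step required for two-sided error, drives the failure probability down exponentially:

```python
# Each independent Miller-Rabin round errs with probability at most 1/4 on a
# composite input, so k rounds err with probability at most 4**-k.
import random

def miller_rabin_round(n, a):
    """One random-witness test: True means 'probably prime', False means 'definitely composite'."""
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1        # write n - 1 as d * 2^r with d odd
    x = pow(a, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(r - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

def is_probably_prime(n, rounds=40):
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    return all(miller_rabin_round(n, random.randrange(2, n - 1))
               for _ in range(rounds))

# 40 rounds leave an error chance of at most 4**-40, yet each round is just a
# few modular exponentiations.
print(is_probably_prime(2**61 - 1))   # a Mersenne prime: True
```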

Buridan’s Ass is a paradox in which a hungry donkey sits between two identical bales of hay and dies of hunger, unable to choose which to eat on account of their equal size. There is a strange sort of analogy here: if the ass has a source of random numbers, he can pick one bale at random and survive. It is almost as if deterministic, finitist mathematics, in its crystalline precision, encounters and wastes energy on lots of tiny Ass’s Dilemmas which put certain sorts of results practically out of reach, but if we fuzz it up with random numbers, it is suddenly liberated to find much more truth than it could before. At least that is my paltry intuitive understanding.

Notes on “Quantum Computing Since Democritus,” Chapter 1

For a long time, I’ve been interested in the sorts of questions exemplified by the following example:

Suppose we are Isaac Newton or Gottfried Leibniz. We have at our disposal two sources of inspiration: data, collected by intrepid observers like Tycho Brahe, and something like theory, in the form of artifacts like Kepler’s Laws, Galileo’s pre-Newtonian laws of motion (for it was he who first suggested that objects in motion retain that motion unless acted upon), and a smattering of Aristotelian and post-Aristotelian intuitions about motion (for instance, John Philoponus’ notion that, in addition to the rules of motion described by Aristotle, one object could impart to another a transient impetus). You also have tables and towers and balls you can roll on them or drop from them. You can perform your own experiments.

The question, then, is how you synthesize something like Newton’s Laws. Jokes about Newton’s extra-scientific interests aside, this is alchemy indeed, and an alchemy which the training most physicists receive (or at least the training I received) does not address.

Newton’s Laws are generally dropped on the first year physics student (perhaps after working with statics for a while) fully formed:

First law: When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.
Second law: The vector sum of the external forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.
Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.

(this formulation borrowed from Wikipedia)

The laws are stated here in terms of a lot of subsidiary ideas: inertial reference frames, forces, mass. Neglecting the reference to mathematical structures (vector sums), this is a lot to digest, and it is hard to imagine Newton just pulling these laws from thin air. It took the species about 2000 years to figure it out (if you measure from Zeno to Newton, since Newton’s work is in some sense a practical rejoinder to the paradoxes of that pre-Socratic philosopher), so it cannot be, as some of my colleagues have suggested, so easy to figure out.

A doctorate in physics takes (including the typical four year undergraduate degree in math, physics or engineering) about ten years. Most of what is learned in such a program is pragmatic theory: how to take a problem statement, or something even more vague, identify the correct theoretical approach from a dictionary of possibilities, and then “turn the crank.” It is unusual (or at least it was unusual for me) for a teacher to spend time posing more philosophical questions. Why, for instance, does a specific expression called the “Action,” when minimized over all possible paths of a particle, find a physical path? I’ve had a lot of physicist friends dismiss my curiosity about this subject, but I’m not the only one interested (see, e.g., the introductory chapter of Lanczos’ “The Variational Principles of Mechanics”).
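
For reference, the statement being puzzled over is the stationary action principle: the physical path between fixed endpoints is the one that makes the action, the time integral of the Lagrangian, stationary (usually glossed as “minimized”), which in turn reproduces the equations of motion:

```latex
% The action and the condition that it be stationary, yielding Euler-Lagrange.
\[
  S[q] \;=\; \int_{t_1}^{t_2} L\bigl(q(t), \dot q(t), t\bigr)\, dt,
  \qquad
  \delta S = 0
  \;\;\Longrightarrow\;\;
  \frac{d}{dt}\frac{\partial L}{\partial \dot q} - \frac{\partial L}{\partial q} = 0 .
\]
```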

What I am getting to here, believe it or not, is that I think physicists are over-prepared to work problems and under-prepared to do the synthetic work of building new theoretical approaches to existing unsolved problems. I enjoy the freedom of having fallen from the Ivory Tower, and I aim to enjoy that freedom in 2016 by revisiting my education from a perspective which allows me to stop and ask “why” more frequently and with more intensity.

Enter Scott Aaronson’s “Quantum Computing Since Democritus,” a book whose title immediately piqued my interest, combining, as it does, the name of a pre-Socratic philosopher (whose questions form the basis, in my opinion, for so much of modern physics) with the most modern and pragmatic of contemporary subjects in physics. Aaronson’s project seems to accomplish exactly what I want as an armchair physicist: stopping to think about what our theories really mean.

To keep myself honest, I’ll be periodically writing about the chapters of this book – I’m a bit rusty mathematically and so writing about the work will encourage me to get concrete where needed.

Atoms and the Void

Atoms and the Void is a short chapter which basically asks us to think a bit about what quantum mechanics means. Aaronson describes Quantum Mechanics in the following way:

Here’s the thing: for any isolated region of the universe that you want to consider, quantum mechanics describes the evolution in time of the state of that region, which we represent as a linear combination – a superposition – of all the possible configurations of elementary particles in that region. So, this is a bizarre picture of reality, where a given particle is not here, not there, but in a sort of weighted sum over all the places it could be. But it works. As we all know, it does pretty well at describing the “atoms and the void” that Democritus talked about.
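
In the usual notation, the “weighted sum over configurations” in that quote is a state vector of complex amplitudes over the possible configurations, and the evolution of an isolated region is unitary:

```latex
% A state as a superposition of configurations, normalized, evolving unitarily.
\[
  \lvert \psi \rangle \;=\; \sum_i c_i \,\lvert i \rangle,
  \qquad \sum_i \lvert c_i \rvert^2 = 1,
  \qquad \lvert \psi(t) \rangle = U(t)\,\lvert \psi(0) \rangle
  \quad \text{with } U^\dagger U = I .
\]
```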

The needs of an introductory chapter, I guess, prevent him from describing how peculiar this description is: for one thing, there is never an isolated region of the universe (or at least, not one we are interested in, I hope obviously). But he goes on to meditate on this anyway by asking us to think about how we interpret measurement where quantum mechanics is concerned. He dichotomizes interpretations of quantum mechanics by where they fall on the question of putting oneself in coherent superposition.

Happily, he doesn’t try to claim that any particular set of experiments can definitively disambiguate the different interpretations of quantum mechanics. Instead, he suggests that by thinking specifically about quantum computing, which he implies gets most directly at some of the issues raised by debates over interpretation, we might learn something interesting.

This tantalizes us to move to chapter 2.