
Mathematics as a single player, evergreen strategy game.

I spend a fair amount of time on the Keith Burgun Games Discord, a community built up around Keith Burgun’s game design theory work. He’s interested, I would say, in designing so-called evergreen strategy games in the vein of Go or Chess. That is, games which facilitate long-term engagement. He is also interested in single player strategy games.

My sense is that these two goals compete pretty strongly with one another. Without providing a full account, I suspect that evergreen strategy games like Go and Chess are evergreen almost entirely because they are multiplayer games. The addition of a human opponent, in my view, radically changes the game design landscape. As such, single player game design is a different beast. This might account for why single player strategy games seem to fall short of evergreen character, where they exist at all.

How might we account for these differences? The basic argument runs as follows: all a multiplayer strategy game must do is provide a state space large enough that, in the presence of intelligent play, there is sufficient richness for a conversation, and a culture of conversation, to arise. I understand multiplayer, competitive strategy games in at least the following way: each player wants to reach a goal while preventing the other player from reaching the same or a similar goal. To do so they must construct and execute a strategy (which encompasses, for our purposes, both a strategy for reaching the goal and a counterstrategy against the other player). The player naturally wishes to conceal their strategy from their competitor, but each move they make necessarily communicates information about that strategy. The vital tension of the game comes from the fact that it forces the competitors into a conversation where each utterance is the locus of two competing but necessary goals: to embody the player’s strategy and to reveal as little about it as possible.

From this point of view the rules of a multiplayer game can be quite “dumb.” They do not, alone, provide the strategic richness. They only need to give a sufficiently rich vocabulary of moves to facilitate the conversation. One way of seeing this is to consider that the number of possible games of Go is vastly larger than the number of games of Go human players are likely to play. Go furnishes a large state space, much of which is unexplored. The players of Go furnish the constraints which make the game live.
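To put rough numbers on that, here is a back-of-the-envelope sketch in Python. The figures are illustrative (the count of strictly legal positions is somewhat smaller than this bound, but of a similar order); the gap is the point:

```python
# Upper bound on Go positions: every assignment of {empty, black, white}
# to the 361 intersections of a 19x19 board. Most such colorings are
# illegal, and games are *sequences* of positions, so the true game
# count is enormously larger still.
positions_upper_bound = 3 ** 361
print(len(str(positions_upper_bound)))  # 173 digits: roughly 1.7e172

# A generous ceiling on human play: ten billion people each finishing
# one game a day for a hundred years.
games_played = 10**10 * 365 * 100
print(f"{games_played:.2e}")  # ~3.7e14 games: a vanishing fraction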

Single player games, even in the era of the computer, which can enforce a large number of rules, struggle to match the richness of multiplayer games for exactly the same reason that computers cannot pass the Turing test: a computer alone cannot furnish a culture or a conversation.

(At this point you may object that computers can play Go and Chess. This is true. But they cannot play like a person. In a way, the fact that AlphaGo plays in ways which surprise expert players of Go demonstrates my point. Playing AlphaGo is a bit like playing against a space alien who comes from an alternative Go tradition. Interpreting a move that AlphaGo makes is challenging because it isn’t part of the evolved culture of Go. In a sense, its moves are in a different language or dialect.)

Terrence Deacon argues, in Incomplete Nature (a very useful book whose fundamental point perhaps fails to land), that we can make useful progress understanding phenomena in terms of constraint rather than in terms of construction. For instance, we can nail down what a game of Go is as much by describing what doesn’t occur during a game as by describing what does. Another way to appreciate this point is to recognize that we can play Go with orange and blue glass beads as well as we can with shell and slate pieces: the precise material construction of the pieces and the board doesn’t matter to the game. The question I want to pose from this point of view is: where do the operating constraints in a game of Go come from?

I think I’ve made a clear argument by this point that the constraints which define any given game of Go come from the players rather than from the rules of Go. The rules merely create a context of constraint which forces the players to interact. By creating a context where each move necessarily (partially) communicates the (hopefully concealed) intent of each player, Go creates a space where a single player, a pair of players, or even a whole community can be said to have a style. Play, then, is more like a literary tradition than like a fully rational analytical process, precisely because, in the presence of such a large true state space of games, play stays near a much smaller, often intuitively or practically understood, effective state space.

Single player games operate in a similar way. Either the player or a computer enforces the rules, but the rules themselves (typically) imply a much larger true state space than the one explored by human players. The difference, of course, is that the player is competing against a much simpler counter-constrainer. In most single player, computer-hosted strategy games the counter-constraining forces are a small number of very simple agents pursuing a handful of distinct goals. If you think of each move as an utterance in a dialogue, as in a two player game, then in a single player game the player is doing worse than having a conversation with themselves: they are speaking to no one, though the game engine might attempt to provide an illusion of conversation. Providing the illusion of culture and conversation is the grand challenge of single player strategy game design.

(Interesting note: from this point of view, games have hardly evolved from the simple (and arguably deeply unsatisfying) text-interpreters of text adventure games.)

Believe it or not, all that was front matter for the following observation which I find myself returning to over and over: Mathematics is perhaps the best example of a single player, evergreen, strategy game-like institution.

Mathematics can plausibly be described as a game. The lusory goal of a mathematical exercise is typically to construct a particular sentence in a formal language using the less-than-efficient means provided by the rules of that formal system. In other words, you could just write out the sentence, but you don’t let yourself do so. You force yourself, using only the formal rules of your system and your axioms, to find a way to construct the sentence. As in real games, the number of possible rewrites the formal system permits is much, much larger than the number you’re actually interested in. In a real sense, the mathematician is doing the heavy lifting when it comes to the practical character of a formal system. Indeed, the community of mathematicians is doing the lifting: it develops an evolving culture of proof strategy which profoundly constrains the typical manipulation of symbols. In this way, the practice of mathematics is much like the play of multiplayer strategy games. There are probably many, many ways to prove a given theorem, assuming it is provable, but exactly because the space of proofs is so large, and because humans are so limited in comparison to it, style evolves as a necessity. It helps us prune probably ineffective strategies.
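To make the shape of this concrete, here is a toy sketch in Python of Hofstadter’s MIU system: a deliberately trivial formal system (not one any working mathematician uses) with one axiom and four rewrite rules, where the “game” is to derive a target string using only the rules.

```python
# Hofstadter's MIU system: strings over {M, I, U}, axiom "MI",
# four rewrite rules. The lusory goal: derive a target string using
# only the rules, even though you could simply write it down.
def successors(s):
    out = set()
    if s.endswith("I"):               # Rule 1: xI -> xIU
        out.add(s + "U")
    if s.startswith("M"):             # Rule 2: Mx -> Mxx
        out.add("M" + s[1:] * 2)
    for i in range(len(s) - 2):       # Rule 3: replace III with U
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])
    for i in range(len(s) - 1):       # Rule 4: delete UU
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])
    return out

# Breadth-first search from the axiom: the space of derivable strings
# balloons, and almost none of it is "interesting" -- which is exactly
# why a culture of strategy (style) is needed to navigate it.
frontier, seen = {"MI"}, {"MI"}
for depth in range(1, 8):
    frontier = {t for s in frontier for t in successors(s)} - seen
    seen |= frontier
    print(f"after {depth} rewrites: {len(seen)} derivable strings")
```

Famously, the target “MU” is not derivable in this system at all: you could simply write it down, but the game forbids you, and no sequence of legal rewrites will ever get you there.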

What insights are there here for us as game designers? It seems to be a maxim, over at the Keith Burgun Discord, that we ought not to let the player design the game. Often this comes up where players are given agency over goals. We might find that players adopt restrictions on their play to intentionally increase difficulty, or design arbitrary goals like playing without losing any health or playing restricted to a subset of the board. If we were to build an analogy to mathematics, it would be as if we designated one class of mathematicians to identify target theorems and then handed those theorems to a distinct set of mathematicians (forbidden to invent their own) to prove. But it is precisely the freedom of mathematicians to invent their own rules and goals that makes mathematics so much like an evergreen game. To use the language of constraint, mathematicians are able to play against themselves: they build the rules of the game, and then they constrain the space of play by playing. Having the freedom to choose goals and means, they can ensure that play remains stimulating even in the absence of an opponent.

In contrast, players of single player, computer-hosted strategy games who are forced to pursue only the goals the designer wants are left to grapple with systems which inevitably offer insufficiently rich constraints. Designers who forbid themselves from considering player-selected goals (and even player modification of rules) are cutting themselves off from design questions like “What sorts of rule sets facilitate interesting goal choices?” Such limitations make their games as dead as the computers which host them. Not entirely dead, but pretty lifeless.

Why Physicists Need Their Space

A few weeks ago I attended the Rutgers/Columbia Symposium on the Metaphysics of Quantum Field Theory. This morning in the shower, a few things I’ve been thinking about snapped into place, connecting that conference with my own hobby-level interest in related questions.

Some background: ontology, meaning “what stuff is fundamental and what stuff is derived?”, is important to the question of the foundations of physics. You can see this going all the way back to Thales (c. 624 – c. 546 BC) who, in the traditional account, is the first “scientist” exactly because he proposed an ontology: water is real, and all other phenomena are derived from water. (Note that the idea of supervenience enters the discussion here: in Thales’ account, for instance, because a rock is fundamentally a sort of water, we can say that the higher-level properties of rock supervene upon the fundamental properties of water in some way.) Contrast this with the atomists, who posit that atoms are fundamental objects and other things supervene upon them. Or contrast it with idealists, like Plato, who claim that in some sense forms are ontologically fundamental and that real things supervene upon them.

Now, one of the many ways to see what is hard about QM is that it challenges the ontological status of space itself. This is, in fact, one of the most important ways it’s challenging from a philosophical point of view, because, for lots of reasons (of which more later), we tend to believe that space is fundamental.

But why do we care so much about space? There are ways of deriving the Schrodinger Equation (which governs the behavior of quantum mechanical systems) from ontologies which don’t include space at all. See Lee Smolin 2014 – Nonlocal Beables. (NB: following John Bell, the fashion is to speak of “beables”, the things a theory takes to really exist, as opposed to mere observables.) It seems like, if we can find a nice way of getting QM from a more fundamental theory, without any of the other weirdnesses (like taking the wave function to be real, for instance), that explains why wave function collapse looks like action at a “distance”, then we ought to take it. After all, if we can show that space isn’t ontologically real, then we shouldn’t be afraid of some non-spacelike aspects to our theory. Space “emerges” from the low-level dynamics of a non-spatial system. It isn’t fundamentally challenging that some of those dynamics won’t be spacelike, and so we don’t need to grind our teeth and rend our garments about Bell’s Inequality or other entanglement-related phenomena: they are just the fundamentally non-spacey nature of reality peeking around the corners of a low energy/classical limit.
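For concreteness, here is the equation being derived, in its standard single-particle form. Notice that the spatial coordinate appears explicitly on both sides; that is exactly what a derivation from a non-spatial ontology has to recover:

```latex
i\hbar \,\frac{\partial \psi(x,t)}{\partial t}
  = \left( -\frac{\hbar^{2}}{2m}\,\nabla^{2} + V(x) \right)\psi(x,t)
```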

Considering that entanglement presents us with some otherwise very unusual epistemological challenges, this seems to me like a great escape hatch. Or at least it did until I spent some time thinking about how important geometry is to physics.

Believe it or not, action at a distance isn’t a new controversy in physics. It goes back way before entanglement was a twinkle in Schrödinger’s eye. The ancient Greeks were obsessed with it, and Descartes and Newton worried a lot about it too. One way of telling the story is to think about planetary motion. The critical insight into planetary motion (this account derives more or less directly from Crowe’s Mechanics from Aristotle to Einstein, 2007) was that objects have momentum (which even Newton conceived of as a kind of force). Without the idea of momentum it’s hard to imagine what keeps planets in their orbits. The most common explanation at the time of Newton was some sort of substance filling space, moving vortically, which carried the planets along in their circular orbits. What was particularly appealing about this to someone like Descartes (for reasons about which I could write a whole other essay) is that it was a theory without action at a distance. The sun might have been the source of a vortex which carried the earth around it, but it was the local motion of the fluid which pushed the earth along, and that motion was transmitted from the sun to the location of the earth by local interactions in the fluid itself. That is, there was no mysterious tendency transmitted over empty space which caused the motion. Everything was local. At the heart of this are the ideas that there is no action at a distance, that inanimate objects don’t move on their own, and, deeper still, that interactions are always local (which is part and parcel of the sense that space is part of our ontology).

The irony is that Newton, the great hero of the scientific perspective, is the less materialist of the two. In the Principia (1687) he makes such enormous progress precisely by dispensing with the notion that he needs to worry about how the interaction between massive bodies is mediated, focusing instead purely on its mathematical description. In a way, Newton is thus in the “shut up and calculate” camp. Newton doesn’t throw space out of the ontology, but he does profoundly weaken its role by at least suggesting that we don’t need to think of every interaction in the universe as mediated in a purely local sense (though he never outright claims the gravitational force is nonlocal). If your goal is to calculate the motion of the planets, this is a great tactic and is, in a way, the essence of good model building: whatever the underlying structure of space-time, it’s certainly true to a high degree of accuracy that gravity appears to act instantaneously across empty space to produce a force on distant objects. (By the way, Max Jammer’s 1957 Concepts of Force has enlightening things to say on this subject, since it grounds the philosophical notion of force in the physiological experience of pushing or pulling, though we are about to see a compelling reason to believe that the gravitational force is nothing like that at all.)
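The mathematical description in question is compact enough to quote in modern notation (Newton himself argued geometrically). Notice that it relates the force only to the masses and their separation; nothing in it says how the influence crosses the intervening space:

```latex
F = G\,\frac{m_1 m_2}{r^{2}}
```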

In a way, we can see Newton’s Principia as leapfrogging science’s ability to calculate far ahead of philosophy’s ability to account for what exactly is happening. In that sense, General Relativity stands to Newtonian Mechanics as some heretofore undiscovered ur-theory stands to Quantum Mechanics: General Relativity provided a kind of philosophical closure to the Cartesian/Newtonian split over the locality of interactions by re-inventing an ontological role for space(time).

General Relativity tells us that no force at all pulls or pushes on the planets. Instead, it says that the planets move the way they do because, when the true geometry of space-time is taken into account, they are simply following what locally looks like the plain old Newtonian rule that objects in motion continue to move in a straight line unless acted upon.
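In modern notation this is the geodesic equation. In locally flat coordinates the connection coefficients vanish and it reduces to \(d^{2}x^{\mu}/d\tau^{2} = 0\): straight-line motion, just as Newton would have it:

```latex
\frac{d^{2}x^{\mu}}{d\tau^{2}}
  + \Gamma^{\mu}_{\alpha\beta}\,\frac{dx^{\alpha}}{d\tau}\,\frac{dx^{\beta}}{d\tau} = 0
```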

Supersymmetry is an attempt to resolve problems in quantum field theory by imagining that every fermion (boson) in the standard model has a bosonic (fermionic) supersymmetric partner. At this point the theory is out of favor: we’ve never seen these supersymmetric particles in accelerators, which means they’d have to be very massive indeed. But one interesting aspect of the theory, developed in a talk by David Baker at the Rutgers/Columbia conference, is that the addition of such supersymmetric particles introduces quantities which you could consider elevating to space-time coordinates (however Grassmann-valued). Why would you want to do that? Well, because it’s very natural to say that space-time symmetries generate or cause physics. This is the essence of General Relativity and of Yang-Mills-style theories, so it underlies both GR and one of the best tools we have for developing useful Quantum Field Theories. Even my passing familiarity with both disciplines is enough to sniff out that these theories are extremely local and geometric. That is practically what differential geometry means.
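For the unfamiliar: “Grassmann-valued” means anticommuting. In superspace formulations the ordinary space-time coordinates are extended by anticommuting partners, which is the sense in which the new degrees of freedom get “elevated” to coordinates:

```latex
x^{\mu} \;\longrightarrow\; (x^{\mu}, \theta^{\alpha}),
\qquad \theta^{\alpha}\theta^{\beta} = -\theta^{\beta}\theta^{\alpha}
```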

That is the point of this essay. Modern physics is so used to treating the geometry of spacetime as ontologically prior that the idea that geometry itself might supervene upon some more fundamental physics is truly challenging. From this point of view, you might prefer simply to declare the wave function real, even though doing so drastically expands the universe (by introducing a vast number of new observers into it, for instance).

Images, Causality, Disassociation, Interactivity and Videogames

I’ve got an eight-month-old. Watching a baby come to terms with the world can teach you a lot of things. For instance, and as a kind of hors d’oeuvre, consider the word “shush.” To an adult human being, it’s an imperative verb which indicates that you should be quiet. To a baby it resembles the sound of blood rushing in the womb and is, therefore, supposed to be calming. As a baby learns that sounds can have arbitrary meaning, the “shush” as simulation becomes the “shush” as symbol: the baby comes to appreciate that we can mean things with the sounds we make.

My baby spends a lot of time feeling the texture of things. In particular, he’s interested in pictures in books, over which he carefully draws his pointer finger, alternating between the fingertip and scraping with the fingernail. It’s not too hard to see that he is curious about the difference between images of things and the things themselves. In particular, he seems to have cottoned on to the fact that things themselves feel a certain way when you touch them, whereas mere images feel like paper or laminate or cardboard, and are more or less undifferentiated qua image with respect to feeling.

When I dwell on this interest, it strikes me how marvelous images really are: they represent a profound collapse of the ordinary causal relationship between light entering our eyes and the objects with which that light has interacted. Wood grain looks like wood grain because it has the physical structure of wood grain. Its dark, striated areas appear as such because the material is ridged, casting some parts into shadow with respect to the source of illumination. A photograph of wood grain inherits the visible properties of the object while separating them from their immediate cause. The visual aspects of a photograph can be easily manipulated (particularly in the modern era) without changing the way the photograph feels, whereas most modifications to actual wood grain meant to accomplish a visual change will also change the physical structure of the object. Our brains, of course, evolved in a context where this relationship between the way we perceive things and the underlying structure of the things themselves is often strong. This is why, when we see a piece of wood, we expect it to feel like a piece of wood. It’s probably why my child is so interested in touching pictures in his books: the breakdown between the visual perception of a thing and any obvious physically relevant structure is novel.

Part of the power of images is related to this detachment from material cause. Things themselves only ever depict (in our senses) that which is literally possible. Images can depict whatever they are designed to depict, whether it’s causally plausible or not. A normal person has the visual form of a person on account of being made up of bones, muscles, fat, and so on, and of having a certain mass and weight. When a human bends their knees and leaps into the air, the height of their leap is, ultimately, a property of all these material causes. Superman, however much he might resemble a person, can leap tall buildings in a single bound, because the resemblance is, in a sense, entirely incidental. A comic book merely depicts physics, and thus may take liberties, while a 100 meter dash is physics. Images can exploit the fact that that which is depictable is much more various than that which is possible.

To take a lurching step towards the point before my baby wakes up from his nap: technology in general has this property of obscuring the relationship between cause and effect. Technology can even be understood primarily in terms of the careful manipulation of cause and effect to accomplish what might otherwise be an unlikely outcome. From this point of view a computer is almost literally a cause/effect obfuscator. It presents to us, the users, a two-dimensional interface on which almost any cause-and-effect relationship can obtain. A real xylophone has the property that larger blocks vibrate at lower frequencies, and so a necessary material relationship between the music and the structure of the xylophone appears. We can easily imagine a simulation of a xylophone where the relationship between apparent block size and the sound each block makes when struck is reversed, or totally random. The piano sits somewhere in between: its keys are all the same size, and the strings which produce the sounds are hidden behind the curtain, so to speak. We can’t as easily infer from the piano that sound is deeply related to vibration, which is in turn related to mass and energy. Computers are the apotheosis of the movement from the xylophone to the piano: their inner workings are, at the human scale, so subtle that no amount of inspection with the senses can reveal how cause and effect are tangled up inside them.
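A minimal sketch of the xylophone point in Python. The scaling law for an ideal vibrating bar (fundamental frequency roughly proportional to one over the square of its length) is real physics, but the constant and the bar lengths here are made up for illustration:

```python
import random

# Toy sketch, not acoustics: in a *physical* xylophone pitch is caused
# by structure (for an ideal bar, frequency scales roughly as 1/length^2).
# In a *simulated* xylophone the mapping is whatever we program; the
# causal link between appearance and sound is severed.
BAR_LENGTHS_CM = [38, 34, 30, 27, 24]

def physical_pitch(length_cm, k=380_000.0):
    # Cause and effect intact: shorter bar, higher pitch.
    # k is a made-up material constant chosen to land near middle C.
    return k / length_cm**2

def simulated_pitch(_length_cm):
    # Cause and effect severed: pitch has nothing to do with size.
    return random.choice([262, 294, 330, 349, 392])

for length in BAR_LENGTHS_CM:
    print(f"{length} cm bar: physical {physical_pitch(length):.0f} Hz, "
          f"simulated {simulated_pitch(length)} Hz")
```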

Armed with these insights, we can put on our game designer’s hat and begin to build up a new way of thinking about what precisely we are doing when we design digital interactive systems. I’d like to make two points. The first is that we often feel alienated from experiences when there is a disconnect between the apparent causal structure of those experiences and their actual evolution in time. A good example is those old physical racing toys you sometimes still see: a steering wheel controls (by means of a connected lever) a plastic car while an image of a road, with obstacles, printed on a loop of paper, is scrolled past a viewing window. The player is expected to avoid collision with these obstacles by virtue of their own understanding of the implied relationship between the objects: cars crash when they strike things like trees or other cars. We quickly grow tired of these sorts of games, not just because we are expected to enforce the rules ourselves (which is also true of games like Chess) but because the causal relationships they do embody are trivial compared to (and distant from) the causal relationships they appear to embody.

The point is that, if we want to engage players, we should provide simulations of causal relationships which are meaningful, and we should avoid both acausal elements (like pure randomness) and discrepancies between depiction and causality. If the presentation of our game suggests, by reference to physical processes with which we are all familiar, that a particular causal relationship is in force in our simulation, then we ought to make that relationship present, or we should eliminate the appearance of that relationship from the presentation.

Canny readers will probably recognize that this goes against philosophies like “juice it or lose it,” which seem to suggest that the experience of play is enhanced by intensely elaborating the appearance of our game elements. A more nuanced position can be developed, however: we can, and ought to, feel free to elaborate on the image our interactive system presents precisely in those ways which underline the causal relationships the system embodies. When a ball strikes a wall, it’s probably good to indicate that with sound, dust particles, or a shaking screen. On the other hand, if we do elaborate 3D modelling of rocks falling down a mountain, but they don’t interact with our player’s avatar, then we’ve introduced the appearance of a relationship that our system fails to deliver on.

None of this is to say that such appearances might not lead to more saleable products or that they might not provide pleasure to players. That leads me to my second, moral, point. We, as game designers, ought to respect our players by giving them interactive systems which communicate clearly about the relationships they embody for exactly the same reasons that we ought to communicate honestly in real life or in any other art form.

This isn’t to say that our simulations have to correspond to reality or be as realistic as possible. On the contrary, if we wish to explore systems which deviate from reality with our players, we must take even greater care to harmonize the representation of those systems with their underlying structure. We might dazzle players for a while with elaborate audiovisuals, but unless those operate in concert with the causal structure of our games, we’ll almost certainly have wasted their time (or, at the very least, missed an opportunity to provide real interactive value).