Yawning

The baby was fussy all morning, and when he finally fell asleep in the crook of his mother’s arm after nursing, we were scared to leave him alone in case the silence woke him up. I made carbonara downstairs, ate, and then went to lie beside him reading while Shelley took her portion.

As I repositioned my leg, my knee popped loudly, startling the baby. He stretched his arms above his head and pawed at his face with the backs of his hands. These gestures were familiar to me from my own body. I had seen him sneeze and yawn, too. I imagined for a moment that I had given these things to him, but that transposition made a deeper truth clear.

My cat, who slept above us, on a table over the bed we had arranged on the floor, stretched and yawned. He sneezes. When we turn the lights on at night to change Felix’s diaper, he sprawls onto his stomach and covers his eyes with his paws, sulkily. These gestures, taught to us by no one, inherent in us, which you could have observed in my child minutes after he was born, belong to an unimaginably ancient process of which we are merely brief manifestations.

Human beings tie themselves into knots or grind themselves to featureless lumps, struggling to connect with something vast and ancient. We don’t stop to think that each time we yawn we are in contact with something profound and atavistic, something older than history, bigger than the merely human.

The Ethics of Game Design

In the next week or so, I’ll be on the Dinofarm Games Community Podcast talking about the ethics of game design. My baby is just one week old, though! So I might not be as coherent there as I’d like to be. As such, I thought I’d collect a few notes here while they are still in my head.

As a preamble: there are lots of ethical implications of games that I don’t discuss here. Particularly social ones: since games often depict social and cultural situations (like novels, plays or television shows), similar ethical concerns operate for games as for those artifacts. Here I’m specifically interested in those special ethical questions associated with games as interactive systems.

The question I’m interested in is: “What are the ethical obligations of a game designer, particularly to the player?” In a way, this is an old question in a new disguise, recognizable as such since the answer tends to dichotomize in a familiar way: is the game designer supposed to give the player what they want or is she supposed to give the player that which is good for them?

Let’s eliminate some low-hanging fruit: if we design a game which is addictive, in the literal sense, I think most people will agree that we’ve committed an ethical lapse. There are a few folks out there with unusual or extreme moral views who would argue that even a game with bona fide addictive qualities isn’t morally problematic, but to them I simply say we’re operating with a different set of assumptions. However, the following analysis should hopefully illuminate exactly why we consider addictive games problematic, as well as outline a few other areas where games’ ethical impact is important.

I think the most obvious place to start with this kind of analysis is to ask whether games are leisure activities, recreation, or providers of practical value. By leisure activity I mean any activity which we perform purely for pleasure; by recreation, I mean an activity that is performed without an immediate practical goal but which somehow improves or restores our capacity to act on practical goals; and by practical value, I mean something which immediately provides for a concrete requirement of living.

It’s a little unclear where games fall in this rubric. It is easiest to imagine that games are purely leisure activities. This fits the blurb provided by the Wikipedia article and also dovetails, broadly, with my understanding of games in public rhetoric. Categorizing games as purely leisure activities seems to justify a non-philosophical attitude about them: what is the point of worrying about the implications of that which is, at a fundamental level, merely a toy¹?

Point number one is that even toys, which have no practical purpose but to provide fun, are subject to some broad ethical constraints. It isn’t implausible to imagine that we could implant an electrode directly into a person’s brain such that the application of a small current to that electrode would produce, without any intervening activity, the sensation of fun. We could then give the person a button connected to that electrode and allow them to push it. This is technically an interactive system, perhaps even a highly degenerate game. It is certainly providing the player with the experience of fun, directly. However, it’s likely that a person so equipped would forgo important practical tasks in favor of directly stimulating the experience of fun. If we gradually add elements between button presses and the reward, or between the electrodes and the reward circuitry, we can gradually transform this game into any interactive system we could imagine. Clearly, at some point, the game loses the property that it overwhelms the player’s desire to perform practical tasks. That line is the line between ethical and non-ethical game design.

In other words, game designers subscribing to the leisure theory of games are still obligated, perhaps counter-intuitively, to make their games sufficiently unfun that they don’t interfere with the player’s practical goals.

That leaves two interpretations of game value: the recreational and the practical.

Of these, the idea of the game as recreation may be closest to what is often discussed on the Dinofarm Discord channel. It’s also frequently the narrative used to justify non-practical games. You’ve likely heard or even used the argument that digital games can improve hand-eye coordination or problem-solving skills. This interpretation rests on there existing an operational analogy between the skills required to play a game and those required to perform practical tasks. There is a lot of literature on whether such a link exists and what form or forms it takes.

If no such link exists we can rubbish this entire interpretation of games, so it’s more interesting to imagine the opposite (as at least sometimes seems to be the case). When a link exists, the value proposition for a game is: this game provides, as a side effect of play, a practical benefit. Why the phrase “as a side effect of play”? Because, if the purpose of the game is to provide the practical benefit, then we must always compare our game against some non-game activity which might provide more of that same benefit for an equivalent effort.

To choose a particularly morally dubious example, we might find that playing Doom improves firing range scores for soldiers. But shouldn’t we compare that to time spent simply practicing on the firing range? Without some further argumentative viscera, this line of thinking seems to lead directly to the conclusion that if games are recreation, we might always or nearly always find some non-game activity which provides a better “bang” for our buck.

Elaborating on this line of argument reveals what the shape of the missing viscera might be. Why is it plausible that we could find some non-game activity that works as well as or better than any given game at meeting a practical end? Because games must devote some of their time and structure to fun and, as such, seem to be less dense in their ability to meet a concrete practical goal. In Doom, for instance, there are a variety of mechanics which make the game an exciting experience but which don’t have anything to do with the target-fixation behavior we are using to justify it.

But we can make an argument of the following form: a purely practical activity which results in the improvement of a skill requires an amount of effort. That effort might be eased by sweetening the activity with some fun elements, converting it to a game and allowing less effort for a similar gain of skill.

On this interpretation, the ethical obligation of the game designer is to ensure that whatever skill they purport to hone with their game is developed for less effort than the direct approach. If they fail to meet this criterion, then they fail to provide the justification for their game.

The final interpretation we need to consider is that games themselves provide a direct, practical, benefit. I think this is a degenerate version of the above interpretation. It turns out to be difficult to find examples of this kind of game, but they do exist. Consider Fold.it, a game where player activity helps resolve otherwise computationally expensive protein folding calculations.

In this kind of game the developer has a few ethical obligations. The first is to make sure that the fun the game provides is sufficient compensation for the work the player has done, or to otherwise make sure the player’s play is given with informed consent. For instance, if we design a game that makes it fun for players to solve traveling salesman problems which, for some reason, we are given a cash reward for solving, a good argument can be made that, unless the game is exceptionally fun, we’re exploiting our player base. If the game were really so fun as to justify playing on its own terms, why wouldn’t we simply be playing it ourselves?

Game designers of this sort also need to make sure that there isn’t a more efficient means to the practical end. Since the whole purpose of the game is to reach a particular end, if we discover a more efficient way to get there, the game is no longer useful.

I think there is probably much more to say on this subject but I had a baby a week ago and three hours of sleep last night, so I think I will float this out there in hopes of spurring some discussion.

The Dinofarm Community Interpretation

At the end of the podcast we decided on a very specific definition of games (from an ethical standpoint). We (myself and users Hopenager and Redless) decided games could be described as a kind of leisure whose purpose is to produce the feeling of pleasure associated with learning. Since this is a leisure interpretation, we aren’t concerned directly with practical value, which I think squares with the way we typically think of games. However, as a leisure interpretation, we need a theory of how games operate in the context of the player’s larger goals.

Let’s sketch one. What circumstances transpire in a person’s life where they have the desire for the pleasure associated with learning but are unable to pursue that desire in productive terms? One possibility is fatigue: after working on productive activities, a person might have an excess of interest in the experience of learning but a deficit of energy to pursue those productive activities. In that situation, a game can satisfy the specific desire with a lower investment of energy (which could mean here literal energy or just lower stress levels – games, since they aren’t practical, are typically less stressful than similar real world situations).

Once the game is completed, the desire ought to be satisfied but not stimulated, allowing the player to rest and then pursue practical goals again.

Again, there are probably other possible ways of situating ethical games in this interpretation, but I think this is a compelling one: games should satisfy, but not stimulate, the desire to learn, and only in those situations where that desire might not be more productively used, as in the case of mental exhaustion or the need to avoid stress.

Games shouldn’t have a “loop” which intends to capture the player’s attention permanently. Indeed, I think ethical games should be designed to give up the attention of the player fairly easily, so they don’t distract from practical goals.

And them’s my thoughts on the ethics of game design.

¹: Note that there is a loose correspondence between our rubric and The Forms. Toys, roughly, seem to be objects of leisure, puzzles and contests are arguably recreation, and games are, potentially at least, objects of real practical value. Maybe this interpretation of games is the one underlying “gamification” enthusiasts.

Accounting for Turtles

When we bought the land, the irrigation pond, formed at the lowest point of the property by an earthen dam now overgrown with pines, cherry trees, and hobbles of tangled honeysuckle, had failed. After cutting our way through the tall grass between the pond and the road and wading out into the swamp mud which now marked out the area where water had been, we found it: a four-inch, rusted-out, galvanized steel pipe down which water fell in a cold, sonorous trickle, despite the heat. Pieces of the rusted pipe, too few and small to form the whole of the missing riser, which otherwise seemed to have almost completely disintegrated, littered the area.

A year later, after we had repaired the riser with a clean new piece of white PVC, an orange bucket and twenty pounds of concrete mixed with muddy water, a storm rolled in over the ridge to the northwest and I dreamed that I saw, from the porch, a huge turtle making its slow way through the grassy shallow ditch from the road down to the pond.

In May, and for several months afterwards, turtles, seeking new habitats or mates or following their own silent intuitions, make their way across the rural roads around our home. You see them standing on the side of the road as cars rush past in the morning, as if contemplating making a run for it.

Or you see their bodies, mangled or crushed into chunks of muscle and shell, attracting flies in the afternoon heat which melts the tar between the pebbles of the asphalt. That summer I found a special sympathy developing for those animals. The natural defenses of such animals give them a relaxed, even clumsy, attitude which doesn’t prepare them for the dangers of living among humans. Whenever I saw a turtle furtively planning a trip across a road I would pull over and, using a camouflage work glove with black, spray-on latex grips that I kept in the car for the purpose, move it across the road – usually deep into the grass on the other side, to discourage a return trip.

A few months after I dreamed of the enormous turtle I took a canoe out onto the water to inspect the new riser. As I got close I saw a pale yellow something sticking out from the top. It was a turtle which had gotten stuck, head first, down the pipe. It was dead, and while its feet and shell had been baked and desiccated by the sunlight, its head was down in the trickling darkness and covered in a film of almost airy mucous that made me think of the ectoplasmic expulsions of spiritualists.

After that day I attached a foot-long, perforated PVC section to the top of the riser so that other animals wouldn’t get sucked in. I also started to keep a tally of the number of turtles I picked up and moved across the road and the number I saw killed or already dead.

This practice of counting turtles exposes you to suffering.

Once, unable to stop immediately to move a turtle, I watched the truck behind me pass it harmlessly only for its trailer to catch its edge and send it hurtling into the ditch alongside the road. Similar scenes often played out – you see the turtle crushed by the car behind you, or, after managing to find a place to turn around, you find only pieces. On one occasion, a turtle which was sitting at the side of the road, as though ready to cross, had already been hit. It seemed whole, but there were cracks along the seams of its shell. I carefully moved it under a tree. I wondered for some time whether turtles could survive such a thing or if it died of blood loss or dehydration, its essence sublimating off into the summer air.

State of the Life

Here is where I am in July of 2017.

Baby Time

My spouse and I are having a baby in a few months. It’s hard to know what to say about this since, in addition to being highly personal, it involves, in principle at least, the interests of at least two other human beings. I will say this. When I got married and began living with my spouse, I felt, ironically, that for the first time in my life, I had to live not just with her, but with myself. By that I mean that my own emotional state inside my home no longer radiated away harmlessly, but was responded to and echoed back at me. Until that moment in my life, I think I’d always tried to ignore, suppress or dissipate any emotional activity, but suddenly it was clear that I could no longer afford to ignore a part of myself which could affect my intimate partner.

Having a kid is like that but times ten. My partner is, at least, an adult, with her own independent existence, ability to ignore my worst qualities and even sympathize with my imperfections. A child, on the other hand, experiences a more unbalanced relationship with their parents, one, furthermore, made more fraught by material dependence and a lack of a frame of reference. I always think, at this point, of Huxley’s Island, wherein children are raised by groups of people so that they don’t experience unalloyed exposure to the peculiarities of their parents. Contemporary western civilization, so obsessively organized around the patriarchal family unit, seems perverse in comparison. Adding to this sense of pressure is our extremely rural location and hence comparative isolation. Luckily we have some great neighbors upon whom I am (hopefully gently) prevailing to have children.

At any rate, each day I find that I turn more scrutiny upon myself.

Health

I’m thirty-six. Sometime in high school I started doing push-ups in the morning. In college I joined a rowing team and in so doing was exposed, perhaps for the first time, to the pleasures of physical conditioning. With a few notable exceptions in my life since then, I’ve been more or less aggressively fit. Starting at the beginning of this year, though, I’ve finally recovered a fairly aggressive routine of physical fitness which looks something like this:

  • Monday: fast mile (currently 6:35s) + weightlifting
  • Tuesday: rowing intervals. 24 minutes of rowing (plus warm up and cool down). Three minute intervals consisting of a hard sprint for one minute (split 1:53) and a cool down (split 2:00). Over 24 minutes I average a split of 1:58 or so.
  • Wednesday: slow run (4 miles at a 7:30s pace) or a leisurely row for about a half hour.
  • Thursday: same as Tuesday
  • Friday: same as Monday

I started the year at about an eight minute mile and I am slowly peeling off seconds. I seem to recall having run a sub six-minute mile in high school or college some time. I’m curious whether I can get back down to that time before the baby comes.

Sporadically I am working on my 2k sprint on the rowing machine. I’m doing about a 7:35 these days. I feel like I am close to my maximum without more aggressive cross training. So far I’ve never experienced any significant chest pain so I assume I am not going to die from exercise any time soon.

In other health news, I’m drinking too much coffee. I typically cut back in the summertime, but I have an hour-long drive to work and it’s boring. Attempts to drink less coffee have left me a little frightened of the drive.

Intellectual Life

I’m spread pretty thin. In this category I place game development, generative art work, technical skills related and unrelated to work, philosophy and physics.

Games and Game Design

On the practical subject of game design, my game The Death of the Corpse Wizard came out about a month ago – I’ve sold about 40 copies without doing much advertising. More importantly, since I don’t make my living as a game designer or developer, I think I’ve created a game with some substance, not entirely devoid of genuine value. I’m still contemplating to what degree I want to keep developing Corpse Wizard or whether I want to move on to greener pastures.

On the less practical question of game design theory I’m working on trying to understand whether we can bring quantitative techniques to bear on the question of what constitutes a good strategy game. In particular, I’m trying to nail down exactly what sorts of properties the phase space of a game has, at each decision point, that make games feel fun.

I can give you a sense of what sorts of questions I am trying to think about quantitatively. It’s typically understood that a game ought to present a player with about a 50% chance of winning if it’s to be fun. It’s better to state that in the negative: the outcome of a game shouldn’t be a foregone conclusion. You can see this at work in two-player games, where matchmaking is always employed.

Incidentally, there is a pseudo-paradox here: the point of a two-player game appears to be to determine whether player 1 is better than player 2. Yet, paradoxically, we call only those games where a given player has a 50% chance of winning “fair.” But if each player has a 50% chance of winning, the outcome seems to be random, which means it cannot teach us which player is better! I leave it as an exercise to the reader to puzzle out what, if any, resolution is possible.

Anyway, suppose we are dutiful game designers. On the first turn of our game, the player’s chance of winning must necessarily be 50%, then. One question I am interested in is: what does that chance of winning look like as a function of time? Is it flat at 50% until the end of the game? This seems unlikely. Why? Because when we play a game we are, at each turn, asking what move raises our chances of winning! If our chance of winning is flat, then the game will feel meaningless, because no action will change the win rate. On the other hand, other paradoxes seem to manifest: suppose instead that a skilled player almost always chooses a move which increases her chance of winning. If that is the case, then at some point in the game the chance of winning will reach, say, 90%. But at this point, the game’s outcome seems like a foregone conclusion! Why keep playing if almost all possible move sequences from turn N result in a win? In other words, it seems like games become more boring towards their ends, if we define boring as the property that their outcome is easy to predict.

In other words: it seems like the desire to make games non-boring is in tension with the desire to make the game playable. If the game is playable, then at each turn the player can, in principle, increase her chance of winning. If she can always increase her chance of winning, then, at some point, the game will become boring.

All this has to do with the way that individual moves change the win rate. This is, for simple games, anyway, tractable numerically. So I’m working on some experiments to try and suss out some of the structure of games and how it changes as we change the rules.
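
To make that concrete, here is a minimal sketch of the sort of computation involved. The game is a stand-in of my own invention, not one of the experiments mentioned above: a race to 10 points in which the active player scores 1 or 2 with equal probability. The state space is small enough that the exact win probability at every state falls out of a short recursion, and we can watch it evolve over a single playthrough:

```python
from functools import lru_cache
import random

TARGET = 10  # first player to TARGET points wins this toy game

@lru_cache(maxsize=None)
def win_prob(mine, theirs):
    """Exact P(the active player eventually wins) from this state.
    Each turn the active player scores 1 or 2 with equal probability."""
    if theirs >= TARGET:
        return 0.0  # the opponent has already won
    if mine >= TARGET:
        return 1.0
    # After scoring, the roles swap, so look at the opponent's chances.
    return 1.0 - 0.5 * (win_prob(theirs, mine + 1) + win_prob(theirs, mine + 2))

# Trace one random playthrough, recording P(player 0 wins) at each turn.
random.seed(0)
scores, active = [0, 0], 0
trajectory = [win_prob(scores[0], scores[1])]
while max(scores) < TARGET:
    scores[active] += random.choice((1, 2))
    active = 1 - active
    if active == 0:
        trajectory.append(win_prob(scores[0], scores[1]))
    else:
        trajectory.append(1.0 - win_prob(scores[1], scores[0]))

print([round(p, 2) for p in trajectory])
```

Even in this toy, the trajectory drifts from its opening value toward 0 or 1 as the game runs out of uncertainty – the “foregone conclusion” tension described above, in miniature.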

Physics

As mostly a hobby and an attempt to keep all those years of studying physics fresh, I’ve become interested in getting a good grasp of the interpretation of Quantum Mechanics. To that end I’ve started planning and producing a series of lectures covering RIG Hughes’ book “The Structure and Interpretation of Quantum Mechanics.” The book is very good (I’m about halfway through, in terms of deep understanding). About the only complaint I could make about it is that the introductory chapters do a good job of comparing and contrasting classical and quantum mechanics, whereas I think the more interesting comparison is between classical probabilistic mechanics and quantum mechanics. Both theories operate naturally on Hilbert spaces. Classical probabilistic mechanics seems to me to have an unambiguous interpretation (though see: https://arxiv.org/pdf/physics/0703019.pdf) but obviously there are differences between classical probabilistic physics and quantum mechanics.

Note that the ordinary formulation of QM makes this comparison non-trivial. I think of it this way: suppose we have a classical 1D system with N particles. Each has two degrees of freedom, its position and momentum, so we need 2N numbers to represent the classical state. If we imagine shrinking this system down (or engaging in some sort of metaphysical transition) so that the system becomes quantum mechanical, each particle requires a wave function which, in open space, has an infinite number of values, one for every point in space (for instance). That is, our 2N numbers must become N*∞. It seems like we’ve lost a factor of two. But we haven’t – each of those numbers in the wave function is complex valued, so, apart from the fact that complex numbers have structure which in some ways makes them seem like less than the sum of their (real and imaginary) parts, we’re back to where we started.

Contrast that with thinking about a probabilistic description of the classical system. In that case, we simply take each observable quantity (of which there are 2N) and create a probability distribution, which has an infinite number of values per observable. So we have 2*N*∞ numbers to deal with. Rather than N wave functions, which serve as a combined representation of position and momentum, we have 2*N probability distributions, each of which is mapped directly onto a classical observable.

Three questions, then:

  1. Can we find a representation for Quantum Mechanics which is directly comparable to Classical Mechanics?
  2. Can we find a representation for Classical Mechanics which is directly comparable to Quantum Mechanics?
  3. In either of the above cases, what precisely accounts for the differences between the classical and quantum mechanical pictures?

Since I’m not smart enough to even pose questions which haven’t been posed before, I think, after enough reading, I can answer these questions.

  1. Yes – the Phase Space Formulation of Quantum Mechanics uses the Wigner–Weyl transform to map the wave function to a quasi-probability distribution on position/momentum phase space.
  2. Yes – just create N wave functions from 2N probability distributions by adding Q + iP together for each particle.
  3. In the first case the critical distinction between the quasi-probability distributions and the classical probability distributions is that the former sometimes take values less than 0. In the second case the quantum mechanical system still admits no dispersion-free states, whereas any combination of probability distributions is allowed in the classical case. It would be interesting to work out the mathematics, but the requirement that no state is dispersion-free, which has to do with the operators which represent position and momentum and the Born Rule, imposes a constraint on the types of momentum probability distributions which can coexist with each particular position probability distribution.
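
The negativity claimed in (3) is easy to check numerically. The sketch below is my own illustration, not anything from Hughes: it works in harmonic-oscillator units with ħ = 1, picks the first excited oscillator state purely as an example, and evaluates the Wigner function at the phase-space origin, where that state is famously negative.

```python
import numpy as np

# Symmetric grid for a 1D wave function (harmonic-oscillator units, hbar = 1).
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

# First excited harmonic-oscillator state, psi_1(x) proportional to x*exp(-x^2/2).
psi = x * np.exp(-x**2 / 2)
psi /= np.sqrt(np.sum(psi**2) * dx)  # normalize on the grid

# At the phase-space origin the Wigner transform reduces to
#   W(0, 0) = (1/pi) * integral of psi(y) * psi(-y) dy,
# and psi(-y) = -psi(y) for this odd state, so the integral is -1.
W00 = np.sum(psi * psi[::-1]) * dx / np.pi

print(W00)  # about -1/pi: a "probability density" below zero
```

No classical probability distribution over phase space could produce a value below zero, which is exactly the distinction point (3) leans on.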

Anyway, if there were some miraculous surfeit of free time in my future, I’d like to spend some of it working out these ideas in detail. I’m sure it would be educational for me.

Generative Artwork

Since Clocks I haven’t undertaken a single large generative art project with a coherent theme. I’m still interested in the themes of that project: minimizing artificial randomness in generative systems in favor of exploring patterns implicit therein.

On the other hand, I have worked on a few interesting little etudes:

  1. Ceatues, a system built on coupled games of life.
  2. Spin, a sort of continuous, tune-able version of Langton’s Ant
  3. Meat is Mulder, an experiment in piecing together.

And I’ve been lucky enough to lecture a few times at the soon to be defunct Iron Yard:

  1. Generative Music with Javascript
  2. Making Generative Art with Javascript

More and more I see generative artwork and game design as tightly related fields. The difference lies entirely in the absence of direct player interaction with the generative artwork. But the same quality of lying just at the edge of predictability, which produces a sense of life in a generative artwork, generates interesting player situations in games.

I suppose I’m more interested in game design than generative art at the moment, but maybe something will strike me. The one big advantage of generative artwork is that it can be easier to work on in small bursts.

Emotional Health

I suppose the fact that I have left this section for last indicates a bias which characterizes this entire component of my life. That bias is that I tend not to reflect deeply or frequently about whether I am happy or not and, when I do so reflect, I tend to do so with my prefrontal cortex, so to speak.

I suppose I am happy from that point of view. I have a good relationship with my partner, a child on the way, a beautiful home and a job which is, for the most part, both reasonable and well compensated.

When I reflect deeply on my life, however, I wonder. I wonder first whether happiness really matters and I wonder whether I would or could be happier if I had a career which more accurately reflects both my gifts and my interests (two categories which don’t always overlap).

Impending fatherhood encourages reflection. You can’t help but wonder not how your child will see you, but how your example will affect your child’s conception of the world. Suddenly all your negative qualities, your petty unhappinesses, sloth and unkemptness are in sharp focus. A child ought not be exposed to a passenger seat full of empty coffee cups. What sort of universe is it where your father’s mood sours because his tiny video game hasn’t won widespread acclaim? It seems so easy to live for the approval of others until you feel the keen but naive eye of childhood bearing down on you.

My big hope is that I’ll rise to this challenge, strip off my pettiness without losing those qualities which make living as myself possible.

Goals, Anti-Goals and Multi-player Games

In this article I will try to address Keith Burgun’s assertion that games should have a single goal and his analysis of certain kinds of goals as trivial or pathological. I will try to demonstrate that multi-player games either reduce to single-player games or necessitate multiple goals, some of which are necessarily the sorts of goals which Burgun dismisses as trivial. I’ll try to make the case that such goals are useful ideas for game designers as well as being necessary components of non-trivial multi-player games.

(Note: I find Keith Burgun’s game design work very useful. If you are interested in game design and have the money, I suggest subscribing to his Patreon.)

Notes on Burgun’s Analytical Frame

The Forms

Keith Burgun is a game design philosopher focused on strategy games, which he calls simply games. He divides the world of interactive systems into four useful forms:

  1. toys – an interactive system without goals. Discovery is the primary value of toys.
  2. puzzle – bare interactive system plus a goal. Solving is the primary value of the puzzle.
  3. contests – a toy plus a goal, all meant to measure performance. Measuring performance is the primary value of contests.
  4. games – a toy, plus a goal, plus obfuscation of game state. The primary value is in synthesizing decision making heuristics to account for the obfuscation of the game state.

A good, brief, video introduction to the forms is available here. Burgun believes a good way to construct a game is to identify a core mechanism, which is a combination of a core action, a core purpose, and a goal. The action and purpose point together towards the goal. The goal, in turn, gives meaning to the actions the player can take and the states of the interactive system.

On Goals

More should be said on goals, which appear in many of the above definitions. Burgun has a podcast which serves as a good long form explication of many of his ideas. There is an entire episode on goals here. The discussion of goals begins around the fifteen minute mark.

Here Burgun provides a related definition of games: contests of decision making. Goals are prominent in this discussion: the goal gives meaning to actions in the game state.

Burgun raises a critique of games which feature notions of second place. He groups the goals of such games into a category of non-binary goals and gives an example to clarify the discussion: goals of the form “get the highest score.”

His analysis of why this goal is poor is that it seems to imply a few strange things:

  1. The player always gets the highest score they are capable of because the universe is deterministic.
  2. These goals imply that the game becomes vague after the previous high score is beaten, since the goal is met and yet the game continues.

The first applies to any interactive system at all, so it isn’t a very powerful argument as I understand it. Take a game with the rules of Tetris, except that the board is initialized with a set of blocks already in place. The player receives a deterministic sequence of pieces and must clear the initial blocks, at which point the game ends. This goal is not of the form “find the highest score” or “survive the longest,” but the game’s outcome is already determined by the state of the universe at the beginning of the game. From this analysis we can conclude that if (1) constitutes a downside to the construction of a goal, it doesn’t apply uniquely to “high score” style goals.

(2) is more subtle. While it is true that in the form suggested, these rules leave the player without guidelines after the goal is met, I believe that in many cases a simple rephrasing of the goal in question resolves this problem. Take the goal:

G: Given the rules of Tetris, play for the highest score.

Since Tetris rewards you for clearing more lines at once and since Tetris ends when a block becomes fixed to the board but touches the top of the screen, we can rephrase this goal as:

G': Do not let the blocks reach the top of the screen.

This goal is augmented by secondary goals which enhance play: certain ways of moving away from the negative goal G' are more rewarding than others. Call this secondary goal g: clear lines in the largest groups possible. Call G' and goals like it “anti-goals.”

This terminology suggests a definition.

If a goal is a particular game state towards which the player tries to move, an anti-goal is a particular state which the player is trying to avoid. Usually anti-goals are of the form “Do not allow X to occur,” where X is related to a (potentially open-ended) goal.

Goals of the “high score” or “survive” variety are (or may be) anti-goals in disguise. Rephrased properly, they can be conceived of in anti-goal language. Of course there are good anti-goals and bad ones, just as there are good goals and bad goals. However, I would argue that the same criterion applies to both types of goals: a good (anti-)goal is one which gives meaning to the actions a person is presented with over an interactive system.
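The definition above can be made concrete with a small sketch in Python (the names and the toy state are mine, not Burgun’s): a goal or anti-goal is just a predicate over game states, the difference being whether the player moves towards or away from the states where it holds.

```python
# Illustrative sketch only: goals and anti-goals as predicates over
# game states, using a toy Tetris-like state.

def anti_goal_reached(state, top=20):
    # G': "Do not let the blocks reach the top of the screen."
    # Returns True when the state the player must avoid has occurred.
    return state["max_height"] >= top

def secondary_goal_score(state):
    # g: "clear lines in the largest groups possible" -- grades the
    # different ways of moving away from the anti-goal.
    return state["last_clear_size"]

mid_game = {"max_height": 12, "last_clear_size": 4}
lost_game = {"max_height": 20, "last_clear_size": 0}

print(anti_goal_reached(mid_game))    # False: still avoiding G'
print(anti_goal_reached(lost_game))   # True: the anti-goal state occurred
print(secondary_goal_score(mid_game)) # 4: a four-line clear
```

The point of the sketch is that both kinds of rule do the same formal work: they partition game states into meaningful and meaningless ones, which is what gives actions their significance.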

Multi-Player Games and Anti-Goals

I believe anti-goals can be useful in game design, even in the single-player case. In another essay I may try to make the argument that anti-goals must be augmented with mechanics which tend to push the player towards the anti-goal, against which the player must do all the sorts of complex decision making that produces value for players.

However, there is a more direct way of demonstrating that anti-goals are unavoidable aspects of games, at least when games are multi-player. This argument also demonstrates that games with multiple goals are in a sense inevitable, at least in the case of multi-player games. First let me describe what I conceive of as a multi-player game.

multi-player game: A game where players interact via an interactive system in order to reach a goal which can only be attained by a single player.

The critical distinction I want to make is that a multi-player game is not just two or more people engaged in separate contests of decision making. If there are no actions mediating the interaction of players via the game state, then what is really going on is that many players are playing many distinct games. A true multi-player game must allow players to interact (via actions).

In a multi-player game, players are working towards a win state we can call G. However, in the context of the mechanics which allow interaction, each player is also playing against a set of anti-goals {A}, one for each player besides themselves. These goals are of the form “Prevent player X from reaching goal G.” For a game to really be multi-player, there must be actions associated with each anti-goal in {A}. Hence, anti-goals are critical ingredients of successful multi-player game design and are therefore useful ideas for game designers.

An argument we might make at this point is that if players are playing for {A} and not explicitly for G then our game is not well designed (for instance, it isn’t elegant or minimal). But I believe any multi-player game where a player can pursue G and not concern herself with {A}, even in the presence of game actions which allow interaction, is a set of single-player games in disguise. If we follow our urge to make G the true goal for all players at the expense of {A}, then we may as well remove the actions which mediate between players, and at that point we may as well be designing a single-player game whose goal is G.

So, if we admit that multi-player games are worth designing, then we also admit that at least a family of anti-goals are worth considering. Note that we must explicitly design the actions which allow the pursuit of {A} in order to design the game. Ideally these will be related and work in accord with the actions which facilitate G but they cannot be identical to those mechanics without our game collapsing to the single player case. We must consider {A} actions as a separate (though ideally related) design space.

Summary

I’ve tried to demonstrate that in multi-player games especially, anti-goals, which are goals of the form “Avoid some game state,” are necessary, distinct goal forms worth considering by game designers. The argument depends on demonstrating that a multi-player game must contain such anti-goals or collapse to a single-player game played by multiple people but otherwise disconnected.

In a broader context, the idea here is to get a foot in the door for anti-goals as rules which can still do the work of a goal, which is to give meaning to choices and actions in an interactive system. An open question is whether such anti-goals are useful for single player games, whether they are useful but only in conjunction with game-terminating goals, or whether, though useful, we can always find a related normal goal which is superior from a design point of view. Hopefully, this essay provides a good jumping off point for those discussions.


Amateur Notes on “Quantum Mechanics as Classical Physics”

I am slow to mature. That is why I squandered myself in graduate school. I could have embraced the opportunity to think critically about the philosophy of physics, in which I was at least up to my knees. I instead glibly dismissed philosophy as secondary to prediction. Quantum Mechanics poses the greatest and most interesting philosophical problems and only now, when graduate school is vanishing on the horizon or, at any rate, eclipsed by towering pragmatics racing towards me (mortgages, careers, children), am I taken, with ever more frequency, by thoughts of the philosophy of physics.

Compensating for this lack of remit to study is a comparative freedom of choice about how I study. Reading that my graduate adviser would have deemed frivolous I am now free to pursue for pleasure. Hence Charles Sebens’ 2013 paper “Quantum Mechanics as Classical Physics,” which develops a purely classical interpretation of Quantum Mechanics of a novel, Bohmian-flavored variety.

Interested readers should read the version on the arXiv. I can’t hope to do it justice with a quick and probably inelegant summary here, but the basic idea is to create a sort of supererogatory interpretative framework for Quantum Mechanics by adding not a single Bohmian particle but one for each of many universes, in such a way that the dynamics are preserved. One then cleverly realizes that the so-called “Pilot Wave,” which corresponds to the Wave Function in more ordinary interpretations, can be completely removed, replaced instead by a regular Newtonian force between the Bohmian trace particles.

This results in a many-universe interpretation of Quantum Mechanics (with the same predictions as any other interpretation) but without a wave function. I’m interested in one aspect of this interpretation: worldlines never cross in this way of thinking. So if we jump up and up and up to slightly absurd questions like “Are there me’s in other universes who have made different decisions than I have?” the answer is “no,” in the following sense: because worldlines never cross, there was never a time when two universes (and hence two versions of yourself) shared exactly the same state and then diverged. In other words, in each universe, while there may be many beings who resemble any individual in many respects, none of them share identical pasts. If you resent some decision in the past, as I resent not thinking about philosophy more in graduate school, and torture yourself by imagining some parallel person who made different decisions (with the help of some vague thoughts about the Interpretation of Quantum Mechanics), take heart: there is no moment in the past at which you could have chosen differently. Your past is fixed and distinct from all those other versions of yourself, none of which were ever identical to you at any point.

At least that seems to be the case when you think about it this way.

Jovian Prayer

Big slow storms of Jupiter, help soothe us.
Soothe us with your patient weather, ochre,
gamboge, carmine, grey, swirling storms, giant.
And auroras, lightning, huge, cathartic.

Let us be like Galileo’s nameless
daughter, who threw herself into your heart
wrapped in curiosity, down, down, down,
swallowed by knowledge, by your huge brown storms.

The Fetishist

Slow mottled gray skies, the empty plains
somewhere in the blown out corridor from
Houston to Galveston. Highway and plane
noise, far enough for privacy but frisson-
near enough for wanderers to run, run
the risk of observation, forced sight:
so much more than the dead camera, glum
in its facile absorption of light.
An old abandoned pool languishing right
behind an encroached upon foundation,
obscenely, a chimney still stands, a blight
within a blight within a blight within station-
ary air. He mugs against the gray sky
and falls into shit for the camera’s eye.

 

On Inform 7, Natural Language Programming and the Principle of Least Surprise

I’ve been pecking away at Inform 7 lately on account of its recently acquired Gnome front end. For those not in the know, Inform (and Inform 7) is a text adventure authoring language. I’ve always been interested in game programming but never had the time (or, more likely, the persistence of mind) to develop a game of any sophistication myself. Usually in these cases one lowers the bar, and as far as interactive media goes, you can’t get much lower, complexity-wise, than text adventures.

Writing a game in Inform amounts to describing the world and its rules in terms of a programming language provided by Inform. The system then collects the rules and descriptions and creates a game out of them. Time was, programming in Inform looked like:

Constant Story "Hello World";
Constant Headline "^An Interactive Example^";
Include "Parser";
Include "VerbLib";
[ Initialise;
  location = Living_Room;
  "Hello World"; ];
Object Kitchen "Kitchen";
Object Front_Door "Front Door";
Object Living_Room "Living Room"
  with
      description "A comfortably furnished living room.",
      n_to Kitchen,
      s_to Front_Door,
  has light;

Which is recognizably a programming language, if a bit strange and domain-specific. These days, writing Inform looks like this (from my little project):

"Frustrate" by "Vincent Toups"
Ticks is a number which varies.
Ticks is zero.
When play begins:
    now ticks is 1.

The Observation Room is a room. "The observation room is cold and
surreal. Stars dot the floor underneath thick, leaded glass, cutting
across it with a barely perceptible tilt. This room seems to have been
adapted for storage, and is filled with all sorts of sub-stellar
detritus, sharp in the chill and out of place against the slowly
rotating sky. Even in the cold, the place smells of dust, old wood
finish, and mildew. [If ticks is less than two] As the sky cuts its
way across the milky way, the whole room seems to tilt.  You feel
dizzy.[else if ticks is less than four]The plane of the galaxy is
sinking out of range and the portal is filling with the void of
space. It feels like drowning.[else if ticks is greater than 7]The
galactic plane is filling the floor with a powdering of
stars.[else]The observation floor looks out across the void of space.
You avert your eyes from the floor.[end if]"

Every turn: Now ticks is ticks plus one.
Every turn:
    if ticks is 10:
        decrease ticks by 10.

As you can see, the new Inform adopts a “natural language” approach to programming. As the Inform 7 website puts it:

[The] Source language [is] modelled closely on a subset of English, and usually readable as such.

Also reproduced in the Inform 7 manual is the following quote from luminary Donald Knuth:

Programming is best regarded as the process of creating works of literature, which are meant to be read… so we ought to address them to people, not to machines. (Donald Knuth, “Literate Programming”, 1981)

Which better than anything else illustrates the desired goal of the new system: humans are not machines! Machines should accommodate our modes of expression rather than forcing us to accommodate theirs! If it weren’t for the unnaturalness of programming languages, the logic goes, many more people would program. The creation of interactive fiction aims to be inclusive, so why not teach the machine to understand natural language?

This is a laudable goal. I really think the future is going to have a lot more programmers in it, and a primary task of language architects is to design programming languages which “regular” people find intuitive and useful. For successes in that arena see Python, Smalltalk, or even Basic. Perhaps these languages are not the pinnacle of intuitive programming environments, but whatever that ultimate language is, I doubt seriously it will look much like Inform 7.

This is unfortunate, because reading Inform 7 is very pleasant, and the language is even charming from time to time. Unfortunately, it’s very difficult to program in1, and I say that as something of a programming language aficionado. It’s true that creating the basic skeleton of a text adventure is very easy, but even slightly non-trivial extensions to the language are difficult to intuitively get right. For instance, the game I am working on takes place on a gigantic, hollowed out natural satellite, spinning to provide artificial gravity. The game begins in a sort of observation bubble, where the floor is transparent and the stars are visible outside. Sometimes this observation window should be pointing into the plane of the Milky Way, but other times it should be pointing towards the void of space because the station’s axis of rotation is parallel to the plane of the galaxy. The description of the room should reflect these different possibilities.

Inform 7 operates turn by turn, so it seems like it should be simple enough to create this sort of time-dependent behavior by keeping track of time, but it was frustrating to figure out how to “tell” the Inform compiler what I wanted.

First I tried joint conditionals:

  When the player is in the Observation Room and
the turn is even, say: "The stars fill the floor."

But this resulted in an error message. Maybe the system doesn’t know about “evenness,” so I tried:

  When the player is in the Observation Room and
the turn is greater than 3, say "The stars fill the floor."

(Figuring I could add more complex logic later).

Eventually I figured out the right syntax, which involved creating a variable and having a rule set its value each turn and a separate rule reset the value with the periodicity of the rotation of the ship, but the process was very frustrating. In Python the whole game might look, with the proper abstractions, like:


while not game.over():
    game.describe_location(player.position)
    if (player.position == 'The Observation Room' and
            game.turn() % 10):
        print "The stars fill the floor."

Which is not perhaps as “englishy” as the final working Inform code (posted near the beginning of this article) but is much more concise and obvious.
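To make the comparison concrete, here is a self-contained sketch of the working solution described above, the “special kind of clock”: a counter advanced every turn and reset with the periodicity of the station’s rotation, selecting one of the room descriptions. The function and variable names here are mine, not part of any game engine.

```python
# Illustrative sketch: a tick counter that selects the Observation
# Room flavor text, mirroring the [if ticks ...] branches of the
# Inform source quoted earlier.

def observation_room_flavor(ticks):
    if ticks < 2:
        return "The whole room seems to tilt. You feel dizzy."
    elif ticks < 4:
        return "The plane of the galaxy is sinking out of range."
    elif ticks > 7:
        return "The galactic plane is filling the floor with stars."
    else:
        return "You avert your eyes from the floor."

ticks = 1
for turn in range(12):       # every turn: now ticks is ticks plus one
    print(turn, observation_room_flavor(ticks))
    ticks += 1
    if ticks == 10:          # if ticks is 10: decrease ticks by 10
        ticks -= 10
```

Nothing here is surprising once written down; the frustration described below was in discovering which of the many plausible Inform phrasings of this logic the compiler would accept.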

But that isn’t the reason the Python version is less frustrating to write. The reason is the Principle of Least Surprise, which states, roughly, that once you know the system, the least surprising way of doing things will work. The problem with Inform 7 is that “the system” appears to the observer to be “written English (perhaps more carefully constructed than usual).” This produces in the coder a whole slew of assumptions about what sorts of statements will do what kinds of things, and as a consequence you try a lot of things which, according to your mental model, inexplicably don’t work.

It took me an hour to figure out how to make what amounts to a special kind of clock and I had the benefit of knowing that underneath all that “natural English” was a (more or less) regular old (prolog flavored) programming environment. I can’t imagine the frustration a non-programmer would feel when they first decided to do something not directly supported or explained in the standard library or documentation.

That isn’t the only problem, either. Natural English is a domain-specific language for communicating between intelligent things. It assumes that the recipient of the stream of tokens can easily resolve ambiguities, invert accidental negatives (pay attention: people do this all the time in speech) and tell the difference between important information and information it’s acceptable to leave ambiguous. Not only are computers presently incapable of this level of deduction/induction, but generally speaking we don’t want that behavior anyway: we are programming to get a computer to perform a very narrowly defined set of behaviors. The implication that Inform 7 will “understand you” is therefore doubly frustrating. You don’t want it to “understand”; you want it to do exactly what you say.

A lot of this could be ameliorated by a good piece of reference documentation spelling out in exact detail the programmatic environment’s behavior. Unfortunately, the bundled documentation is a big tutorial which does a poor job of delineating between core constructs of the language and elements of its standard library. It all seems somewhat magical in the tutorial, in other words, and the intrepid reader, wishing to generalize on the rules of the system, is often confounded.

Nevertheless, I will probably keep using it. The environment is clean and pleasant, and the language, once you begin to feel out the classical language under the hood, is OK. And you can’t beat the built-in features for text-based games. I doubt, though, that Inform 7 will seriously take off. Too many undeliverable promises.

1 This may make it the only “Read Only” programming language I can think of.

A Critique of The Programming Language J

I’ve spent around a year now fiddling with and eventually doing real
data analytic work in The Programming Language J. J is one of
those languages which produces a special enthusiasm from its users and
in this way it is similar to other unusual programming languages like
Forth or Lisp. My peculiar interest in the language was due to no
longer having access to a Matlab license, wanting an array oriented
language to do analysis in, and an attraction to brevity and the point
free programming style, two aspects of programming which J emphasizes.

Sorry, Ken.

I’ve been moderately happy with it, but after about a year of light
work in the language and then a month of work-in-earnest (writing
interfaces to gnuplot and hive and doing Bayesian inference and
spectral clustering) I now feel I am in a good position to offer a
friendly critique of the language.

First, The Good

J is terse to nearly the point of obscurity. While terseness is not a
particularly valuable property in a general purpose programming
language (that is, one meant for Software Engineering), there is a
case to be made for it in a data analytical language. Much of my work
involves interactive exploration of the structure of data and for that sort
of workflow, being able to quickly try a few different ways of
chopping, slicing or reducing some big pile of data is pretty
handy. That you can also just copy and paste these snippets into some
analysis pipeline in a file somewhere is also nice. In other words,
terseness allows an agile sort of development style.

Much of this terseness is enabled by built in support for tacit
programming. What this means is that certain expressions in J are
interpreted at function level. That is, they denote, given a set of
verbs in a particular arrangement, a new verb, without ever explicitly
mentioning values.

For example, we might want a function which adds up all the maximum
values selected from the rows of an array. In J:

+/@:(>./"1)

J takes considerable experience to read, particularly in Tacit
style. The above denotes, from RIGHT to LEFT: for each row ("1)
reduce (/) that row using the maximum operation >. and then (@:)
reduce (/) the result using addition (+). In English, this means:
find the max of each row and sum the results.

Note that the meaning of this expression is itself a verb, that is
something which operates on data. We may capture that meaning:

sumMax =: +/@:(>./"1)

Or use it directly:

+/@:(>./"1) ? (10 10 $ 10)
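For comparison, the same computation written out in plain Python (no tacit style, just the obvious loop-and-reduce): take the maximum of each row, then sum those maxima.

```python
# Plain-Python equivalent of +/@:(>./"1): the maximum of each row,
# then the sum of the row maxima.
rows = [[1, 9, 2],
        [7, 3, 5],
        [4, 4, 8]]

row_maxima = [max(row) for row in rows]  # like >./"1
total = sum(row_maxima)                  # like +/
print(total)  # 9 + 7 + 8 = 24
```

The J expression says the same thing at function level, without naming `rows` or any intermediate variable.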

Tacit programming is enabled by a few syntactic rules (the so-called
hooks and forks) and by a bunch of function-level operators called
adverbs and conjunctions. (For instance, @: is a conjunction roughly
denoting function composition, while the expression +/ % # is a fork
denoting the average operation. The forkness is that it is three
expressions denoting verbs separated by spaces.)

The details obscure the value: it’s nice to program at function level
and it is nice to have a terse denotation of common operations.

J has one other really nice trick up its sleeve called verb rank.
Rank itself is not an unusual idea in data analytic languages:
it just refers to the length of the shape of the matrix; that is, its
dimensionality.

We might want to say a bit about J’s basic evaluation strategy before
explaining rank, since it makes the origin of the idea more clear. All
verbs in J take one or two arguments, on the right and optionally the
left. Single-argument verbs are called monads, two-argument verbs are
called dyads, and we call the invocations themselves monadic or dyadic
accordingly. A single verb may have both a monadic and a dyadic
definition, in which case the invocation determines which applies.
Most of J’s built-in operators have both, and often the two meanings
are unrelated.

NB. monadic and dyadic invocations of <
4 < 3 NB. evaluates to 0
<3 NB. evaluates to 3, but in a box.

Given that the arguments (usually called x and y respectively) are
often matrices it is natural to think of a verb as some sort of matrix
operator, in which case it has, like any matrix operation, an expected
dimensionality on its two sides. This is sort of what verb rank is
like in J: the verb itself carries along some information about how
its logic operates on its operands. For instance, the built-in verb
-: (called match) compares two things structurally. Naturally, it
applies to its operands as a whole. But we might want to compare two
lists of objects via match, resulting in a list of results. We can
do that by modifying the rank of -:

x -:"(1 1) y

The expression -:"(1 1) denotes a version of match which applies to
the elements of x and y, each treated as a list. Rank in J is roughly
analogous to the use of repmat, permute and reshape in Matlab: we can
use rank annotations to quickly describe how verbs operate on their
operands in hopes of pushing looping down into the C engine, where
it can be executed quickly.
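A rough Python analogue of the rank modification (illustrative only, written out by hand since Python has no rank mechanism): `match` compares its two operands as wholes, while the rank-(1 1) version compares corresponding items, yielding a list of results.

```python
# Illustrative analogue of J's -: and -:"(1 1).
def match(a, b):
    # Structural comparison of whole operands, like J's -: .
    return a == b

x = [[1, 2], [3, 4]]
y = [[1, 2], [9, 9]]

whole = match(x, y)                             # operands as wholes
itemwise = [match(a, b) for a, b in zip(x, y)]  # like -:"(1 1)
print(whole)     # False
print(itemwise)  # [True, False]
```

In J the loop in the second line is implicit in the rank annotation and runs inside the C engine rather than in interpreted code.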

To recap: array orientation, terseness, tacit programming and rank are
the really nice parts of the language.

The Bad and the Ugly

As a programming environment J can be productive and efficient, but it
is not without flaws. Most of these have to do with irregularities in
the syntax and semantics which make the language confusing without
offering additional power. These unusual design choices are
particularly apparent when J is compared to more modern programming
languages.

Fixed Verb Arities

As indicated above, J verbs, the nearest cousin to functions or
procedures from other programming languages, have arity 1 or
arity 2. A single symbol may denote expressions of both arity, in
which case context determines which function body is executed.

There are two issues here, at least. The first is that we often want
functions of more than two arguments. In J the approach is to pass
boxed arrays to the verb. There is some syntactic sugar to support
this strategy:

multiArgVerb =: monad define
'arg1 arg2 arg3' =. y
NB. do stuff
)

If a string appears as the left operand of the =. operator, then
simple destructuring occurs. Boxed items are unboxed by this
operation, so we typically see invocations like:

multiArgVerb('a string';10;'another string')

But note that the expression on the right (starting with the open
parentheses) just denotes a boxed array.
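In Python terms (an analogy, not a claim about J), the idiom is a function of a single sequence argument which destructures it into named parts in the first line of its body:

```python
# Python analogue (names illustrative) of passing a boxed array to a
# verb and destructuring it with 'arg1 arg2 arg3' =. y
def multi_arg_verb(y):
    arg1, arg2, arg3 = y   # unpack the "boxed" arguments
    return "%s|%s|%s" % (arg1, arg2, arg3)

result = multi_arg_verb(("a string", 10, "another string"))
print(result)  # a string|10|another string
```

The difference is that Python functions can simply take three parameters, so the workaround is never forced on you.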

This solution is fine, but it does short-circuit J’s notion of verb
rank: we may specify the rank with which the function operates on
its left or right operand as a whole, but not on the individual
“arguments” of a boxed array. But nothing about the concept of rank
demands that it be restricted to one or two argument functions: rank
entirely relates to how arguments are extracted from array valued
primitive arguments and dealt to the verb body. This idea can be
generalized to functions of arbitrary argument count.

Apart from this, there is the minor gripe that denoting such single
use boxed arrays with ; feels clumsy. Call that the Lisper’s bias:
the best separator is the space character.1

A second, related problem is that you can’t have a
zero argument function either. This isn’t the only language where
this happens (Standard ML and OCaml also have this tradition, though I
think it is weird there too). The problem in J is that it would feel
natural to have such functions and to be able to mention them.

Consider the following definitions:

o1 =: 1&-
o2 =: -&1

(o1 (0 1 2 3 4)); (o2 (0 1 2 3 4))
┌────────────┬──────────┐
│1 0 _1 _2 _3│_1 0 1 2 3│
└────────────┴──────────┘

So far so good. Apparently, using the & conjunction (called “bond”),
we can partially apply a two-argument verb on either the left or the
right. It is natural to ask what would happen if we bonded twice.

(o1&1)
o1&1

Ok, so it produces a verb.

 3 3 $ ''
  ;'o1'
  ;'o2'
  ;'right'
  ;((o1&1 (0 1 2 3 4))
  ; (o2&1 (0 1 2 3 4))
  ;'left'
  ; (1&o1 (0 1 2 3 4))
  ; (1&o2 (0 1 2 3 4)))

┌─────┬────────────┬────────────┐
│     │o1          │o2          │
├─────┼────────────┼────────────┤
│right│1 0 1 0 1   │1 0 _1 _2 _3│
├─────┼────────────┼────────────┤
│left │1 0 _1 _2 _3│_1 0 1 2 3  │
└─────┴────────────┴────────────┘

I would describe these results as goofy, if not entirely impossible to
understand (though I challenge the reader to do so). However, none of
them really seem right, in my opinion.

I would argue that one of two possibilities would make some sense.

  1. (1&-)&1 -> 0 (e.g., 1-1)
  2. (1&-)&1 -> 0"_ (that is, the constant function returning 0)

That many of these combinations evaluate to o1 or o2 is doubly
confusing because it ignores a value AND because we can denote
constant functions (via the rank conjunction), as in the expression
0"_.

Generalizations

What this is all about is that J doesn’t handle the idea of a
function very well. Instead of having a single, unified abstraction
representing operations on things, it has a variety of different ideas
that are function-like (verbs, conjunctions, adverbs, hooks, forks,
gerunds), which in a way puts it ahead of a lot of old-timey languages
like Java 7, which lacked first-class functions, but ultimately this
handful of disparate techniques fails to achieve the conceptual unity
of first-class functions with lexical scope.

Furthermore, I suggest that nothing whatsoever would be lost (except
J’s interesting historical development) by collapsing these ideas
into the more typical idea of closure-capturing functions.

Other Warts

Weird Block Syntax

Getting top-level2 semantics right is hard in any
language. Scheme is famously ambiguous on the subject, but at
least for most practical purposes it is comprehensible. Top-level has
the same syntax and semantics as any other body of code in Scheme
(with some restrictions about where define can be evaluated), but in
J neither is the same.

We may write block strings in J like so:

blockString =: 0 : 0 
Everything in here is a block string.       
)

When the evaluator reads 0 : 0 it switches to sucking up characters
into a string until it encounters a line with a ) as its first
character. The similar form 3 : 0 does the same except the resulting
string is turned into a verb.

plus =: 3 : 0
    x+y
)

However, we can’t nest this syntax, so we can’t define non-tacit
functions inside non-tacit functions. That is, this is illegal:

plus =: 3 : 0
  plusHelper =. 3 : 0
    x+y
  )
  x plusHelper y
)

This forces the programmer to do a lot of lambda lifting manually,
which also forces them to bump into the restrictions on function arity
and their poor interaction with rank behavior: if we wish to capture
parts of the private environment, we are forced either to pass those
parts of the environment in as an argument, giving up rank behavior,
or to jump up a level to verb modifiers.

Scope

Of course, you can define local functions if you do it tacitly:

plus =: 3 : 0
    plusHelper =. +
    x plusHelper y   
)

But, even if you are defining a conjunction or an adverb, from which
you are able to “return” a verb, you can’t capture any local functions –
they disappear as soon as execution leaves the conjunction or adverb
scope.

That is because J is dynamically scoped, so any capture has to be
handled manually, using things like adverbs, conjunctions, or the good
old-fashioned fix f., which inserts values from the current scope
directly into the representation of a function. Essentially all modern
languages use lexical scope, which is basically a rule which says: the
value of a variable is exactly what it looks like from reading the
program. Dynamic scope says: the value of a variable is whatever
its most recent binding is.
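The distinction is easy to see in Python, which is lexically scoped: a returned function captures a variable from the environment where it was defined, and a different binding at the call site is irrelevant.

```python
# Lexical scope: the returned closure resolves n where it was defined,
# not where it is called.
def make_adder(n):
    def add(x):
        return x + n    # n refers to make_adder's parameter, always
    return add

add5 = make_adder(5)
n = 1000                # a different n in scope at the call site
print(add5(3))          # 8, not 1003
```

Under dynamic scope, the call `add5(3)` would see the most recent binding of `n` and return 1003, which is the behavior J's adverbs, conjunctions, and fix exist to work around.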

Recapitulation!

The straight dope, so to speak, is that J is great for a lot of
reasons (terseness, rank) but also has a lot of irregular language
features (adverbs, conjunctions, hooks, forks, etc.) which could be
folded down into regular old functions without harming the
benefits of the language, simplifying it enormously.

If you don’t believe that regular old first-class functions with
lexical scope can get us where we need to go, check out my
tacit-programming libraries in R and Javascript. I
even wrote a complete, if ridiculously slow, implementation of J’s
rank feature, literate-style, here.


Footnotes

1 It bears noting that ; in an expression like (a;b;c)
is not a syntactic element, but a semantic one. That is, it is the
verb called “link” which has the effect of linking its arguments into
a boxed list. It is evaluated like this:

(a;(b;c))

(a;b;c) is nice looking but a little strange: in an expression
(x;y) the effect depends on whether y is already boxed: x is always boxed regardless, but y is boxed only if it wasn’t boxed before.

2 Top level? Top-level is the context where everything
“happens,” if anything happens at all. Tricky things about top-level
are like: can functions refer to functions which are not yet defined,
if you read a program from top to bottom? What about values? Can you
redefine functions, and if so, how do the semantics work? Do functions
which call the redefined function change their behavior, or do they
continue to refer to the old version? What if the calling interface
changes? Can you check types if you imagine that functions might be
redefined at any time? If your language has classes, what about
instances created before a change in the class definition? Believe it
or not, Common Lisp tries to let you do this – and it’s confusing!

On the opposite end of the spectrum are really static languages like
Haskell, wherein type enforcement and purity ensure that the top-level
is only meaningful as a monolith, for the most part.