Author Archives: Vincent

About Vincent

Just your average physicist/computational neuroscientist turned software engineer.

Goals, Anti-Goals and Multi-player Games

In this article I will address Keith Burgun's assertion that games should have a single goal, along with his analysis of certain kinds of goals as trivial or pathological. I will try to demonstrate that multi-player games either reduce to single-player games or necessitate multiple goals, some of which are necessarily the sorts of goals which Burgun dismisses as trivial, and I will make the case that such goals are useful ideas for game designers as well as necessary components of non-trivial multi-player games.

(Note: I find Keith Burgun’s game design work very useful. If you are interested in game design and have the money, I suggest subscribing to his Patreon.)

Notes on Burgun’s Analytical Frame

The Forms

Keith Burgun is a game design philosopher focused on strategy games, which he calls simply games. He divides the world of interactive systems into four useful forms:

  1. toys – an interactive system without goals. Discovery is the primary value of toys.
  2. puzzles – a bare interactive system plus a goal. Solving is the primary value of puzzles.
  3. contests – a toy plus a goal, meant to measure performance.
  4. games – a toy, plus a goal, plus obfuscation of the game state. The primary value is in synthesizing decision-making heuristics to account for the obfuscation of the game state.

A good, brief video introduction to the forms is available here. Burgun believes a good way to construct a game is to identify a core mechanism, which is a combination of a core action, a core purpose, and a goal. The action and purpose point together towards the goal. The goal, in turn, gives meaning to the actions the player can take and the states of the interactive system.

On Goals

More should be said on goals, which appear in many of the above definitions. Burgun has a podcast which serves as a good long-form explication of many of his ideas. There is an entire episode on goals here. The discussion of goals begins around the fifteen-minute mark.

Here Burgun provides a related definition of games: contests of decision making. Goals are prominent in this discussion: the goal gives meaning to actions in the game state.

Burgun raises a critique of games which feature notions of second place. He groups such goals into a category of non-binary goals and gives us an example to clarify the discussion: goals of the form “get the highest score.”

His analysis of why this goal is poor is that it seems to imply a few strange things:

  1. The player always gets the highest score they are capable of because the universe is deterministic.
  2. These goals imply that the game becomes vague after the previous high score is beaten, since the goal is met and yet the game continues.

The first applies to any interactive system at all, so it isn't a very powerful argument, as I understand it. Take a game with the rules of Tetris except that the board is initialized with some blocks already in place. The player receives a deterministic sequence of blocks and must clear the already present blocks, at which point the game ends. This goal is not of the form "find the highest score" or "survive the longest," but the game's outcome is already determined by the state of the universe at the beginning of the game. From this analysis we can conclude that if (1) constitutes a downside to the construction of a goal, it doesn't apply uniquely to "high score" style goals.

(2) is more subtle. While it is true that in the form suggested, these rules leave the player without guidelines after the goal is met, I believe that in many cases a simple rephrasing of the goal in question resolves this problem. Take the goal:

G: Given the rules of Tetris, play for the highest score.

Since Tetris rewards you for clearing more lines at once and since Tetris ends when a block becomes fixed to the board but touches the top of the screen, we can rephrase this goal as:

G': Do not let the blocks reach the top of the screen.

This goal is augmented by secondary goals which enhance play: certain ways of moving away from the negative goal G' are more rewarding than others. Call this secondary goal g: clear lines in the largest groups possible. Call G' and goals like it “anti-goals.”

This terminology implies the definition.

If a goal is a particular game state towards which the player tries to move, an anti-goal is a particular state which the player is trying to avoid. Usually anti-goals are of the form "Do not allow X to occur," where X is related to a (potentially open-ended) goal.

Goals of the "high score" or "survive" variety are (or may be) anti-goals in disguise. Rephrased properly, they can be conceived of in anti-goal language. Of course there are good anti-goals and bad ones, just as there are good goals and bad goals. However, I would argue that the same criterion applies to both types: a good (anti-)goal is one which gives meaning to the actions a person is presented with over an interactive system.

Multi-Player Games and Anti-Goals

I believe anti-goals can be useful in game design, even in the single-player case. In another essay I may try to make the argument that anti-goals must be augmented with mechanics which tend to push the game towards the anti-goal state, pressure against which players must do all the sorts of complex decision making which produces value for players.

However, there is a more direct way of demonstrating that anti-goals are unavoidable aspects of games, at least when games are multi-player. The same argument demonstrates that games with multiple goals are, in that setting, inevitable. First let me describe what I conceive of as a multi-player game.

multi-player game: A game where players interact via an interactive system in order to reach a goal which can only be attained by a single player.

The critical distinction I want to make is that a multi-player game is not just two or more people engaged in separate contests of decision making. If there are no actions mediating the interaction of players via the game state, then what is really going on is that many players are playing many distinct games. A true multi-player game must allow players to interact (via actions).

In a multi-player game, players are working towards a win state we can call G. However, in the context of the mechanics which allow interaction, they are also playing against a set of anti-goals {A}, one for each player besides themselves. These goals are of the form "Prevent player X from reaching goal G." For a game to really be multi-player, then, there must be actions associated with each anti-goal in {A}; hence anti-goals are critical ingredients of successful multi-player game design, and therefore useful ideas for game designers.

An argument we might make at this point is that if players are playing for {A} and not explicitly for G then our game is not well designed (for instance, it isn't elegant or minimal). But I believe any multi-player game where a player can pursue G and not concern herself with {A}, even in the presence of game actions which allow interaction, is a set of single-player games in disguise. If we follow our urge to make G the true goal for all players at the expense of {A}, then we may as well remove the actions which mediate between players, at which point we may as well be designing a single-player game whose goal is G.

So, if we admit that multi-player games are worth designing, then we also admit that at least a family of anti-goals are worth considering. Note that we must explicitly design the actions which allow the pursuit of {A} in order to design the game. Ideally these will be related and work in accord with the actions which facilitate G but they cannot be identical to those mechanics without our game collapsing to the single player case. We must consider {A} actions as a separate (though ideally related) design space.

Summary

I've tried to demonstrate that in multi-player games especially, anti-goals, which are goals of the form "Avoid some game state," are necessary, distinct goal forms worth considering by game designers. The argument depends on demonstrating that a multi-player game must contain such anti-goals or collapse into a set of single-player games played by multiple people but otherwise disconnected.

In a broader context, the idea here is to get a foot in the door for anti-goals as rules which can still do the work of a goal, which is to give meaning to choices and actions in an interactive system. An open question is whether such anti-goals are useful for single player games, whether they are useful but only in conjunction with game-terminating goals, or whether, though useful, we can always find a related normal goal which is superior from a design point of view. Hopefully, this essay provides a good jumping off point for those discussions.


Amateur Notes on “Quantum Mechanics as Classical Physics”

I am slow to mature. That is why I squandered myself in graduate school. I could have embraced the opportunity to think critically about the philosophy of physics, in which I was at least up to my knees. I instead glibly dismissed philosophy as secondary to prediction. Quantum Mechanics poses the greatest and most interesting philosophical problems and only now, when graduate school is vanishing on the horizon or, at any rate, eclipsed by towering pragmatics racing towards me (mortgages, careers, children), am I taken, with ever more frequency, by thoughts of the philosophy of physics.

Compensating for this lack of remit to study is a comparative freedom of choice about how I study. Reading that would have been deemed frivolous by my graduate adviser I am now free to pursue for pleasure. Hence Charles Sebens' 2013 paper "Quantum Mechanics as Classical Physics," which develops a purely classical interpretation of Quantum Mechanics of a novel, Bohmian-flavored variety.

Interested readers should read the version on the Archive. I can't hope to do the paper justice with a quick and probably inelegant, if not misleading, summary here, but the basic idea is to create a sort of supererogatory interpretative framework for Quantum Mechanics by adding not a single Bohmian particle, but one for each of many universes, in such a way that the dynamics are preserved, and then cleverly realizing that the so-called "Pilot Wave," which corresponds to the Wave Function in more ordinary interpretations, can be completely removed, replaced instead by a regular Newtonian force between the Bohmian trace particles.

This results in a many-universe interpretation of Quantum Mechanics (with the same predictions as any other interpretation) but without a wave function. I'm interested in one aspect of this interpretation: it seems that worldlines never cross in this way of thinking, so that, if we jump up and up and up to slightly absurd questions like "Are there me's in other universes who have made different decisions than I have?", the answer is "no," in the following sense: because worldlines never cross, there was never a time when two universes (and hence two versions of yourself) shared exactly the same state and then diverged. In other words, in each universe, while there may be many beings who resemble any individual in many respects, none of them share identical pasts. If you resent some decision in the past, as I resent not thinking about philosophy more in graduate school, and torture yourself by imagining some parallel person who made different decisions (with the help of some vague thoughts about the Interpretation of Quantum Mechanics), take heart: there is no moment in the past where you could have chosen differently. Your past is fixed and distinct from those of all the other versions of yourself, none of which were ever identical to you at any point.

At least that seems to be the case when you think about it this way.

Jovian Prayer

Big slow storms of Jupiter, help soothe us.
Soothe us with your patient weather, ochre,
gamboge, carmine, grey, swirling storms, giant.
And auroras, lightning, huge, cathartic.

Let us be like Galileo’s nameless
daughter, who threw herself into your heart
wrapped in curiosity, down, down, down,
swallowed by knowledge, by your huge brown storms.

The Fetishist

Slow mottled gray skies, the empty plains
somewhere in the blown out corridor from
Houston to Galveston. Highway and plane
noise, far enough for privacy but frisson-
near enough for wanderers to run, run
the risk of observation, forced sight:
so much more than the dead camera, glum
in its facile adsorption of light.
An old abandoned pool languishing right
behind an encroached upon foundation,
obscenely, a chimney still stands, a blight
within a blight within a blight within station-
ary air. He mugs against the gray sky
and falls into shit for the camera’s eye.

 

On Inform 7, Natural Language Programming and the Principle of Least Surprise

I've been pecking away at Inform 7 lately on account of its recently acquired Gnome front end. For those not in the know, Inform (and Inform 7) is a text adventure authoring language. I've always been interested in game programming but never had the time (or more likely the persistence of mind) to develop one of any sophistication myself. Usually in these cases one lowers the bar, and as far as interactive media goes, you can't get much lower, complexity-wise, than text adventures.

Writing a game in Inform amounts to describing the world and its rules in terms of a programming language provided by Inform. The system then collects the rules and descriptions and creates a game out of them. Time was, programming in Inform looked like:

Constant Story "Hello World";
Constant Headline "^An Interactive Example^";
Include "Parser";
Include "VerbLib";
[ Initialise;
  location = Living_Room;
  "Hello World"; ];
Object Kitchen "Kitchen";
Object Front_Door "Front Door";
Object Living_Room "Living Room"
  with
      description "A comfortably furnished living room.",
      n_to Kitchen,
      s_to Front_Door,
  has light;

Which is recognizably a programming language, if a bit strange and domain specific. These days, writing Inform looks like this (from my little project):

"Frustrate" by "Vincent Toups"
Ticks is a number which varies.
Ticks is zero.
When play begins:  
Now ticks is 1.

The Observation Room is a room. "The observation room is cold and
surreal. Stars dot the floor underneath thick, leaded glass, cutting
across it with a barely perceptible tilt. This room seems to have been
adapted for storage, and is filled with all sorts of sub-stellar
detritus, sharp in the chill and out of place against the slowly
rotating sky. Even in the cold, the place smells of dust, old wood
finish, and mildew. [If ticks is less than two] As the sky cuts its
way across the Milky Way, the whole room seems to tilt.  You feel
dizzy.[else if ticks is less than four]The plane of the galaxy is
sinking out of range and the portal is filling with the void of
space. It feels like drowning.[else if ticks is greater than 7]The
galactic plane is filling the floor with a powdering of
stars.[else]The observation floor looks out across the void of space.
You avert your eyes from the floor.[end if]"

Every turn: Now ticks is ticks plus one.
Every turn: if ticks is 10:
decrease ticks by 10.

As you can see, the new Inform adopts a "natural language" approach to programming. As the Inform 7 website puts it:

[The] Source language [is] modelled closely on a subset of English, and usually readable as such.

Also reproduced in the Inform 7 manual is the following quote from luminary Donald Knuth:

Programming is best regarded as the process of creating works of literature, which are meant to be read… so we ought to address them to people, not to machines. (Donald Knuth, “Literate Programming”, 1981)

Which better than anything else illustrates the desired goal of the new system: Humans are not machines! Machines should accommodate our modes of expression rather than forcing us to accommodate theirs! If it wasn’t for the unnaturalness of programming languages, the logic goes, many more people would program. The creation of interactive fiction means to be inclusive, so why not teach the machine to understand natural language?

This is a laudable goal. I really think the future is going to have a lot more programmers in it, and a primary task of language architects is to design programming languages which “regular” people find intuitive and useful. For successes in that arena see Python, or Smalltalk or even Basic. Perhaps these languages are not the pinnacle of intuitive programming environments but whatever that ultimate language is, I doubt seriously it will look much like Inform 7.

This is unfortunate, because reading Inform 7 is very pleasant, and the language is even charming from time to time. However, it's very difficult to program in1, and I say that as something of a programming language aficionado. It's true that creating the basic skeleton of a text adventure is very easy, but even slightly non-trivial extensions to the language are difficult to get right intuitively. For instance, the game I am working on takes place on a gigantic, hollowed-out natural satellite, spinning to provide artificial gravity. The game begins in a sort of observation bubble, where the floor is transparent and the stars are visible outside. Sometimes this observation window should be pointing into the plane of the Milky Way, but other times it should be pointing towards the void of space, because the station's axis of rotation is parallel to the plane of the galaxy. The description of the room should reflect these different possibilities.

Inform 7 operates in discrete turns, so it seems like it should be simple enough to create this sort of time-dependent behavior by keeping track of time, but it was frustrating to figure out how to "tell" the Inform compiler what I wanted.

First I tried joint conditionals:

  When the player is in the Observation Room and
the turn is even, say: "The stars fill the floor."

But this resulted in an error message. Maybe the system doesn't know about "evenness," so I tried:

  When the player is in the Observation Room and
the turn is greater than 3, say "The stars fill the floor."

(Figuring I could add more complex logic later).

Eventually I figured out the right syntax, which involved creating a variable and having a rule set its value each turn and a separate rule reset the value with the periodicity of the rotation of the ship, but the process was very frustrating. In Python the whole game might look, with the proper abstractions, like:


while not game.over():
    # 'game' and 'player' are hypothetical objects, for comparison only
    game.describe_location(player.position)
    if (player.position == 'The Observation Room' and
            game.turn() % 10):
        print("The stars fill the floor.")

Which is not perhaps as “englishy” as the final working Inform code (posted near the beginning of this article) but is much more concise and obvious.

But that isn't the reason the Python version is less frustrating to write. The reason is the Principle of Least Surprise, which states, roughly, that once you know the system, the least surprising way of doing things will work. The problem with Inform 7 is that "the system" appears to the observer to be "written English (perhaps more carefully constructed than usual)." This produces in the coder a whole slew of assumptions about what sorts of statements will do what kinds of things, and as a consequence you try a lot of things which, according to your mental model, inexplicably don't work.

It took me an hour to figure out how to make what amounts to a special kind of clock and I had the benefit of knowing that underneath all that “natural English” was a (more or less) regular old (prolog flavored) programming environment. I can’t imagine the frustration a non-programmer would feel when they first decided to do something not directly supported or explained in the standard library or documentation.

That isn't the only problem, either. Natural English is a domain-specific language for communicating between intelligent things. It assumes that the recipient of the stream of tokens can easily resolve ambiguities, invert accidental negatives (pay attention: people do this all the time in speech), and tell the difference between important information and information it's acceptable to leave ambiguous. Not only are computers presently incapable of this level of deduction/induction, but generally speaking we don't want that behavior anyway: we are programming to get a computer to perform a very narrowly defined set of behaviors. You don't want the machine to "understand"; you want it to do exactly what you say. The implication that Inform 7 will "understand you" is therefore doubly frustrating.

A lot of this could be ameliorated by a good piece of reference documentation, spelling out in exact detail the programmatic environment's behavior. Unfortunately, the bundled documentation is a big tutorial which does a poor job of delineating between the constructs of the language and the elements built with it. It all seems somewhat magical in the tutorial, in other words, and the intrepid reader, wishing to generalize on the rules of the system, is often confounded.

Nevertheless, I will probably keep using it. The environment is clean and pleasant, and the language, when you begin to feel out the classical language under the hood, is OK. And you can't beat the built-in features for text-based games. I doubt, though, that Inform 7 will seriously take off. Too many undeliverable promises.

1 This may make it the only “Read Only” programming language I can think of.

A Critique of The Programming Language J

I’ve spent around a year now fiddling with and eventually doing real
data analytic work in The Programming Language J. J is one of
those languages which produces a special enthusiasm from its users and
in this way it is similar to other unusual programming languages like
Forth or Lisp. My peculiar interest in the language was due to no
longer having access to a Matlab license, wanting an array oriented
language to do analysis in, and an attraction to brevity and the point
free programming style, two aspects of programming which J emphasizes.

Sorry, Ken.

I’ve been moderately happy with it, but after about a year of light
work in the language and then a month of work-in-earnest (writing
interfaces to gnuplot and hive and doing Bayesian inference and
spectral clustering) I now feel I am in a good position to offer a
friendly critique of the language.

First, The Good

J is terse to nearly the point of obscurity. While terseness is not a
particularly valuable property in a general purpose programming
language (that is, one meant for Software Engineering), there is a
case to be made for it in a data analytical language. Much of my work
involves interactive exploration of the structure of data and for that sort
of workflow, being able to quickly try a few different ways of
chopping, slicing or reducing some big pile of data is pretty
handy. That you can also just copy and paste these snippets into some
analysis pipeline in a file somewhere is also nice. In other words,
terseness allows an agile sort of development style.

Much of this terseness is enabled by built in support for tacit
programming. What this means is that certain expressions in J are
interpreted at function level. That is, they denote, given a set of
verbs in a particular arrangement, a new verb, without ever explicitly
mentioning values.

For example, we might want a function which adds up all the maximum
values selected from the rows of an array. In J:

+/@:(>./"1)

J takes considerable experience to read, particularly in Tacit
style. The above denotes, from RIGHT to LEFT: for each row ("1)
reduce (/) that row using the maximum operation >. and then (@:)
reduce (/) the result using addition (+). In English, this means:
find the max of each row and sum the results.

Note that the meaning of this expression is itself a verb, that is
something which operates on data. We may capture that meaning:

sumMax =: +/@:(>./"1)

Or use it directly:

+/@:(>./"1) ? (10 10 $ 10)
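
For comparison, here is the same operation in Python with NumPy (my
restatement for illustration; nothing here is part of J):

import numpy as np

# sum of row maxima: the same operation as +/@:(>./"1)
a = np.random.randint(0, 10, size=(10, 10))  # like ? (10 10 $ 10)
print(a.max(axis=1).sum())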

Tacit programming is enabled by a few syntactic rules (the so-called
hooks and forks) and by a bunch of function level operators called
adverbs and conjunctions. (For instance, @: is a conjunction roughly
denoting function composition, while the expression +/ % # is a fork,
denoting the average operation. The forkness is that it is three
expressions denoting verbs separated by spaces.)

The details obscure the value: it's nice to program at function level
and it is nice to have a terse denotation of common operations.

J has one other really nice trick up its sleeve called verb rank.
Rank itself is not an unusual idea in data analytic languages:
it just refers to the length of the shape of the matrix; that is, its
dimensionality.

We might want to say a bit about J’s basic evaluation strategy before
explaining rank, since it makes the origin of the idea more clear. All
verbs in J take either one argument (on the right) or two (one on
each side). Single-argument verbs are called monads, two-argument
verbs are called dyads, and we likewise call the invocations
themselves monadic or dyadic. Most of J's built-in operators have
both monadic and dyadic definitions, and often the two meanings are
unrelated.

NB. monadic and dyadic invocations of <
4 < 3 NB. evaluates to 0
<3 NB. evaluates to 3, but in a box.

Given that the arguments (usually called x and y respectively) are
often matrices it is natural to think of a verb as some sort of matrix
operator, in which case it has, like any matrix operation, an expected
dimensionality on its two sides. This is sort of what verb rank is
like in J: the verb itself carries along some information about how
its logic operates on its operands. For instance, the built-in verb
-: (called match) compares two things structurally. Naturally, it
applies to its operands as a whole. But we might want to compare two
lists of objects via match, resulting in a list of results. We can
do that by modifying the rank of -:

x -:"(1 1) y

The expression -:"(1 1) denotes a version of match which applies to
the elements of x and y, each treated as a list. Rank in J is roughly
analogous to the use of repmat, permute and reshape in Matlab: we can
use rank annotations to quickly describe how verbs operate on their
operands in hopes of pushing looping down into the C engine, where
it can be executed quickly.

To recap: array orientation, terseness, tacit programming and rank are
the really nice parts of the language.

The Bad and the Ugly

As a programming environment J can be productive and efficient, but it
is not without flaws. Most of these have to do with irregularities in
the syntax and semantics which make the language confusing without
offering additional power. These unusual design choices are
particularly apparent when J is compared to more modern programming
languages.

Fixed Verb Arities

As indicated above, J verbs, the nearest cousin to functions or
procedures from other programming languages, have arity 1 or
arity 2. A single symbol may denote verbs of both arities, in
which case context determines which function body is executed.

There are two issues here, at least. The first is that we often want
functions of more than two arguments. In J the approach is to pass
boxed arrays to the verb. There is some syntactic sugar to support
this strategy:

multiArgVerb =: monad define
'arg1 arg2 arg3' =. y
NB. do stuff
)

If a string appears as the left operand of the =. operator, then
simple destructuring occurs. Boxed items are unboxed by this
operation, so we typically see invocations like:

multiArgVerb('a string';10;'another string')

But note that the expression on the right (starting with the open
parentheses) just denotes a boxed array.

This solution is fine, but it does short-circuit J’s notion of verb
rank: we may specify the rank with which the function operates on
its left or right operand as a whole, but not on the individual
“arguments” of a boxed array. But nothing about the concept of rank
demands that it be restricted to one or two argument functions: rank
entirely relates to how arguments are extracted from array valued
primitive arguments and dealt to the verb body. This idea can be
generalized to functions of arbitrary argument count.
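
To make that concrete, here is a toy generalization in Python with
NumPy (my own sketch, with a much cruder agreement rule than J's:
all frames must match exactly):

import numpy as np

def with_rank(f, ranks):
    # Split each argument's shape into a leading "frame" and trailing
    # cells of the given rank, then map f over the common frame.
    def ranked(*args):
        args = [np.asarray(a) for a in args]
        frames = [a.shape[:a.ndim - r] for a, r in zip(args, ranks)]
        frame = frames[0]
        assert all(fr == frame for fr in frames), "frames must agree"
        cells = [f(*(a[ix] for a in args)) for ix in np.ndindex(*frame)]
        return np.array(cells).reshape(frame + np.asarray(cells[0]).shape)
    return ranked

# like x -:"(1 1) y : compare corresponding rows of two matrices
match = with_rank(lambda x, y: np.array_equal(x, y), (1, 1))
print(match(np.array([[1, 2], [3, 4]]),
            np.array([[1, 2], [0, 4]])))  # [ True False]

Nothing in this scheme depends on there being exactly two arguments;
ranks could just as well have length three or more.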

Apart from this, there is the minor gripe that denoting such single
use boxed arrays with ; feels clumsy. Call that the Lisper’s bias:
the best separator is the space character.1

A second, related problem is that you can’t have a
zero argument function either. This isn’t the only language where
this happens (Standard ML and OCaml also have this tradition, though I
think it is weird there too). The problem in J is that it would feel
natural to have such functions and to be able to mention them.

Consider the following definitions:

o1 =: 1&-
o2 =: -&1

(o1 (0 1 2 3 4)); (o2 (0 1 2 3 4))
┌────────────┬──────────┐
│1 0 _1 _2 _3│_1 0 1 2 3│
└────────────┴──────────┘

So far so good. Apparently using the & conjunction (called “bond”)
we can partially apply a two-argument verb on either the left or the
right. It is natural to ask what would happen if we bonded twice.

(o1&1)
o1&1

Ok, so it produces a verb.

 3 3 $ ''
  ;'o1'
  ;'o2'
  ;'right'
  ;((o1&1 (0 1 2 3 4))
  ; (o2&1 (0 1 2 3 4))
  ;'left'
  ; (1&o1 (0 1 2 3 4))
  ; (1&o2 (0 1 2 3 4)))

┌─────┬────────────┬────────────┐
│     │o1          │o2          │
├─────┼────────────┼────────────┤
│right│1 0 1 0 1   │1 0 _1 _2 _3│
├─────┼────────────┼────────────┤
│left │1 0 _1 _2 _3│_1 0 1 2 3  │
└─────┴────────────┴────────────┘

I would describe these results as goofy, if not entirely impossible to
understand (though I challenge the reader to do so). However, none of
them really seem right, in my opinion.

I would argue that one of two possibilities would make some sense.

  1. (1&-)&1 -> 0 (e.g., 1-1)
  2. (1&-)&1 -> 0"_ (that is, the constant function returning 0)

That many of these combinations evaluate to o1 or o2 is doubly
confusing because it ignores a value AND because we can denote
constant functions (via the rank conjunction), as in the expression
0"_.

Generalizations

What this is all about is that J doesn’t handle the idea of a
function very well. Instead of having a single, unified abstraction
representing operations on things, it has a variety of different ideas
that are function-like (verbs, conjunctions, adverbs, hooks, forks,
gerunds), which in a way puts it ahead of a lot of old-timey languages
like Java 7, which lacks first-class functions, but ultimately this
handful of disparate techniques fails to achieve the conceptual unity
of first-class functions with lexical scope.

Furthermore, I suggest that nothing whatsoever would be lost (except
J's interesting historical development) by collapsing these ideas
into the more typical idea of closure-capturing functions.
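
As a sketch of what that collapse could look like (my Python toy,
not J's API), the bond conjunction becomes an ordinary
closure-returning function:

def bond_left(x, verb):
    # (x & verb): fix the left argument of a dyadic verb
    return lambda y: verb(x, y)

def bond_right(verb, y):
    # (verb & y): fix the right argument of a dyadic verb
    return lambda x: verb(x, y)

minus = lambda x, y: x - y

o1 = bond_left(1, minus)    # like o1 =: 1&-
o2 = bond_right(minus, 1)   # like o2 =: -&1

print([o1(y) for y in range(5)])  # [1, 0, -1, -2, -3]
print([o2(x) for x in range(5)])  # [-1, 0, 1, 2, 3]

# Bonding a monadic function again now has an obvious reading: a
# constant function (option 2 from the list above).
const_zero = lambda: o1(1)
print(const_zero())  # 0

With closures, none of the goofy table above can arise: applying a
one-argument function to a value means exactly one thing.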

Other Warts

Weird Block Syntax

Getting top-level2 semantics right is hard in any
language. Scheme is famously ambiguous on the subject, but at
least for most practical purposes it is comprehensible. Top-level has
the same syntax and semantics as any other body of code in Scheme
(with some restrictions about where define can be evaluated) but in
J neither is the same.

We may write block strings in J like so:

blockString =: 0 : 0 
Everything in here is a block string.       
)

When the evaluator reads 0 : 0 it switches to sucking up characters
into a string until it encounters a line with a ) as its first
character. The expression 3 : 0 does the same except the resulting
string is turned into a verb.

plus =: 3 : 0
    x+y
)

However, we can’t nest this syntax, so we can’t define non-tacit
functions inside non-tacit functions. That is, this is illegal:

plus =: 3 : 0
  plusHelper =. 3 : 0
    x+y
  )
  x plusHelper y
)

This forces the programmer to do a lot of lambda lifting
manually, which also forces them to bump into the restrictions on
function arity and their poor interaction with rank behavior, for if
we wish to capture parts of the private environment, we are forced to
pass those parts of the environment in as an argument, forcing us to
give up rank behavior or forcing us to jump up a level to verb
modifiers.

Scope

Of course, you can define local functions if you do it tacitly:

plus =: 3 : 0
    plusHelper =. +
    x plusHelper y   
)

But, even if you are defining a conjunction or an adverb, from which
you are able to “return” a verb, you can’t capture any local functions –
they disappear as soon as execution leaves the conjunction or adverb
scope.

That is because J is dynamically scoped, so any capture has to be
handled manually, using things like adverbs, conjunctions, or the good
old fashioned fix f., which inserts values from the current scope
directly into the representation of a function. Essentially all modern
languages use lexical scope, which is basically a rule which says: the
value of a variable is exactly what it looks like from reading the
program. Dynamic scope says: the value of the variable is whatever
its most recent binding is.
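
For contrast, here is lexical capture at work in Python (which, like
most modern languages, is lexically scoped): a locally defined helper
survives being returned, which is exactly what J's adverb and
conjunction scopes will not do for verbs:

def make_plus(increment):
    def helper(y):
        return y + increment   # 'increment' is captured lexically
    return helper

plus3 = make_plus(3)
print(plus3(4))  # 7, even though make_plus returned long ago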

Recapitulation!

The straight dope, so to speak, is that J is great for a lot of
reasons (terseness, rank) but also has a lot of irregular language
features (adverbs, conjunctions, hooks, forks, etc.) which could be
folded down into regular old functions without harming the benefits
of the language, simplifying it enormously.

If you don't believe that regular old first-class functions with
lexical scope can get us where we need to go, check out my
tacit-programming libraries in R and Javascript. I
even wrote a complete, if ridiculously slow, implementation of J's
rank feature, literate-style, here.


Footnotes

1 It bears noting that ; in an expression like (a;b;c)
is not a syntactic element, but a semantic one. That is, it is the
verb called “link” which has the effect of linking its arguments into
a boxed list. It is evaluated like this:

(a;(b;c))

(a;b;c) is nice looking but a little strange: in an expression
(x;y) the effect depends on whether y is already boxed: x is always
boxed regardless, but y is boxed only if it wasn't boxed before.

2 Top level? Top-level is the context where everything
“happens,” if anything happens at all. Tricky things about top-level
are like: can functions refer to functions which are not yet defined,
if you read a program from top to bottom? What about values? Can you
redefine functions, and if so, how do the semantics work? Do functions
which call the redefined function change their behavior, or do they
continue to refer to the old version? What if the calling interface
changes? Can you check types if you imagine that functions might be
redefined at any time? If your language has classes, what about
instances created before a change in the class definition? Believe it
or not, Common Lisp tries to let you do this – and it's confusing!

On the opposite end of the spectrum are really static languages like
Haskell, wherein type enforcement and purity ensure that the top-level
is only meaningful as a monolith, for the most part.


Prime Industrial Space

In their enthusiasm, they built roads
(huge, wide things, six lanes or more, sidewalks)
for which they had no buildings, through forest,
or through meadow. But to drive them, empty
and spacious, is a kind of luxury.
You have passed from, at fifty miles per hour,
reality; prime industrial space.

And into what? At an empty crossroad,
kids have knocked down the “road closed” barriers,
the black circles of their tires crossing,
crisscrossing, looping across that square of
white concrete. With the barriers down
you can drive right through to where the road ends,
leave your car, and walk through dirt, to the woods.

Quick, Probably Naive Thoughts about Turing Machines and Random Numbers

Here is a fact which is still blowing my mind, albeit quietly, from the horizon.

Turing Machines, the formalism which we use to describe computation, do not, strictly speaking, cover computational processes which have access to random values. When we wish to reason about such machines, people typically imagine a Turing Machine with two tapes, one which takes on the typical role and another which contains an infinite string of random numbers which the machine can peel off one at a time.


I know what you are all thinking: can't I just write a random number generator, put its output someplace on my Turing machine's tape, and use that? Sure, but those numbers aren't really random, in the sense that a dedicated attacker with access to the output of your Turing machine can in principle detect the difference between your machine and one with bona fide random numbers. And, in fact, it is still an open question whether there exists a random number generator which uses only polynomial time and space such that a polynomial time and space algorithm is unable to detect whether the numbers derive from a real random process or from the algorithm.

All that is really an aside. What is truly, profoundly surprising to me is this: a machine which has access to random numbers seems to be more powerful than one without. In what sense? There are algorithms which are not practical on a normal Turing machine which become eminently practical on a Turing machine with a random tape, as long as we are able to accept a vanishingly small probability that the result is wrong. These are algorithms about which we can even do delta/epsilon-style reasoning: that is, we can make the probability of error as small as we like by the expedient of repeating the computation with new random numbers and (for instance) using the results as votes to determine the "correct answer." This expedient does not really modify the big-O complexity of the algorithms.
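
A concrete illustration (my example; the details are standard but not drawn from any particular source) is the Miller-Rabin primality test, whose error is one-sided: each round with a fresh random base either proves n composite or wrongly says "probably prime" with probability at most 1/4, so k rounds push the error below 4^-k:

import random

def probably_prime(n, rounds=20):
    # Randomized primality test; "composite" answers are always right.
    if n < 2:
        return False
    if n in (2, 3):
        return True
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):   # each round shrinks the error probability
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False      # found a witness: n is definitely composite
    return True               # no witness found: n is probably prime

print(probably_prime(2**61 - 1))  # True: a Mersenne prime
print(probably_prime(2**61 + 1))  # False: divisible by 3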

Buridan’s Ass is a paradox in which a hungry donkey sits between two identical bales of hay and dies of hunger, unable to choose which to eat on account of their equal size. There is a strange sort of analogy here: if the Ass has a source of random numbers he can pick one randomly and survive. It is almost as if deterministic, finitist mathematics, in its crystalline precision, encounters and wastes energy on lots of tiny Ass’ Dilemmas which put certain sorts of results practically out of reach, but if we fuzz it up with random numbers, suddenly it is liberated to find much more truth than it was before. At least that is my paltry intuitive understanding.

In defense of “The Thin Line Aesthetic”

I was lucky to be one of the guest artists at the Code+Art Student Visualization contest at NCSU library recently, where parts of my generative art work Clocks were displayed. In preparation for the show, I wrote specific pieces for the space, which uses a large screen made out of Christie Micro-tiles. These are modular screens which can be used to construct large and even irregular displays. While the Micro-tiles gave me the largest amount of screen real-estate I'd ever worked with, they posed their own challenges. In particular, the luminance tends to vary between tiles in such a way that artworks with predominantly white backgrounds can be distracting, since each tile making up the whole screen will appear at a different brightness. The effect is less noticeable when darker colors are supplied.

Thin Lines!

During these discussions, it was suggested that using dark backgrounds might help get us away from “the thin line aesthetic” so predominant in generative art. I agree, thin lines, typically dark on white, are extremely common in generative artworks. But here I will say a few words in their defense.

Clock 10 is a thin-line clock. I’ve had occasion to think very carefully about why this clock works as a generative art piece, and it is typically the clock I talk about when talking about the project as a whole because it has a relatively simple, but non-trivial, account. Briefly, Clock 10’s charged particles want to distribute themselves evenly over the face of the clock (since they are all positively charged, and hence repel one another). The clock hands persistently frustrate this tendency by moving particles from the second hand to the hour or minute hand. As such, the particles are constantly seeking, but never attaining, their low energy equilibrium state. Critically, there is not just one such equilibrium state: there exists a family of states related by symmetry transforms (continuous and discrete rotations).

What this boils down to is that Clock 10 traces out the symmetries of the ground states. This is why, if you let the clock run for a half hour, you see concentric rings appear: these rings are the places that particles would like to be modulo rotations, were they allowed to find those states without interference.
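
A toy version of that dynamic in Python (my own approximation, not the Clocks source, and omitting the clock hands which perturb the system):

import numpy as np

# N mutually repelling particles confined to a unit disc, stepped
# down their Coulomb-like potential toward a symmetric equilibrium.
N, steps, dt = 64, 2000, 1e-3
rng = np.random.default_rng(0)
p = rng.uniform(-0.5, 0.5, size=(N, 2))

for _ in range(steps):
    diff = p[:, None, :] - p[None, :, :]           # pairwise displacements
    d2 = (diff ** 2).sum(-1) + np.eye(N)           # avoid self-division
    force = (diff / d2[..., None] ** 1.5).sum(1)   # inverse-square repulsion
    p = p + dt * force
    r = np.linalg.norm(p, axis=1, keepdims=True)
    p = np.where(r > 1, p / r, p)                  # keep particles on the disc

Run long enough, the particles drift towards the sort of ring-like arrangements whose traces the clock's thin lines record.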


A few aesthetic choices support the relationship of Clock 10 to this interpretation. The color of the trace left by each particle is adjusted in such a way that it only darkens as the particle settles down, so that paths near equilibrium are dark while those far from it are white. And, of course, we use thin lines, which allow lots of information about those trajectories to appear on the face of the clock.

I am a barbarian, so far as artistic pedigree is concerned, but if Clock 10 lives anywhere in the landscape of the practice of generative art, it is in the school of Complexism. Complexism suggests one role of generative art is to explore complex systems. In the sense that Clock 10 is an aesthetically pleasing visualization of the ground states of a certain physical system, it meets this criterion. And thin lines help it operate in this way because they allow us to see a lot of different trajectories in a small amount of space. They give the clock non-trivial texture: tendencies of the motion can be apprehended at a large scale while details of the motion are still discernible.

I tried a variety of other ways of visualizing the trajectories, but none were particularly satisfying because they obscured the fine-scale variations in a way which significantly reduced the information content of the visualization. Part of the impact of generative art is that it imitates nature, to an extent, in that it can compound over and over again many fine motions. The accumulation of so many effects is part of the immediate perception of a work, and undermining it undermines one of the fundamental advantages of using computers, systems capable simultaneously of great precision and great, repetitive patience.

So use thin lines! Or, if you are seeking alternative aesthetic choices, try to find ones which capture the same benefits, packing lots of precise detail into the image in such a way that larger trends are also made visible.
