In defense of “The Thin Line Aesthetic”

I was lucky to be one of the guest artists at the Code+Art Student Visualization contest at NCSU library recently, where parts of my generative artwork Clocks were displayed. In preparation for the show, I wrote specific pieces for the space, which uses a large screen made out of Christie Micro-tiles. These are modular screens which can be used to construct large and even irregular displays. While the Micro-tiles gave me the largest amount of screen real-estate I'd ever worked with, they posed their own challenges. In particular, the luminance tends to vary between tiles in such a way that artworks with predominantly white backgrounds can be distracting, since each tile making up the whole screen will appear at a different brightness. The effect is less noticeable when darker colors are displayed.

Thin Lines!

During discussions about these challenges, it was suggested that using dark backgrounds might help get us away from "the thin line aesthetic" so predominant in generative art. I agree: thin lines, typically dark on white, are extremely common in generative artworks. But here I will say a few words in their defense.

Clock 10 is a thin-line clock. I’ve had occasion to think very carefully about why this clock works as a generative art piece, and it is typically the clock I talk about when talking about the project as a whole because it has a relatively simple, but non-trivial, account. Briefly, Clock 10’s charged particles want to distribute themselves evenly over the face of the clock (since they are all positively charged, and hence repel one another). The clock hands persistently frustrate this tendency by moving particles from the second hand to the hour or minute hand. As such, the particles are constantly seeking, but never attaining, their low energy equilibrium state. Critically, there is not just one such equilibrium state: there exists a family of states related by symmetry transforms (continuous and discrete rotations).

What this boils down to is that Clock 10 traces out the symmetries of the ground states. This is why, if you let the clock run for a half hour, you see concentric rings appear: these rings are the places that particles would like to be modulo rotations, were they allowed to find those states without interference.


A few aesthetic choices support the relationship of Clock 10 to this interpretation. The color of the trace left by each particle is adjusted in such a way that it only darkens as the particle settles down, so that paths near equilibrium are dark while those far from it are white. And, of course, we use thin lines, which allow lots of information about those trajectories to appear on the face of the clock.

I am a barbarian, so far as artistic pedigree is concerned, but if Clock 10 lives anywhere in the landscape of the practice of generative art, it is in the school of Complexism. Complexism suggests one role of generative art is to explore complex systems. In the sense that Clock 10 is an aesthetically pleasing visualization of the ground states of a certain physical system, it meets this criterion. And thin lines help it operate in this way because they allow us to see a lot of different trajectories in a small amount of space. They give the clock non-trivial texture: tendencies of the motion can be apprehended at a large scale while details of the motion are still discernible.

I tried a variety of other ways of visualizing the trajectories, but none were particularly satisfying because they obscured the fine-scale variations in a way which significantly reduced the information content of the visualization. Part of the impact of generative art is that it imitates nature, to an extent, in that it can compound over and over again many fine motions. The accumulation of so many effects is part of the immediate perception of a work, and undermining it undermines one of the fundamental advantages of using computers, systems capable simultaneously of great precision and great, repetitive patience.

So use thin lines! Or, if you are seeking alternative aesthetic choices, try to find ones which capture the same benefits, packing lots of precise detail into the image in such a way that larger trends are also made visible.



Notes on "Quantum Computing Since Democritus," Chapter 1

For a long time, I’ve been interested in the sorts of questions exemplified by the following example:

Suppose we are Isaac Newton or Gottfried Leibniz. We have at our disposal two sources of inspiration: data, collected by intrepid philatelists like Tycho Brahe, and something like theory, in the form of artifacts like Kepler's Laws, Galileo's pre-Newtonian laws of motion (for it was he who first suggested that objects in motion retain that motion unless acted upon), and a smattering of Aristotelian and post-Aristotelian intuitions about motion (for instance, John Philoponus' notion that, in addition to the rules of motion described by Aristotle, one object could impart to another a transient impetus). You also have tables and towers and balls you can roll on them or drop from them. You can perform your own experiments.

The question, then, is how do you synthesize something like Newton's Laws. Jokes about Newton's extra-scientific interests aside, this is alchemy indeed, and an alchemy which the training most physicists receive (or at least the training I received) does not address.

Newton’s Laws are generally dropped on the first year physics student (perhaps after working with statics for awhile) fully formed:

First law: When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.
Second law: The vector sum of the external forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.
Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.

(this formulation borrowed from Wikipedia)

The laws are stated here in terms of a lot of subsidiary ideas: inertial reference frames, forces, mass. Neglecting the reference to mathematical structures (vector sums), this is a lot to digest, and it is hard to imagine Newton just pulling these laws from thin air. It took the species about 2000 years to figure it out (if you measure from Zeno to Newton, since Newton's work is in some sense a practical rejoinder to the paradoxes of that pre-Socratic philosopher), so it cannot be, as some of my colleagues have suggested, so easy to figure out.

A doctorate in physics takes (including the typical four year undergraduate degree in math, physics or engineering) about ten years. Most of what is learned in such a program is pragmatic theory: how to take a problem statement or something even more vague, identify the correct theoretical approach from a dictionary of possibilities, and then "turn the crank." It is unusual (or it was unusual for me) for a teacher to spend time posing more philosophical questions. Why, for instance, does a specific expression called the "Action," when minimized over all possible paths of a particle, find a physical path? I've had a lot of physicist friends dismiss my curiosity about this subject, but I'm not the only one interested (e.g., the introductory chapter of Lanczos' "The Variational Principles of Mechanics").

What I am getting to here, believe it or not, is that I think physicists are over-prepared to work problems and under-prepared to do the synthetic work of building new theoretical approaches to existing unsolved problems. I enjoy the freedom of having fallen from the Ivory Tower, and I aim to enjoy that freedom in 2016 by revisiting my education from a perspective which allows me to stop and ask “why” more frequently and with more intensity.

Enter Scott Aaronson's "Quantum Computing Since Democritus," a book whose title immediately piqued my interest, combining, as it does, the name of a pre-Socratic philosopher (whose questions form the basis, in my opinion, for so much of modern physics) with the most modern and pragmatic of contemporary subjects in physics. Aaronson's project seems to be exactly what I want to do as an armchair physicist: stopping to think about what our theories really mean.

To keep myself honest, I’ll be periodically writing about the chapters of this book – I’m a bit rusty mathematically and so writing about the work will encourage me to get concrete where needed.

Atoms and the Void

Atoms and the Void is a short chapter which basically asks us to think a bit about what quantum mechanics means. Aaronson describes Quantum Mechanics in the following way:

Here’s the thing: for any isolated region of the universe that you want to consider, quantum mechanics describes the evolution in time of the state of that region, which we represent as a linear combination – a superposition – of all the possible configurations of elementary particles in that region. So, this is a bizarre picture of reality, where a given particle is not here, not there, but in a sort of weighted sum over all the places it could be. But it works. As we all know, it does pretty well at describing the “atoms and the void” that Democritus talked about.

The needs of an introductory chapter, I guess, prevent him from describing how peculiar this description is: for one thing, there is never an isolated region of the universe (or at least, I hope obviously, not one we are interested in). But he goes on to meditate on this anyway by asking us to think about how we interpret measurement where quantum mechanics is concerned. He dichotomizes interpretations of quantum mechanics by where they fall on the question of putting oneself in coherent superposition.

Happily, he doesn’t try to claim that any particular set of experiments can definitely disambiguate different interpretations of quantum mechanics. Instead he suggests that by thinking specifically of Quantum Computing, which he implies gets most directly at some of the issues raised by debates over interpretation, we might learn something interesting.

This tantalizes us to move to chapter 2.

Aping J’s Verb Rank in Puff

This blog post will sketch out some thoughts relating to Puff, a function level programming language I am embedding in Javascript, and J's notion of operator rank.

Rank as it pertains to nouns in J is fairly easy to understand: it is just the number of dimensions of an array. Scalars, like 10, have rank 0; a simple vector (0 0 0) has rank 1 (as does the empty vector, denotable as 0 $ 0); a 2d matrix has rank 2; and so on.

But J also has rank for verbs. Consider the verb +.

(1 2 3) + (4 5 6)
-> 5 7 9

(For J tyros: + is called a verb in J and furthermore we use it in its dyadic sense, which is to say we pass it arguments on the left and the right.)

Informally we understand from this that in J + operates element-wise on both its left and right operands. This means its left and right rank are both zero and it operates, then, on the rank zero elements of its arguments: the individual scalar values.

But there is more to the story. As a side note, we can denote multi-dimensional arrays in J like so:

]example =. 2 3 $ 1 2 3 4 5 6 
1 2 3
4 5 6

(For the curious, that is "change the shape ($) of the array 1 2 3 4 5 6 so that it is 2 3".)

J has a box operator which is handy for demonstrating rank issues. It is denoted < and wraps any value into a box, which is a special scalar value type which holds something else.

<1
┌─┐
│1│
└─┘

Verbs in J also have rank, and the rank of < is infinite. This means that it always operates on its argument as a whole.

<(1 2 3 4 5 6)
┌───────────┐
│1 2 3 4 5 6│
└───────────┘

But the smart thing about J is that you can modify verbs with adverbs and conjunctions, one of which returns a new verb with a different rank. See if you can guess what all this means:

<"0(1 2 3 4 5 6)
┌─┬─┬─┬─┬─┬─┐
│1│2│3│4│5│6│
└─┴─┴─┴─┴─┴─┘

The array denotation 1 2 3 4 5 6 is the same as before, but now we have written <"0 instead of <. " is a conjunction which produces, from the verb on its left, a new verb whose rank is given by the value on its right. The result of <"0, then, is a verb with the same meaning as < except that it has rank 0. Verbs with rank 0 operate on the smallest possible cells of the array, so that

<"0(3 2 $ 1 2 3 4 5 6)
┌─┬─┐
│1│2│
├─┼─┤
│3│4│
├─┼─┤
│5│6│
└─┴─┘

each element of the input is boxed regardless of the incoming array's shape or rank.

If we use a different rank:

<"1(3 2 $ 1 2 3 4 5 6)
┌───┬───┬───┐
│1 2│3 4│5 6│
└───┴───┴───┘

We get a different result. One-ranked verbs operate on the 1-cells (that is, the elements of rank 1) of the incoming array, in this case the arrays 1 2, 3 4, and 5 6.

The rules work for dyadic verbs too – each argument of the verb has a rank (a right rank and a left rank) which determines how the underlying logic represented by the verb is used to join elements from the right and left arguments.

By modifying verb rank you can custom tailor your iteration through argument arrays and avoid most explicit looping.

Puff

Puff is mostly aping the function level semantics of J, but we can analogize verb rank too. Consider the Puff function map which, when given a single function argument, behaves like this:

var plus1 = _p(plus,1);
map(plus1)([1,2,3]) -> [2,3,4]

plus1 above would have, in J, an infinite rank: it always applies to its whole argument. When we say map(plus1) we have a new function which applies to the (N-1)-cells of its argument (in this case the individual integers). In other words, map creates a new function which peels off one layer of its input and applies the original function, collecting the outputs.
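To make that concrete, here is a minimal sketch of a map-like wrapper over the native Array map (a hypothetical stand-in, not Puff's actual implementation, which, as we will see below, also accepts multiple array arguments):

// return a new function which applies f to each item of its array argument
function mapWrap(f){
    return function(array){
        return array.map(function(item){ return f(item); });
    };
}

mapWrap(plus1)([1,2,3]); // -> [2,3,4]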

What, then, is

var mm_plus1 = map(map(plus1))

?

(NB, we can denote this in Puff via rep(map,2,plus1))

Here is a hint:

mm_plus1([[1,2,3],[4,5,6]]) -> [[2,3,4],[5,6,7]]

Now we have a function operating on the N-2 cells of the input. Rank in J typically operates bottom up: we start at rank 0 operating on the 0 cells, and increasing rank operates on larger and larger chunks of the input array. In contrast, iterative application of map in Puff moves us from the larger chunks to smaller and smaller chunks, until a number of applications equal to the array rank has us operating on individual items.
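The rep used above isn't defined in this post; a plausible sketch of such a helper (an assumption about its behavior, consistent with the note above) is just repeated application:

// apply a wrapping function (here map) to f n times,
// so that rep(map, 2, plus1) means map(map(plus1))
function rep(wrapper, n, f){
    var out = f;
    for(var i = 0; i < n; i = i + 1){
        out = wrapper(out);
    }
    return out;
}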

J being J we can mimic this behavior using negative rank.

<"_2(3 2 $ 1 2 3 4 5 6)
┌─┬─┐
│1│2│
├─┼─┤
│3│4│
├─┼─┤
│5│6│
└─┴─┘

(_2 denotes the number -2 in J for possibly obscure reasons to do with making the parser simpler.)

Given that 3 2 $ 1 2 3 4 5 6 has rank 2, the verb <"_2 must operate on the 2-2=0 cells.

The J approach of, by default, thinking about rank from 0-cells up works well for that language because matrices in J are regular and they keep track of their rank. If we represent matrices as nested arrays in Javascript (this is not the only option, but it is the most idiomatic) then the real rank of a matrix cannot be known without a full traversal, which is prohibitive.

I might, one day, integrate a multidimensional matrix implementation into Puff and then enable rank modifying functions to work on that representation, but for now I want to focus on the successive use of map to simulate ranking down a function from infinite rank.

Consider Rank

Consider the following definition:

function rankedCall(f,n,a){
    if(n<0){
        // a negative rank -n means: wrap f in map n times, then apply it to a
        return rep(map, -n, f)(a);
    } else {
        throw new Error("Positive ranks not yet supported.");
    }
}

var rank = _c(rankedCall);

Such that:

rank(plus1,-1)([1,2,3]) -> [2,3,4]

Cute. This gets us somewhere. But what really makes rank useful is that each argument carries its own rank and the system resolves the looping for you. In J, verbs have at most two arguments to which rank applies (simulating more arguments with lists of boxes bypasses ranked dispatch).

Dealing with multiple argument functions is tricky. Let’s start with two.

Consider:

// Puff provides a plus function
plus(1,3) -> 4
// but it doesn't work on arrays
plus([1,2,3],[4,5,6]) -> '1,2,34,5,6'

That last line is because Javascript idiotically interprets [1,2,3]+[4,5,6] to mean [1,2,3].toString()+[4,5,6].toString().

For these one dimensional arrays, we can get what we want with map, which applies a function f of arity n to the corresponding items of n arrays.

map(plus,[1,2,3],[4,5,6]) -> [5,7,9]

(NB. In Puff we can also have said map(plus)([1,2,3],[4,5,6]))
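For illustration, here is a rough sketch of how such a variadic map could be written in plain Javascript (a hypothetical stand-in, not puff's source):

// apply f across the i-th items of each of the supplied arrays
function mapN(f /*... arrays */){
    var arrays = Array.prototype.slice.call(arguments, 1, arguments.length);
    return arrays[0].map(function(_, i){
        return f.apply(null, arrays.map(function(a){ return a[i]; }));
    });
}

mapN(plus, [1,2,3], [4,5,6]); // -> [5,7,9]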

What if we have [1,2,3] and [[4],[5],[6]], that is, the second argument is rank two?

Put aside questions of efficiency for a moment and consider the function:

function nextCellIndex(a, indexes){
    var indexes = indexes.map(id); // copy the array
    var delta = indexes.length-1;
    var subIndex = indexes.slice(0,delta);
    var indexedArray = index.apply(null, [a].concat(subIndex));
    var done = indexes[delta]+1 < indexedArray.length;
    while(!done){
      delta = delta -1;
      if(delta<0){
          return null;
      } else {
          indexedArray = index.apply(null, [a].concat(indexes.slice(0,delta)));
          done = indexes[delta]+1 < indexedArray.length;

      }
    }
    indexes[delta] = indexes[delta]+1;
    for(var i = delta+1; i<indexes.length; i = i + 1){
      indexes[i] = 0;
    }
    return indexes;
}

This function takes an array and an array of indexes and finds the next valid index into that array by incrementing the innermost index, checking whether that is in bounds, stopping if it is, or incrementing the next innermost and so on. If there is no valid next index, then null is returned.

If we want what J would call the -2 cells of an array a, we iteratively apply this function to a two element index vector.

var a = [[1],[2,3],[4]]
var indexes = repeatAccumulate(_p(nextCellIndex,a),3,[0,0])

Evaluating to:

indexes
[ [ 0, 0 ], [ 1, 0 ], [ 1, 1 ], [ 2, 0 ] ]

that is, the indexes of the -2 cells. We can get these by, for instance,

index.apply(null, [a].concat(indexes[0]))

Note that a is not a regular matrix (the second item of a has a different length than the first and third) – it has no obvious rank, but we can talk about its n-cells if we talk about them from the outside in. We can write a function to give us these cells:

function cells(n, a){
    if(n<0){
      var nn = -n;
      var out = [];
      var indexes = initArray(nn,0);
      while(indexes){
          out.push(index.apply(null, [a].concat(indexes)));
          indexes = nextCellIndex(a,indexes)
      }
      return out;
    } else if (n===0){
      return a;
    } else {
      throw new Error("Positive cells not yet supported.");
    }
}

We can then just fall back onto map with the appropriate applications of cells:

map(plus,[1,2,3],cells(-2,[[1,2],[3]]))
-> [ 2, 4, 6 ]

Conceptually we’ve done well for ourselves: we’ve reproduced J‘s ability to change the way that functions join elements of arrays of arbitrary dimension. On top of that, by virtue of the arity of map, which can apply a function of any arity to any number of arrays, we have extended this idea to operators of any number of arguments (J is limited to monadic and dyadic verbs.)

In addition, Puff allows us to write the above function in a point free fashion:

var ex = f(2,au(map, al(plus), n0, r(n1,_p(cells, -2))));
ex([1,2,3],[[1,2],[3]])
-> [2, 4, 6]

(NB. al returns a function which always returns the argument to al, short for always; n0 returns the first element of a thing, n1 the second, etc. f (short for lambda) returns a function which first collects its arguments into an array and then passes them through its subsequent arguments as if via r (rCompose). Finally, au (short for augment) takes a set of functions and returns a new function which transforms its inputs via functions 1..n and applies function 0 to that list.)
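As a further note on arity: since map accepts any number of arrays, a function of three (or more) arguments works the same way. A small example, assuming map behaves as described:

var plus3 = function(a, b, c){ return a + b + c; };
map(plus3, [1,2], [3,4], [5,6]); // -> [9, 12]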

Positive Ranks

Using negative ranks is much more in line with idiomatic Javascript, since there are no native multidimensional arrays. We can produce a simple implementation of positive ranks if we make a few simple assumptions about usage. First consider:

function guessRank(a){
    var rank = 0;
    var done = false;
    while(!done){
        if(typeof a['length'] === 'number'){
          rank = rank + 1;
          a = a[0];
        } else {
          done = true;
        }
    }
    return rank;
}

Working like:

:Puff:> guessRank(10)
0
:Puff:> guessRank([1,2,3])
1
:Puff:> guessRank([[1,2],[2,3],[3,4]])
2

The assumption we are making here is that the rank of sub-elements is homogeneous (and hence, the first element is a sufficient indicator). Now that we can guess the rank of an array, we can fill in the positive rank branch of our cells function:

function cells(n, a){
    if(n<0){
      var nn = -n;
      var out = [];
      var indexes = initArray(nn,0);
      while(indexes){
          out.push(index.apply(null, [a].concat(indexes)));
          indexes = nextCellIndex(a,indexes)
      }
      return out;
    } else {
      var rank = n-guessRank(a);
      return cells(rank, a);
    }
}

Now we can finally write our implementation of J's rank conjunction ". Our version of " will be called rank and will take a function and a list of ranks and return a new function with the appropriate rank behavior.

function rank(f){
    var ranks = Array.prototype.slice.call(arguments,1,arguments.length);
    return function(){
        var args = Array.prototype.slice.call(arguments,0,arguments.length);
        // take the appropriate cells of each argument, then map f across them
        return map.apply(null,[f].concat(map(cells, ranks, args)));
    }
}

We can now say:

rank(plus,0,0)([1,2,3],[[4],[5],[6]])

And get [5,7,9]. Just like J. Of course, as we've written the code here we won't be anywhere near the efficiency of J – in particular we iterate over each argument array separately, where we could combine all those loops into just one. But performance isn't everything and we can always optimize the Puff implementation as needed. Rewriting the appropriate sequence functions (map, mapcat, crossMap) to handle lazy versions of the sequences and introducing a lazy cells operator would be the most elegant solution. I'm sure I'll get there eventually.

In the meantime, I hope I've at least helped the reader understand J's rank concept in greater depth and also shown off some of the nice ways Puff can simulate J style while staying entirely in Javascript.


Compose is Better than Dot (or: Function Level Programming In Javascript)

That great minds think alike is a reflection of the fact that certain ideas have an appeal that is, if not innate, then compelling in context. Hence the use of dot notation in the creation of domain specific languages in Javascript: dot is a handy way of succinctly representing a computation with a context. This is closely related to monads, and many javascript libraries relying heavily on dot for syntactic sugar are very closely related to one monad or another (eg _ to the sequence monad, promises to a sort of continuation monad, etc).

(NB. Much of the code here is inspired by a pointfree/function level programming library I am building for Javascript called puff, available here.)

What I aim to prove today is that function composition subsumes the functionality of dot in this regard and that we can embed in Javascript a powerful, succinct function-level programming language in the vein of APL, J, or K/Q based primarily on function composition.

First of all, the traditional definition of function composition:

/** compose functions
 *  return a new function which applies the last argument to compose
 *  to the values passed in and then applies each previous function
 *  to the previous result.  The final result is returned.
 */
function compose(){
    var fs = Array.prototype.slice.call(arguments, 0, arguments.length);
    return function(){
        var inArgs = Array.prototype.slice.call(arguments, 0, arguments.length);
        var i = fs.length-1;
        var res = fs[i].apply(null, inArgs);
        i = i - 1;
        for(; i>=0; i = i - 1){
            res = fs[i](res);
        }
        return res;
    }
}

This version of compose allows the programmer to pass in an unlimited number of functions and returns a new function which takes any number of arguments and then threads the result through all the previous functions, transforming it each time, starting with the last function and ending with the first.

This order of composition is inspired by the fact that the function in a traditional application expression precedes the argument (on the left), transforming:

f(h(g(o)))

“naturally” to

compose(f,h,g)(o)

or, if succinctness is of interest:

var c = compose;
c(f,h,g)(o)

In this way we drop a few parentheses and hopefully express more clearly our intent.

Of course we might not wish to apply our composition immediately: we can produce useful functions via composition, after all.

var plusOne = function(x){ return x + 1 };
var timesTen = function(x){ return x*10 };

var plusOneTimesTen = c(timesTen, plusOne);

We can now apply plusOneTimesTen to as many values as we wish. Handy. However, now our intuition about naming and the order of the arguments to c are at odds. Hence, we introduce:

function rCompose(){
   return compose.apply(null,
     Array.prototype.slice.call(arguments, 0, arguments.length).reverse());
}
var r = rCompose;

So that the above looks a bit nicer:

var plusOne = function(x){ return x + 1 };
var timesTen = function(x){ return x*10 };

var plusOneTimesTen = r(plusOne, timesTen);

This reverse composition operator is similar in many respects to dot in javascript except we have abstracted away this, to which each method invocation is implicitly addressed in a dot chain. In addition, instead of simple method names, each element in our r list can be any Javascript expression which evaluates to a function. This means that we can denote any sequence of operations this way without worrying whether or not they have been associated with any particular Javascript object.
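A small comparison makes the analogy concrete. Assuming a couple of standalone wrappers around built-in string methods (hypothetical helpers, not part of puff):

var trim = function(s){ return s.trim(); };
var upper = function(s){ return s.toUpperCase(); };

"  hello  ".trim().toUpperCase();  // dot: the value is threaded along as `this`
r(trim, upper)("  hello  ");       // r: the value is threaded by composition
// both evaluate to "HELLO"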

With the right set of primitives and combinators, r forms the basis of a powerful, succinct function level programming language in which we can build, among other things, expressive domain specific languages. Or with which we can rapidly denote complex operations in a very small number of characters.

Well, first of all we want a bevy of basic functions:

plus, minus, div, times, split, join, toString, index, call,
apply, etc, array

These behave as you might expect, more or less (eg, some plausible implementations to give you the spirit of the idea):

function plus(){
   var out = arguments[0];
   Array.prototype.slice.call(arguments, 1, arguments.length)
    .forEach(function(x){
        out = out + x;
     });
   return out;
}

function index(o, i){
  return o[i];
}

function call(f){
  // apply f to the rest of the arguments
  return f.apply(null, Array.prototype.slice.call(arguments, 1, arguments.length));
}

You get the idea.

Astute readers may realize that our composition function seems to only work with functions of a single argument. Remedying this will be a matter of some interest. The simplest approach is to provide for partial application:

/** partially fix arguments to f (on the right)
 *
 */
function partialRight(f /*... fixedArgs */){
    var fixedArgs = Array.prototype.slice.call(arguments, 1, arguments.length);
    var out = function(/*... unfixedArgs */){
        var unfixedArgs = Array.prototype.slice.call(arguments, 0, arguments.length);
        return f.apply(null, unfixedArgs.concat(fixedArgs));
    }
    out.toString = function(){
        return "partialRight("+f.toString()+","+fixedArgs.join(",")+")";
    }
    return out;
}

/** partially fix arguments to f (on the left)
 *
 */
function partialLeft(f /*... fixedArgs */){
    var fixedArgs = Array.prototype.slice.call(arguments, 1, arguments.length);
    var out = function(/*... unfixedArgs */){
        var unfixedArgs = Array.prototype.slice.call(arguments, 0, arguments.length);
        return f.apply(null, fixedArgs.concat(unfixedArgs));
    }
    out.toString = function(){
        return "partialLeft("+f.toString()+","+fixedArgs.join(",")+")";
    }
    return out;
}

These functions (they might be adverbs in J) take a function and a set of values and return a new function, "fixing" the arguments to the left or right of the argument list, depending on whether we've used partialLeft or partialRight.

It's handy to introduce the following bindings:

var p_ = partialRight;
var _p = partialLeft;

I hope these are relatively mnemonic (Javascript unfortunately isn’t an ideal environment for very short, expressive names).
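A quick usage sketch shows the difference (minus here is an assumed two-argument subtraction, introduced just for the example):

var minus = function(a, b){ return a - b; };

_p(minus, 10)(3);  // partialLeft:  10 is fixed as the first argument, so 10 - 3 -> 7
p_(minus, 10)(3);  // partialRight: 10 is fixed as the last argument,  so 3 - 10 -> -7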

We can get a surprising amount of mileage out of these ingredients already. For instance, a function to remove break tags from a string and replace them with newlines (sort of contrived):

var remBreaks = r(p_(split,'<br>'),p_(join,'\n'));

compared to

function remBreaks(s){
   return s.split('<br>').join('\n');
}

(NB. If split and join are written as curried functions, as they are in puff, the above is a little shorter:

var remBreaks = r(split('<br>'),join('\n'));

Providing a meaningful default currying (which args should be applied first) is a little tricky, though.)
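One plausible way such a curried split might look (an assumption, mirroring the currying of index shown later, and not necessarily puff's actual choice):

function split(s, sep){
    if(typeof sep === "undefined"){
        // called with only a separator: return a function awaiting the string
        var realSep = s;
        return function(str){ return str.split(realSep); };
    }
    // called with both arguments: split immediately
    return s.split(sep);
}

The trick, as noted, is deciding which argument the single-argument form should fix; fixing the separator reads well in pipelines like r(split('<br>'), join('\n')).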

Going Further

The above example demonstrates that we can do some handy work with r as long as it involves simply transforming a single value through a set of functions. What may not be obvious here is that a determined programmer can denote any computation whatsoever this way, even if multiple values might need to be kept around and transformed.

Consider the function which I will call augment or au:

/** given a function f and a additional functions gs
 *  return a new function h which applies each g
 *  to its single argument and then applies f to the 
 *  resulting list of values 
 */
function augment(f /*... gs*/){
    var gs = Array.prototype.slice.call(arguments, 1, arguments.length);
    var out = function(a){
        return f.apply(null, gs.map(function(g){
            return g(a);
        }));
    }
    out.toString = function(){
        return "augment("+f.toString()+","+gs.map(toString).join(", ")+")";
    }
    return out;
}

And a nice default currying of index:

function index(o, ix){
   if(typeof ix === "undefined"){
     var realIx = o;
     return function(o){
       return o[realIx];
     }
   } else {
     return o[ix];
   }
}
var ix = index;

Now:

var fullName = augment(join(' '), ix('first'), ix('last'));

Such that:

fullName({first:'Ronald',last:'Reagan'}) -> "Ronald Reagan"

What have we just accomplished? We’ve demonstrated that we can take an arbitrary function of any arity and automatically transform it into a function which reads its input arguments from a single object. The sort of single object which we might be passing from function to function via r.

Putting it Together

To go further we need to add just one more utility function: cleave

function cleave(v /*... fs*/){
    var fs = Array.prototype.slice.call(arguments, 1, arguments.length);
    return fs.map(function(f){
        return f(v);
    });
}

and the shortcut:

function cl_(/*... fs*/){
  var fs = Array.prototype.slice.call(arguments, 0, arguments.length);
  return function(v){
    // apply each of the captured functions to v
    return cleave.apply(null, [v].concat(fs));
  }
}

(This is just cleave curried on the right, eg in puff: c_(cleave).)

Now we have enough to denote, entirely point free, a function which replaces a person's middle name:

// replaceMiddleName name newMiddleName -> transformedName
var replaceMiddleName = r(args(2),
                          cl_(r(first, split(' ')), second),
                          cl_(r(first,first),second,r(first, third)),
                          au(join(' '), first, second, third));

Let’s go nuts with some of the extra utility functions in puff:

var replaceMiddleName = f(2,
                          cl_(r(n0,split(' ')),n1),
                          cl_(n00, n1, n02),
                          join(' '));

Puff lets you do all of this insanity quite easily. There are browser and nodejs builds. You use it by saying, in node:

require('puff').pollute(global);

Which pollutes the global namespace with the puff functions. Since terseness is the rule I can’t imagine why you’d do otherwise, but you can also say:

var puff = require('puff');
puff.r(...)

In the browser, add:

<script src="./build/puff-browser.js"></script>

To your head. This will define a global puff object, which you can then use directly or say:

puff.pollute(window);

Quick Thoughts about Interactive Fiction

I've recently started a podcast called Text Adventure Purgatory wherein several friends and I play and talk about Text Adventures/Interactive Fiction. Doing so has crystallized, in my mind, a few thoughts that had been in mere fluid suspension in the back of my head about games and fun in general.

"A Theory of Fun for Game Design," by Raph Koster, asserts the following basic premise: fun is learning. This predicts that if a game offers to you a system which you can learn, then you will have fun playing it up until you have exhausted either the system or your capacity to continue learning about it. It's silly to suggest that this theory covers everything that is fun or everything we might want to assert is a game (this kind of idealism is counterproductive in any context, if you ask me), but it is, I would argue, a useful one.

What is learning, anyway? I think neuroscience and contemporary machine learning techniques (which are inspired by and inspire neuroscience) can provide us with a useful model of the process: learning is an optimization problem which attempts to map inputs onto “desired” outputs or outcomes. Eg: the pixels (and their history) on a screen are mapped by our brains into a series of button presses which result in Mario reaching the end of the screen, where he touches the flag pole. Better than just describing the process, we now have a reasonable idea of how it happens too, and how to imitate the process in software.

There are lots of techniques for the latter, but they basically boil down to optimizing an objective function (the mapping from input to output) by exploring the input space, finding, and following trends in the output space. That is, start with a naive model, take some characteristic input data, apply the model to it, measure the outcome, make small changes to the model to improve the outcome (lots of strategies for this step), repeat until the model behaves well enough for your purposes. In the brain this happens by adjusting synaptic weights (and other physiological properties) of the neurons in question. In computerized learning systems this occurs by modifying the numerical parameters of the model.
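As a toy illustration of that loop (everything here is made up for the example), here is a one-parameter version in Javascript: start with a guess, nudge it in whichever direction reduces the error, and repeat:

// adjust a single numeric parameter to reduce an error measure
function learn(error, guess, steps){
    var delta = 0.1;
    for(var i = 0; i < steps; i = i + 1){
        if(error(guess + delta) < error(guess)){
            guess = guess + delta;
        } else if(error(guess - delta) < error(guess)){
            guess = guess - delta;
        }
    }
    return guess;
}

// e.g. find the value closest to 3 by trial and adjustment
learn(function(x){ return Math.abs(x - 3); }, 0, 100); // -> approximately 3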

Now we are ready for the point of this reflection: text adventures and interactive fiction provide too sparse a set of inputs and outputs to meaningfully train a system for playing. They (generally, I’m sure exceptions exist or attempt to exist) don’t provide a rich enough state space for learning, and hence they aren’t fun in the way that “A Theory of Fun” proposes we interpret that word.

What do I mean by “too sparse?” I mean, for one thing, that for any state in the game I can specify some non-perverse measure of similarity and value for that measure which has the following property: there will be no neighboring states included within that boundary. This is in contrast to games which involve simulated motion in space, which is, for the purposes of our discussion, continuous (that computers actually only simulate discrete spaces is not really material to the discussion: they are discrete spaces of sufficient granularity that our brains perceive them to be continuous).

For instance, there is a state in Deadline, the Infocom game we played for several episodes in TAP, wherein the player character has discovered several pieces of broken china in the rose garden near the balcony of the library in which a murder has taken place. We arrive at this state only and exactly when a particular sequence of events (amounting, in isolation, to a few turns in the right order) has occurred. There is nothing to refine about the process of reaching this state: either you perform the sequence of actions that produce this outcome or you do not so perform them.

A bit of reflection reveals how much in contrast this is with more typical videogames: in Super Mario Brothers, for instance, there are effectively an infinite number of ways to touch (to specify a single instance) the final flagpole in each level. As we vary the exact moment we press the jump button, where we jump from, how long we hold it, how long we have run before, we refine the final state of interest and can find a solution which maximizes our height on the pole. There is a continuum of input states and output states (and a clear way of measuring our success) which allows those learning circuits (to use a drastically oversimplifying colloquialism) to grab onto something.

When playing a text adventure, in contrast, we essentially have nothing to do but explore, often by brute force, the state space the game gives us branch by branch until we find the final state. This is not usually fun, and using the context clues embedded in the text rarely helps: they can be either obtuse, in which case we are back to the first strategy, or obvious, in which case there isn't much to do but follow their instructions and traverse the graph. This problem is exacerbated by the fact that text adventures present themselves to us as text, creating the illusion of a rich, detailed world where, computationally, the exact opposite is true: everything reduces to a set of nodes connected by edges. Labeling more than one of those edges as "ending" the game helps a little: we can repeat the experience and land at different ending nodes by virtue of knowledge obtained on previous playthroughs, but we are still jumping from discrete state to discrete state, connected by discrete edges of low cardinality.

This isn’t a dig at interactive fiction: it is a way of explaining why it doesn’t “play like” other kinds of videogames, despite sharing a medium (computers). Novels, for instance, are even more restricted than interactive fiction: they proceed only and exactly in one way and come to life only and exactly as we read them.

Maybe these reflections tell us what we already know: that interactive fiction is more literature than game and that we should look elsewhere than traditional videogame experiences for an interpretive strategy which will allow us to discuss interactive fiction meaningfully.

Skunks

Skunks

All this burning and yet still sodden world:
dead skunks along the road I drive homeward,
each day, raising smells like the underside
of lavender, dust and slate, my dry mouth.

They are, each day, ever more abstracted,
white and black coarse fur, driven by car wheel
ever more towards lumps of tufted pink gore,
then to brown, strangely flat things, dirt, dust.

 

Vincent Toups

Buzzards

I think nothing so honest
exists as the huge buzzard,
ready, unashamed, to eat
that which rots, to eat bowels,
to eat the soft grape of dead eye.

Luxuriously, they glide,
one by one, enormous winged
black birds, to perch on the corpse,
or to dance, wings outstretched,
around it on the dead grass.

We are trussed buzzards, bound tight,
just a hop from some big corpse,
rotted now, nearly perfect,
tender, fragrant. If we could
just slip our bindings to feast.

 

Duckspeak Vs Smalltalk

The Decline of the Xerox PARC Philosophy at Apple Computers

Malcolm Gladwell’s recent piece, “Creation Myth”, in the New Yorker, about innovation and implementation via Xerox PARC, academia and Apple Computers, tells one interesting story about that surprising time in our modern history. But the story of the tensions and synergies between visionaries and businessmen elides a few interesting details about what was going on, and why, at Xerox PARC at the time. Gladwell’s version of history features a nimble entrepreneur, Steve Jobs, capitalizing on an idea the value of which a monolithic company, Xerox, can’t see. But the story of Apple and Xerox PARC is also that of a design philosophy meant to empower people diverging into one meant to entertain them or to sell them things.

When Steve Jobs visited Xerox PARC and saw the first mouse, the system he was looking at, the Alto, was running a programming environment and language called Smalltalk. While the details of this system are glossed over in the Gladwell piece, they deserve more careful attention. Although the Alto bears a superficial resemblance to modern computers, it differs in one major area: the relationship between software developers and users.

For most people software is a solid edifice – it presents a few modes of interaction to the user, maybe a special panel of customization options somewhere, but is otherwise as opaque and unmodifiable as a modern car. If users bother to think about software at all, they think of it as a product, constructed somewhere by people called “programmers” and distributed to the user. If that software doesn’t do what the user wants, he might send a hopeful technical support email, or he might just shop around for something else.

Of course, there is consumer software that includes more powerful extension features, so that, in principle, the user can add their own functionality, but these features don't seem to be popularly used. Firefox is an example of user-extensible software, but the vast majority of users don't use this capacity except to download what a small percentage of computer literate users write.

In other words, it is reasonably safe to say that most people who use computers have never written software.

Why should this be, and what does it have to do with Xerox PARC, Smalltalk and Steve Jobs? Well, an integral part of the Xerox PARC Philosophy was to dismantle the wall between software developers and computer users, to develop systems so easy to program that doing so would be a natural, simple aspect of computer use.

The early years of computing technology naturally produced a division between users and programmers – programming early computers was a highly technical discipline which required specific knowledge of the way the idiosyncratic hardware systems in those days worked. But while computers rapidly increased in power, the tools that programmers used to program them developed relatively conservatively. It is easy to imagine a world where those tools developed along with the computers, until programming itself became so easy that the average user would feel comfortable doing it. After all, the point of any program is to automate or facilitate tedious work, and in this respect programming itself is no different than a word processor.

That wasn’t exactly how things happened, and the reason why is a fascinating and arguably still unresolved story in and of itself. Part of that story takes place at Xerox PARC.

The Xerox PARC Philosophy

I mentioned above that the computer Steve Jobs saw on his visit to to Xerox PARC, the Alto, was running something called a Smalltalk System. Smalltalk is still around, and you can even download a self-contained Smalltalk System called Squeak and play around with one yourself. What you’ll see, if you do, is something which is probably very similar to what Steve Jobs saw on that day – a desktop-ish interface, with dragable windows and clickable buttons. And of course, you interact with the mouse.

The Graphical Programming Environment of Pharo Smalltalk.

Both systems also share a fascinating property which “Creation Myth” leaves unmentioned. In Smalltalk, you can, using something called “The Browser,” pull up the “source code” for any object in the system. “Object” in this case means anything in the system whatsoever, including windows, widgets, numbers. “Source code” is the stuff that a compiler translates into machine code so the computer can do something with it. If you want, you can modify that code right there, or copy it and create a new object with user-customized behavior. The entire system is transparent and modifiable.

Most of the programming languages people used in 1979 would have looked very nearly like gibberish to a lay person. Early computers were slow, which meant that compilers took a long time to work unless they were very simple. This meant that most early programming languages were just thin shells on top of the numbers-as-command codes of machine language. Even by 1979, languages hadn’t developed much further in public use – corporate and government users (pretty much the only users before personal computing) were interested in cost-effectiveness and systems their programmers already knew, so language and system design was very conservative. New languages came along, but often they were incremental improvements on previous designs.

The designers of Smalltalk (Alan Kay, Dan Ingalls, and Adele Goldberg principally, and others), given the resources and freedom of Xerox PARC, worked actively to reverse this trend. Whereas a hodgepodge of cultural and technical realities constrained the way most other programming languages looked and felt, both Smalltalk the language and the system were written from the ground up to be so easy that a child could use them (hence the name). It was much more ambitious than just that, however. Kay saw Xerox PARC as being on the vanguard of a real revolution in human/computer interaction. In “The Early History of Smalltalk,” Alan Kay writes of this “Xerox PARC” vision of personal computing:

… the user interface would have to become a learning environment along the lines of Montessori and Bruner; and [the] needs for large scope, reduction in complexity, and end-user literacy would require that data and control structures be done away with in favor of a more biological scheme of protected universal cells interacting only through messages that could mimic desired behavior.
… we were actually trying for a qualitative paradigm shift in belief structures — a new Kuhnian paradigm in the same spirit as the invention of the printing press…

It is obvious from the “The Early History of Smalltalk” that Alan Kay has a direct, emotional involvement in his subject matter (he says so in fact). What is equally obvious is that Kay’s retrospective must be bittersweet at best. Smalltalk and the Alto were, at the time, the avatar of “The Xerox PARC Design Philosophy”. The systems Apple went on to produce would imperfectly capture this philosophy, and arguably, later, jettison it altogether.

In one anecdote, Kay relates showing a custom system (built in Smalltalk) meant to facilitate non-expert “programming,” to executives from Xerox PARC. This system was a kind of highly advanced programming language meant to make human-machine interaction at a very high level intuitive for non-expert users. At one point during a demonstration, a vice president, after an hour of working with the system, realized he was programming. What they accomplished, then, was a keystone for a software system which Kay felt bridged the gap between the numbers coursing through a CPU somewhere, and human intuitive reasoning.

Kay viewed programming as a natural aspect of human computer interaction, and he designed his systems to make programming the computer as easy and intuitive as creating a new Word Document or browsing the web is on modern computers. When Steve Jobs visited Xerox PARC and saw the Alto, he brought more than just the user interface to Apple Computers, he brought an entire philosophy of personal computing.

HyperCard

The Xerox PARC philosophy can be seen in a variety of technological lineages still discernible in the Apple universe. Objective C, a language heavily influenced by Smalltalk, though without its attendant environment (the Smalltalk system), is still in use. Kay himself is quick to point out that HyperCard, an early and incredibly popular application environment which encouraged user extension and programmability in a language called "HyperTalk" (itself inspired by Smalltalk and the Alto), was a good realization of the Xerox PARC philosophy on the Mac.

A screenshot of Hypercard in action. Linked from here.

HyperCard, like much of the work from this period, defies comparison to modern software. Although often described as a kind of hypertextual rolodex, its "cards" could contain more than static information – they could also contain user-created multimedia and interactive components. Users would begin by adding cards for various pieces of information, but then, say as a card representing sales data grew to require a calculator, an interactive component for that purpose could be added. These components were themselves added interactively from within HyperCard.

HyperCard, and the people using it, organically grew many applications which left a permanent mark on computer history. A particularly telling fact is that the original version of the game Myst, a fantasy adventure game, was a HyperCard App. On the other side of the Atlantic, figuratively and literally, Renault, a French car manufacturer, used HyperCard to maintain its business inventory. HyperCard became the program its users needed it to be because it was open, extensible and encouraged user programming and interaction as a fundamental use-case. Even modern extensible software like Firefox tends to separate use from extension development – the average user might have no idea that Firefox supports user extension. In HyperCard, these features were "on the surface" of the design.

HyperCard also illustrates some of the difficulties that might be responsible for the gradual shift away from Xerox PARC-like open models of personal computing. According to rumor, the developer of HyperCard, Bill Atkinson, allegedly[1] gave the product to Apple in 1987, with the understanding that it would be distributed for free with each Mac. The program was an immediate success. HyperCard produced a tremendous amount of feedback from the community, but since it was a free product, Apple wasn't sure how many internal resources should be devoted to handling HyperCard development.

Perhaps seeking a way of turning the HyperCard phenomenon into a revenue stream, Apple eventually transferred HyperCard development to a subsidiary company, which attempted to transform it into a profitable business model. HyperCard was no longer released for free, but a locked down version, capable of playing, but not developing, HyperCard Applications was freely available. The "developer's edition," recognizable as just HyperCard, was available for purchase. In an effort to make HyperCard into a business model, Apple had inadvertently separated users into "developers" and "users." This, combined with the development of work-alikes with more features, seemed to destroy HyperCard's momentum, and, despite later attempts at revival at Apple, the system fell out of use[2].

Waiting for the Dynabook

Alan Kay invented the laptop computer – at least he developed a concept computer called The Dynabook which for all intents and purposes was a modern laptop and more. He envisioned that such a system, directed mostly at children (but usable by adults), would run Smalltalk, and while it's possible today to build the conceptual system Kay imagined in 1968, he still believes that the Dynabook doesn't exist. Although tablet computers resemble the Dynabook superficially, and the One Laptop Per Child project comes close, Kay believes that his essential vision is unfulfilled. Kay points out, when asked about this, that the necessary technologies for a Dynabook device are quite old, but that corporate and cultural practices simply haven't caught up to using them appropriately.

Sketch of the Dynabook design (from Wikipedia.)

Consider by contrast any one of Apple’s iDevices. The touch screen, networking capability and user friendly design are reminiscent of the Dynabook, but, whereas on a Smalltalk system one could click on any widget and see and modify the source code, an iPad is essentially completely locked down. Not only does Apple require a license to develop and sell software for the iDevices in their “App Store,” but to even develop, for personal use, software for your own device, a separate “Developer’s Kit” (and the Apple Computer to run it on) must be acquired. Whereas Smalltalk was designed from the bottom up to facilitate programming for young and inexperienced users, the iPad targets its development tools, which are arguably byzantine by the standards of Smalltalk, to a relatively small group of developers. On top of that, software is only distributable after passing through an often arbitrary and, in any case, secretive Apple review process.

While the Dynabook was meant to be a device deeply rooted in the ethos of active education and human enhancement, the iDevices are essentially glorified entertainment and social interaction (and tracking) devices, and Apple controlled revenue stream generators for developers. The entire "App Store" model, then, works to divide the world into developers and software users, whereas the Xerox PARC philosophy was for there to be a continuum between these two states. The Dynabook's design was meant to recruit the user into the system as a fully active participant. The iDevice is meant to show you things, and to accept a limited kind of input – useful for 140 character Tweets and Facebook status updates, all without giving you the power to upset Content Creators, upon whom Apple depends for its business model. Smalltalk was created with the education of adolescents in mind – the iPad thinks of this group as a market segment.

HyperCard was, by comparison, much closer to the Dynabook ethos. In a sense, the iPad is the failed “HyperCard Player” brought to corporate fruition, able to run applications but completely unsuited for developing them, both in its basic design (which prioritizes pointing and clicking as the mechanism of interaction), in the conceptual design of its software, and in the social and legal organization of its software distribution system.

It is interesting that at one point, Jobs (who could not be reached for comment) described his vision of computers as "interpersonal computing," and by that standard, his machines are a success. It is just a shame that in an effort to make interpersonal engagement over computers easy and ubiquitous, the goal of making the computer itself easily engaging has become obscured. In a world where centralized technology like Google can literally give you a good guess at any piece of human knowledge in milliseconds, it's a real tragedy that the immense power of cheap, freely available computational systems remains locked behind opaque interfaces, obscure programming languages, and expensive licensing agreements.

The last 30 years have accustomed us to breakneck advancements in the technology we use every day, and yet at the personal level these advancements have been limited almost exclusively to communication and entertainment – so much so that arguably the public lacks even the vocabulary to express what it is that modern computing could be doing for them or what they could be doing with modern computing. Spreadsheets are the closest most people get to "computing" with their personal computers. The electronic spreadsheet, which is itself an adaptation of an analog technology, was conceptualized in 1961.

If you ask Alan Kay about personal computing now, he is remarkably upbeat. In his view, the rapid development of technology simply outpaces the ability of corporate and educational systems to adapt, and this leads to a "pop culture" of sorts which dominates the culture of computer use. In other words, the divide between users and programmers, or at least between the truly computer literate and the merely casual computer user, isn't a top-down phenomenon imposed upon the people by those in control of technology. It is an inevitable result of the rapid pace of development.

I think one of the main consequences of the inventions of personal computing and the world wide Internet is that everyone gets to be a potential participant, and this means that we now have the entire bell curve of humanity trying to be part of the action. This will dilute good design (almost stamp it out) until mass education can help most people get more savvy about what the new medium is all about. (This is not a fast process). What we have now is not dissimilar to the pop music scene vs the developed music culture (the former has almost driven out the latter — and this is only possible where there is too little knowledge and taste to prevent it). Not a pretty sight.

Alan Kay is still pushing for a more symbiotic conceptualization of human/computer interaction, although he describes Smalltalk as part of his "distant past". He presently heads a non-profit organization he co-founded called "Viewpoints Research Institute," whose purpose is to continue to consider the questions of educational and personal computing. We'd never have gotten the iPhone if it hadn't been for his influence at Xerox PARC. Maybe one day we'll be lucky enough to get the Dynabook.


[1] Bill Atkinson is presently a nature photographer and couldn't be reached for comment.

[2] However, HyperCard's influence is still felt today. Last year, Dale Dougherty, editor of Make Magazine, wrote in Wired that the iPad needed a HyperCard-type application. Tilestack, a web based HyperCard-a-like with a pay-to-distribute model, recently went bust. Squeak Smalltalk includes a "Morph," a kind of extendable program, which is loosely based on HyperCard. Although from a parallel technological lineage altogether, Emacs, which is still in wide use, and which the author used to write this article, resembles HyperCard in many respects.


Copyright J. Vincent Toups 2011

The Fox

Amen I say to you: the fox which lives
in the abandoned building near our cabin
means nothing at all. It bears no portents.
Molecule by molecule it tumbles
through space, much like the rest of us, a dense
cloud, a dense cloud, a dense cloud of
matter whispering itself the quiet
evocations of its own being! Or!

Maybe not.

It is brown. It stares at us silently,
while we sit there in our car watching it.
It is the brown of dead leaves, wet from rain
in late December, of mud, of the wood
of the empty building near our cabin.

Optional Keyword Arguments in J

J is great! It is a wonderful little language for data analysis tasks. However, to programmers used to working in modern dynamic languages, it sometimes feels a little restrictive. In particular, a ubiquitous feature in the post-Python (let's call it) era is hash-maps. Even in older languages, like Common Lisp, the association list – allowing arbitrary mappings between keys and values – is a very common idiom.

J exposes no native functionality that exactly meets this use case, one common application of which is named, optional arguments to functions.

In developing an interactive gnuplot interface for J, I wanted to pass optional keyword arguments to functions so that plots can be easily customized. So I developed a simple representation of key/val pairs which is efficient enough for small collections of named values.

Consider:

nil =: (0$0)

rankOne =: 1: -: (#@:$)
rankTwo =: 2: -: (#@:$)
talliesEven =: 0: -: (2: | #)
twoColumns =: 2: -: 1&{@:$

opts =: monad define
  keysandvals =. y
  assert. rankOne keysandvals
  assert. talliesEven keysandvals
  ii =. 2 | (i. #y)
  keys =. (-. ii) # y
  vals =. ii # y
  keys (>@:[ ; ])"(0 0) vals
)

Opts is a monadic verb which takes a flat boxed array of even length and returns a rank two boxed array whose first column is keys and whose second column is values:

opts 'key1';10;'key2';11
+----+--+
|key1|10|
+----+--+
|key2|11|
+----+--+

Lookup is simple: find the key's index in the first column and index the second column with it. Return nil if the key isn't found.

getopt =: dyad define
  options =. x
  key =. y
  assert. rankTwo options
  assert. twoColumns options
  if. 0 -: #options do.
   nil
  else. 
    ii =. ((i.&1)@:(((key&-:@:>)"0)@:(((0&{)@:|:)))) options
    if. ii < #options do.
      (>@:(ii&{)@:(1&{)@:|:) options
    else.
      nil
    end.
  end.
)

Eg:

(opts 'key1';10;'key2';11) getopt 'key1'
-> 10

We can now define a handy conjunction to allow the specification of a default value:

dft =: conjunction define 
:
  r =. x u y
  if. r -: nil do.
   n
  else.
   r
  end.
)

Which we use this way:

(opts 'key1';10;'key2';11) getopt dft '___' 'key1'
-> 10
(opts 'key1';10;'key2';11) getopt dft '___' 'key3'
-> '___'

This allows us to pass optional arguments to verbs and specify default values relatively easily, as in this example from my gnuplot library:

histogram =: verb define 
  (ensureGnuPlot'') histogram y
:
  data =. y
  if. boxedP x do.
   options =. x
   gph =. options getopt dft (ensureGnuPlot'') 'gnuplot'
  else.
   gph =. x
   options =. (opts '')
  end.
  mn =. <./ data
  mx =. >./ data 
  bw =. options getopt dft (0.1&* mx-mn) 'binwidth'
  pttl =. options getopt dft '' 'plot-title'
  'binwidth=%f\n' gph gpfmt <bw
  gph send 'set boxwidth binwidth'
  gph send 'bin(x,width)=width*floor(x/width) + binwidth/2.0' 
  s =. 'plot "%s" using (bin($1,binwidth)):(1.0) smooth freq title "%s" with boxes'
  s gph gpfmt ((asFile (asVector data));pttl)
)

Where, in the dyadic case, we detect whether x is boxed, and if so, treat it as a list of options. We extract each option by name, providing a reasonable default.

This seems like a common problem, so I am wondering if anyone else in the J community has solved it before in a different way?