Author Archives: Vincent

About Vincent

Just your average physicist/computational neuroscientist turned software engineer.

Fatherhood is a Recurring Confrontation with Mortality

In childhood you are only aware of death in the immediate relationship between harm and pain. In adolescence the nature of the world imposes itself on you in the development of sexual maturity – the great social and physical sorting demanded by the fundamental reproductive urge. In these layered experiences an awareness of death is already present. Unless life is particularly free of distraction or genuine succor, you learn to ignore these signs of mortality and even direct, conscious awareness of death.

For me, sometime in my thirties, it became a recurring thought that nothing more remarkable than an excess of kinetic energy would be enough to replace my own rich experience of existence with a resounding and permanent nothingness, but in an Epicurean way this was a calming background knowledge. Whatever vicissitudes life might throw at me I could be assured that at some point in the future they would be at the very least moot.

Being a father has changed this pleasant detente with mortality. The identification I feel with my child is so powerful that my own comfortable relationship with the possibility of death has been disrupted. Not only does the possibility of my own death and its effect on the life of my child put the sting back into imagining death for myself, but I now must imagine all the various ways death might manifest for him.

But it's not just that. Fatherhood is a great harrowing of the body and the mind. You lose sleep and you exhaust yourself physically. My son likes to be thrown into the air and to hold on to my hands and climb up to stand on my shoulders. My muscles and joints ache with fatherhood. Fatherhood hurts in the most mundane ways. The body rises up out of the ignorable background noise of being to make itself known in its falling apart.

My partner and I decided to have kids in some sense out of a vote of confidence for life. Despite the cruelty, insanity and unknowability of the world we had the sense that the experience of life was worth not just having, but continuing. I didn’t fully anticipate that the act of bringing a child into the world would back-react in such a concrete way on my own experience of being mortal.

Philosophy of Strategy Game Design (an attempt)

I don’t get to do a lot of game development these days (now that I am a dad and I have a full time job). But I still think about game design a fair bit in my spare moments. Arguably, The Death of the Corpse Wizard is a strategy game and I enjoy talking about strategy game design in particular with the Keith Burgun Games community. There are lots of ways of talking about this subject (and I might even believe that at a fundamental level, one can’t make a good game of any kind via reductive strategy) but I, personally, find my thinking is influenced by two sources: philosophy and physics.

In particular, Bernard Suits’ book “The Grasshopper” has left a lasting impression on me, both as a kind of literary work and as an organized and systematic attempt to define what games are. I think this kind of philosophical approach can be useful for understanding even specific sorts of games, like strategy games, and I’d like to sketch an approach to the problem in that style here.

First, let me recapitulate some of Suits’ basic ideas. He defines a game as “The voluntary pursuit of a goal by less than efficient means.” This is a compact definition and thus requires some exposition. His frequent example is golf: the goal in golf is to put a ball in a hole. When we play golf we do not pursue this goal by the most efficient means available. We intentionally pursue it by the less than efficient means of swinging a stick at the ball, as many times as necessary, until it lands in the hole. It seems obvious that golf only constitutes a game if we undertake it voluntarily. I have more to say on this point, but I think it’s reasonable to suggest that while we may go through the motions of a game with a gun to our heads, we can hardly be said to be “playing.”

This is not much remarked upon in The Grasshopper, but I think there is a reasonable implication in Suits’ definition: a game is an undertaking which is pursued for its own sake. This is plausible if we step out of the game and watch a person play: if a person voluntarily pursues a goal by less than efficient means, it must be because the less than efficient means of pursuit are themselves the object of the behavior. After Suits, I believe it is fair to provide the following description of leisure: any activity undertaken for its own end. Thus, games are naturally leisure. We undertake the pursuit of the goal for the sake of the pursuit rather than for some external purpose.

(This helps us understand the requirement that the undertaking be voluntary: if we were coerced by violence to play the game, we would be undertaking the activity as a means of avoiding violence, not for its own sake).

Can we understand how to design better games by considering this frame?

Strategy and Strategy Games

By the above, we might suggest that when someone plays a strategy game, their goal is not to satisfy the win condition of the game. For instance, in Chess, the win condition is that the opponent’s King is in checkmate. But a player who simply re-arranged the pieces when their opponent wasn’t looking isn’t playing Chess, though they are pursuing the goal of Chess. To want to play chess is to wish to reach that goal by a highly restrictive set of less than efficient means. (Suits uses the term “lusory goal” to suggest this ancillary character for the in-game goal.)

Can we put a finer point on the true goal, then? Yes – the purpose is to play. If we restrict ourselves to more specific sorts of games, we can give more specific answers.

When we play strategy games our goal is to strategize. When we design strategy games our goal is to furnish a context in which the player can strategize.

Thus, to understand our job as a game designer we need only understand what it is to strategize. Simple stuff first: to strategize is to construct a strategy. What is a strategy? I’ll provide my definition here, though it doesn’t differ much from the ordinary one:

A strategy is an efficient, robust plan.

A plan is an algorithm which takes you from some starting state to a final, desired state. A recipe for chocolate chip cookies is a plan, but it isn’t a strategy. That is because it is not robust. That is, if you find you don’t have 2 cups of flour on hand, the recipe has nothing to say about the situation. Your lack of flour is a condition for which the plan has no contingency. Robustness is a probabilistic notion: a robust plan succeeds at reaching the goal frequently when you apply it over and over again in varying situations.
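
The probabilistic notion of robustness can be sketched in a few lines of Python. This is a toy model: the pantry contents and the `make_cookies` plan are invented for illustration, but the measure itself is just the frequency with which a plan succeeds across varied situations.

```python
import random

def make_cookies(pantry):
    """A plan with no contingencies: it fails outright if an ingredient is short."""
    needed = {"flour": 2, "sugar": 1}
    return all(pantry.get(item, 0) >= qty for item, qty in needed.items())

def robustness(plan, situations):
    """Estimate robustness as the fraction of varied situations in which the plan succeeds."""
    return sum(1 for s in situations if plan(s)) / len(situations)

random.seed(0)
pantries = [{"flour": random.randint(0, 3), "sugar": random.randint(0, 2)}
            for _ in range(1000)]
print(robustness(make_cookies, pantries))  # well below 1: the recipe is not robust
```

A plan that handled the missing-flour case (borrow some, substitute, scale the batch down) would score closer to 1 over the same distribution of pantries, which is exactly what we mean when we call it more robust.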

An exhaustive search of the state space of the traveling salesman problem is a plan as well. But it isn’t a strategy (or it is a very, very poor one) because it isn’t efficient. Efficiency relates to the fact that there are limits on our ability to make decisions (most of the time this limit is most concretely understood in terms of time, but it might also be something like ability – we simply can’t exhaustively search the state space of Go, for example). Most generally, humans have a limited ability to exert themselves towards any end. Thus, we seek to marshal our efforts by virtue of efficient plans. This is particularly true in competitive games – if a strategy is strenuous to apply, chances are you will eventually fail to do it, at least partially.
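
To see why exhaustiveness fails the efficiency test, consider how many distinct tours a brute-force traveling salesman search must examine. This is a standard counting argument, sketched in Python (the function name is mine):

```python
import math

def tours_to_check(n_cities):
    """Distinct tours a brute-force traveling salesman search must examine:
    fixing the starting city and the direction of travel leaves (n - 1)! / 2."""
    return math.factorial(n_cities - 1) // 2

print(tours_to_check(5))   # 12
print(tours_to_check(20))  # already far beyond any feasible search
```

The plan is perfectly robust (it always finds the optimum) but its cost grows factorially, so it is no strategy at all.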

Generating Insight

We can already get some juice out of this definition, as strategy game designers. Our games must have one or more goals (so that the player can strategize towards them). But that isn’t enough – the game must have one or more sources of variability (I’m purposefully avoiding the word randomness here). In a system without uncertainty of some kind, a plan cannot be robust because there are no varying situations over which we can test it. We might also say the robustness of plans in such a system is trivial or degenerate – all successful plans have an equal probability of succeeding: 1. Without variation in play, the player can only ever improve the efficiency of a given plan, and in those circumstances they are engaged in a different activity: algorithm design. This may be leisure in some circumstances, but it isn’t strategy generation.

What about the notion of “efficiency?” First, let’s eliminate a possible source of confusion. By efficient, I don’t mean that the plan itself arrives at the goal in some limited number of turns or some other unit. Such lusory efficiency is probably a desirable property of a strategy, but I mean something different by “efficiency” here. What I mean is that the process by which the current game state is transformed into the next action is efficient. That is, it makes good use of the player’s limited cognitive resources. This corresponds to the intuition that a good strategy doesn’t have a lot of fiddly bits, that it abstracts the true degrees of freedom in the game into effective degrees of freedom.

A trivial example: suppose we fire a virtual cannon and we want to know where the ball will land. The worst possible strategy is to memorize the table relating angle and powder volume to the final displacement. A better strategy is to understand Newton’s laws and energy conservation, which profoundly limits the amount of information you need on hand to predict the final state of the cannon ball. Firing a cannon allows this kind of simple strategy formation because the apparent degrees of freedom are redundant in specific ways that you can learn.
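
The cannon example can be made concrete. Ignoring drag, a single formula, R = v² sin(2θ) / g, replaces the entire memorized table. A sketch (the particular speed and angle are just illustrative numbers):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def landing_distance(speed, angle_deg):
    """Range of a projectile over flat ground, ignoring air resistance:
    R = v^2 * sin(2 * theta) / g."""
    theta = math.radians(angle_deg)
    return speed ** 2 * math.sin(2 * theta) / G

# Two numbers in, one number out: no lookup table required.
print(round(landing_distance(30.0, 45.0), 1))  # 91.7
```

The formula is the “effective theory” here: it compresses an unbounded table of (angle, speed) pairs into a rule the player can carry around in their head.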

Thus, if you want to design strategy games you need to present the player or players with apparent degrees of freedom which contain much simpler dynamics that they can learn. The true dynamics of the game should emerge from the basic rules. These true dynamics might only be approximate, they might only apply in certain circumstances which the player also learns to identify. But the key idea is that the player needs a system which is not just complex, but which is complex in a specific way that allows approximations to be valid in some domains.

Space is a perfect example (which explains why it appears as a game component in so many games). In some fundamental sense, in a real time game, for instance, to predict everything in advance you need to track each object and figure out its update rule on each time step. But many objects move in straight lines at a constant velocity and thus can easily be projected ahead in time. What other sorts of mechanical contrivances have this property?
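
This is the sense in which straight-line motion is an effective theory: a constant-velocity object can be projected ahead in one step instead of being simulated tick by tick. A minimal sketch (the helper is hypothetical, not from any particular engine):

```python
def project(position, velocity, dt):
    """Predict a constant-velocity object's position dt seconds ahead,
    without stepping through any intermediate frames."""
    return tuple(p + v * dt for p, v in zip(position, velocity))

print(project((0.0, 0.0), (3.0, 4.0), 10.0))  # (30.0, 40.0)
```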


So far we’ve just recapitulated standard game design advice. Can we generate some novel insights?

It’s more or less standard lore that games should not be calculation heavy. I’d argue this general advice is malformed and that the above definition produces a deeper insight. In a good strategy game, calculation should eventually yield to approximation. Systems should be designed such that calculation reveals one or more effective theories that apply in a limited number of circumstances. The effective theories can’t be “at the surface” because otherwise they would be trivial – it’s not a strategy if you are certain about which effective theory you need to put into place upon initial contact with the game system. Experience and laborious thought should be required to transform knowledge of the basic rules into a suite of personal effective theories along with heuristics facilitating choice among them. This activity of generalizing knowledge of game state and choosing appropriate generalizations given the knowledge you have is precisely the activity of strategy formation.

This is why, for instance, adding a timer to a game to prevent calculation doesn’t solve the problem in many games. The true problem is that there are no effective theories embedded in the low level game rules, not that players have too long to calculate. Because a strategy is necessarily (or, by definition, if you prefer) efficient, merely adding a time limit doesn’t make people strategize. It just cuts off calculation. On the other hand, if there are accessible, effective theories, then players will naturally gravitate to them because of their efficiency. Don’t add timers: adjust the basic rules to make strategy more efficient than calculation.


Another insight generated by this definition is that fun appears nowhere in it. Leisure is a much more expansive notion than “fun” and, I’d argue, we can’t really understand what strategy games are, in particular, if we restrict ourselves to those activities which are merely fun. The pleasure of learning a strategy game involves, in part, struggle, precisely because the true effective theories upon which we should base efficient and robust planning are obscured, in part, by the surface rules of the game.


This definition of strategy gaming doesn’t say much about goals – there may be one or more goals as long as they are not degenerate (e.g., as long as one is not the obvious, easier goal). These goals may be boolean or score based (though for reasons I won’t elaborate upon here, I think boolean goals are better).


A strategy game is a context for strategization, the production of strategies. A strategy is an efficient and robust plan. In order for a plan to be robust, it has to withstand unanticipated changes, and thus a strategy game must involve one or more types of uncertainty over which plans can be evaluated. In order to be efficient, a plan has to abstract over details of the game state – it has to free the player from managing all the minutiae of the game state in favor of one or more appropriate, high level conceptions of the game. Players naturally want efficient plans because they are less strenuous to apply and thus provide a natural advantage. Both efficiency and robustness imply a variety of conditions on the system in which the game functions: it must be variable and it must admit summary representations.

My hope with this approach is to highlight the fundamental features of strategy games rather than their superficial elements.

A Quick Look at My Intellectual Future

I’m working on a giant post about the 2nd Annual Phenomenological Approaches to Physics Conference I attended a few months ago. My mid-life crisis has taken the form of a desire to get my head entirely around the issues related to the interpretation of quantum mechanics. I believe I am, after two years, getting the shape of the problem more or less together, in my head. Here is a rough map of the territory I want to focus on in the next few years.

  • General Relativity – the problem with quantum mechanics might just as easily be formulated as a problem with General Relativity, since it is this theory which predisposes physicists to prize locality. The issue is typically developed in the context of special relativity, because wave function collapse seems incompatible with the fact that space-like separated events have no state of affairs with respect to temporal order (as I have come to see it). But General Relativity is also famously incompatible with Quantum Field Theory as we currently do it. This seems to be a minority position among philosophers, but it’s hard not to wonder whether the issues with the interpretations of quantum mechanics are entangled with the physical question of how to formulate a theory of quantum gravity. I’ve been doing Alex Flournoy’s course from the Colorado School of Mines using Carroll’s Spacetime and Geometry.
  • Probability – Regular probabilities enter in the interpretation of quantum mechanics in a surprisingly straightforward way, even if you can’t decide on what exactly is going on during a quantum measurement. The troubling issue with quantum mechanics is the way that measurements on ensembles of systems have surprising correlations, even when they are space-like separated. Thus, I want to have a very good account of the philosophy of probability itself. For instance, we take it for granted that, classically, correlations between distant events imply some timelike worldlines connecting them. Is this naive? I have a working knowledge of the basics of probability but I’m not yet sure where to get a good grounding in the underlying philosophical foundations of the field.
  • Mathematical Foundations of Physics – In the gedanken-experiment of Schrödinger’s Cat we’re asked to entertain the notion of a superposition of a classical object: a cat. But this superposition is quite different from those which we typically entertain in quantum mechanics because there isn’t any obvious symmetry group which allows us to view the superposition of “live” and “dead” as the eigenstate of some related measurement. It seems like it is this property in particular that tickles Einstein’s nose in the original EPR paper. My hunch is that there is such a group for the cat measurement operator but that it’s trivial – it contains only a single element, or all of its elements have the property that their eigenvalues are indistinguishable from one another, classically. This is far beyond my current mathematical ability to appreciate. In general, an improvement in my mathematical literacy would help here. I’m planning on doing Frederic Schuller’s course on the mathematical foundations of physics.
  • Quantum Field Theory – Speaking of a lack of mathematical background, I’d like to get a firmer grasp on this subject. Most demonstrations of the problems with Quantum Mechanics depend on extremely basic single particle gedanken-experiments. I attended The Rutgers-Columbia Workshop on the Metaphysics of Science: Quantum Field Theories Conference in 2018 and it seemed from my very naive perspective that second quantization (or whatever) wasn’t particularly enlightening to the foundational questions. It’s actually somewhat unclear whether effective QFTs can really serve as a foundational account of anything, given their probable lack of convergence and the issues associated with renormalization, even in the case of standard model physics (to say nothing of GR). Still, a working knowledge of the field might be enlightening. I’ve got A. Zee’s “Quantum Field Theory in a Nutshell” but in many respects it’s over my head.
  • Statistical Mechanics – The realist point of view is, of course, that quantum mechanics is just the statistical mechanics of some unknown underlying system. Even in the case of the ordinary interpretation of the theory, without any desire to reduce it to a classical system, it would be handy for me to have a better grasp of stat mech. I did well in this course in undergrad but I don’t have the material at my fingertips anymore. Open to suggestions on this too.

This is enough material to occupy a full time employed dad for like 15 years, which is something I try not to think about.

A Handy Web Server in Emacs Lisp

As I mentioned in my last post, I’m still using Emacs. One of the big reasons is that I do the vast majority of my work as a Data Scientist over a text-based terminal.

This differs substantially from the workflow of many of my colleagues. I’m not sure I’d say my workflow is optimal (though the case could be made), but it is foisted upon me by circumstance: my wife is a farmer and we have very low quality internet. Barely broadband, in fact. Consequently, if I want to work remotely, I have to make an effort to use the lowest bandwidth tools available to me. For me that is mosh, which maintains a persistent connection to my remote machines and papers over some of the latency, and Emacs.

[Image: How the internet gets to my house.]

It’s important for me that all my data scientific work is reproducible, and so I do most of my development in a Docker environment. I also break it up into discrete pieces which I orchestrate with Make. If you are a data scientist or any kind of scientist, you might be wondering how I manage this, since most of our work consists of generating and poring over lots of figures.

The answer is that I dump all my figures directly to disk and then I look at them over a basic web server. Up until recently, that was something like this:

python3 -m http.server 8080

As I generate figures, I just pull them up in my web browser. This workflow works surprisingly well with tools like plotly, though my usual practice is to generate a pdf, png and plotly version of every figure. When I’m on a low bandwidth connection, I’m usually looking at those PNGs.


This works fine. However, I generate hundreds of figures. The Python one-liner above always lists the contents of a directory in alphabetical order, which can make it hard to find precisely the thing I’m looking for. In addition to that, I’ve noticed Python 3’s built-in web server tends to choke on large HTML files without linebreaks (which is precisely what plotly generates).
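
The behavior I actually wanted is a listing ordered by modification time. That much is easy to state in Python (a sketch of the idea, not the server I ended up writing; `list_by_mtime` is my own hypothetical helper):

```python
import os

def list_by_mtime(root):
    """Return all files beneath root, most recently modified first."""
    paths = [os.path.join(dirpath, name)
             for dirpath, _, names in os.walk(root)
             for name in names]
    return sorted(paths, key=os.path.getmtime, reverse=True)
```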

After shopping around for a slightly nicer solution without much luck (a bunch of low-configuration web servers are faster or smarter than Python, but none let me sort by date, as far as I could tell), I just decided to write my own.


The tool needs to show me the most recently modified files underneath a specific directory. It’s always going to be running behind a firewall, but it should present a pretty small security cross section. It should be minimal in terms of requirements.

I like to write Lisp whenever I can so I decided to write the tool in Emacs Lisp. The added benefit is that it can run directly in the environment I use to do my work.

I don’t talk often about this, but Emacs Lisp is one of my favorite programming languages and Lisp dialects.

First things first: M-x package-list-packages. Then let’s install the web-server package, which is what everyone seems to be using these days (if anything). We’ll also be using my own shadchen ML-style pattern matching library (which is a bit amateurish but totally serviceable).

We’re going to try to follow good Emacs Lisp conventions. Shadchen doesn’t but I’ll try to rewrite it sometime soon.

;; -*- lexical-binding: t; -*-

Because we’re civilized, modern, people.

(require 'shadchen)
(require 'web-server)

We’re going to need to generate some basic HTML. Consider:

(defun fserver-el (tag attributes &rest args)
  (cons tag (cons attributes args)))

(defun fserver-render-html-worker (html buffer)
  (with-current-buffer buffer
    (let ((tag (car html))
          (attributes (cadr html))
          (args (cddr html)))
      (insert (format "<%s" tag))
      (if (> (length attributes) 0)
          (insert " "))
      (cl-loop for (a . r) on attributes by #'cdr
               do (match a
                    ((p #'symbolp s)
                     (insert (symbol-name s)))
                    ((list (p #'symbolp s)
                           (p #'stringp str))
                     (insert (format "%s=%S" s str)))
                    ((list (p #'symbolp s)
                           (p #'numberp n))
                     (insert (format "%s=%S" s (number-to-string n))))
                    ((list (p #'symbolp s)
                           (p #'symbolp s2))
                     (insert (format "%s=%S" s (symbol-name s2)))))
                  (if r (insert " ")))
      (insert ">")
      (if (> (length args) 0)
          (insert "\n"))
      (cl-loop for a in args
               do (cond
                    ((stringp a) (insert a))
                    ((listp a) (fserver-render-html-worker a buffer)))
                  (insert "\n"))
      (insert (format "</%s>\n" tag)))))

(defun fserver-render-html-to-string (html)
  (with-temp-buffer
    (fserver-render-html-worker html (current-buffer))
    (indent-region (point-min) (point-max))
    (buffer-substring (point-min) (point-max))))

(defmacro fserver-html (&rest body)
  `(cl-flet ((e (&rest args) (apply #'fserver-el args)))
     (fserver-render-html-to-string ,@body)))

(defmacro fserver-html (&rest body)
  `(cl-flet ((e (&rest args) (apply #'fserver-el args)))
     (fserver-render-html-to-string ,@body)))

With this little friend we can write the following sorts of code:

(fserver-html
 (e 'div '((class nice-list))
    (e 'ul nil
       (e 'li '(selected) "Element 1")
       (e 'li nil "Element 2"))))

And get back the following string:

"<div class=\"nice-list\">
  <ul>
    <li selected>
      Element 1
    </li>
    <li>
      Element 2
    </li>
  </ul>
</div>
"
Isn’t Lisp great?

That covers generating HTML. Surprisingly simple.

Servicing HTTP Requests

Now we need our server.

(defvar *fserver-server* nil)
(defvar *fserver-port* 8888)
(defvar *fserver-handlers* nil)

(defun fserver-add-handler (matcher handler)
  (setq *fserver-handlers*
        (append *fserver-handlers* (list (cons matcher handler)))))

(defun fserver-clear-handlers ()
  (setq *fserver-handlers* nil))

(defun fserver-restart ()
  (if *fserver-server* (ws-stop *fserver-server*))
  (setq *fserver-server*
        (ws-start *fserver-handlers* *fserver-port*)))


Now we just M-x fserver-restart. Of course, our server doesn’t do anything yet.

Let’s add some handlers. First a “Hello World”.

(fserver-add-handler
 '(:GET . "/")
 (lambda (request)
   (with-slots (process headers) request
     (ws-response-header process 200 '("Content-type" . "text/html"))
     (process-send-string
      process
      (fserver-html
       (e 'html nil
          (e 'head nil
             (e 'title nil "Hello World"))
          (e 'body nil
             (e 'div nil "Hello World"))))))))

Try it out, if you’re following along. It works!

Design of the Service

Our server is going to let us do two things: list files associated with a project and then serve those files.

We don’t want to expose our entire hard drive or just a single sub-path to the internet. So the server side will configure a list of project names and their local directory head.

(defvar *fserver-projects* nil)

(defun fserver-add-project (project-name root-directory)
  (setq *fserver-projects* (cons `(,project-name . ,root-directory)
                                 *fserver-projects*)))

(defun fserver-clear-projects ()
  (setq *fserver-projects* nil))

(defun fserver-get-project-root (project-name)
  (cdr (assoc project-name *fserver-projects*)))

Now we can design our first resource: one that lists all the files in a project, ordered by date:

GET /project/<project-name>/<filter>

Should list all the files beneath the project root (except for some obvious stuff like git contents and temp files). Filter is a set of characters that has to appear in the filename or it isn’t shown. If filter is empty, then all files are listed.

For the sake of simplicity we’re going to limit project names and filters to alphanumeric characters including underscores. This is easy to match as a regular expression, which is what we need to provide to our handler as a first pass at matching:

Here is a first jab:

(defun fserver-project-p (project-name)
  (not (not (fserver-get-project-root project-name))))

(defun fserver-valid-project-path (path)
  (let* ((parts (split-string path "/" t)))
    (and
     (string= "project" (car parts))
     (fserver-project-p (cadr parts)))))

(defun fserver-parse-project-path (path)
  (if (fserver-valid-project-path path)
      (let* ((parts (cdr (split-string path "/" t)))
             (project (car parts))
             (root (fserver-get-project-root project))
             (pattern (cadr parts)))
        (list project root pattern))
    nil))

(defun fserver-handle-project (request)
  (with-slots (process headers) request
    (match (fserver-parse-project-path (cdr (assoc :GET headers)))
      ((list project root pattern)
       (ws-response-header process 200 '("Content-type" . "text/plain"))
       (let ((files (shell-command-to-string
                     (format "find %s -type f" root))))
         (process-send-string process files))))))

Which we register like this:

(fserver-add-handler
 (cons :GET
       (rx (and line-start "/project/"
                (one-or-more (or alphanumeric (char "_")))
                (zero-or-one
                 (and (char "/")
                      (one-or-more (or alphanumeric (char "_")))))
                (zero-or-more (char "/")))))
 #'fserver-handle-project)

(We’re using Emacs’s excellent regular expression construction facilities.)

fserver-parse-project-path validates that we have a good project, parses out the name, retrieves the associated root location, and also extracts the pattern, if any.

Then we just use that information to invoke find and get a raw file list. We want this list to be filtered down by the pattern and to exclude a few other things and then to be sorted by date-time. Finally, after we figure that out, we’re going to convert it to HTML.

We can prune out the git stuff with

(format "find %s -type f | grep -v \\.git" root)

But what is the best way to sort by date?

Something like this is pretty portable:

(defun fserver-stat-command ()
  (match system-type
    ('darwin "gstat -c '%Y --- %n' ")
    ((or 'gnu 'gnu/linux)
     "stat -c '%Y --- %n' ")))

(defun fserver-handle-project (request)
  (with-slots (process headers) request
    (match (fserver-parse-project-path (cdr (assoc :GET headers)))
      ((list project root pattern)
       (ws-response-header process 200 '("Content-type" . "text/plain"))
       (let ((files (shell-command-to-string
                     (format "find %s -type f | xargs %s | grep -v \\.git | sort -r "
                             root (fserver-stat-command)))))
         (process-send-string process files))))))

I’m checking whether I’m on Darwin in order to call the appropriate version of stat there. We’ve also filtered out the git subdirectory. Now let’s construct links. We are going to want our files to be accessible via the server at

GET /files/<project>/<path>

Where path is relative to the project given. So we want to replace the path returned by our shell call with the appropriate material. And we may as well convert it to HTML while we’re at it. It’s more efficient to do this with a regular expression than to write it out in Elisp:

(defun fserver-handle-project (request)
  (with-slots (process headers) request
    (message (format "%S" (cdr (assoc :GET headers))))
    (match (fserver-parse-project-path (cdr (assoc :GET headers)))
      ((list project root pattern)
       (ws-response-header process 200 '("Content-type" . "text/html"))
       (let* ((files (shell-command-to-string
                      (format "find %s -type f | xargs %s %s | grep -v \\.git | sort -r "
                              root
                              (fserver-stat-command)
                              (if pattern (format "| grep %s " pattern) ""))))
              (retargeted (replace-regexp-in-string
                           (regexp-quote (file-truename root))
                           (format "/file/%s" project)
                           files))
              (linkified (replace-regexp-in-string
                          "\\([0-9]+\\) --- \\(.*\\)"
                          "<li><a href=\"\\2\">\\2</a></li>"
                          retargeted)))
         (process-send-string
          process
          (fserver-html
           (e 'html nil
              (e 'head nil
                 (e 'title nil (format "Project: %s" project)))
              (e 'body nil
                 (e 'h1 nil (format "Project: %s" project))
                 (if pattern
                     (e 'h3 nil (format "Just those matching: %s" pattern))
                   "")
                 (e 'ol nil linkified))))))))))

Now all we have to do is implement the file serving resource. This is enough to parse and validate the resource for a file request:

(defun fserver-parse-file-path (url)
  (let* ((parts (split-string url "/" t))
         (project (cadr parts))
         (root (fserver-get-project-root project))
         (filename (and root
                        (file-truename
                         (replace-regexp-in-string
                          (regexp-quote (format "/file/%s" project))
                          root
                          url)))))
    (if (and (fserver-project-p project)
             filename
             (file-exists-p filename))
        (list project (file-truename root) filename)
      nil)))

Our handler is then relatively straightforward:

(defun fserver-handle-file (request)
  (with-slots (process headers) request
    (match (fserver-parse-file-path (cdr (assoc :GET headers)))
      ((list project root file)
       (ws-send-file process file)))))

This does the job – but our browser just tries to download any file we grab this way. We need to specify the MIME type for at least a few kinds of files if we want the desired behavior. For instance, HTML and image files should open in the browser rather than download.

This leads, finally, to:

(defun fserver-get-mime-type (file)
  (match (downcase (fserver-file-extension file))
    ((and (or "png" "jpg" "jpeg" "gif" "bmp") ext)
     (format "image/%s" ext))
    ((or "html" "htm") "text/html")
    ((or "txt" "md" "rmd" "text" "csv" "tsv") "text/plain")
    ;; anything else falls through to a generic type
    (other "application/octet-stream")))

(defun fserver-handle-file (request)
  (with-slots ((process process)
               (headers headers))
      request
    (match (fserver-parse-file-path (cdr (assoc :GET headers)))
      ((list project root file)
       (ws-send-file process file (fserver-get-mime-type file))))))

The corresponding route matches URLs of the form /file/&lt;project&gt;/&lt;path&gt;:

(cons :GET
      (rx (and line-start "/file/"
               (one-or-more (or alphanumeric
                                (char "_")))
               (char "/")
               (one-or-more anything))))
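
To actually serve anything, these handlers have to be registered with the web-server package. Something like the following sketch does it – the project route regex, the port, and the name fserver-handle-project (whatever the project handler above is actually called) are assumptions for illustration:

```elisp
;; Sketch only: assumes the `web-server' package and a project handler
;; named `fserver-handle-project' (name hypothetical).
(require 'web-server)

(ws-start
 (list (cons (cons :GET "^/project/") #'fserver-handle-project)
       (cons (cons :GET
                   (rx (and line-start "/file/"
                            (one-or-more (or alphanumeric (char "_")))
                            (char "/")
                            (one-or-more anything))))
             #'fserver-handle-file))
 8888)) ;; arbitrary port choice
```

Each entry pairs a matcher (a method and a regex over the request path) with a handler function that receives the request object.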

And now we’re done, pretty much: a simple, useful file server we can control from inside Emacs.

Emacs Apologia (2019)

It’s 2019. I’ve been using Emacs for more than a decade and I’m not inclined to stop. Sometimes my colleagues get on my case about it – why not use (for instance) RStudio or Jupyter or whatever other IDEs are floating around out there?

They’ve got a point: if you’re doing just one thing, it’s hard for Emacs to beat a purpose-built tool, which usually has much bigger mind share and corporate support to boot.

But most of the time I’m not doing one thing. I’m doing a few related things, and it’s in this context that Emacs shines. I tell my friends that Emacs is a general-purpose text-based task thingamajig.

Imagine the scene!

You’ve got a problem you’re working on in R. Because you’re extremely professional, you do your work in a dev environment which is reified as a Docker image. You realize you need to add a dependency – so you just say in your *R* buffer (maybe you’re using ESS or maybe not – I don’t):

> install.packages("gbm")

You also add the dependency to your deps.R script which your Dockerfile runs. M-x shell creates a new shell, where you

> docker build . -t ds

And your container is updated in the background. Maybe you find yourself doing this a lot, so you say

(defun do-build ()
  (interactive)
  (comint-send-string
   (get-buffer-process (shell "*docker-build*"))
   (format "docker build . -t ds\n")))

And then you can just M-x local-set-key C-c C-c do-build and it’s just a keystroke away.
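
In code, that keybinding amounts to evaluating the following in the buffer where you work (C-c C-c is just an example key and may shadow a mode binding):

```elisp
;; Bind the rebuild command in the current buffer's local keymap.
(local-set-key (kbd "C-c C-c") #'do-build)
```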

While that is happening, you’re trying to figure out why some values are turning up NA when you try to read from an SQLite DB into a data frame. You want to inspect the database manually. So you M-x shell into a buffer named *sqlite* and then

> docker run ... sqlite3 /path/to/sqlite.db

Now, you want to run exactly the SQL you’ve got in your R script, so you write the following absolute gem of a function:

(defvar *default-comint-buffer* nil)

(defun region->comint (s e)
  (interactive "r")
  (let* ((bufs (get-buffers-with-processes)) ;; helper returning buffers with live processes
         (dflt (or *default-comint-buffer*
                   (car bufs)))
         (buffer (completing-read "where? " bufs nil t dflt))
         (s (concat (buffer-substring s e) "\n")))
    (comint-send-string (get-buffer-process (get-buffer buffer)) s)
    (pop-to-buffer (get-buffer buffer))
    (setq *default-comint-buffer* buffer)))

And now you can highlight SQL fragments in any buffer, M-x region->comint, pick *sqlite*, and you’ll execute that code and jump to the buffer.

And region->comint will do an enormous amount of leg work for you. Suppose your project uses multiple languages: R for one step, Python for another. A hassle if you’re using a Notebook or RStudio, but relatively easy to orchestrate inside Emacs.

Sure, lots of stuff is missing. People really love Tab completion, and it’s not always perfect in Emacs.

But if you do complicated, multi-environment, text-based tasks, Emacs is still, far and away, the best tool for the job. And it works over a terminal and can act as a server, which means you can pop in and out as you need, leaving the environment up for months at a time. These days I keep multiple Emaxen running as daemons, one for each active project.

Emacs is indispensable for me, especially in 2019.

A spacesuit that just gets bigger and bigger and bigger after you put it on.

A spacesuit that just gets bigger and bigger and bigger after you put it on. At first you are sort of seduced by the power it gives you to act upon the actual universe, what with its ever expanding battery of big guns and missile launchers. But eventually the suit itself becomes the exterior universe relative to your own, increasingly small, role in its physical presence and operation.

Pretty soon it takes half a day to crawl through the ductwork from what used to be your helmet, but is now more of a tight control room, to the machinery controlling your energy pistol. When you get there you can’t remember why you were going in the first place: some exigency of an outside world which seems ever more remote. The automated systems of the suit can handle those conflicts, whose terms and ambitions now seem ambiguous at best.

The suit keeps getting bigger until it has its own strange ecology. It used to be tight and uncomfortable but now the interior is vast. And in that vastness, the technological immediacy of the spaces has receded, literally. Just big empty spaces like empty warehouses or abstract artistic installations in whose largest dimensions one can just barely make out what (let’s be honest) used to be the thing you called the air recirculator.

It’s much too big for one person.

First day of fall

My child is blessed to be born near the fall equinox, and so I found myself lying in the basket swing of his new swingset (a birthday present) yesterday morning, enjoying the first cool weather of the year while he happily chattered and repeatedly ascended and descended his slide, when a peculiar thing happened. I imagined that he might send the tiny wooden cars he has down the slide, where they would fly off into the grass, perhaps to be forgotten, their tiny chrome hubcaps becoming flecked with minute patches of rust over which a finger could pass and feel a slight texture.

I’m tempted to say that this idle image became peculiarly vivid in my mind as I swung back and forth looking at the sky, but that is not accurate. It is more accurate to say that the image became suffused with a sense of significance much larger than the things in it and, in any case, disjoint from them. As though I were staring at a key or a door whose use would remove me radically from the context in which I was currently living and transport me elsewhere, like closing a particularly engrossing book and being surprised to return to an entirely distinct sequence of events: your own life.

“Want to play in the sandbox,” Felix said, and so I got up to open it for him and, very gradually, the sensation diminished.

RMarkdown/knitr etc Considered Harmful

Typically, I write my scientific reports in LaTeX. A makefile orchestrates all my analysis in stages, and some steps produce LaTeX fragments that appear in the final document. A typical step reads the previous step’s data into R, performs a single calculation, model training, or evaluation, or generates a figure, while simultaneously writing out the appropriate fragment of LaTeX that describes the process, including quantitative details when necessary.

I like this because each step is simple to understand, its dependencies are clearly documented by the makefile, and the reporting on the step is located right where the code is. And Make automatically handles rebuilding the appropriate parts of my document when I tell it to.
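
The shape of such a makefile, with entirely hypothetical file names, is something like:

```make
# Hypothetical staged pipeline: each Rscript stage writes its data
# product plus a LaTeX fragment that the final document includes.
report.pdf: report.tex fragments/model.tex
	pdflatex report.tex

fragments/model.tex: train_model.R data/clean.rds
	Rscript train_model.R

data/clean.rds: clean_data.R data/raw.csv
	Rscript clean_data.R
```

Each target lists its prerequisites, so Make can see exactly which stages a changed input invalidates and rebuild only those.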

Contrast this with RMarkdown, which encourages the scientist to pile state into the document willy-nilly. Steps which depend on one another can be separated by large regions of text and code. As you develop your Markdown file, the strong temptation is to evaluate fragments of code in your interpreter, which can lead to hard-to-understand bugs and unreproducible results.

Most notebook style authoring tools have this problem.

I suppose it’s a classic story of usability vs. correctness and, as usual, I don’t know why I expect correctness to win.

Being a Dad Rules

A few nights ago I dreamed that I was standing on the edge of a giant sunken waterway, some kind of vast floodwater system in which six inches or so of water flowed at a good clip over cobblestones slick with moss. In it, someone was running around, chasing an animal or other creature.

I was high above.

Suddenly they were down there with my son. They were horseplaying, and they swung him around by his feet and tossed him out into the deeper water. I screamed “He can’t swim you asshole” and desperately tried to plan the fastest route down (it was so far down) so that I could get to him before he drowned. I woke up in an awful panic, which took half an hour to go away.

Black Hole Information Paradox

A cafe I visit routinely on my morning commute exploded yesterday. We also took pictures of a black hole for the first time. My son used his potty for the first time.

Feeling slightly overwhelmed by the crazy confluence of scales which intersected in my life yesterday. On our weekly date, Shelley asked me about the long term structure and fate of the universe. Hard not to think, absurd as it is, about my own child careening into the future. Is some distant descendant going to look out the window at an earth which can barely support life on account of the increase in solar radiation or suffer some other painful sense of final detachment from the universe?

The owner of the cafe died in the explosion. I talked to him on Monday when I stopped to get a tea on the way to work. Now that impression of a friendly old man framed by the accoutrements of a bustling cafe has taken on a hyper-reality, like the morning light streaming in from the windows as the sun came up over the buildings across the street really was the excitation of a mysterious quantum field. One characterized by nothing more or less than a handful of symmetry relations which ascended picoseconds after the universe began and whose reign will still be absolute when the universe is nothing but black holes and the distant, cooling, cosmic horizon.