Category Archives: quantum computing since democritus

Quick, Probably Naive Thoughts about Turing Machines and Random Numbers

Here is a fact which is still blowing my mind, albeit quietly, from the horizon.

Turing Machines, the formalism we use to describe computation, do not, strictly speaking, cover computational processes which have access to random values. When we wish to reason about such processes, people typically imagine a Turing Machine with two tapes: one which takes on the typical role, and another which contains an infinite string of random numbers that the machine can peel off one at a time.


I know what you are all thinking: can’t I just write a random number generator and put it someplace on my Turing Machine’s tape, and use that? Sure, but those numbers aren’t really random, in the particular sense that a dedicated attacker with access to the output of your Turing Machine can, in principle, detect the difference between your machine and one with bona fide random numbers. And, in fact, the question of whether there exists a random number generator, using only polynomial time and space, whose output no polynomial time and space algorithm can reliably distinguish from a real random process, is still open.
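To make the “write a generator onto the tape” idea concrete, here is a minimal sketch (my own illustration, not from any standard, with arbitrary illustrative parameters): a linear congruential generator. Everything it will ever emit is fixed by the seed, which is exactly the sense in which its output is not bona fide random.

```python
# A minimal sketch: a deterministic "random number generator" we could
# write onto an ordinary Turing Machine's tape. Parameters are
# illustrative, not drawn from any particular standard.

def lcg_bits(seed, n, a=1103515245, c=12345, m=2**31):
    """Yield n pseudorandom bits from a linear congruential generator."""
    state = seed
    out = []
    for _ in range(n):
        state = (a * state + c) % m
        out.append(state >> 30)  # take the top bit of each state
    return out

# The entire future of the stream is determined by the seed, so an
# observer who guesses (or brute-forces) the seed can predict every
# subsequent "random" bit -- the sense in which these numbers are not
# really random.
print(lcg_bits(seed=42, n=16))
```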

All that is really an aside. What is truly, profoundly surprising to me is this: a machine which has access to random numbers seems to be more powerful than one without. In what sense? There are algorithms which are not practical on a normal Turing Machine but which become eminently practical on a Turing Machine with a random tape, as long as we are able to accept a vanishingly small probability that the result is wrong. These are algorithms about which we can even do delta/epsilon-style reasoning: that is, we can make the probability of error as small as we like by the expedient of repeating the computation with new random numbers and (for instance) using the results as votes to determine the “correct answer.” This expedient does not really modify the big-O complexity of the algorithms.
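As a concrete (and hedged) illustration of the kind of algorithm meant here, consider Freivalds’ randomized check of a matrix product, sketched below in Python; this is my example, not one from the post. When A·B ≠ C, a single trial is fooled with probability at most 1/2, so k independent trials err with probability at most 2^-k. Because the error is one-sided, “voting” here amounts to accepting only when every trial agrees, a close cousin of the majority-vote scheme described above, and the repetition leaves the per-trial O(n²) cost untouched.

```python
import random

def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def freivalds(A, B, C, trials=20):
    """Randomized check of whether A @ B == C (Freivalds' algorithm)."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        # Compare A(Br) with Cr: O(n^2) work instead of an O(n^3) product.
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False   # certainly A @ B != C
    return True            # A @ B == C except with probability <= 2**-trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]   # the true product of A and B
print(freivalds(A, B, C))  # True (with overwhelming probability)
```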

Buridan’s Ass is a paradox in which a hungry donkey sits between two identical bales of hay and dies of hunger, unable to choose which to eat on account of their equal size. There is a strange sort of analogy here: if the Ass has a source of random numbers, he can pick a bale at random and survive. It is almost as if deterministic, finitist mathematics, in its crystalline precision, encounters and wastes energy on lots of tiny Ass’s Dilemmas which put certain sorts of results practically out of reach; but if we fuzz it up with random numbers, suddenly it is liberated to find much more truth than it could before. At least that is my paltry intuitive understanding.

Notes on `Quantum Computing Since Democritus, Chapter 1`

For a long time, I’ve been interested in the sorts of questions exemplified by the following example:

Suppose we are Isaac Newton or Gottfried Leibniz. We have at our disposal two sources of inspiration: data, collected by intrepid astronomers like Tycho Brahe, and something like theory, in the form of artifacts like Kepler’s Laws, Galileo’s pre-Newtonian laws of motion (for it was he who first suggested that objects in motion retain that motion unless acted upon), and a smattering of Aristotelian and post-Aristotelian intuitions about motion (for instance, John Philoponus’ notion that, in addition to the rules of motion described by Aristotle, one object could impart to another a transient impetus). We also have tables and towers and balls we can roll on them or drop from them. We can perform our own experiments.

The question, then, is how you synthesize something like Newton’s Laws. Jokes about Newton’s extra-scientific interests aside, this is alchemy indeed, and an alchemy which the training most physicists receive (or at least the training I received) does not address.

Newton’s Laws are generally dropped on the first-year physics student (perhaps after working with statics for a while) fully formed:

First law: When viewed in an inertial reference frame, an object either remains at rest or continues to move at a constant velocity, unless acted upon by an external force.
Second law: The vector sum of the external forces F on an object is equal to the mass m of that object multiplied by the acceleration vector a of the object: F = ma.
Third law: When one body exerts a force on a second body, the second body simultaneously exerts a force equal in magnitude and opposite in direction on the first body.

(this formulation borrowed from Wikipedia)

The laws are stated here in terms of a lot of subsidiary ideas: inertial reference frames, forces, mass. Neglecting the reference to mathematical structures (vector sums), this is a lot to digest, and it is hard to imagine Newton just pulling these laws from thin air. It took the species about 2000 years to figure it out (if you measure from Zeno to Newton, since Newton’s work is in some sense a practical rejoinder to the paradoxes of that pre-Socratic philosopher), so it cannot be as easy to figure out as some of my colleagues have suggested.

A doctorate in physics takes (including the typical four-year undergraduate degree in math, physics, or engineering) about ten years. Most of what is learned in such a program is pragmatic theory: how to take a problem statement or something even more vague, identify the correct theoretical approach from a dictionary of possibilities, and then “turn the crank.” It is unusual (or it was unusual for me) for a teacher to spend time posing more philosophical questions. Why, for instance, does a specific expression called the “Action,” when made stationary (usually minimized) over all possible paths of a particle, find a physical path? I’ve had a lot of physicist friends dismiss my curiosity about this subject, but I’m not the only one interested (e.g., the introductory chapter of Lanczos’ “The Variational Principles of Mechanics”).
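For concreteness (my notation, not anything quoted from a textbook or from Lanczos), the question is about the statement that the physical trajectory q(t) makes the action stationary:

```latex
S[q] \;=\; \int_{t_1}^{t_2} L\!\left(q, \dot q, t\right)\, dt,
\qquad
\delta S = 0
\;\Longrightarrow\;
\frac{d}{dt}\frac{\partial L}{\partial \dot q} \;-\; \frac{\partial L}{\partial q} \;=\; 0 .
```

Why nature should single out stationary points of this particular functional is exactly the sort of “why” the crank-turning curriculum tends to skip.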

What I am getting to here, believe it or not, is that I think physicists are over-prepared to work problems and under-prepared to do the synthetic work of building new theoretical approaches to existing unsolved problems. I enjoy the freedom of having fallen from the Ivory Tower, and I aim to enjoy that freedom in 2016 by revisiting my education from a perspective which allows me to stop and ask “why” more frequently and with more intensity.

Enter Scott Aaronson’s “Quantum Computing Since Democritus,” a book whose title immediately piqued my interest, combining, as it does, the name of a pre-Socratic philosopher (whose questions form the basis, in my opinion, for so much modern physics) with the most modern and pragmatic of contemporary subjects in physics. Aaronson’s project seems to aim at exactly what I want as an armchair physicist: stopping to think about what our theories really mean.

To keep myself honest, I’ll be periodically writing about the chapters of this book – I’m a bit rusty mathematically and so writing about the work will encourage me to get concrete where needed.

Atoms and the Void

Atoms and the Void is a short chapter which basically asks us to think a bit about what quantum mechanics means. Aaronson describes Quantum Mechanics in the following way:

Here’s the thing: for any isolated region of the universe that you want to consider, quantum mechanics describes the evolution in time of the state of that region, which we represent as a linear combination – a superposition – of all the possible configurations of elementary particles in that region. So, this is a bizarre picture of reality, where a given particle is not here, not there, but in a sort of weighted sum over all the places it could be. But it works. As we all know, it does pretty well at describing the “atoms and the void” that Democritus talked about.
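In symbols (my gloss, not Aaronson’s), the state of the isolated region is written as

```latex
|\psi\rangle \;=\; \sum_i \alpha_i \, |i\rangle,
\qquad \alpha_i \in \mathbb{C},
\qquad \sum_i |\alpha_i|^2 = 1,
```

where each |i⟩ is one possible configuration of the particles in the region and the complex weights α_i make up the “weighted sum over all the places it could be.”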

The needs of an introductory chapter, I guess, prevent him from describing how peculiar this description is: for one thing, there is never a truly isolated region of the universe (or at least, obviously, not one we are interested in). But he goes on to meditate on this anyway by asking us to think about how we interpret measurement where quantum mechanics is concerned. He dichotomizes interpretations of quantum mechanics by where they fall on the question of putting oneself in coherent superposition.

Happily, he doesn’t try to claim that any particular set of experiments can definitively disambiguate the different interpretations of quantum mechanics. Instead he suggests that by thinking specifically about Quantum Computing, which he implies gets most directly at some of the issues raised by debates over interpretation, we might learn something interesting.

This tantalizes us to move to chapter 2.