Beyond Binary: Quantum and Biologically-Inspired Computing

CS 301 Lecture, Dr. Lawlor
This pre-Thanksgiving break lecture is purely for your own edification--this material won't show up on the test.

Quantum Computers

(2007-10-03 Warning: I spent 2006 rather confused about some aspects of quantum computing, so some of what follows is outright incorrect!)

So very small things, like electrons, have several very odd physical properties:
- Superposition: until it's measured, a quantum value can exist in many states at once.
- Entanglement: two quantum values can be linked, so that measuring one constrains the other.
- Collapse: measuring (or otherwise disturbing) a quantum value forces it to pick one definite state, at random from among the consistent possibilities.

This means if you build a computer that uses the quantum properties of very small things like electrons, you can get some very strange computational effects. Here's an easy one--the pseudocode for quantum table search:
qint x=?;      /* x is in a superposition of all possible ints */
qint z=arr[x]; /* z is the superposition of all possible table values */
collapse(z,2); /* the value in the table is/was actually 2 */
/* x should now have collapsed */
std::cout<<x<<"\n";
We've collapsed z to 2.  x is entangled with z, so x must also have collapsed, to a consistent value.  Hence arr[x]==2.  Note that it doesn't matter how many entries are in the array--on a quantum computer, you can search any array in constant time!  (If no entries in the table have the value 2, x will contain an uncollapsed random value.   If several entries in the table contain the value 2, x will be a randomly chosen one of the valid entries.)
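
To make the semantics concrete, here's a purely classical C++ sketch of what the pseudocode above claims (the qint/collapse machinery doesn't exist classically, so this sketch just brute-forces the superposition: "collapse" becomes "filter out inconsistent values"; the table contents are made up for illustration). Notice the classical loop has to touch all n entries--exactly the work the quantum version claims to skip.

#include <iostream>
#include <vector>

int main(void) {
    std::vector<int> arr = {5, 9, 2, 7, 3}; /* made-up example table */
    /* "x in a superposition of all ints" becomes a loop over all indices;
       "collapse(z,2)" becomes keeping only the x where arr[x]==2. */
    for (int x = 0; x < (int)arr.size(); x++) /* O(n) classically, O(1) quantum */
        if (arr[x] == 2)
            std::cout << x << "\n"; /* prints 2, the index holding the value 2 */
    return 0;
}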

For a more arithmetic example, here's the pseudocode for quantum subtraction:
qint x=?, y=?; /* x and y are in a superposition of all possible ints */
qint z=x+y;    /* z is the superposition of all possible sums of x and y */
collapse(z,7); /* the sum is actually 7 */
collapse(y,3); /* and y is actually 3 */
/* x should now have collapsed */
std::cout<<x<<"\n";
The trick is that quantum entanglement "solves for x"; x is entangled with y and z.  y and z have collapsed.  Hence x collapses to a consistent value for the given y and z.   Here's the pseudocode for quantum division:
qint x=?, y=?; /* x and y are in a superposition of all possible ints */
qint z=x*y;    /* z is the superposition of all possible products of x and y */
collapse(z,6); /* the product is actually 6 */
collapse(y,3); /* and y is actually 3 */
std::cout<<x<<"\n";
Note that there's only one x that actually makes the equation work--x==2.  A quantum computer's register automatically collapses to this value.
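
Classically, of course, you'd just compute x=7-3 or x=6/3 directly; but here's a classical sketch (the names and the small search range are mine) of what the collapse semantics amounts to--enumerate the "superposition" and keep only the consistent values:

#include <iostream>

int main(void) {
    const int LIMIT = 100; /* pretend "all possible ints" is just 0..99 */
    /* subtraction: collapse(z,7) and collapse(y,3) leave only x==4 */
    for (int x = 0; x < LIMIT; x++)
        if (x + 3 == 7) std::cout << "subtraction: x==" << x << "\n";
    /* division: collapse(z,6) and collapse(y,3) leave only x==2 */
    for (int x = 0; x < LIMIT; x++)
        if (x * 3 == 6) std::cout << "division: x==" << x << "\n";
    return 0;
}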

Both subtraction and division are trivial on a normal non-quantum machine.  But here's the pseudocode for quantum factorization:
qint x=?, y=?; /* x and y are in a superposition of all possible ints */
qint z=x*y;    /* z is the superposition of all possible products of x and y */
collapse(z,775872829); /* the product is this */
std::cout<<x<<","<<y<<"\n"; /* what are x and y? */
When we try to print out x and y, they haven't fully collapsed yet.  But the act of printing them (actually, the act of reading them out into a normal machine) will cause them to collapse to something consistent.  Sometimes x will be 1 and y will be z.  That's the trivial factorization you don't want.  But about half the time x and y will be 9677 and 80177 (in some order), which are the prime factors of z!

That is, factorization is just as easy as division on a quantum machine--it's a constant-time operation (assuming a big enough quantum machine).  On a classical machine, division of ints is kinda slow (20ns), but factorization is insanely slow--it's an exponential-time search, which can take billions of years for large integers.
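
For comparison, here's the classical search the quantum machine sidesteps: plain trial division, which takes on the order of sqrt(z) steps. That's fine for the 30-bit example above, but hopeless for a 4096-bit RSA modulus. (This sketch is mine, and only reports the smallest factor pair.)

#include <iostream>

int main(void) {
    long z = 775872829; /* the product from the quantum example above */
    for (long x = 2; x * x <= z; x++) { /* classical search: O(sqrt(z)) steps */
        if (z % x == 0) { /* found a divisor */
            std::cout << x << "," << z / x << "\n"; /* prints 9677,80177 */
            return 0;
        }
    }
    std::cout << z << " is prime\n";
    return 0;
}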

Factoid!  RSA encryption is based on the difficulty of factoring large numbers.  SSH and SSL use RSA encryption.  Hence a quantum computer with a 4096-bit "qint" could almost instantly break the best encryption algorithms available today.

Peter Shor described how to build a theoretical quantum factorization circuit in 1994.  People have actually built quantum machines with 8-bit registers.

Practical Difficulties with Quantum Computers

The real practical problem with quantum computers is that of "decoherence"--premature wavefunction collapse.  If the electrons involved interact with too much of the outside world, they collapse (to some random value) too early, and then you're back to normal circuitry.  The largest quantum computers actually built so far have only 8 bits, and only one register, which means the maximum theoretical speedup is 2^8 = 256 times faster than a normal machine.  In practice, current quantum computers are way slower than normal computers.  With larger quantum computers, manufacturing defects, thermal noise, and leakage current currently destroy wavefunctions before they can do any useful work.  But a huge variety of electron (and other particle) containment devices are being tried and built, so useful quantum computers may someday be built.  Or perhaps there is some physical limitation on the scale of wavelike effects, and hence quantum computers will always be limited to a too-small number of bits or too-simple circuitry.

Roger Penrose has a theory that the human brain is actually just a quantum computer.  This could explain how we're able to do some of the really amazing things we do, like recognize pictures of objects.  The deal is that it's easy for a computer to *simulate* a picture of an object (that is, object->picture is easy), but to *recognize* a picture of an object means searching over all possible objects, orientations, lighting, etc. (that is, picture->object is hard).  For a quantum computer, these are the same problem--a quantum program to do rendering could just render the superposition of all possible objects, collapse the rendering to the picture, and then check to see which object it actually drew!
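
Here's a toy classical illustration of that forward/inverse asymmetry (entirely my own sketch, not Penrose's; render() is a trivial stand-in for a real renderer): going object->picture is one function call, but going picture->object classically means searching every candidate object.

#include <iostream>
#include <string>
#include <vector>

/* stand-in "renderer": object -> picture is the easy, forward direction */
std::string render(const std::string &object) {
    return "picture of " + object;
}

int main(void) {
    std::vector<std::string> objects = {"cat", "teapot", "car"};
    std::string picture = "picture of teapot"; /* the picture we're given */
    /* picture -> object: classically, render every object and compare.
       The quantum version would render the superposition of all objects,
       collapse the output to the picture, and read off the object. */
    for (const std::string &obj : objects)
        if (render(obj) == picture)
            std::cout << "recognized: " << obj << "\n";
    return 0;
}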

Biological Computing

As components get smaller and faster, ordinary computers are getting more and more sensitive to small disturbances.  Already, cosmic rays can impact server uptimes.  Anecdotally, there's a curious fact about space travel: due to the higher radiation in orbit, spacecraft computers suffer far more frequent random bit flips than the same hardware does on the ground.
Human brain cells can be killed in large absolute numbers by chemicals such as alcohol and a variety of viruses and bacteria (e.g., meningitis).  Yet the brain remains operational (at degraded efficiency) despite such abuse, and rarely suffers significant permanent damage from these injuries.  That is, the human brain is a surprisingly fault-tolerant computer.  Unfortunately, we have no idea how the human brain operates (see Gerstner's online neuron reverse-engineering attempts).  But we can infer it's at least as smart as the reactions of a single cell.

Ecosystem Design

A single cell uses a number of interesting design principles.  Many of these same principles are shared by healthy ecosystems, economic markets, functioning democracies, and piles of gravel.  I've come to call these principles "ecosystem design":
- No single crucial part: every part is replaceable, so the failure of any one part barely changes the system as a whole.
- Redundancy: many near-interchangeable parts do the same job.
- Self-repair: damaged or missing parts get replaced during normal operation, without outside intervention.
As an example, I claim an automobile, CPU, dictatorship, and orderly stack of bricks use "machine design", not ecosystem design: these systems have crucial decisionmaker parts (e.g., the braking system, control unit, dictator, or middle brick) whose failure can dramatically change the entire system.   Machine design requires somebody to go in and fix these crucial parts now and then, which is painful.

Computing With Biology

Leonard Adleman demonstrated how to use DNA to quickly perform an exhaustive search in parallel.  The idea is to dump all the input molecules into a well-mixed test tube.  The inputs stick together in various ways, some of which represent the outputs you're looking for.  You then have to use chemistry to winnow out the (huge number of) incorrect outputs until you're left with the correct outputs, which you can read off.  The advantage is density--since each input letter can be represented by a single molecule, and the computation proceeds in parallel and in 3D, you can search enormous search spaces quickly. 
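
Adleman's original 1994 experiment solved a small Hamiltonian-path problem this way.  Here's a classical sketch of that generate-and-winnow idea (the tiny graph is my own example): generate every candidate vertex ordering--the test-tube equivalent of dumping in all the molecules--then winnow away the orderings that don't follow the graph's edges.

#include <algorithm>
#include <iostream>
#include <vector>

int main(void) {
    const int n = 4;
    bool edge[n][n] = {}; /* tiny made-up directed graph */
    edge[0][1] = edge[1][2] = edge[2][3] = edge[0][2] = true;
    std::vector<int> path = {0, 1, 2, 3}; /* sorted start, so we see every ordering */
    do { /* "dump in all the molecules": every ordering of the vertices */
        bool ok = true; /* "chemically winnow": keep only real paths */
        for (int i = 0; i + 1 < n; i++)
            if (!edge[path[i]][path[i + 1]]) ok = false;
        if (ok) { /* a surviving Hamiltonian path: prints 0 1 2 3 */
            for (int v : path) std::cout << v << " ";
            std::cout << "\n";
        }
    } while (std::next_permutation(path.begin(), path.end()));
    return 0;
}

The classical version grinds through all 4!=24 orderings one at a time; the DNA version tries them all at once, in solution.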

You can get companies online that will synthesize custom genes for you.  For example, $100 will buy you 1 nano-mol of fluorescent probe molecules to tag a particular protein or sequence you're interested in.  1 mol is 6.022 x 10^23 molecules.  So your 1 nano-mol is actually 6.022 x 10^14 molecules.  That's 6 trillion molecules per dollar!
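
As a sanity check on that arithmetic (a trivial sketch; the $100 price and 1 nano-mol quantity come from the paragraph above):

#include <iostream>

int main(void) {
    double avogadro = 6.022e23;           /* molecules per mol */
    double molecules = 1.0e-9 * avogadro; /* 1 nano-mol = 6.022e14 molecules */
    double perDollar = molecules / 100.0; /* the probe costs $100 */
    std::cout << perDollar << " molecules per dollar\n"; /* ~6.022e12: 6 trillion */
    return 0;
}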