Quantum Bayesian Networks

July 28, 2010

Searching For the Optimal Gin Joint

Filed under: Uncategorized — rrtucci @ 10:48 am

From the movie Casablanca:
Rick (Bogart): “Of all the gin joints in all the towns in all the world, she walks into mine.”

I’ve often talked in this blog about using a quantum computer to sample a non-negative function P^*(x). If Z=\sum_x P^*(x), then P(x) = P^*(x)/Z is a proper (i.e., normalized) probability distribution. Now suppose that E(x) is another non-negative function, called an objective or energy function, which we want to optimize; that is, among all of its local minima we want to find the one point, called the global minimum, whose energy is less than (or equal to, and sometimes only very slightly less than) that of all the others. E(x) can have an astronomical number of local minima, in which case we say that it describes a highly “frustrated” system.
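(For concreteness, here is a tiny classical sketch of what “normalize and sample” means, using a made-up P^*(x) on a small state space; a quantum sampler would play the same role when the state space is far too big to enumerate.)

```python
import numpy as np

# Toy non-negative function P*(x) on the small state space x = 0, 1, ..., 7.
p_star = np.array([0.5, 2.0, 0.1, 4.0, 0.0, 1.5, 3.0, 0.2])

Z = p_star.sum()     # partition function Z = sum_x P*(x)
p = p_star / Z       # normalized probability distribution P(x)

# Draw samples distributed according to P(x).
samples = np.random.choice(len(p), size=10_000, p=p)
```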

How are sampling and optimization connected? Suppose we set P^*(x) = \exp(-E(x)/T). Now when the temperature T is close to zero, P^*(x) has very tall, sharp peaks at the minima of E(x). Hence, we can optimize E(x) by sampling P^*(x) at low T. This is partly what we do when we do simulated annealing.
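(Here is a quick, purely classical illustration of that temperature dependence, with a made-up bumpy energy function on a grid of 100 points; the specific E(x) and temperatures are arbitrary, chosen just to show the peaking effect.)

```python
import numpy as np

def boltzmann(E, T):
    """Return P(x) = exp(-E(x)/T) / Z on a discrete grid."""
    p_star = np.exp(-E / T)
    return p_star / p_star.sum()

# A bumpy toy energy function with many local minima on x = 0, 1, ..., 99.
x = np.arange(100)
E = np.sin(0.3 * x) + 0.5 * np.cos(1.7 * x) + 0.001 * (x - 60) ** 2

for T in (5.0, 1.0, 0.05):
    p = boltzmann(E, T)
    print(f"T = {T}: probability of the global minimum = {p[E.argmin()]:.3f}")
# As T -> 0, almost all of the probability mass piles up on the global minimum,
# so sampling P(x) at low T returns the optimum with high probability.
```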

Although one can use sampling to do optimization, this does not mean that sampling implies optimization; one can use sampling to do other things besides optimization. For example, one can use it to find the expected value of a quantity \omega(x) over any probability distribution P(x), not just a spiky one. Or one can use sampling to do inference with a Bayesian network. Likewise, optimization does not imply sampling; there are other methods for doing optimization besides sampling (for instance, the Newton-Raphson and conjugate-gradient methods, although those two only work if the function E(x) is differentiable).
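(As a tiny classical example of sampling used for something other than optimization, here is a Monte Carlo estimate of the expected value of \omega(x); the particular P(x) and \omega(x) below are made up.)

```python
import numpy as np

rng = np.random.default_rng(0)

# Any normalized P(x) will do; it need not be spiky.
p = np.array([0.10, 0.25, 0.30, 0.20, 0.15])
omega = np.array([2.0, -1.0, 0.5, 3.0, 1.0])     # the quantity whose mean we want

exact = (p * omega).sum()                        # exact expected value of omega
samples = rng.choice(len(p), size=100_000, p=p)  # samples drawn from P(x)
estimate = omega[samples].mean()                 # Monte Carlo estimate

print(f"exact = {exact:.4f}, sampled estimate = {estimate:.4f}")
```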

If E(x) is highly frustrated and discontinuous, and the state space (set of all possible x) is very large, then there are no quick methods for finding the global minimum of E(x). In such cases, we resign ourselves to finding, in a reasonable amount of time, what we hope will be a fairly good minimum. Methods for doing this are called metaheuristic methods.

Metaheuristics performed with a classical computer

Computer scientists have invented dozens of metaheuristic optimization methods. The Wikipedia entry for the word “metaheuristic” has a very nice chronological table indicating milestones in the history of metaheuristic optimization (for classical computers). Two very important ones are simulated annealing (SA) and genetic algorithms (GA).

I won’t describe SA and GA in detail here, but I enthusiastically recommend that you learn about both of them if you haven’t already. They both mimic processes that are favored by Nature: SA mimics the gradual cooling of a metal, and GA mimics evolution through natural selection. In GA, one considers a population of many individuals. At the end of each generation, children are produced from their parents and replace them. In SA, one evolves a single individual, changing its temperature at discrete times. Both SA and GA use an objective function E(x) and generate a sequence of points x that descends the E(x) curve most of the time, but sometimes also ascends it, so as to avoid getting trapped in the first local minimum visited. As you can see, if you look past the metallurgy/genetics language, SA and GA are quite similar. One can think of SA as GA with a population of one. In fact, some people have devised hybrids of SA and GA that retain some of the best qualities of both methods.
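(To make the SA half of that description concrete, here is a bare-bones classical SA sketch; the energy function, neighbor rule, starting temperature and cooling schedule below are all arbitrary toy choices, not anyone’s recommended settings.)

```python
import math
import random

def simulated_annealing(E, x0, neighbors, T0=1.0, cooling=0.999, steps=20_000):
    """Descend E most of the time, but sometimes accept uphill moves with
    probability exp(-dE/T), so as not to get stuck in the first local minimum."""
    x, T, best = x0, T0, x0
    for _ in range(steps):
        x_new = random.choice(neighbors(x))
        dE = E(x_new) - E(x)
        if dE <= 0 or random.random() < math.exp(-dE / T):
            x = x_new
        if E(x) < E(best):
            best = x
        T *= cooling          # gradual cooling, as when annealing a metal
    return best

# Toy usage: a bumpy function on the integers 0..199, with nearest-neighbor moves.
E = lambda x: math.sin(0.3 * x) + 0.5 * math.cos(1.7 * x) + 0.001 * (x - 120) ** 2
neighbors = lambda x: [max(0, x - 1), min(199, x + 1)]
print("best x found:", simulated_annealing(E, x0=0, neighbors=neighbors))
```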

An interesting special case of GA is Genetic Programming (GP). In GP, the state space is a set of valid computer programs, and the objective function grades how well each of those programs solves a predefined problem. The goal is to find a pretty good computer program for solving that problem.

Metaheuristics performed with a quantum computer

This is a fascinating field which is still largely unexplored. I believe it is possible to devise a variety of metaheuristic optimization methods for running on QCs—for example, quantum computerizations of GA and SA. And because quantum states can be “magically” present everywhere in the state space, all at once, QC algorithms that use such quantum states can explore the state space more quickly and thoroughly than algorithms that run on classical computers.

Let us review the software that already exists in this area.

  • QGame, written by Lee Spector (Spector is a professor at Hampshire College, MA, and an expert in GP). QGame is an application of classical GP: it tries to find the best QC program (sequence of gates) for achieving a particular task. Spector has also written a book explaining QGame: Automatic Quantum Computer Programming: A Genetic Programming Approach (2004, Kluwer Academic Publishers; 150 pages, $20 at Amazon). Note: Technically, QGame falls under the category of metaheuristics performed on a classical computer, not a QC.
  • Binary Image Classifier for Adiabatic Quantum Computers (AQC), by Hartmut Neven and D-Wave. (Dr. Neven started a company that was purchased by Google. Today he leads the Google team responsible for Google Goggles.)
  • QuSann (quantum simulated annealing) and Quibbs (quantum Gibbs sampling).

Is this list final? Hell no! Scientists have barely begun to scratch the surface of the field of metaheuristics for QCs.

From the movie Animal House:
Bluto (John Belushi): Over? Did you say “over”? Nothing is over until we decide it is! Was it over when the Germans bombed Pearl Harbor? Hell no!
