# Quantum Bayesian Networks

## October 28, 2009

### The MCMC Revolution

Filed under: Uncategorized — rrtucci @ 7:31 pm

Check out
The MCMC Revolution
Persi Diaconis, Bull. Amer. Math. Soc. 46 (2009), 179-205.

This paper was a lot of fun to read. Though Diaconis is a mathematician, a large fraction of the paper is written at a level understandable by most people with an undergraduate degree in science or engineering. (Just skip the parts that lie beyond your level.) The goal of the paper is to introduce its readers to MCMC (Markov Chain Monte Carlo) and to make them appreciate the enormous usefulness of MCMC in most scientific fields.

Those who read this blog know that the most efficient way of doing MCMC is with a quantum computer. Although the paper is only about s-l-o-w poke classical computers, I still recommend it, so you can see how our dads did things 🙂

The paper motivates the mathematical theory of Markov chains using two cool examples.

(1) (a Metropolis sampling example) A real-life detective story. The California prison system asked the Stanford Statistics Department to help it decipher some coded messages that it had intercepted. The messages had been written by a prison inmate using non-standard symbols (in what is called a substitution cipher). A regiment of 2 Stanford students was assigned the task of changing this light bulb. The students used Metropolis sampling with a code “plausibility” measure equal to —censored—. I won’t give away the punchline.
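The scheme can be sketched as a generic Metropolis walk over substitution keys. Everything below (function names, the swap proposal) is my own illustrative guess at how such a decoder might look, and the plausibility measure is deliberately left as a user-supplied stub, so the punchline stays censored:

```python
import random

def metropolis_decode(ciphertext, plausibility, n_steps=10000, seed=0):
    """Metropolis walk over substitution keys (illustrative sketch).

    `plausibility` maps a decoded text to a positive score; the measure
    the Stanford students actually used is left unspecified here.
    """
    rng = random.Random(seed)
    alphabet = sorted(set(ciphertext))
    key = alphabet[:]  # current key: a permutation of the cipher symbols

    def decode(k):
        table = dict(zip(alphabet, k))
        return "".join(table[c] for c in ciphertext)

    score = plausibility(decode(key))
    best_text, best_score = decode(key), score
    for _ in range(n_steps):
        i, j = rng.randrange(len(key)), rng.randrange(len(key))
        key[i], key[j] = key[j], key[i]  # propose: swap two key entries
        new_score = plausibility(decode(key))
        # Metropolis rule: always accept uphill moves,
        # accept downhill moves with probability new/old
        if new_score >= score or rng.random() < new_score / score:
            score = new_score
            if score > best_score:
                best_text, best_score = decode(key), score
        else:
            key[i], key[j] = key[j], key[i]  # reject: undo the swap
    return best_text, best_score
```

The swap proposal is symmetric, so the plain Metropolis acceptance rule applies; the walk keeps the most plausible decoding seen so far.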

(2) (a Monte Carlo integration example) Consider possible placements of $N$ hard discs of radius $r$ inside the unit square. The discs must not overlap and must be completely contained in the unit square. The centers of the $N$ discs are described by a point $x \in R^{2N}$. Let $X(N,r)$ be the subset of $R^{2N}$ containing all possible placements of $N$ discs of radius $r$. The goal is to integrate some function $f:R^{2N} \rightarrow R$ over the region $X(N,r)$.
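For small $N$ and sparse packings, this can be sketched with plain rejection sampling (a hypothetical mini-implementation of mine, not the paper's method; it estimates the average of $f$ over $X(N,r)$ rather than the integral itself). For dense packings the rejection rate explodes, which is exactly why one turns to MCMC:

```python
import math
import random

def sample_hard_discs(n_discs, radius, rng):
    """Propose disc centers uniformly until no two discs overlap (rejection)."""
    while True:
        # a center in [r, 1-r]^2 keeps its disc inside the unit square
        x = [(rng.uniform(radius, 1 - radius), rng.uniform(radius, 1 - radius))
             for _ in range(n_discs)]
        if all(math.dist(x[i], x[j]) >= 2 * radius
               for i in range(n_discs) for j in range(i + 1, n_discs)):
            return x

def mc_mean(f, n_discs, radius, n_samples=1000, seed=0):
    """Monte Carlo estimate of the average of f over X(N, r)."""
    rng = random.Random(seed)
    return sum(f(sample_hard_discs(n_discs, radius, rng))
               for _ in range(n_samples)) / n_samples
```

To recover the integral of $f$ over $X(N,r)$ from this average, you would also need an estimate of the volume of $X(N,r)$.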

According to Diaconis:

“To someone working in my part of the world, asking about applications of Markov chain Monte Carlo (MCMC) is a little like asking about applications of the quadratic formula. The results are really used in every aspect of scientific inquiry. The following indications are wildly incomplete. I believe you can take any area of science, from hard to social, and find a burgeoning MCMC literature specifically tailored to that area.”

The paper ends with a list of references suggested for further reading, references that describe the application of MCMC in the fields of (1) Chemistry and Physics, (2) Biology, (3) Statistics, (4) Group Theory, and (5) Computer Science (theory).

Persi Diaconis is a distinguished professor in the Department of Statistics at Stanford University. Since his teens, he has been a skillful performer of magical card tricks. His wife, Susan P. Holmes, is also a distinguished professor in the same Stanford department, where she teaches, you guessed it, applications of MCMC to Biology.

## October 19, 2009

### New Quantum Computer Algorithm is So-so

Filed under: Uncategorized — rrtucci @ 7:10 pm

Alas, my recent paper on Quantum Gibbs Sampling has so far failed to generate any interest in the scientific or programming worlds :(. However, it occurs to me that the paper might still have a successful career in the entertainment world, as a source of themes for comic book writers :). Some possible themes that come to mind:

• Mr. Spock, Vulcan, Devises New Gibbs Sampling Algorithm For Quantum Computers

• New Gibbs Sampling Algorithm Endows Quantum Computers with SuperPowers

• New Gibbs Sampling Algorithm Allows Quantum Computers to Think Like God

• New Gibbs Sampling Algorithm Created at Vulcan’s Forge

• New Gibbs Sampling Algorithm Created at Neptune’s Furnace

• One Quantum Computer Algorithm to Rule them All

• Lex Luthor Buys Exclusive Rights to New Gibbs Sampling Algorithm For Quantum Computers

• Prof. Moriarty Buys Exclusive Rights to New Gibbs Sampling Algorithm For Quantum Computers; Sherlock Holmes Said to be Depressed About Sale and Taking Laudanum

• Mycroft Holmes Buys Exclusive Rights to New Gibbs Sampling Algorithm For Quantum Computers; Beats His Brother, Sherlock Holmes, to the Sale

• Lex Luthor: “Fellow Super-villains, Pseudo-Solving Systems of Linear Equations is for Ninnies; Give Me Faster Gibbs Sampling, and I WILL CONQUER THE WORLD!” (yikes)

## October 10, 2009

### Quantum Computerization of MCMC

Filed under: Uncategorized — rrtucci @ 8:52 am

*Typical Americans watch the Apollo 11 moon walk on TV, in real time (except for the 2.5-second delay needed for the light signal to make the Earth-moon round trip).*

1969 - Apollo 11 mission: Neil Armstrong and Buzz Aldrin become the first and second men to walk on the surface of the moon.

Apollo 11 mission:
Duration: 8 days, round trip.
Lunar Module (lunar lander): the Eagle.
Touchdown site: Sea of Tranquility.
Neil Armstrong: “Houston, Tranquility Base here. The Eagle has landed.”
Neil Armstrong: “That’s one small step for [a] man, one giant leap for mankind”
Neil Armstrong has been quoted as saying that at the time he thought his chances of surviving the Apollo 11 mission were 50/50. Yet he did his job without flinching. Truly a man made of the right stuff!

*Roaring Saturn V rocket propels the Apollo 11 crew towards the heavens.*

2009 is the 40th anniversary of the Apollo 11 mission. It’s also the zeroth anniversary of my new paper on Quantum Gibbs Sampling. One small step for mankind, one giant leap for Bob (me).

My new paper (Ref. Tuc3 below) extends two previous papers of mine (Tuc1, Tuc2). Papers by other workers that particularly inspired me in this work are Refs. Som1 and Woc1 on quantum simulated annealing. All these papers fall under the general category of quantum computerizations of MCMC (Markov Chain Monte Carlo), a subject that I find extremely interesting. Why, you ask? (Okay, maybe you didn’t ask, but let me tell you anyway.)

MCMC allows you to sample probability distributions. If you’ve ever needed to sample a simple probability distribution $P(x)$ defined for $x$ in a real interval, you are probably familiar with the inverse transform method. Unfortunately, this method requires finding the cumulative probability distribution and inverting it, a task which is very difficult for many probability distributions. Suppose, for example, that your probability distribution $P(x)$ is defined for $x=(x_1,x_2,\ldots,x_N)$, where each $x_i$ can assume values in a discrete set $S_i$. (This $P(x)$ can be described as a classical Bayesian network.) Unfortunately, the inverse transform method does not work for sampling most B. nets. One method that does work and is very popular is called MCMC (this includes Gibbs sampling, Metropolis-Hastings sampling and Simulated Annealing). If you are new to this subject, a wonderful introduction can be found in the chapter entitled “Monte Carlo Methods” of David MacKay’s magnificent, highly intuitive book “Information Theory, Inference, and Learning Algorithms” (available for free in electronic pdf form, or for money in paper form).
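To make the inverse transform method concrete in one dimension, here is a minimal sketch for the exponential distribution (the function name is mine):

```python
import math
import random

def sample_exponential(lam, rng=random):
    """Inverse transform sampling for P(x) = lam * exp(-lam * x), x >= 0.

    The CDF is F(x) = 1 - exp(-lam * x); solving F(x) = u for x gives
    the formula below, with u drawn uniformly from [0, 1).
    """
    u = rng.random()
    return -math.log(1.0 - u) / lam
```

For a multivariate $P(x)$ there is in general no such closed-form inverse, which is where MCMC comes in.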

Why is sampling a probability distribution $P(x)$ (i.e., a classical B. net), where $x$ lives in a multidimensional domain, so difficult? A domain with a large number of dimensions is huge. The majority of the samples come from those regions of the domain where the probability is high. But finding those regions requires that we explore the entire domain!

Why is sampling a probability distribution useful? If you have a sample of points $x^{(j)}$ for $j=1,2, \ldots, N_{sam}$, distributed according to $P(x)$, then you can approximate the expected value of any function $\phi(x)$ using $E(\phi) \approx \frac{1}{N_{sam}}\sum_{j=1}^{N_{sam}} \phi(x^{(j)})$.
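As a toy illustration of this estimator (the four-state target distribution and the function names are mine, and this is plain classical Metropolis, not a quantum algorithm), Metropolis samples plugged into the sum above recover the exact expectation:

```python
import random

def metropolis_samples(weights, n_steps, seed=0):
    """Sample x with P(x) proportional to weights[x] on {0, ..., len-1},
    proposing a uniformly random state at each step (Metropolis rule)."""
    rng = random.Random(seed)
    x, samples = 0, []
    for _ in range(n_steps):
        y = rng.randrange(len(weights))
        if rng.random() < min(1.0, weights[y] / weights[x]):
            x = y  # accept the proposed move
        samples.append(x)
    return samples

def expectation(phi, samples):
    """E(phi), estimated as the sample average of phi."""
    return sum(phi(x) for x in samples) / len(samples)
```

With weights $(1,2,3,4)$ the exact expectation of $x$ is $2.0$, and the estimator converges to it as the chain runs.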

You can also approximate probability distributions $P(x_H|x_E)$, where $x_H$ is the subset of $(x_1,x_2,\ldots,x_{N})$ which constitutes your Hypothesis variables, and $x_E$ is the subset of $(x_1,x_2,\ldots,x_{N})$ which constitutes your Evidence variables. Such Bayesian probability distributions are very useful for making inferences (in AI, bioinformatics and data mining, for example).

Why is quantum computerization of MCMC useful? Speed! Typically, a quantum computer can sample a B. net to a given precision in ${\cal O}(1/\sqrt{\delta})$ steps, whereas it takes a classical computer ${\cal O}(1/\delta)$ steps to achieve the same precision. Here $\delta$ is the spectral gap of the Markov chain being used: the distance between its two largest eigenvalue magnitudes.
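To make $\delta$ concrete, here is a toy sketch of mine: for the two-state chain with transition matrix $[[1-a, a],[b, 1-b]]$, the eigenvalues are $1$ and $1-a-b$, so the gap, and the resulting classical-versus-quantum step counts, can be computed directly (the $1/\sqrt{\delta}$ scaling is the Szegedy-walk speedup):

```python
import math

def two_state_gap(a, b):
    """Spectral gap of the two-state Markov chain [[1-a, a], [b, 1-b]]:
    its eigenvalues are 1 and 1 - a - b, so delta = 1 - |1 - a - b|."""
    return 1.0 - abs(1.0 - a - b)

delta = two_state_gap(0.01, 0.02)       # a slowly mixing chain
classical_steps = 1.0 / delta            # O(1/delta) classically
quantum_steps = 1.0 / math.sqrt(delta)   # O(1/sqrt(delta)) quantumly
```

For this chain $\delta = 0.03$, so the quantum walk needs roughly $1/\sqrt{0.03} \approx 5.8$ times fewer steps; the smaller the gap, the bigger the quadratic win.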

References:

• Tuc1 – R.R. Tucci, “Use of a Quantum Computer to do Importance and Metropolis-Hastings Sampling of a Classical Bayesian Network”, arXiv:0811.1792

• Tuc2 – R.R. Tucci, “Code Generator for Quantum Simulated Annealing”, arXiv:0908.1633

• Tuc3 – R.R. Tucci, “Quantum Gibbs Sampling Using Szegedy Operators”, arXiv:0910.1647

• Som1 – R. Somma, S. Boixo, H. Barnum, “Quantum Simulated Annealing”, arXiv:0712.1008

• Woc1 – Pawel Wocjan, Anura Abeyesinghe, “Speed-up via Quantum Sampling”, arXiv:0804.4259

## October 1, 2009

### Bioinformatics and Quantum Computing

Filed under: Uncategorized — rrtucci @ 3:46 pm

Check out
“Your Chance to get Seriously Wealthy from the Next Wave of Computational Biology”
Penny Sleuth, Aug 28th, 2009, by Patrick Cox

An excerpt:

“As more powerful computing comes online, the pace of bioinformatics discovery will accelerate. Quantum computing, because it is particularly suited to sorting out cell biology, will enable a ‘quantum’ leap in understanding.”

I totally agree with Mr. Cox’s prediction. We live in an era of large data sets. These large data sets beg analysis by parallel processors, using the tools of Probability and Statistics. Quantum computers, because of their intrinsically parallel processing and probabilistic nature, are ideally suited to meet this challenge. One specific way in which quantum computers can benefit bioinformatics: A commonly used tool in bioinformatics is MCMC (Markov Chain Monte Carlo). Methods are already known (this is not a pipe dream) for doing MCMC on a quantum computer, much faster than on a classical computer. (I will discuss this in more detail in my next post, entitled “Quantum Computerization of MCMC”). So using a QC will help bioinformaticists do their job more quickly.

Further reading for the bioinformatically challenged (like me):

### I, Grammaticus, speak about Quantum Computerization Versus Quantization

Filed under: Uncategorized — rrtucci @ 3:30 pm

Another installment in the “I, Grammaticus” series. Previous posts in this series: (1) here (on Quayle convention) and (2) here (on simulation).

I, Grammaticus, Caesar of the vast Roman Empire, have been warned by the god Jupiter, in a dream, to use the terms “quantization” and “quantum computerization” distinctly, or else there will be dire consequences for the future of Eternal Rome.

One quantizes a classical physical theory to get a corresponding quantum physical theory. One quantum computerizes a classical algorithm to obtain a quantum algorithm that produces the same answer as the classical one, but using some quantum means.

(Physical theories are in some sense algorithms, so maybe someday someone will show that there is a deep connection between quantization and q. computerization. Only Jupiter knows this.)

In the past, physicists have used the word “quantization” in the following fashion:

The classical picture of an atom, a bunch of negatively charged electrons in planetary orbits around a positively charged nucleus, leads to a contradiction. Electrons are charged particles, and charged particles radiate energy when they accelerate. In the planetary picture, the electrons would be accelerating (towards the nucleus), so they would radiate. This radiation would cause their orbits to decay, and they would soon plummet into the nucleus. So if the planetary picture of atoms were correct, all atoms would be unstable, which is certainly not what we observe.

To fix this classical picture, we “quantize” classical mechanics. By this, we mean that we replace the deterministic classical theory by a probabilistic one; not just any classical probabilistic theory, but a quantum probabilistic one, in which probabilities are built by taking the magnitude squared of a sum of complex “probability amplitudes”. In the quantum picture of an atom, the planetary orbits are replaced by a probability cloud around the nucleus. The electrons are no longer accelerating, so they do not radiate.

Besides quantizing the classical mechanics of point particles, physicists have also found it useful to quantize classical fields and strings. A classical field theory assigns a vector to each spacetime point. Quantizing classical field theories leads to Quantum Electrodynamics (QED) and ultimately to the Standard Model. A classical string theory assigns a classical string to each spacetime point. Quantizing classical string theories leads to String Theory.
