Quantum Bayesian Networks

November 24, 2010

What Are the Limits of What We Can Calculate? Quantum Computers Can Sample Functions That Are Inaccessible to Classical Computers

Filed under: Uncategorized — rrtucci @ 9:06 pm

If 20th century humans had entrusted complexity theorists with the task of building digital computers, I dare say we would all still be using abacuses. And complexity theorists would still be writing papers about the amazing complexity issues raised by the abacus. Complexity theorists are not known for their skill at building practical things.

Maybe you don’t have the time and energy necessary to grok the definitions and inter-relationships among the hundreds of complexity classes in the Complexity Zoo. Maybe you are waiting for an executive summary, issued after complexity theorists figure out a classification theorem for all those classes. Until that executive summary is available, the field of complexity theory will continue to seem hopelessly half-baked to many of us.

Nevertheless, there is no doubt that some very smart people are complexity theorists, and that their work has enriched quantum computing significantly. So it pays off, even if you are a person with a more practical bent of mind, to pay attention to what they are up to. Complexity theorists also help to sell quantum computers to the public, by showing that quantum computers can calculate certain quantities far faster than any known classical method. For example, Peter Shor proved that quantum computers can find the factors of very large integers in polynomial time, whereas the best known classical algorithms require super-polynomial time to do so. Another example is the recent paper:

“The Computational Complexity of Linear Optics”, by AA (Scott Aaronson and Alex Arkhipov). ArXiv abstract here. Scott’s blog post about it here.

This paper is about a topic that is near and dear to my heart, a topic that is key to what my program Quibbs does. The topic is: using a QC to sample probability distributions. So the rest of this blog post will be devoted to giving a brief summary of the AA paper. Perhaps I’ll say enough to pique your interest in the dark arts of complexity theory.

AA propose an optics experiment that would sample (to a certain precision) from a special probability distribution. For photon numbers 10<n<50, the AA optics experiment would take polynomial(n) steps to sample from this probability distribution, if quantum mechanics is to be believed. AA prove (assuming some very plausible conjectures about complexity classes) that a classical computer would require exponential(n) steps to sample from the same probability distribution (to the same precision). For n slightly larger than 50, classical computers would take longer than the age of the universe to get an answer.

First, some notation:


  • m= number of modes (m is mnemonic for mode) 
  • n= total number of photons 
  • \Phi^m_n = set of m-tuples (s_1, s_2, \ldots, s_m), where the components s_j are non-negative integers, and where s_1 + s_2 + \ldots +s_m = n. Each s_j represents the number of photons in the j’th mode, so n is the total number of photons. For example, \Phi^2_3 = \{(3,0), (2,1), (1,2), (0,3)\}.
  • 1_n = (1,1,\ldots,1,0,0,\ldots,0), where there are n ones and m-n zeros. This assumes m\geq n.
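
For concreteness, \Phi^m_n can be enumerated with the standard stars-and-bars bijection. A minimal sketch (the function names `phi` and `ones_state` are mine, not AA's):

```python
from itertools import combinations

def phi(m, n):
    """Enumerate Phi^m_n: all m-tuples of non-negative integers summing to n.
    Stars and bars: each tuple corresponds to a choice of m-1 "bar" positions
    among n+m-1 slots; the gaps between bars are the photon counts s_j."""
    states = []
    for bars in combinations(range(n + m - 1), m - 1):
        s, prev = [], -1
        for b in bars:
            s.append(b - prev - 1)  # stars between the previous bar and this one
            prev = b
        s.append(n + m - 2 - prev)  # stars after the last bar
        states.append(tuple(s))
    return states

def ones_state(m, n):
    """The input state 1_n = (1,...,1,0,...,0): n ones followed by m-n zeros."""
    assert m >= n
    return tuple([1] * n + [0] * (m - n))
```

For example, `phi(2, 3)` returns the four states of \Phi^2_3 listed above; in general |\Phi^m_n| = \binom{n+m-1}{m-1}.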

AA consider a conditional probability distribution P(s|\theta)=|\langle s |U(\theta)| t\rangle|^2 where

  • s, t \in \Phi^m_n; the preferred choice of input state is t=1_n. 
  • U(\theta) is a unitary transformation produced by an ideal (free of environmental noise) optical device consisting of passive elements (beam splitters and phase shifters). The optical device has m input ports and the same number of output ports. A “mode” (i.e., a monochromatic beam of light) enters and exits each input and output port. By assumption, all input modes have the same frequency. Because phase shifters and beam splitters do not affect the frequency of a mode, all modes inside the device are at the same frequency too. 
  • \theta takes poly(n) bits to specify; it includes all the parameters that U depends on.
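
AA relate these probabilities to matrix permanents: the amplitude \langle s |U(\theta)| t\rangle is, up to normalization, the permanent of a submatrix of the m \times m mode-transfer matrix, and the permanent is what is believed to be classically hard. A toy sketch for tiny n (the helpers `permanent` and `prob` are mine, not AA's), using Ryser's O(2^n) formula:

```python
import numpy as np
from math import factorial

def permanent(A):
    """Permanent of a square complex matrix via Ryser's formula:
    per(A) = (-1)^n * sum over nonempty column subsets S of
    (-1)^|S| * prod_i (sum_{j in S} A[i,j]). Cost O(2^n * n^2)."""
    n = A.shape[0]
    if n == 0:
        return 1.0
    total = 0.0 + 0.0j
    for subset in range(1, 1 << n):
        cols = [j for j in range(n) if subset >> j & 1]
        rowsums = A[:, cols].sum(axis=1)
        total += (-1) ** len(cols) * np.prod(rowsums)
    return total * (-1) ** n

def prob(U, s, t):
    """P(s|t) = |Per(U_{s,t})|^2 / (prod_i s_i! * prod_j t_j!), where the
    n x n matrix U_{s,t} repeats row i of U s_i times and column j t_j times."""
    rows = [i for i, si in enumerate(s) for _ in range(si)]
    cols = [j for j, tj in enumerate(t) for _ in range(tj)]
    A = U[np.ix_(rows, cols)]
    norm = np.prod([factorial(x) for x in s]) * np.prod([factorial(x) for x in t])
    return abs(permanent(A)) ** 2 / norm
```

For a single 50-50 beam splitter, U = \frac{1}{\sqrt{2}}\begin{pmatrix}1&1\\1&-1\end{pmatrix} with input t=(1,1), this reproduces the Hong–Ou–Mandel effect: P((1,1)) = 0 while P((2,0)) = P((0,2)) = 1/2. The exponential cost of the permanent is exactly the quantity that makes classical simulation expensive.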

Let \hat{P}_N(s|\theta) be an estimate of P(s|\theta) calculated from a number N=poly(m,n) of samples of s. These samples are obtained from a non-ideal experiment with environmental noise. AA assume (they expect experimentalists to corroborate this) that it is possible to perform an optical experiment with poly(n) elementary gates (beam splitters or phase shifters) for which \sum_s |P(s|\theta)-\hat{P}_N(s|\theta)| < \epsilon, where \epsilon = \frac{1}{p(n)} for some polynomial p.
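
For intuition about that last step, here is a toy sketch of how an empirical estimate \hat{P}_N converges to P in L1 norm, with a classical sampler standing in for the optics experiment (the dict-based `l1_error` helper is hypothetical, not from AA):

```python
import numpy as np

def l1_error(p, num_samples, seed=0):
    """Draw num_samples outcomes from the exact distribution p (a dict
    mapping outcome -> probability, standing in for the experiment),
    form the empirical estimate P_hat, and return sum_s |P(s) - P_hat(s)|."""
    rng = np.random.default_rng(seed)
    outcomes = list(p)
    probs = np.array([p[s] for s in outcomes], dtype=float)
    draws = rng.choice(len(outcomes), size=num_samples, p=probs)
    counts = np.bincount(draws, minlength=len(outcomes))
    p_hat = counts / num_samples
    return float(np.abs(probs - p_hat).sum())
```

For a fixed, small set of outcomes the error shrinks roughly like 1/\sqrt{N}, which is the statistical reason N = poly(m,n) samples can reach an accuracy \epsilon = 1/p(n).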


  1. Is this type of probability distribution useful for anything?

    Comment by Geordie — November 24, 2010 @ 9:57 pm

  2. Hi Geordie,
    It’s possible that the AA probability distributions are useful, but I don’t believe that anyone has shown this to be the case. Of course, there are probably many other probability distributions that are also inaccessible to classical computers, and some of those might be more useful than the specific ones considered by AA.

    Comment by rrtucci — November 25, 2010 @ 3:21 am
