Quantum Bayesian Networks

January 6, 2010

Grover’s Algorithm for Dummies

Filed under: Uncategorized — rrtucci @ 9:20 pm

When first trying to learn about Grover’s algorithm (GA), many people can follow the math (which is fairly simple if you understand the basics of quantum mechanics and Dirac notation), but they can’t understand the purpose of GA. Can GA be used to search an unstructured database, as is frequently claimed? For instance, can it be used to search a disordered telephone directory for the telephone number of Mary Smith? Not really. So then, what are quantum computerists raving about? Are they all lying or seriously deluded? Not really. It turns out that they are right to rave; GA is indeed a very useful result. In my opinion, the problem is that most GA expositions do an atrocious job explaining its use. So let me try to do a better job here.

I will assume that you’ve read the GA math, which you can find, for example, in this Wikipedia article.

Let N_S=2^{N_B} be the number of states for N_B bits. GA is most useful when N_B is very large. However, for simplicity, let’s assume that N_B=4. Suppose you have a register with N_B bits. Let Tar be a target configuration for that register. For example, for N_B=4, Tar might be 1010. Let the beginning configuration be the zero configuration, the one in which all bits are zero. Suppose the job of the computer, whether classical or quantum mechanical, is to transform the register from the zero configuration to configuration Tar.

A classical computer might do this:

0000\rightarrow 0001\rightarrow0010\rightarrow0011\ldots\rightarrow Tar

Here each arrow represents one step. I’ll call this sequence of steps CW (Classical Walk). On average, a CW will take N_S/2 steps to go from 0000 to Tar.
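The CW is easy to simulate. Here is a minimal Python sketch (the oracle is modeled as a bare equality test, since a yes/no answer is all the rules of the game allow):

```python
import statistics

def classical_walk(n_bits, tar):
    """Walk 0000 -> 0001 -> ... until the oracle says we hit Tar.

    Each step is almost Tar-blind: the only Tar-dependent part is
    the yes/no stopping test, modeled here as an equality check.
    """
    steps = 0
    for x in range(2 ** n_bits):
        steps += 1
        if x == tar:          # oracle: "is this the target?" yes/no only
            return steps

n_bits = 4
n_states = 2 ** n_bits        # N_S = 16
# Averaged over all possible targets, CW takes (N_S + 1)/2, i.e. about N_S/2, steps.
avg = statistics.mean(classical_walk(n_bits, t) for t in range(n_states))
print(avg)   # 8.5
```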

A quantum computer, on the other hand, can do GA, which looks like this:

|0000\rangle\rightarrow |Super_1\rangle\rightarrow |Super_2\rangle \rightarrow \dots\rightarrow |Tar\rangle.

Here we define |Super_1\rangle=\frac{1}{\sqrt{N_S}}\sum_{x=0}^{N_S-1}|x\rangle. Furthermore, |Super_2\rangle, |Super_3\rangle, etc. denote linear combinations (i.e., superpositions) of the vectors |x\rangle for x=0,1,\ldots, N_S-1.
The number of steps GA takes to reach Tar is about \sqrt{N_S}, much smaller than the number of steps taken by CW. That’s because one GA step is much more powerful than one CW step. In each step, GA performs, in one fell swoop, a matrix multiplication by a HUGE N_S\times N_S matrix (albeit a sparse one).
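A minimal numpy simulation of these steps (the state is stored as a length-N_S vector, so this only illustrates the algebra; a real quantum computer never writes out the N_S\times N_S matrix explicitly):

```python
import numpy as np

n_bits = 4
n_states = 2 ** n_bits                    # N_S = 16
tar = 0b1010                              # Tar

# |Super_1> = uniform superposition of all N_S basis states
psi = np.full(n_states, 1 / np.sqrt(n_states))

# One GA step multiplies the state by the N_S x N_S matrix G = -R_1 R_Tar
t = np.zeros(n_states); t[tar] = 1.0
s = psi.copy()
r_tar = np.eye(n_states) - 2 * np.outer(t, t)   # flips the sign of |Tar>
r_1 = np.eye(n_states) - 2 * np.outer(s, s)     # reflection about |Super_1>
G = -r_1 @ r_tar

n_steps = int(np.round(np.pi / 4 * np.sqrt(n_states)))  # about sqrt(N_S) steps
for _ in range(n_steps):
    psi = G @ psi

print(n_steps, psi[tar] ** 2)  # 3 steps; probability of measuring Tar ~ 0.96
```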

So why can’t the classical computer (or the quantum one) just go from 0000 to Tar (respectively, from |0000\rangle to |Tar\rangle ) in just one step? Or else, suppose Tar=0101. Why can’t the classical computer (or the quantum one) do this: 0000\rightarrow 0100\rightarrow 0101 (respectively, |0000\rangle\rightarrow |0100\rangle\rightarrow |0101\rangle )? Because the rules of the game entail that it is not possible for any step to know Tar so well that it can change one or more of the bits to their final value in one fell swoop. That would require much more precise information about Tar than the oracle is capable of giving.

Note that each CW step is “almost Tar-blind,” meaning that it does not depend on what Tar is, except in deciding whether to stop or go on. It’s as if we were moving from 0000 to Tar in a dark room, taking small blind steps, until we hit the target. Once we hit the target, we recognize it as such and stop.

Now note that each GA step is NOT almost Tar-blind; in fact, each GA step is a rotation by a small angle \alpha in the plane that contains the vectors |Super_1\rangle and |Tar\rangle, and \alpha depends on the angle between |Super_1\rangle and |Tar\rangle. So the GA steps are not almost Tar-blind. Far from it.

You might ask, if the CW steps are constrained to be almost Tar-blind and the GA steps aren’t, isn’t the analogy between CW and GA unfair? The analogy between CW and GA is ultimately just that, an analogy. Analogies are not unique or perfect. The only perfect analogy of X is X itself. Many people, including Grover himself, find the CW/GA analogy useful. In fact, it appears that this analogy is what led Grover to discover his algorithm in the first place. So the analogy can indeed be useful. But don’t push it, buddy. An unfortunate and very common example of pushing it too far: it’s clear that one can use CW to search a disordered telephone directory for Mary Smith. However, one cannot use GA, all alone, to do such a search. So it’s incorrect, and quite confusing to beginners, to assert that GA can be used to search an unstructured database.

If GA can’t be used to search an unstructured database, then what good is it? Well, it is extremely useful for doing “amplitude amplification”, meaning that it magnifies the amplitude of |Tar\rangle. It tells you how to go from |Super_1\rangle to |Tar\rangle in \sqrt{N_S} “easy” steps! By easy steps, I mean that each step can be decomposed into a SEO (Sequence of Elementary Operations like CNOTs and qubit rotations), in such a way that the SEO has a length that is merely polynomial in N_B.
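As a sanity check on the “easy steps” claim (a sketch, not a full SEO compiler): the reflection about |Super_1\rangle, which looks like a dense N_S\times N_S matrix, factors into a layer of Hadamard gates, a phase flip of the all-zero state (one multiply-controlled gate), and another layer of Hadamards — a gate count polynomial in N_B:

```python
import numpy as np
from functools import reduce

n_bits = 4
n_states = 2 ** n_bits

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # one-qubit Hadamard gate
Hn = reduce(np.kron, [H] * n_bits)              # H applied to every qubit

r_0 = np.eye(n_states)
r_0[0, 0] = -1                                  # phase-flips |0000> only

s = np.full(n_states, 1 / np.sqrt(n_states))    # |Super_1>
r_1 = np.eye(n_states) - 2 * np.outer(s, s)     # reflection about |Super_1>

# R_1 = H^(n) R_0 H^(n): the "big" reflection is a short gate sequence
print(np.allclose(Hn @ r_0 @ Hn, r_1))   # True
```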

And that’s the lesson. For overachievers, here are a few additional observations:

Fig.1

Geometric Picture of GA
If n\in \{0,1\}, then clearly (-1)^n = 1 -2n. To prove it, just substitute 0 and 1 for n on both sides. If |\psi\rangle is a normalized quantum state, then it is likewise true that (-1)^{|\psi\rangle\langle\psi|}= 1 - 2|\psi\rangle\langle\psi|. As you can see from Fig.1, the operation 1 - 2|\psi\rangle\langle\psi| is a reflection: it takes any vector and replaces it by its reflection about the plane perpendicular to |\psi\rangle. Let \Gamma be the plane that contains vectors |Tar\rangle and |Super_1\rangle, and let \Gamma^\perp be the axis perpendicular to \Gamma. Reflections are a special type of orthogonal matrix. Since R_{Tar}=(-1)^{|Tar\rangle\langle Tar|} and R_1=(-1)^{|Super_1\rangle\langle Super_1|} are both orthogonal matrices, G=-R_1R_{Tar} must be an orthogonal matrix too. In fact, one can show that G is a rotation (about the axis \Gamma^\perp) by an angle \alpha=2\arcsin(\langle Tar|Super_1\rangle)\sim 1/\sqrt{N_S}. To do GA, one applies G about \sqrt{N_S} times to |Super_1\rangle, and this yields (approximately) |Tar\rangle.
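These claims are easy to verify numerically. A small check, reusing N_S=16 and Tar = 1010:

```python
import numpy as np

n_states = 16
tar = 0b1010

t = np.zeros(n_states); t[tar] = 1.0            # |Tar>
s = np.full(n_states, 1 / np.sqrt(n_states))    # |Super_1>

r_tar = np.eye(n_states) - 2 * np.outer(t, t)   # (-1)^{|Tar><Tar|}
r_1 = np.eye(n_states) - 2 * np.outer(s, s)     # (-1)^{|Super_1><Super_1|}
G = -r_1 @ r_tar

# G is an orthogonal matrix ...
print(np.allclose(G.T @ G, np.eye(n_states)))   # True
# ... and it rotates |Super_1> by alpha = 2 arcsin(<Tar|Super_1>)
alpha = 2 * np.arcsin(t @ s)                    # = 2 arcsin(1/4) here
print(np.isclose(s @ G @ s, np.cos(alpha)))     # True
```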

Use of GA in Quantum Gibbs Sampling
Being such a rabid fan of quantum Gibbs sampling, I cannot resist mentioning the fact that quantum Gibbs sampling is an application of GA. If \pi(x) is a probability distribution where x=0,1,2,\dots,N_S-1, then define the state |\sqrt{\pi(x)}\rangle = \sum_x \sqrt{\pi(x)}|x\rangle. The goal of Gibbs sampling is to steer an arbitrary, known quantum state into |\sqrt{\pi(x)}\rangle. Then N_{sam} measurements of |\sqrt{\pi(x)}\rangle will yield N_{sam} samples x^{(s)} for s=1,2,\dots N_{sam}. These samples will be distributed according to \pi(x). The steering is done using GA. (Besides GA, it takes a few more ingredients to cook a tasty quantum Gibbs sampler; for example, it also takes quantum multiplexors, Szegedy operators and quantum Phase Estimation.)
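The final measurement step is purely classical to simulate, since measuring basis state x occurs with probability equal to its squared amplitude \pi(x). A toy sketch with a made-up \pi(x) (the hard part, steering into |\sqrt{\pi(x)}\rangle with GA, is not shown here):

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 8
pi = rng.random(n_states)
pi /= pi.sum()                       # a made-up distribution pi(x)

amps = np.sqrt(pi)                   # amplitudes of |sqrt(pi(x))>

n_sam = 100_000                      # N_sam measurements
samples = rng.choice(n_states, size=n_sam, p=amps ** 2)

# the empirical frequencies of the samples approach pi(x)
freq = np.bincount(samples, minlength=n_states) / n_sam
print(np.abs(freq - pi).max())       # small (sampling noise only)
```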


4 Comments »

  1. Very interesting and useful post!
    I particularly liked the way you used Grover’s Algorithm to explain how to get from an initial state (like 0000) to another (like 0100), considering not enough information is provided to go in a single step.

    Cheers from Peru

    Comment by Jose — March 10, 2010 @ 5:29 am

  2. The following thread in Physics Forums might also be of interest to the readers of this blog post:
    Grover’s Algorithm: is it really a search algorithm

    Comment by rrtucci — April 5, 2011 @ 12:54 pm

  3. Please provide a solved example of Grover’s algorithm in detail, please.

    Comment by m zidan — October 22, 2012 @ 10:47 am

  4. Hi Zidan, This might not be the most pedagogical “worked out” example in the world but it’s the one that I personally am most familiar with: My software Quibbs uses Grover’s algorithm on a quantum computer to do “sampling” of a probability distribution. The Quibbs software and links to papers that explain it can be found at my website http://www.ar-tiste.com

    Comment by rrtucci — October 22, 2012 @ 2:54 pm


