Quantum Bayesian Networks

April 14, 2014

Can Quantum Computers Do Machine Learning in General in Polynomial Time?!!!

Filed under: Uncategorized — rrtucci @ 2:50 pm

In 2 recent papers (here and here), Seth Lloyd, Masoud Mohseni and Patrick Rebentrost have proposed a method for using a quantum computer to do “supervised machine learning”. Their papers have been trumpeted by Lloyd and the press as a significant breakthrough. On Jan 29, 2014, Lloyd gave a lecture about his algorithm at Google.

A nagging question that you might have upon browsing the 2 Lloyd papers is, how can this be? His algorithm has a sub-polynomial time complexity. Don’t the meatier machine learning problems have exponential time complexity, even on a quantum computer? After all, as Scott Aaronson states in the masthead of his blog, “quantum computers are not known to be able to solve NP-complete problems in polynomial time”.

An answer to this nagging question about Lloyd’s 2 papers was given in the following paper by a group of Microsoft researchers:

“Quantum Nearest-Neighbor Algorithms for Machine Learning”, by Nathan Wiebe, Ashish Kapoor, Krysta Svore (abstract)

The Microsoft paper points out in its introduction that the machine learning algorithm that Lloyd et al implement on a QC is known to perform poorly in many cases on a classical computer. Its quantum version will therefore perform just as poorly; it will merely produce the same crummy answer more quickly than a classical computer would.

Let me explain briefly what the Microsoft paper says about the Lloyd et al algorithm.

Suppose you are given a point \vec{x}\in \mathbb{R}^N and 2 disjoint sets A and B of points in \mathbb{R}^N. The task solved by Lloyd et al is to assign \vec{x} to whichever of A and B is “closest” to it.
Two simple methods of doing this are:

  • Nearest centroid (NC) method: Calculate the centroid (i.e. mean) of A and of B. Then assign \vec{x} to the set whose centroid is closest to \vec{x}.
  • Nearest neighbor (NN) method: Find the point (called the nearest neighbor of \vec{x}) in A \cup B which is closest to \vec{x}. Assign \vec{x} to the set which contains that nearest neighbor.
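The two methods are easy to sketch in a few lines of (classical) Python. The function names and the toy points below are my own illustration, not from either paper; the example is rigged so that the two methods disagree:

```python
import numpy as np

def classify_nc(x, A, B):
    """Nearest centroid: assign x to the set whose mean is closest."""
    dA = np.linalg.norm(x - A.mean(axis=0))
    dB = np.linalg.norm(x - B.mean(axis=0))
    return "A" if dA <= dB else "B"

def classify_nn(x, A, B):
    """Nearest neighbor: assign x to the set containing its closest point."""
    dA = np.linalg.norm(A - x, axis=1).min()
    dB = np.linalg.norm(B - x, axis=1).min()
    return "A" if dA <= dB else "B"

# A is an elongated set; x coincides with one of its points.
A = np.array([[0.0, 0.0], [10.0, 0.0]])
B = np.array([[1.0, 1.0]])
x = np.array([0.0, 0.0])

print(classify_nn(x, A, B))  # → A  (x sits right on a point of A)
print(classify_nc(x, A, B))  # → B  (A's centroid (5, 0) is farther than B's)
```

Note that NC collapses each set to a single mean point, so any set whose shape is elongated, curved, or multi-modal gets badly misrepresented; NN never does that collapsing.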

Lloyd et al use the NC method, which often performs poorly, whereas the Microsoft group uses the NN method, which performs much better. The catch is that whereas the Lloyd NC method has a polynomial time complexity on a QC, the Microsoft NN method has an exponential time complexity on a QC. (In fact, the Microsoft NN method uses Grover’s algorithm. My Lisbeth software for doing AI on a QC also uses Grover’s algorithm.)

[Figure: 3 example configurations of red and blue regions.] Assume that red data points are uniformly distributed over the red areas and blue data points are uniformly distributed over the blue areas. In these 3 examples, the Lloyd method would classify 50% of the red data points as blue, whereas the Microsoft method would classify all red data points as red.
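The 50% figure is easy to reproduce numerically. Here is a hypothetical 1-D stand-in for one of the figure’s configurations (my own construction, not taken from either paper): red data fills two strips flanking a blue strip, so the red centroid lands right next to the blue region and half the red points end up closer to the blue centroid.

```python
import numpy as np

rng = np.random.default_rng(0)

# Red data in two strips, [-10, -8] and [8, 10], flanking a blue strip [0, 2].
# The red centroid then falls near 0, adjacent to the blue region.
red_train = np.concatenate([rng.uniform(-10, -8, 500), rng.uniform(8, 10, 500)])
blue_train = rng.uniform(0, 2, 1000)
red_test = np.concatenate([rng.uniform(-10, -8, 100), rng.uniform(8, 10, 100)])

c_red, c_blue = red_train.mean(), blue_train.mean()

# Nearest centroid: fraction of red test points correctly labeled red.
nc_acc = np.mean(np.abs(red_test - c_red) < np.abs(red_test - c_blue))

# Nearest neighbor: fraction whose closest training point is red.
nn_acc = np.mean([np.abs(red_train - x).min() < np.abs(blue_train - x).min()
                  for x in red_test])

print(nc_acc, nn_acc)  # → 0.5 1.0
```

The right-hand red strip sits closer to the blue centroid than to the (near-zero) red centroid, so NC misclassifies exactly that half; NN classifies every red point correctly because its nearest training point is always red.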


  1. Hi Robert: The link you gave about “Lloyd gave a lecture about his algorithm at Google” is not working. Thanks.

    Comment by Sol Warda — April 14, 2014 @ 3:11 pm

  2. Thank you Sol. I fixed it.

    Comment by rrtucci — April 14, 2014 @ 3:22 pm
