Quantum Bayesian Networks

March 22, 2019

Life in the time of the Bayesian Wars: Qubiter can now do Back-Propagation via Autograd

Filed under: Uncategorized — rrtucci @ 12:57 am

The main purpose of this blog post is to announce that the quantum simulator that I manage, Qubiter (https://github.com/artiste-qb-net/qubiter), can now do minimizations of quantum expected values using back-propagation. These minimizations are a staple of Rigetti’s cloud-based service (https://rigetti.com/qcs), to which I am proud and grateful to have been granted access. Woohoo!

Technically, what Rigetti Cloud offers that is relevant to this blog post is

Hybrid quantum-classical NISQ computing to implement VQE (Variational Quantum Eigensolver) algorithms.

Phew, quite the mouthful!

The back-prop in Qubiter is currently done automagically by the awesome software Autograd (https://github.com/HIPS/autograd), which is a simpler version of, and one of the primary original inspirations for, PyTorch. In fact, PyTorch contains its own version of Autograd, called, somewhat confusingly, PyTorch Autograd, as opposed to the earlier HIPS Autograd that Qubiter currently uses.
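To give a flavor of what this looks like, here is a minimal sketch, not Qubiter’s actual API, of the kind of minimization Autograd makes possible: it differentiates an ordinary Python function that returns a mean value, and plain gradient descent drives that value down. The function mean_value below is a hypothetical classical stand-in for a quantum expected value.

    import autograd.numpy as np
    from autograd import grad

    def mean_value(thetas):
        # Hypothetical stand-in for a quantum expected value <psi(thetas)|H|psi(thetas)>;
        # here it is just a smooth classical function of the gate parameters.
        return np.sum(np.cos(thetas))

    grad_fn = grad(mean_value)          # Autograd traces the DAG as the function runs
    thetas = np.array([0.3, 1.2, 2.0])  # initial gate parameters
    for step in range(100):
        thetas = thetas - 0.1 * grad_fn(thetas)  # vanilla gradient descent

    print(mean_value(thetas))  # approaches the minimum, -3.0, as each angle drifts toward pi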

I also plan, in the very near future, to retrofit Qubiter so that it can also do back-prop using PyTorch and TensorFlow. This would enable Qubiter to do something that Autograd can’t do, namely back-prop with the aid of distributed computing on GPUs and TPUs. I consider enabling Qubiter to do back-prop via Autograd to be a very instructive intermediate step, the bicycle-training-wheels step, towards enabling Qubiter to do distributed back-prop via PyTorch and TensorFlow.

AI/neural-network software libs that do back-propagation are often divided into the build-then-run and the build-as-run types. (What is being built is a DAG, an acronym that stands for directed acyclic graph.) Autograd, which was started before PyTorch, is of the build-as-run type. PyTorch (managed by Facebook & Uber, https://en.wikipedia.org/wiki/PyTorch) has always been of the build-as-run type too. TensorFlow (managed by Google, https://en.wikipedia.org/wiki/TensorFlow) was originally of the build-then-run type, but about 1.5 years ago, Google realized that a lot of people preferred build-as-run to build-then-run, so Google added to TensorFlow a build-as-run version called Eager TensorFlow. So now TensorFlow can do both types.

PyTorch and TensorFlow also compete in an area that is near and dear to my heart, Bayesian networks. The original PyTorch and TensorFlow both created DAGs whose nodes were only deterministic. This is a special case of a Bayesian network: in bnets, the nodes are in general probabilistic, and a deterministic node is just a special case of a probabilistic one. But recently the PyTorch people have also added probabilistic nodes to their DAGs, via an enhancement called Pyro (Pyro is managed mainly by Uber). The TensorFlow people have followed suit by adding probabilistic nodes to their DAGs too, via an enhancement originally called Edward, but now rechristened TensorFlow Probability (Edward was originally written by Dustin Tran for his PhD at Columbia University; he now works for Google). And, of course, quantum mechanics and quantum computing are all about probabilistic nodes. To paraphrase Richard Feynman, Nature isn’t classical (i.e., based on deterministic nodes), dammit!
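For concreteness, here is a minimal sketch, assuming nothing beyond Pyro’s basic sample/distributions API, of a tiny two-node bnet in which both nodes are probabilistic rather than deterministic (tiny_bnet is just a made-up name for this illustration):

    import pyro
    import pyro.distributions as dist

    def tiny_bnet():
        # Parent node: a probabilistic (Bernoulli) node, i.e., a coin flip
        z = pyro.sample("z", dist.Bernoulli(0.5))
        # Child node: its distribution depends on the value sampled at the parent
        x = pyro.sample("x", dist.Normal(z, 1.0))
        return x

    print(tiny_bnet())  # each call draws one sample from the joint distribution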

In a nutshell, the Bayesian Wars are intensifying.

It’s easy to understand the build-as-run versus build-then-run distinction in bnet language. The build-then-run idea is for the user to build a bnet first, then run it to make inferences from it. That is the approach used by my software Quantum Fog. The build-as-run approach is quite different and marvelous in its own right. It builds a bnet automagically, behind the scenes, based on the Python code for a target function with certain inputs and outputs. This behind-the-scenes bnet is quite fluid: every time you change the Python code for the target function, the bnet might change.
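Here is a minimal sketch of that fluidity, again using HIPS Autograd rather than Qubiter itself (the target function below is made up for illustration): the DAG that back-prop differentiates is traced anew on every call, so ordinary Python control flow in the target function can change the graph from one call to the next.

    import autograd.numpy as np
    from autograd import grad

    def target(x):
        # The branch taken decides which DAG gets built on this particular call
        if x > 1.0:
            return np.sin(x) * x
        return x ** 2

    dtarget = grad(target)
    print(dtarget(0.5))  # differentiates the x**2 branch: 2*x = 1.0
    print(dtarget(2.0))  # differentiates the sin(x)*x branch: cos(x)*x + sin(x)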

I believe that quantum simulators that are autograd-pytorch-tensorflow-etc enabled are the future of the quantum simulator field. As documented in my previous blog posts, I got the idea to do this for Qubiter from the Xanadu Inc. software PennyLane, whose main architect is Josh Izaac. So team Qubiter is not the first to do this. But we are the second, which is good enough for me. WooHoo!

PennyLane is already autograd-pytorch-tensorflow enabled, all 3. So far, Qubiter is only autograd enabled. And PennyLane can combine classical, continuous-variable and gate-model quantum nodes. It’s quite general! Qubiter is only for gate-model quantum nodes. But Qubiter has many features, especially ones related to the gate model, in which PennyLane lags behind. Check us both out!

In Qubiter’s jupyter-notebooks folder at:

https://github.com/artiste-qb-net/qubiter/tree/master/jupyter-notebooks

all the notebooks whose names start with the string “MeanHamilMinimizer” are related to Qubiter back-prop.

1 Comment

  1. I didn’t mention in the main article something that is kind of important. I held back because I didn’t want to make an already long article longer. What I didn’t mention is this:

    It is known that the computational complexities of forward propagation and backward propagation are about the same. So just as simulating (forward propagating) a quantum circuit blows up exponentially with the number of qubits, so does back-prop. So why do it? For the same reasons we do simulations: to learn new ideas and test things. For me, a bnet devotee, the fact that a bnet is built behind the scenes during back-prop is a siren call. Imagine all the things I can learn about bnets by studying the ones generated by back-prop! Lessons that might find applicability in the quantum Bayesian nets of the Quantum Fog project.
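    To see the blow-up concretely, here is a quick back-of-the-envelope sketch (not Qubiter code, just arithmetic): an n-qubit state vector holds 2^n complex amplitudes, and the backward pass touches roughly the same amount of data as the forward pass.

        # Memory needed just to store one n-qubit state vector of complex128 amplitudes
        for n in (10, 20, 30, 40):
            amplitudes = 2 ** n
            gigabytes = amplitudes * 16 / 1e9  # 16 bytes per complex amplitude
            print(f"{n} qubits: {amplitudes:.1e} amplitudes, about {gigabytes:.1e} GB")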

    Comment by rrtucci — March 22, 2019 @ 1:34 am

