Quantum Bayesian Networks

March 30, 2019

Second New Idea For Doing Back-Propagation On a Quantum Computer

Filed under: Uncategorized — rrtucci @ 4:59 pm

Last weekend, in a blog post entitled “New Idea For Doing Back-Propagation On a Quantum Computer”, I announced that I had uploaded to the Qubiter repo an essay entitled

“Calculating the Gradient of a Cost Function for a Parametric Quantum Circuit in FIVE EASY PIECES”

This weekend, I decided to add to that essay a new, more technical, 5-page appendix that fills in some of the gaps left by the main part of the essay. So even if you perused the essay before today, you might be interested in re-opening it to take a peek at the new addition.

What I do in the new appendix is show how to express the gradient of the cost function as a sum of mean values that are readily evaluated empirically on a real qc. So far, the PennyLane software only considers gradients of cost functions in which the parameters being differentiated occur solely in one-qubit gates with *no* controls. In this new appendix, I show how to deal with the case where the parameters being differentiated also occur in gates with any number (0, 1, 2, …) of controls.
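To give a taste of what "gradient as a sum of mean values" means, here is a toy numpy sketch of mine (not the essay's construction, and covering only the simplest no-controls case): the standard parameter-shift rule for a one-qubit RY rotation, where the derivative of a mean value is itself a difference of two mean values.

```python
# Toy numpy sketch: the gradient of <Z> for |psi> = RY(theta)|0> equals a
# difference of two mean values (the standard parameter-shift rule). This
# covers only the simple no-controls case, not the appendix's construction.
import numpy as np

def mean_z(theta):
    # <psi(theta)| sigma_Z |psi(theta)>, where |psi(theta)> = RY(theta)|0>
    psi = np.array([np.cos(theta / 2), np.sin(theta / 2)])
    sigma_z = np.array([[1., 0.], [0., -1.]])
    return psi @ sigma_z @ psi

theta = 0.7
# the gradient is itself a sum (here, a difference) of mean values, each
# one estimable empirically on a real quantum computer
grad = 0.5 * (mean_z(theta + np.pi / 2) - mean_z(theta - np.pi / 2))
print(grad, -np.sin(theta))  # the two agree, since <Z> = cos(theta)
```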

The Yellow Brick Road follows the gradient of a quantum cost function. God’s truth. To do back-propagation, “close your eyes and tap your heels together three times. And think to yourself, there’s no place like home.”

March 26, 2019

Explanation of Automatic Differentiation (from paper by Hai-Jun Liao, Jin-Guo Liu, Lei Wang, Tao Xiang)

Filed under: Uncategorized — rrtucci @ 4:22 am

In my latest blog posts, I’ve been advocating for the use of back-propagation in quantum computing. Today, by coincidence, I came across this very impressive paper and software applying back-propagation to tensor networks in physics.

https://arxiv.org/abs/1903.09650
“Differentiable Programming Tensor Networks”,
by Hai-Jun Liao, Jin-Guo Liu, Lei Wang, Tao Xiang

The authors are mostly from CAS (the Chinese Academy of Sciences) in Beijing. Work like this convinces me that China is producing top-quality research in AI and physics. I am very happy that Dr. Tao Yin, one of the cofounders of our startup, artiste-qb.net, is Chinese, currently living in Shenzhen. The paper in question has a very nice section explaining automatic differentiation. Blogs make nice scrapbooks, so I copied that section and present it below.

[Excerpt from the paper’s section explaining automatic differentiation, reproduced as images in the original post.]
March 25, 2019

New Idea For Doing Back-Propagation On a Quantum Computer

Filed under: Uncategorized — rrtucci @ 2:13 am

This weekend, I wrote a short article entitled “Calculating the Gradient of a Cost Function for a Parametric Quantum Circuit in FIVE EASY PIECES” and filed it in Qubiter’s repo. The article introduces some new ideas that I think might be very useful to the Hybrid Quantum Classical Programme. (Notice the British/French spelling of the last word, for added allure.) Check it out! The 1970s movie “Five Easy Pieces”, featuring the scary Jack Nicholson, has some cosmic connection to my article, I think.

March 22, 2019

Life in the time of the Bayesian Wars: Qubiter can now do Back-Propagation via Autograd

Filed under: Uncategorized — rrtucci @ 12:57 am

The main purpose of this blog post is to announce that the quantum simulator that I manage, Qubiter (https://github.com/artiste-qb-net/qubiter), can now do minimizations of quantum expected values using Back-Propagation. These minimizations are a staple of Rigetti’s cloud-based service (https://rigetti.com/qcs), to which I am proud and grateful to have been granted access. Woohoo!

Technically, what Rigetti Cloud offers that is relevant to this blog post is

Hybrid Quantum Classical NISQ computing to implement the VQE (Variational Quantum Eigensolver) algorithms.

Phew, quite the mouthful!

The back-prop in Qubiter is currently done automagically by the awesome software Autograd (https://github.com/HIPS/autograd), which is a simpler version of, and one of the primary original inspirations for, PyTorch. In fact, PyTorch contains its own version of Autograd, called, somewhat confusingly, PyTorch Autograd, as opposed to the earlier HIPS Autograd that Qubiter currently uses.
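For readers who haven’t seen HIPS Autograd in action, here is a minimal sketch of its style; the cost function below is a toy stand-in that I made up, not Qubiter’s actual quantum mean value.

```python
# Minimal sketch of HIPS Autograd (https://github.com/HIPS/autograd), the
# library Qubiter currently uses for back-prop. The cost is a toy stand-in
# for <psi(thetas)| H |psi(thetas)>.
import autograd.numpy as np  # drop-in numpy replacement that records the DAG
from autograd import grad

def cost(thetas):
    # toy differentiable cost function
    return np.sum(np.sin(thetas) ** 2)

dcost = grad(cost)  # back-propagation; the DAG is built as the code runs

thetas = np.array([0.3, 1.2])
for _ in range(100):
    thetas = thetas - 0.1 * dcost(thetas)  # plain gradient descent
print(thetas, cost(thetas))  # thetas near a minimum, cost near 0
```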

I also plan, in the very near future, to retrofit Qubiter so that it can do back-prop using PyTorch and TensorFlow as well. This would enable Qubiter to do something that Autograd can’t, namely back-prop with the aid of distributed computing on GPUs and TPUs. I consider enabling Qubiter to do back-prop via Autograd to be a very instructive intermediate step, the bicycle-training-wheels step, towards enabling Qubiter to do distributed back-prop via PyTorch and TensorFlow.

AI/neural-network software libs that do back-propagation are often divided into the build-then-run and the build-as-run types. (What is being built is a DAG, i.e., a directed acyclic graph.) Autograd, which was started before PyTorch, is of the build-as-run type. PyTorch (managed by Facebook, https://en.wikipedia.org/wiki/PyTorch) has always been of the b-as-run type too. TensorFlow (managed by Google, https://en.wikipedia.org/wiki/TensorFlow) was originally of the b-then-run type, but about 1.5 years ago, Google realized that a lot of people preferred b-as-run to b-then-run, so Google added to TensorFlow a b-as-run version called Eager TensorFlow. So now TensorFlow can do both types.
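Here is a schematic illustration of the two styles, written against the TensorFlow 1.x API that was current when this post was written (tf.placeholder and tf.Session exist in TF 1.x but were later removed in TF 2).

```python
# build-then-run, TensorFlow 1.x style: construct the whole DAG first...
import tensorflow as tf

x = tf.placeholder(tf.float32)  # a node in the DAG, no value yet
y = x * x                       # another node; nothing is computed here
with tf.Session() as sess:      # ...then run the finished DAG
    print(sess.run(y, feed_dict={x: 3.0}))  # 9.0

# build-as-run (the Autograd / PyTorch / Eager style): the DAG is recorded
# behind the scenes as ordinary Python code executes. In TF 1.x this mode
# is switched on with tf.enable_eager_execution() at program start, after
# which tf.constant(3.0) * tf.constant(3.0) evaluates immediately to 9.0.
```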

PyTorch and TensorFlow also compete in an area that is near and dear to my heart, bayesian networks. The original PyTorch and TensorFlow both created a DAG whose nodes were only deterministic. This is a special case of bayesian networks: in bnets, the nodes are in general probabilistic, and a deterministic node is just a special case of a probabilistic one. But in recent times, the PyTorch people have also added probabilistic nodes to their DAGs, via an enhancement called Pyro (Pyro is managed mainly by Uber). The TensorFlow people have followed suit by adding probabilistic nodes to their DAGs too, via an enhancement originally called Edward, but now rechristened TensorFlow Probability. (Edward was originally written by Dustin Tran for his PhD at Columbia University. He now works for Google.) And, of course, quantum mechanics and quantum computing are all about probabilistic nodes. To paraphrase Richard Feynman, Nature isn’t classical (i.e., based on deterministic nodes), dammit!

In a nutshell, the Bayesian Wars are intensifying.

It’s easy to understand the build-as-run and build-then-run distinction in bnet language. The build-then-run idea is for the user to build a bnet first, then run it to make inferences from it. That is the approach used by my software Quantum Fog. The build-as-run approach is quite different and marvelous in its own right. It builds a bnet automagically, behind the scenes, based on the Python code for a target function with certain inputs and outputs. This behind the scenes bnet is quite fluid. Every time you change the Python code for the target function, the bnet might change.

I believe that quantum simulators that are autograd-pytorch-tensorflow-etc. enabled are the future of the quantum simulator field. As documented in my previous blog posts, I got the idea to do this for Qubiter from the Xanadu Inc. software PennyLane, whose main architect is Josh Izaac. So team Qubiter is not the first to do this. But we are the second, which is good enough for me. WooHoo!

PennyLane is already autograd, pytorch and tensorflow enabled, all 3. So far, Qubiter is only autograd enabled. And PennyLane can combine classical, Continuous Variable and gate model quantum nodes. It’s quite general! Qubiter handles only gate model quantum nodes. But Qubiter has many features, especially ones related to the gate model, that PennyLane lags behind in. Check us both out!

In Qubiter’s jupyter-notebooks folder at:

https://github.com/artiste-qb-net/qubiter/tree/master/jupyter-notebooks

all the notebooks whose names start with the string “MeanHamilMinimizer” are related to Qubiter back-prop.

March 9, 2019

Current Plans for Qubiter and when is walking backwards a good thing to do?

Filed under: Uncategorized — rrtucci @ 6:21 pm

This post is to keep Qubiter fans abreast of my current plans for it.

As I mentioned in a previous post entitled “Welcome to the Era of TensorFlow and PyTorch Enabled Quantum Computer Simulators”, I have recently become a big fan of the program PennyLane and its main architect, Josh Izaac.

Did you know PennyLane has a Discourse forum? https://discuss.pennylane.ai/ I love Discourse forums. Lots of other great software projects (PyMC, for instance) have Discourses too.

I am currently working hard to PennyLanate my Qubiter. In other words, I am trying to make Qubiter do what PennyLane already does, to wit: (1) establish a feedback loop between my classical computer and the quantum computer cloud service of Rigetti; (2) when it’s my computer’s turn to act in the feedback loop, make it do minimization using the method of back-propagation. A glib way of describing this process is: a feedback loop which does forward-propagation in the Rigetti qc, followed by backwards-propagation in my computer, followed by forward-propagation in the Rigetti qc, and on and on, ad nauseam.
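Schematically, the loop looks like the toy sketch below. Both helper functions are made-up stand-ins for illustration; the real loop would call the Rigetti cloud inside the first one and a gradient-based minimizer inside the second.

```python
# Toy sketch of the hybrid quantum-classical feedback loop. Both helper
# functions are made-up stand-ins, not Qubiter's or Rigetti's actual API.
import numpy as np

def mean_value_from_qc(thetas):
    # stand-in for forward-propagation on the quantum computer: run the
    # parametric circuit in the cloud, estimate <psi(thetas)|H|psi(thetas)>
    return np.sum(np.cos(thetas))

def gradient_estimate(thetas):
    # stand-in for the classical backwards-propagation step; against a
    # real device one would use, e.g., parameter-shift mean values
    return -np.sin(thetas)

thetas = np.array([0.3, 1.2, 2.0])
for _ in range(100):
    cost = mean_value_from_qc(thetas)                  # forward pass on the qc
    thetas = thetas - 0.1 * gradient_estimate(thetas)  # backward pass here
print(thetas, mean_value_from_qc(thetas))  # thetas near pi, cost near -3
```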

I am proud to report that Qubiter’s implementation of (1) is pretty much finished. The deed is done. See my Qubiter module https://github.com/artiste-qb-net/qubiter/blob/master/adv_applications/MeanHamil_rigetti.py This code has not been tested on the Rigetti cloud, so it is most probably still buggy and will change a lot, but I think it is close to working.

To do (1), I am imitating the wonderful PennyLane Rigetti plugin, available on GitHub. I have even filed an issue at that GitHub repo:
https://github.com/rigetti/pennylane-forest/issues/7

So far, Qubiter does not do minimization by back-propagation, which is goal (2). Instead, it does minimization using the scipy function scipy.optimize.minimize(). My future plans are to replace this scipy function by back-propagation. Remember why we ultimately want back-propagation: of all the gradient-based minimization methods (another one is conjugate gradient), backprop is the easiest to do in a distributed fashion, which takes advantage of GPUs, TPUs, etc. The first step, the bicycle-training-wheels step, towards making Qubiter do (2) is to use the wonderful software Autograd (https://github.com/HIPS/autograd). Autograd replaces each numpy function by an autograd evil twin or doppelganger. After I teach Qubiter to harness the power of Autograd to do back-prop, I will replace Autograd by the more powerful tools TensorFlow and PyTorch. (These also replace each numpy function by an evil twin in order to do minimization by back-propagation; they also do many other things.)
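For concreteness, here is the shape of the current scipy-based approach, with a toy cost that I made up standing in for the quantum mean value:

```python
# Minimal sketch of the current (pre-back-prop) approach: hand the mean
# value to scipy.optimize.minimize. The cost below is a toy stand-in for
# <psi(thetas)| H |psi(thetas)>.
import numpy as np
from scipy.optimize import minimize

def cost(thetas):
    return np.sum(np.cos(thetas))  # toy stand-in

# a gradient-free method; back-prop would replace this with gradient steps
result = minimize(cost, x0=np.array([0.3, 1.2]), method='Nelder-Mead')
print(result.x, result.fun)  # near [pi, pi], cost near -2
```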

In doing back-propagation in a quantum circuit, one has to calculate the derivatives of quantum gates. Luckily, these are mostly one-qubit gates, i.e., 2-dim unitaries, which can be parametrized as

U=e^{i\theta_0}e^{i\sigma_k\theta_k}

where k ranges over 1, 2, 3 and we are using the Einstein summation convention. \theta_0, \theta_1, \theta_2, \theta_3 are all real, and the \sigma_k are the Pauli matrices. As the PennyLane authors have pointed out, the derivative of U can be calculated exactly. The derivative of U with respect to \theta_0 is obvious, so let us concentrate on the derivatives with respect to the \theta_k.

Suppose the \theta_k depend on a parameter t. Consider first the single-axis case, and let
U = e^{i\sigma_3\theta_3} = C + i\sigma_3 S
where
S = \sin\theta_3, C = \cos\theta_3.
Then
\frac{dU}{dt} = \dot{\theta}_3(-S + i\sigma_3 C)

More generally, let
U = e^{i\sigma_k\theta_k} = C + i\sigma_k \frac{\theta_k}{\theta} S
where
\theta = \sqrt{\theta_k\theta_k}, S = \sin\theta, C = \cos\theta.
Then, if I've done my algebra correctly,

\frac{dU}{dt} = -S\frac{\theta_k}{\theta}\dot{\theta}_k + i\sigma_k\dot{\theta}_r\left[\frac{\theta_k\theta_r}{\theta^2}C + \frac{S}{\theta}\left(\delta_{k,r} - \frac{\theta_k\theta_r}{\theta^2}\right)\right]
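Since it is easy to get index gymnastics like this wrong, here is a quick numerical spot-check of the formula (my own check, under the reading above that the \theta_k depend on t and repeated indices are summed):

```python
# Numerical spot-check of the dU/dt formula above, with theta_k(t) linear
# in t and the derivative compared against finite differences.
import numpy as np
from scipy.linalg import expm

sig = [np.array([[0, 1], [1, 0]], dtype=complex),    # sigma_1
       np.array([[0, -1j], [1j, 0]]),                # sigma_2
       np.array([[1, 0], [0, -1]], dtype=complex)]   # sigma_3

a = np.array([0.3, -0.7, 0.5])  # theta_k(t) = a_k + b_k * t
b = np.array([0.2, 0.9, -0.4])

def U(t):
    th = a + b * t
    return expm(1j * sum(th[k] * sig[k] for k in range(3)))

t = 0.0
th, thdot = a + b * t, b
theta = np.sqrt(th @ th)
S, C = np.sin(theta), np.cos(theta)

# dU/dt from the formula above
dU = (-S / theta) * (th @ thdot) * np.eye(2) + sum(
    1j * sig[k] * thdot[r] * ((th[k] * th[r] / theta**2) * C
                              + (S / theta) * ((k == r) - th[k] * th[r] / theta**2))
    for k in range(3) for r in range(3))

# dU/dt by symmetric finite differences
eps = 1e-6
dU_fd = (U(t + eps) - U(t - eps)) / (2 * eps)
print(np.max(np.abs(dU - dU_fd)))  # tiny, so the algebra checks out
```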

I end this post by answering the simple riddle which I posed in its title. The rise of Trump was definitely a step backwards for humanity, but there are lots of times when stepping backwards is a good thing to do. Minimization by back-propagation is a powerful tool, and it can be described as walking backwards. Also, when one gets lost in a forest or in a city and GPS is not available, I have found that a good strategy for coping with this mishap is, as soon as I notice that I am lost, to backtrack and return to the place where I think I first made a mistake. Finally, let me include in this brief list the ancient Chinese practice of back-walking. Lots of Chinese still do back-walking in public gardens today, just like they do Tai Chi. Both are healthy, low-impact exercises that are especially popular with the elderly. Back-walking is thought to promote muscular fitness, because one uses muscles that are not used when walking forwards. It is also thought to promote mental agility, because you have to think a little bit harder than when walking forwards. (Just like counting backwards is a good test for sobriety and for detecting advanced Alzheimer’s.)

March 5, 2019

The iSWAP, sqrt(iSWAP) and other up-and-coming quantum gates

Filed under: Uncategorized — rrtucci @ 8:28 am

The writing is on the wall. The engineers behind the quantum computers at Google, Rigetti, IonQ, etc. are using, more and more, certain variants of the simple SWAP gate, variants that are more natural for their devices than the SWAP itself, variants with exotic, tantalizing names like the iSWAP and sqrt(iSWAP). In the last day or two, I decided to bring Qubiter up to date by adding to its arsenal of gates a gate that I call the SWAY. SWAY is very general. It includes the humble SWAP and all its other variants too. So, what is this SWAY, you ask?

Let \sigma_X, \sigma_Y, \sigma_Z be the Pauli Matrices.

Recall that the swap of two qubits 0, 1, call it SWAP(1, 0), is defined by

SWAP = diag(1, \sigma_X, 1)

NOTE: SWAP is qbit symmetric, meaning that SWAP(0,1) = SWAP(1,0)

We define SWAY by

SWAY = diag(1, U2, 1)

where U2 is the most general 2-dim unitary matrix satisfying \sigma_X U2 \sigma_X=U2. If U2 is parametrized as

U2 = \exp(i[ \theta_0 + \theta_1\sigma_X + \theta_2\sigma_Y + \theta_3\sigma_Z])

for real \theta_j, then

\theta_2=\theta_3=0.

NOTE: SWAY is qbit symmetric (SWAY(0,1) = SWAY(1,0)) iff \sigma_X U2 \sigma_X = U2 iff \theta_2 = \theta_3 = 0.
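Here is a small numpy check of the qbit-symmetry claim (my own schematic sketch, not Qubiter's actual implementation):

```python
# Schematic check: with theta_2 = theta_3 = 0, SWAY = diag(1, U2, 1) is
# invariant under conjugation by SWAP, i.e. SWAY(0,1) = SWAY(1,0).
import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
th0, th1 = 0.4, 1.1                              # theta_2 = theta_3 = 0
U2 = expm(1j * (th0 * np.eye(2) + th1 * sigma_x))
assert np.allclose(sigma_x @ U2 @ sigma_x, U2)   # U2 commutes with sigma_X

SWAY = np.eye(4, dtype=complex)
SWAY[1:3, 1:3] = U2                              # SWAY = diag(1, U2, 1)

SWAP = np.eye(4)[[0, 2, 1, 3]]                   # SWAP = diag(1, sigma_X, 1)
# exchanging the two qubit labels conjugates the gate by SWAP; invariance
# under this conjugation is exactly qbit symmetry
assert np.allclose(SWAP @ SWAY @ SWAP, SWAY)
print("SWAY(0,1) = SWAY(1,0)")
```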

The Qubiter simulator can now handle a SWAY with zero or any number of controls of type T or F. Very cool, don’t you think?

Here is a Jupyter notebook that I wrote to test Qubiter’s SWAY implementation:

https://github.com/artiste-qb-net/qubiter/blob/master/jupyter-notebooks/unusual_gates_like_generalized_swap.ipynb
