# Quantum Bayesian Networks

## March 9, 2019

### Current Plans for Qubiter and when is walking backwards a good thing to do?

Filed under: Uncategorized — rrtucci @ 6:21 pm

This post is to keep Qubiter fans abreast of my current plans for it.

As I mentioned in a previous post entitled “Welcome to the Era of TensorFlow and PyTorch Enabled Quantum Computer Simulators”, I have recently become a big fan of the program PennyLane and its main architect, Josh Izaac.

Did you know PennyLane has a Discourse? https://discuss.pennylane.ai/ I love Discourse forums. Lots of other great software projects (PyMC, for instance) have Discourse forums too.

I am currently working hard to PennyLanate my Qubiter. In other words, I am trying to make Qubiter do what PennyLane already does, to wit: (1) establish a feedback loop between my classical computer and the quantum computer cloud service of Rigetti, and (2) when it’s my computer’s turn to act in the feedback loop, make it do minimization using the method of back-propagation. A glib way of describing this process is: a feedback loop which does forward-propagation in the Rigetti qc, followed by backward-propagation in my computer, followed by forward-propagation in the Rigetti qc, and on and on, ad nauseam.
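The shape of such a feedback loop can be sketched in a few lines. This is a minimal illustration, not Qubiter’s actual code: `run_circuit` is a hypothetical stand-in that classically simulates the expectation value the quantum computer would return, and the classical minimizer here is scipy’s, not back-propagation.

```python
import numpy as np
from scipy.optimize import minimize

def run_circuit(thetas):
    # Hypothetical stand-in for a call to the quantum computer cloud
    # service: classically computes the expectation <Z> = cos(theta)
    # of an R_y(theta) rotation applied to |0>.
    return np.cos(thetas[0])

# The feedback loop: the classical minimizer proposes new gate
# parameters, the (here, simulated) quantum side returns the measured
# cost, and the loop repeats until the cost converges.
result = minimize(run_circuit, x0=[0.1], method="Nelder-Mead")
print(result.x, result.fun)  # theta near pi, cost near -1
```
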

I am proud to report that Qubiter’s implementation of (1) is pretty much finished. The deed is done. See my Qubiter module https://github.com/artiste-qb-net/qubiter/blob/master/adv_applications/MeanHamil_rigetti.py This code has not been tested on the Rigetti cloud, so it is most probably still buggy and will change a lot, but I think it is close to working.

To do (1), I am imitating the wonderful PennyLane Rigetti plugin, available on GitHub. I have even filed an issue at that GitHub repo:
https://github.com/rigetti/pennylane-forest/issues/7

So far, Qubiter does not do minimization by back-propagation, which is goal (2). Instead, it does minimization using the scipy function scipy.optimize.minimize(). My future plans are to replace this scipy function with back-propagation. Remember why we ultimately want back-propagation: of all the gradient-based minimization methods (another one is conjugate gradient), backprop is the easiest to do in a distributed fashion, which takes advantage of GPUs, TPUs, etc. The first step, the bicycle-training-wheels step, towards making Qubiter do (2) is to use the wonderful software Autograd. https://github.com/HIPS/autograd Autograd replaces each numpy function with an autograd evil twin, or doppelganger, so that derivatives can be computed through ordinary numpy code. After I teach Qubiter to harness the power of Autograd to do back-prop, I will replace Autograd with the more powerful tools TensorFlow and PyTorch. (These also replace each numpy function with an evil twin in order to do minimization by back-propagation. They also do many other things.)

In doing back-propagation in a quantum circuit, one has to calculate the derivatives of the quantum gates. Luckily, these are mostly one-qubit gates, so they are 2-dimensional unitaries that can be parametrized as

$U=e^{i\theta_0}e^{i\sigma_k\theta_k}$

where k ranges over $1, 2, 3$ and we are using the Einstein summation convention. $\theta_0, \theta_1, \theta_2, \theta_3$ are all real, and $\sigma_k$ are the Pauli matrices. As the PennyLane authors have pointed out, the derivative of $U$ can be calculated exactly. The derivative of $U$ with respect to $\theta_0$ is obvious, so let us concentrate on the derivatives with respect to the $\theta_k$.
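As a sanity check of this parametrization, here is a short numpy sketch (my own illustration, not Qubiter code) that builds $U$ from the four angles and verifies it is unitary:

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def u_of_thetas(t0, tk):
    # U = e^{i t0} e^{i sigma_k theta_k}
    #   = e^{i t0} (cos(theta) I + i sin(theta) sigma.n),
    # where theta = |tk| and n = tk / theta.
    theta = np.sqrt(np.dot(tk, tk))
    sig_dot_n = sum(tk[k] / theta * sig[k] for k in range(3))
    return np.exp(1j * t0) * (np.cos(theta) * np.eye(2)
                              + 1j * np.sin(theta) * sig_dot_n)

U = u_of_thetas(0.3, np.array([0.1, 0.2, 0.4]))
print(np.allclose(U @ U.conj().T, np.eye(2)))  # True: U is unitary
```
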

Let
$U = e^{i\sigma_3\theta_3} = C + i\sigma_3 S$
where
$S = \sin\theta_3, C = \cos \theta_3$.
Then
$\frac{dU}{dt} = \dot{\theta}_3(-S + i\sigma_3 C)$.

More generally, let
$U = e^{i\sigma_k\theta_k} = C + i\sigma_k \frac{\theta_k}{\theta} S$
where
$\theta = \sqrt{\theta_k\theta_k}, S = \sin\theta, C = \cos \theta$.
Then, if I’ve done my algebra correctly,

$\frac{dU}{dt}=-S \frac{\theta_k}{\theta} \dot{\theta}_k+ i\sigma_k\dot{\theta}_r \left[\frac{\theta_k\theta_r}{\theta^2} C+ \frac{S}{\theta}\left(-\frac{\theta_k\theta_r}{\theta^2} + \delta_{k, r}\right)\right]$
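Since I just hedged about the algebra, here is a quick numerical sanity check of the formula (my own sketch, with arbitrary values for $\theta_k$ and $\dot{\theta}_k$) against a finite-difference derivative:

```python
import numpy as np

# Pauli matrices sigma_1, sigma_2, sigma_3
sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def U(tk):
    # U = e^{i sigma_k theta_k} = C + i sigma_k (theta_k / theta) S
    theta = np.sqrt(np.dot(tk, tk))
    sn = sum(tk[k] / theta * sig[k] for k in range(3))
    return np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sn

def dU_dt(tk, tk_dot):
    # The formula above, with k and r summed from 1 to 3.
    theta = np.sqrt(np.dot(tk, tk))
    S, C = np.sin(theta), np.cos(theta)
    out = -S / theta * np.dot(tk, tk_dot) * np.eye(2, dtype=complex)
    for k in range(3):
        for r in range(3):
            nkr = tk[k] * tk[r] / theta**2
            out += 1j * sig[k] * tk_dot[r] * (nkr * C
                                              + S / theta * ((k == r) - nkr))
    return out

# Compare with a central finite difference along the direction tk_dot.
tk = np.array([0.3, -0.5, 0.7])
tk_dot = np.array([0.2, 0.1, -0.4])
eps = 1e-6
fd = (U(tk + eps * tk_dot) - U(tk - eps * tk_dot)) / (2 * eps)
print(np.allclose(dU_dt(tk, tk_dot), fd, atol=1e-6))  # True
```
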

I end this post by answering the simple riddle which I posed in the title of this post. The rise of Trump was definitely a step backwards for humanity, but there are lots of times when stepping backwards is a good thing to do. Minimization by back-propagation is a powerful tool, and it can be described as walking backwards. Also, when one gets lost in a forest or in a city and GPS is not available, I have found that a good strategy for coping with this mishap is, as soon as I notice that I am lost, to backtrack: to return to the place where I think I first made a mistake. Finally, let me include in this brief list the ancient Chinese practice of back-walking. Lots of Chinese still do back-walking in public gardens today, just as they do Tai Chi. Both are healthy, low-impact exercises that are especially popular with the elderly. Back-walking is thought to promote muscular fitness, because one uses muscles that are not used when walking forwards. Back-walking is also thought to promote mental agility, because you have to think a little harder to do it than when walking forwards. (Just as counting backwards is a good test for sobriety and for detecting advanced Alzheimer’s.)

## 1 Comment »

1. Hmm, when the breakthrough gets tough, the tough gets…noisy!
https://sociable.co/technology/darpa-exploit-quantum-computing-without-quantum-computer/

Comment by technofeudalism — March 20, 2019 @ 5:32 pm
