Quantum Bayesian Networks

April 30, 2019

Leaving artiste-qb.net

Filed under: Uncategorized — rrtucci @ 3:06 am

After working for 5 years at artiste-qb.net, I am leaving that company for personal reasons. I am looking for a new job. I would prefer to have a job doing what I love, quantum computing software and algorithm development, but I am open to other kinds of job offers as well. I am proud to be the almost sole developer of Qubiter and Quantum Fog, two insanely great software libraries. I believe that Qubiter, with its new addition for calculating gradients of quantum cost functions, will become a seminal work in the field. I also have an unpublished but very mature library for doing quantum entanglement calculations that is also insanely great. I write software so that it will last forever 💍💍💎💎

May 14, 2019

Quantum simulator Qubiter now has a native TensorFlow Backend

Filed under: Uncategorized — rrtucci @ 2:05 am


I am pleased to announce that my quantum simulator Qubiter (available at GitHub, BSD license) now has a native TensorFlow Backend-Simulator (see its class `SEO_simulator_tf`, the `tf` stands for TensorFlow). This complements Qubiter’s original numpy simulator (contained in its class `SEO_simulator`). A small step for Mankind, a giant leap for me! Hip Hip Hurray!

This means that Qubiter can now calculate the evolution of a state vector using a CPU, GPU or TPU. Plus, it can do back-propagation on a quantum circuit. Here is a jupyter notebook that I wrote that uses Qubiter's TF backend to do VQE (Variational Quantum Eigensolver). (I like to call VQE "mean Hamiltonian minimization".)

https://github.com/artiste-qb-net/qubiter/blob/master/jupyter-notebooks/MeanHamilMinimizer_native_with_tf.ipynb

Numpy is a tensor library in Python, and TensorFlow (produced by Google) is another. TensorFlow mirrors numpy closely: most numpy functions have a TF namesake with nearly identical functionality. But the TF namesake functions are much more powerful than their numpy counterparts. Besides running on the usual CPU, they can run on distributed computing resources like GPUs and TPUs. Furthermore, they can be asked to "tape" a history of their use, and then to replay that history in reverse, back-propagation mode, so as to calculate the derivatives of a list of user-designated variables. These derivatives can then be used to minimize a cost (aka loss or objective) function. Such minimizations are the bread and butter of classical and quantum neural nets.
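
For concreteness, here is a minimal sketch of the "taping" idea using TF 2.0's GradientTape. The toy cost function is mine, not Qubiter's:

```python
import tensorflow as tf  # TF 2.x, where Eager mode is the default

theta = tf.Variable([0.3, 0.7])  # user-designated variables to differentiate

with tf.GradientTape() as tape:
    # every tf op executed inside this block is recorded ("taped")
    cost = tf.reduce_sum(tf.sin(theta) ** 2)

# replay the tape in reverse (back-propagation) to get d(cost)/d(theta)
grads = tape.gradient(cost, theta)
print(grads.numpy())  # equals sin(2*theta), the analytic gradient
```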

These are exciting times for TF:

  • Just this March, Google officially released the alpha of TF version 2.0.

  • In TF 2.0, Eager mode (which is the mode that Qubiter's TF backend uses) has been elevated to the default. TF Eager uses dynamic graphs, just like TF's main competitor, PyTorch (produced by Facebook).

  • TF 2.0 also incorporates "Edward", a software lib by Dustin Tran for doing calculations with Bayesian networks; in its TF incarnation, Edward is called TF Probability. TF 2.0 also incorporates Keras (a library for building layered neural nets), and the next version of PyMC (a lib for doing Markov Chain Monte Carlo calculations) is being built on top of TF Probability.

  • Anyone can run TF 2.0 on the cloud via Google’s Colab. That is, in fact, how I developed and tested Qubiter’s TF backend.

In theory, all you have to do to convert a numpy simulator to a tensorflow one is to change every function that starts with the prefix `np.` to its namesake that starts with the prefix `tf.`. However, the devil is in the details. I, like most good programmers, hate repeating code. I try to avoid doing so at all costs, because I don't want to have to correct the same bug in multiple places. So I built the class `SEO_simulator_tf` by sub-classing the original numpy simulator class `SEO_simulator`. That is one of the beauties of "sub-classing" in OOP (Object Oriented Programming), of which I am a great fan: sub-classing helps you avoid repeating a lot of code. So `SEO_simulator_tf` ended up being a deceptively short and simple class: all it does, basically, is turn on a bunch of flags that it passes to its parent class `SEO_simulator`. Most of the new code is in the parent class `SEO_simulator` and in other classes like `OneBitGates`, which have been modified to respond to all those flags that `SEO_simulator_tf` sets.
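
Here is a minimal sketch of the pattern I am describing. The class and flag names below are made up for illustration; they are not Qubiter's actual ones:

```python
import numpy as np
import tensorflow as tf

class Simulator:
    """Parent class does all the real work; a flag selects the tensor library."""
    def __init__(self, use_tf=False):
        # np.matmul and tf.matmul share a name and a signature,
        # so the parent's code path can serve both backends
        self.lib = tf if use_tf else np

    def apply_gate(self, gate, state_col):
        # state_col is a column vector, so matmul works in both libraries
        return self.lib.matmul(gate, state_col)

class SimulatorTF(Simulator):
    """The child is deceptively short: all it does is turn on a flag."""
    def __init__(self):
        super().__init__(use_tf=True)

# usage: apply a Hadamard to |0>
sim = SimulatorTF()
had = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
print(sim.apply_gate(had, np.array([[1.0], [0.0]])))
```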

May 5, 2019

Quantum Matrix Rain

Filed under: Uncategorized — rrtucci @ 9:11 pm

In a previous blog post, I compared the evaluation of gradients in quantum computing via multi-threading to the "digital rain" of The Matrix movie series. This Sunday, I was bored, so I decided to make my own artistic rendition of quantum multi-threading. It turns out that writing a rudimentary animation is a trivial task, requiring almost zero javascript knowledge, if one uses the power of the browser and the HTML `marquee` tag.

http://www.ar-tiste.com/quantum_matrix_rain.html

May 1, 2019

Popular Talk on Multi-Threading, Gradients, AI, Quantum Computing

Filed under: Uncategorized — rrtucci @ 3:33 am

An old friend asked me to prepare a short talk, at the popular science level, about my latest work on multi-threading, gradients, AI, and quantum computing. Voilà:
http://www.ar-tiste.com/threaded_grads_popular_talk.pdf

April 22, 2019

Multi-Threading and Gradients of Cost Functions in Quantum Computing

Filed under: Uncategorized — rrtucci @ 5:15 pm

I am VERY pleased to announce that last night, Easter Sunday, I uploaded a major addition to the Qubiter repo at GitHub. The relevant code files all start with the word “Stairs” and are located in the adv_applications folder of Qubiter.

What does the new code do? I explain this in detail in a paper that I wrote for the occasion. The paper is entitled

Calculation of the Gradient of a Quantum Cost Function using ‘Threading’. Application of these ‘threaded gradients’ to a Quantum Neural Net inspired by Quantum Bayesian Networks

https://github.com/artiste-qb-net/qubiter/blob/master/adv_applications/threaded_grad.pdf

As usual, I included lots of docstrings explaining the code, and a main() method at the end of each class, illustrating its usage and testing it. I will also write some jupyter notebooks with examples of usage in the next week or so.

So what do I mean by threading? (I, like most people, use the words threading and multi-threading synonymously.) I believe I am one of the first persons to use the word threading in connection with quantum computing. What I mean by it is the strategy of partitioning the qubits in a (gate model) quantum computer into small, disjoint sets ("islands") that are uncorrelated with each other and run concurrently. The qubits within one of these islands are strongly correlated, but qubits from different islands are probabilistically independent. This is an ideal scenario for the NISQ (Noisy Intermediate Scale Quantum) devices and HQC (Hybrid Quantum Classical) computing being pursued by Rigetti Inc. and others. It is also a good fit for calculating the gradient of a quantum cost function: each island, after many shots and final measurements, yields a mean value, and a linear combination of the mean values from all the islands equals the gradient, as sketched below. In an artistic, poetical sense, qc threading reminds me of what is commonly called "digital rain", especially if one draws quantum circuits with time pointing downwards, like Qubiter does.
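
A schematic of the strategy (an illustration of the idea only, not the Stairs code; the island "runs" below are dummy stand-ins for shots on a real device, and the coefficients are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor
import random

def island_mean_value(island_id, num_shots=1000):
    # stand-in for running one small, uncorrelated group of qubits
    # for many shots and averaging its final measurements
    rng = random.Random(island_id)
    return sum(rng.gauss(0.0, 1.0) for _ in range(num_shots)) / num_shots

coeffs = [0.5, -1.0, 2.0]  # weights of the linear combination

# the islands are probabilistically independent, so they can run concurrently
with ThreadPoolExecutor() as pool:
    means = list(pool.map(island_mean_value, range(len(coeffs))))

gradient_component = sum(c * m for c, m in zip(coeffs, means))
```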

April 9, 2019

I’m on my way to Canaan’s Land, I did it my way

Filed under: Uncategorized — rrtucci @ 3:08 am


The grand challenge: minimizing cost functions obtained from parametrized quantum circuits. It's not obvious what the best way to do this is, but doing it efficiently is an absolute necessity for the Hybrid Quantum Classical computing Programme of Rigetti and others to succeed, I think.

After weeks of deliberation, this weekend I finally decided on my own plan to meet this challenge. I am coding it as we speak. I’m lucky that I am the main author of the quantum simulator Qubiter, because it provides me a lot of polished tools that I know inside out, and that are necessary or extremely useful to meet this challenge. It’s a big head start on others who might embark on the same quest and would try to write their own Qubiter-like tools first. I am also fortunate that my friend Dr. Tao Yin is going to help me.

My idea looks very promising to me, but as Richard Feynman once warned, in one of my favorite quotes of his, you must not fool yourself, and you are the easiest person to fool. So even though it looks great to me now, my algo might turn out to be a dud. Still, it will be a lot of fun to test its worth.

In previous blog posts, I have commented on the software PennyLane, which is attempting to meet this challenge. Tonight, during my daily visit to arXiv, I noticed two papers, one that came out today (https://arxiv.org/abs/1904.03206) and another that came out on Mar 28 (https://arxiv.org/abs/1903.12166), that attempt to meet this grand challenge too, by using a tomographic approach. PennyLane and this tomographic approach are formidable competitors to my approach, which is quite different from theirs. It will be fun to race them against each other, even if mine loses.

March 30, 2019

Second New Idea For Doing Back-Propagation On a Quantum Computer

Filed under: Uncategorized — rrtucci @ 4:59 pm

Last weekend, in a blog post entitled “New Idea For Doing Back-Propagation On a Quantum Computer”, I announced that I had uploaded to the Qubiter repo an essay entitled

“Calculating the Gradient of a Cost Function for a Parametric Quantum Circuit in FIVE EASY PIECES”

This weekend, I decided to add to that essay a new, more technical, 5-page appendix that fills some of the gaps left in the main part of the essay. So even if you perused the essay before today, you might be interested in re-opening it to take a peek at the new addition.

In the new appendix, I show how to express the gradient of the cost function as a sum of mean values that are readily evaluated empirically on a real qc. So far, the PennyLane software only considers evaluating the gradient of cost functions wherein the parameters being differentiated occur only in one-qubit gates with *no* controls. In this new appendix, I show how to deal with the case when the parameters being differentiated also occur in gates with any number (0, 1, 2, …) of controls.
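
To give the flavor of such mean-value formulas, here is the already-published, PennyLane-style parameter-shift identity for the no-controls case (this is the known recipe, not the new result of the appendix): for a gate U(\theta) = e^{-i\theta\Sigma/2} with \Sigma^2 = 1,

\frac{\partial}{\partial\theta}\langle H\rangle_\theta = \frac{1}{2}\left[\langle H\rangle_{\theta + \pi/2} - \langle H\rangle_{\theta - \pi/2}\right]

Each term on the right is a mean value that can be estimated, after many shots, on a real qc. The appendix generalizes this kind of mean-value expression to gates with controls.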

The Yellow Brick Road follows the gradient of a quantum cost function. God’s truth. To do back-propagation, “close your eyes and tap your heels together three times. And think to yourself, there’s no place like home.”

March 26, 2019

Explanation of Automatic Differentiation (from paper by Hai-Jun Liao, Jin-Guo Liu, Lei Wang, Tao Xiang)

Filed under: Uncategorized — rrtucci @ 4:22 am

In my latest blog posts, I’ve been advocating for the use of back-propagation in quantum computing. Today, by coincidence, I came across this very impressive paper and software applying back-propagation to tensor networks in physics.

https://arxiv.org/abs/1903.09650
“Differentiable Programming Tensor Networks”,
by Hai-Jun Liao, Jin-Guo Liu, Lei Wang, Tao Xiang

The authors are mostly from CAS (the Chinese Academy of Sciences) in Beijing. Nice work like this convinces me that China is producing top quality work in AI and physics. I am very happy that Dr. Tao Yin, one of the cofounders of our startup, artiste-qb.net, is Chinese, currently living in Shenzhen. The paper in question has a very nice section explaining auto differentiation. Blogs make nice scrapbooks, so I copied that section and present it below.

[Image: excerpt of the paper's section explaining automatic differentiation]

March 25, 2019

New Idea For Doing Back-Propagation On a Quantum Computer

Filed under: Uncategorized — rrtucci @ 2:13 am

This weekend, I wrote a short article entitled: “Calculating the Gradient of a Cost Function for a Parametric Quantum Circuit in FIVE EASY PIECES” and I filed it in Qubiter’s repo. The article introduces some new ideas that I think might be very useful to the Hybrid Quantum Classical Programme. (Notice British/French spelling of last word for added allure) Check it out! The 1970’s movie “Five Easy Pieces”, featuring the scary Jack Nicholson, has some cosmic connection to my article, I think.

March 22, 2019

Life in the time of the Bayesian Wars: Qubiter can now do Back-Propagation via Autograd

Filed under: Uncategorized — rrtucci @ 12:57 am

The main purpose of this blog post is to announce that the quantum simulator that I manage, Qubiter (https://github.com/artiste-qb-net/qubiter), can now do minimizations of quantum expected values using back-propagation. These minimizations are a staple of Rigetti's cloud-based service (https://rigetti.com/qcs), to which I am proud and grateful to have been granted an account. Woohoo!

Technically, what Rigetti Cloud offers that is relevant to this blog post is

Hybrid Quantum Classical NISQ computing to implement VQE (Variational Quantum Eigensolver) algorithms.

Phew, quite the mouthful!

The back-prop in Qubiter is currently done automagically by the awesome software Autograd (https://github.com/HIPS/autograd), which is a simpler version of, and one of the primary, original inspirations for, PyTorch. In fact, PyTorch contains its own version of Autograd, which is called, somewhat confusingly, PyTorch Autograd, as opposed to the earlier HIPS Autograd that Qubiter currently uses.

I also plan, in the very near future, to retrofit Qubiter so that it can also do back-prop using PyTorch and TensorFlow. This would enable Qubiter to do something that Autograd can't do, namely, back-prop with the aid of distributed computing using GPUs and TPUs. I consider enabling Qubiter to do back-prop via Autograd to be a very instructive intermediate step, the bicycle training wheels step, towards enabling Qubiter to do distributed back-prop via PyTorch and TensorFlow.

AI/Neural Network software libs that do back-propagation are often divided into the build-then-run and the build-as-run types (what is being built is a DAG, i.e., a directed acyclic graph). Autograd, which was started before PyTorch, is of the build-as-run type. PyTorch (managed by Facebook, https://en.wikipedia.org/wiki/PyTorch) has always been of the build-as-run type too. TensorFlow (managed by Google, https://en.wikipedia.org/wiki/TensorFlow) was originally of the build-then-run type, but about 1.5 years ago, Google realized that a lot of people preferred build-as-run to build-then-run, so Google added to TensorFlow a build-as-run version called Eager TensorFlow. So now TensorFlow can do both types.
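
A tiny illustration of the build-as-run style, using HIPS Autograd (the toy cost function is mine): the DAG is built while the Python executes, so even data-dependent branches are fine, and a different call can build a different DAG.

```python
import autograd.numpy as np  # numpy look-alikes that are taped as they run
from autograd import grad

def cost(x):
    # the DAG is built as this code runs, so Python control flow is allowed;
    # a different branch on a different call simply builds a different DAG
    if x > 0:
        return np.sin(x) ** 2
    return np.cosh(x)

dcost = grad(cost)
print(dcost(0.5), dcost(-0.5))  # derivatives of whichever branch actually ran
```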

PyTorch and TensorFlow also compete in an area that is near and dear to my heart: bayesian networks. The original PyTorch and TensorFlow both created a DAG whose nodes were only deterministic. This is a special case of a bayesian network: in bnets, the nodes are in general probabilistic, and a deterministic node is a special case of a probabilistic one. But recently, the PyTorch people have added probabilistic nodes to their DAGs, via an enhancement called Pyro (Pyro is managed mainly by Uber). The TensorFlow people have followed suit by adding probabilistic nodes to their DAGs too, via an enhancement originally called Edward, but now rechristened TensorFlow Probability. (Edward was originally written by Dustin Tran for his PhD at Columbia University. He now works for Google.) And, of course, quantum mechanics and quantum computing are all about probabilistic nodes. To paraphrase Richard Feynman, Nature isn't classical (i.e., based on deterministic nodes), dammit!

In a nutshell, the Bayesian Wars are intensifying.

It's easy to understand the build-as-run and build-then-run distinction in bnet language. The build-then-run idea is for the user to build a bnet first, and then run it to make inferences from it. That is the approach used by my software Quantum Fog. The build-as-run approach is quite different and marvelous in its own right. It builds a bnet automagically, behind the scenes, based on the Python code for a target function with certain inputs and outputs. This behind-the-scenes bnet is quite fluid: every time you change the Python code for the target function, the bnet might change.

I believe that quantum simulators that are autograd-pytorch-tensorflow-etc. enabled are the future of the quantum simulator field. As documented in my previous blog posts, I got the idea to do this for Qubiter from the Xanadu Inc. software PennyLane, whose main architect is Josh Izaac. So team Qubiter is not the first to do this. But we are the second, which is good enough for me. WooHoo!

PennyLane is already autograd, pytorch, and tensorflow enabled, all 3. So far, Qubiter is only autograd enabled. And PennyLane can combine classical, continuous-variable, and gate model quantum nodes. It's quite general! Qubiter is only for gate model quantum nodes. But Qubiter has many features, especially ones related to the gate model, in which PennyLane lags behind. Check us both out!

In Qubiter’s jupyter-notebooks folder at:

https://github.com/artiste-qb-net/qubiter/tree/master/jupyter-notebooks

all the notebooks whose names start with the string "MeanHamilMinimizer" are related to Qubiter back-prop.

March 9, 2019

Current Plans for Qubiter and when is walking backwards a good thing to do?

Filed under: Uncategorized — rrtucci @ 6:21 pm

This post is to keep Qubiter fans abreast of my current plans for it.

As I mentioned in a previous post entitled "Welcome to the Era of TensorFlow and PyTorch Enabled Quantum Computer Simulators", I have recently become a big fan of the program PennyLane and its main architect, Josh Izaac.

Did you know PennyLane has a Discourse forum? https://discuss.pennylane.ai/ I love Discourse forums. Lots of other great software projects (like PyMC, for instance) have Discourse forums too.

I am currently working hard to PennyLanate my Qubiter. In other words, I am trying to make Qubiter do what PennyLane already does, to wit: (1) establish a feedback loop between my classical computer and the quantum computer cloud service of Rigetti, and (2) when it's my computer's turn to act in the feedback loop, make it do minimization using the method of back-propagation. A glib way of describing this process is: a feedback loop which does forward-propagation in the Rigetti qc, followed by backward-propagation in my computer, followed by forward-propagation in the Rigetti qc, and on and on, ad nauseam.

I am proud to report that Qubiter's implementation of (1) is pretty much finished. The deed is done. See my Qubiter module https://github.com/artiste-qb-net/qubiter/blob/master/adv_applications/MeanHamil_rigetti.py This code has not been tested on the Rigetti cloud, so it is most probably still buggy and will change a lot, but I think it is close to working.

To do (1), I am imitating the wonderful PennyLane Rigetti plugin, available at GitHub. I have even filed an issue at that GitHub repo:
https://github.com/rigetti/pennylane-forest/issues/7

So far, Qubiter does not do minimization by back-propagation, which is goal (2). Instead, it does minimization using the scipy function scipy.optimize.minimize(). My future plan is to replace this scipy function by back-propagation. Remember why we ultimately want back-propagation: of all the gradient-based minimization methods (another one is conjugate gradient), backprop is the easiest to do in a distributed fashion, which takes advantage of GPUs, TPUs, etc. The first step, the bicycle training wheels step, towards making Qubiter do (2) is to use the wonderful software Autograd (https://github.com/HIPS/autograd). Autograd replaces each numpy function by an autograd evil twin or doppelganger. After I teach Qubiter to harness the power of Autograd to do back-prop, I will replace Autograd by the more powerful tools TensorFlow and PyTorch. (These also replace each numpy function by an evil twin in order to do minimization by back-propagation; they also do many other things.)
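
To make the plan concrete, here is a toy sketch of the scipy step and the Autograd step that would replace it. The cost function below is my own stand-in, not a real quantum mean value:

```python
import autograd.numpy as np  # Autograd's evil twin of numpy
from autograd import grad
from scipy.optimize import minimize

def cost(thetas):
    # toy stand-in for a quantum cost function of gate angles
    return np.sum(np.sin(thetas) ** 2) + 0.1 * np.sum(thetas ** 2)

x0 = np.array([1.0, -2.0])

# what Qubiter does today: scipy's black-box minimizer
res = minimize(cost, x0, method='Nelder-Mead')

# the back-prop replacement: Autograd supplies the exact gradient
dcost = grad(cost)
x = x0
for _ in range(200):
    x = x - 0.1 * dcost(x)  # plain gradient descent
```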

In doing back-propagation in a quantum circuit, one has to calculate the derivatives of quantum gates. Luckily, these are mostly one-qubit gates, i.e., 2-dim unitaries, which can be parametrized as

U=e^{i\theta_0}e^{i\sigma_k\theta_k}

where k ranges over 1, 2, 3 and we are using the Einstein summation convention. \theta_0, \theta_1, \theta_2, \theta_3 are all real, and the \sigma_k are the Pauli matrices. As the PennyLane authors have pointed out, the derivative of U can be calculated exactly. The derivative of U with respect to \theta_0 is obvious (it's just iU), so let us concentrate on the derivatives with respect to the \theta_k.

Let
U = e^{i\sigma_3\theta_3} = C + i\sigma_3 S,
where
S = \sin\theta_3, \quad C = \cos\theta_3.
Then
\frac{dU}{dt} = \dot{\theta}_3(-S + i\sigma_3 C).

More generally, let
U = e^{i\sigma_k\theta_k} = C + i\sigma_k \frac{\theta_k}{\theta} S,
where
\theta = \sqrt{\theta_k\theta_k}, \quad S = \sin\theta, \quad C = \cos\theta.
Then, if I've done my algebra correctly,

\frac{dU}{dt} = -S\,\frac{\theta_k\dot{\theta}_k}{\theta} + i\sigma_k\dot{\theta}_r\left[\frac{\theta_k\theta_r}{\theta^2}\,C + \frac{S}{\theta}\left(\delta_{k,r} - \frac{\theta_k\theta_r}{\theta^2}\right)\right]
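
As a sanity check (mine, not from any paper), the formula can be verified numerically against a finite difference:

```python
import numpy as np
from scipy.linalg import expm

sig = [np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]]),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def U(th):
    # U = exp(i sigma_k theta_k), summation over k = 1, 2, 3
    return expm(1j * sum(t * s for t, s in zip(th, sig)))

th = np.array([0.3, -0.8, 0.5])
thdot = np.array([1.0, 2.0, -1.0])  # arbitrary velocities theta-dot

theta = np.linalg.norm(th)
n = th / theta  # n_k = theta_k / theta
S, C = np.sin(theta), np.cos(theta)

# analytic dU/dt from the formula above
dU = -S * np.dot(n, thdot) * np.eye(2, dtype=complex)
for k in range(3):
    for r in range(3):
        dU += 1j * sig[k] * thdot[r] * (
            n[k] * n[r] * C + (S / theta) * ((k == r) - n[k] * n[r]))

# finite-difference approximation of dU/dt for comparison
eps = 1e-7
dU_fd = (U(th + eps * thdot) - U(th)) / eps
print(np.allclose(dU, dU_fd, atol=1e-5))  # True
```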

I end this post by answering the simple riddle which I posed in its title. The rise of Trump was definitely a step backwards for humanity, but there are lots of times when stepping backwards is a good thing to do. Minimization by back-propagation is a powerful tool, and it can be described as walking backwards. Also, when one gets lost in a forest or in a city and GPS is not available, I have found that a good strategy for coping with this mishap is, as soon as I notice that I am lost, to backtrack and return to the place where I think I first made a mistake. Finally, let me include in this brief list the ancient Chinese practice of back-walking. Lots of Chinese still do back-walking in public gardens today, just like they do Tai Chi. Both are healthy, low impact exercises that are especially popular with the elderly. Back-walking is thought to promote muscular fitness, because one uses muscles that are not used when walking forwards. Back-walking is also thought to promote mental agility, because you have to think a little harder to do it than when walking forwards. (Just like counting backwards is a good test for sobriety and for detecting advanced Alzheimer's.)

March 5, 2019

The iSWAP, sqrt(iSWAP) and other up-and-coming quantum gates

Filed under: Uncategorized — rrtucci @ 8:28 am

The writing is on the wall. The engineers behind the quantum computers at Google, Rigetti, IonQ, etc. are using, more and more, certain variants of the simple SWAP gate, variants that are more natural for their devices than the SWAP itself, variants with exotic, tantalizing names like the iSWAP and sqrt(iSWAP). In the last day or two, I decided to bring Qubiter up to date by adding to its arsenal of gates a gate that I call the SWAY. SWAY is very general. It includes the humble SWAP and all its other variants too. So, what is this SWAY, you ask?

Let \sigma_X, \sigma_Y, \sigma_Z be the Pauli Matrices.

Recall that the swap of two qubits 0, 1, call it SWAP(1, 0), is defined, in block-diagonal form, by

SWAP = diag(1, \sigma_X, 1)

NOTE: SWAP is qbit symmetric, meaning that SWAP(0,1) = SWAP(1,0)

We define SWAY by

SWAY = diag(1, U2, 1)

where U2 is the most general 2-dim unitary matrix satisfying \sigma_X U2 \sigma_X=U2. If U2 is parametrized as

U2 = \exp(i[ \theta_0 + \theta_1\sigma_X + \theta_2\sigma_Y + \theta_3\sigma_Z])

for real \theta_j, then

\theta_2=\theta_3=0.

NOTE: SWAY is qbit symmetric (SWAY(0,1) = SWAY(1,0)) iff \sigma_X U2 \sigma_X = U2 iff \theta_2 = \theta_3 = 0.
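
A quick numpy check of these definitions (my own sketch, not Qubiter's internals): build SWAY from \theta_0, \theta_1 and recover the ordinary SWAP as a special case.

```python
import numpy as np
from scipy.linalg import expm

sigx = np.array([[0, 1], [1, 0]], dtype=complex)

def sway(t0, t1):
    # theta_2 = theta_3 = 0, so U2 commutes with sigma_X (qbit symmetric case)
    u2 = expm(1j * (t0 * np.eye(2) + t1 * sigx))
    m = np.eye(4, dtype=complex)
    m[1:3, 1:3] = u2  # SWAY = diag(1, U2, 1)
    return m

swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=complex)

# theta_0 = -pi/2, theta_1 = pi/2 gives U2 = sigma_X, i.e. the ordinary SWAP
print(np.allclose(sway(-np.pi / 2, np.pi / 2), swap))  # True
```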

The Qubiter simulator can now handle a SWAY with zero or any number of controls of type T or F. Very cool, don’t you think?

Here is a jupyter notebook that I wrote to test Qubiter’s SWAY implementation

https://github.com/artiste-qb-net/qubiter/blob/master/jupyter-notebooks/unusual_gates_like_generalized_swap.ipynb

February 26, 2019

Seth Lloyd invented PennyLane. It’s a well known fact.

Filed under: Uncategorized — rrtucci @ 6:01 pm

I forgot to mention that Seth Lloyd invented PennyLane. The other people at Xanadu are all identical worker ants faithfully following his brilliant instructions on what to do, according to a Xanadu press release.

Excerpt from press release:

“Deep learning libraries like TensorFlow and PyTorch opened up artificial intelligence to the world by providing an interface to powerful GPU hardware. With PennyLane, Xanadu is now doing the same for machine learning on quantum hardware,” said Seth Lloyd, Xanadu’s chief scientific advisor, MIT professor and a founding figure in both quantum computing and quantum machine learning. “We’re going to see an explosion of ideas, now that everyone can train quantum computers like they would train deep neural networks.”

Xanadu is a company whose "chief scientific advisor, MIT professor and a founding figure in both quantum computing and quantum machine learning", Prof. Seth Lloyd, has promised to build a "continuous values" quantum computer, a device invented by Seth Lloyd, according to him. This would be a quantum computer that is more analog and more classical than D-Wave's. D-Wave, a qc company which was founded in 1999, 20 years ago, has never been able to provide error correction for its qc, so many experts believe that Xanadu's Lloydian qc will be very difficult to error correct too. But if anyone can solve this prickly conundrum, it's Seth Lloyd, who, according to him, is the original inventor of D-Wave's device too.

Addendum:

Wow wee! Seth Lloyd's invention, an optical computer that runs TensorFlow, is really taking off at MIT. I just came across this news report on a company called Lightmatter, funded by some heavyweights like GV (Google Ventures), that proposes to do just that.

https://www.cnbc.com/2019/02/25/alphabet-gv-invests-in-lightmatter-optical-ai-chip-startup.html

Excerpts:


Lightmatter just picked up its first backing from a corporate investor: GV, a venture arm of Google parent company Alphabet.

In 2014, Nick Harris and Darius Bunandar were trying to combine optical technology with quantum computing at the Massachusetts Institute of Technology, where they were doing Ph.D. work in the same research group.

But in 2015, Harris and Bunandar began looking at fields beyond quantum computing, including AI. “Our feeling is that there are a huge number of challenges that remain to be solved” for their quantum approach, Harris said.

“There is a lot of effort that goes into making this kind of device plug and play and making it look a lot like the experience of an Nvidia GPU,” Harris said. The team wants to ensure the chips work with popular AI software such as the Google-backed open-source project TensorFlow.

February 24, 2019

Welcome to the Era of TensorFlow and PyTorch Enabled Quantum Computer Simulators

Filed under: Uncategorized — rrtucci @ 8:40 pm

In my previous blog post, I unveiled a new Jupyter notebook explaining how to use Qubiter (a quantum computing simulator managed by me) to do hybrid quantum-classical (HQC) computing. In that prior blog post, I admitted that even though this meant that Qubiter could now do a naive type of HQC, Qubiter could not yet do fully fledged HQC, which I defined as (1) using distributed computing/back-propagation driven by TensorFlow or PyTorch, and (2) using as backend a physical qc device such as those which are already accessible via the cloud, thanks to IBM and Rigetti. I pointed out that the wonderful software PennyLane by Xanadu can already do (1) and (2).

This blog post is to unveil yet another Jupyter notebook, this time showing how to use Qubiter to translate potentially any quantum circuit written in Qubiter’s language to the language of PennyLane, call it Pennylanese. This means Qubiter can now act as a front end to PennyLane, PennyLane can act as an intermediary link which is TensorFlow and PyTorch enabled, and Rigetti’s or IBM’s qc hardware can act as the backend.

So, in effect, Qubiter can now do (1) and (2). Here is the notebook

https://github.com/artiste-qb-net/qubiter/blob/master/jupyter-notebooks/Translating_from_Qubiter_to_Xanadu_PennyLane.ipynb

I, Nostradamucci, have been prognosticating the merging of quantum computing and TensorFlow for a long time in this blog.

I, Nostradamucci, foresee that PennyLane will continue to improve and be adopted by many other qc simulators besides Qubiter. Those other qc simulators will be modified by their authors so that they too can act as frontends to PennyLane. Why not do it? It took me just a few days to write the Qubiter2PennyLane translator. You can easily do the same for your qc simulator!

I, Nostradamucci, also foresee that many competitors to PennyLane will crop up in the next year. It would be very naive to expect that everyone will adopt PennyLane as their method of achieving (1) and (2).

In particular, Google will want to write its own (1)(2) tool. Just as Google didn't adopt someone else's quantum simulator but started Cirq instead, it would be naive to expect that it would adopt PennyLane as its (1)(2) tool, especially since TensorFlow is its prized scepter of power. Just as Google rarely adopts someone else's app for Android but writes its own, Google rarely adopts someone else's app for TensorFlow (and Cirq, and OpenFermion); it writes its own.

And of course, the Chinese (and the independence-loving French, Vive La France!) prefer to use software that is not under the control of American monopolies.

I see PennyLane as a brilliant but temporary solution that allows Qubiter to achieve (1) and (2) right now, today. But if Google provides a (1)(2) tool in the future, I will certainly modify Qubiter to support Google’s tool too.

In short, welcome to the era of TensorFlow and PyTorch Enabled Quantum Computer Simulators.

February 21, 2019

Qubiter can now do Hybrid Quantum-Classical Computation, kind of

Filed under: Uncategorized — rrtucci @ 11:33 am

Habemus papam…kind of. So here is the scoop. Qubiter can now do Hybrid Quantum-Classical Computation…kind of. It is not yet of the most general kind, but we are getting there. "The journey of a thousand miles begins with one step." (a saying attributed to the Chinese philosopher Laozi, c. 600 BC)

The most general kind, what the Brits would call The Full Monty, would be if Qubiter could
(1) use distributed computing and back-propagation supplied by TensorFlow or PyTorch, and

(2) run a hybrid quantum-classical simulation on a physical hardware backend such as those already available to the public via the cloud, thanks to the companies IBM and Rigetti.

At this point, Qubiter cannot do either (1) or (2). Instead of (1), it currently does undistributed minimization executed by the Python function scipy.optimize.minimize. Instead of (2), it uses Qubiter's own built-in simulator as a backend.
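
In miniature, the current setup looks something like this (a one-qubit toy VQE of my own, with the "backend" played by a few lines of numpy rather than Qubiter's actual simulator):

```python
import numpy as np
from scipy.optimize import minimize

def mean_value(thetas):
    # naive built-in "backend": prepare |psi> = Ry(theta)|0> on one qubit
    # and return the expected value <psi| Z |psi> = cos(theta)
    c, s = np.cos(thetas[0] / 2), np.sin(thetas[0] / 2)
    psi = np.array([c, s])
    Z = np.diag([1.0, -1.0])
    return psi @ Z @ psi

res = minimize(mean_value, x0=np.array([0.3]), method='Powell')
print(res.x, res.fun)  # expect theta -> pi and <Z> -> -1
```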

Amazingly, the wonderful open-source software PennyLane by Xanadu already does (1) and (2). So far, they are the only ones that have accomplished this amazing feat. None of the big 3 (Google's Cirq, IBM's Qiskit, and Rigetti's PyQuil) can do (1) yet either, so we are in good company. I am sure that eventually the big 3 will succeed in coaxing their own software stacks to do (1) and (2) too. But probably not for a while, because large companies often suffer from infighting between too many generals, so they tend to move more slowly than small ones. They also almost always shamelessly copy the good ideas of the smaller companies.

I too want to eventually add features (1) and (2) to Qubiter, but, for today, I am happy with what I already have. Here is a jupyter notebook explaining in more detail what Qubiter can currently do in the area of hybrid quantum-classical computation:

https://github.com/artiste-qb-net/qubiter/blob/master/jupyter-notebooks/MeanHamilMinimizer_native_demo.ipynb
