Quantum Bayesian Networks

May 14, 2019

Quantum simulator Qubiter now has a native TensorFlow Backend

Filed under: Uncategorized — rrtucci @ 2:05 am


I am pleased to announce that my quantum simulator Qubiter (available at GitHub, BSD license) now has a native TensorFlow Backend-Simulator (see its class `SEO_simulator_tf`, the `tf` stands for TensorFlow). This complements Qubiter’s original numpy simulator (contained in its class `SEO_simulator`). A small step for Mankind, a giant leap for me! Hip Hip Hurray!

This means that Qubiter can now calculate the evolution of a state vector using a CPU, GPU or TPU. Plus, it can do back-propagation on a quantum circuit. Here is a Jupyter notebook that I wrote that uses Qubiter’s TF backend to do VQE (Variational Quantum Eigensolver) calculations. (I like to call VQE “mean Hamiltonian minimization”.)

https://github.com/artiste-qb-net/qubiter/blob/master/qubiter/jupyter_notebooks/MeanHamilMinimizer_native_with_tf.ipynb

Numpy is a tensor library in Python, and TensorFlow (produced by Google) is another. TensorFlow matches, by name and functionality, almost 1 to 1, every function in numpy, but the TF namesake functions are much more powerful than their numpy counterparts. Besides running on the usual CPU, they can run on distributed computing resources like GPUs and TPUs. Furthermore, they can be asked to “tape” a history of their use, and then to replay that history in reverse (back-propagation) mode so as to calculate derivatives of a list of user-designated variables. These derivatives can then be used to minimize a cost (aka loss or objective) function. Such minimizations are the bread and butter of classical and quantum neural nets.
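
To make the taping idea concrete, here is a minimal sketch (my own toy example, not Qubiter code) of how TF 2.x tapes a forward pass and then replays it backwards to minimize a cost; the toy cost function below just stands in for a quantum expected value.

```python
import tensorflow as tf  # assumes TF 2.x, where eager mode is the default

theta = tf.Variable([0.1, 0.2])  # the user-designated variables

def cost(theta):
    # stand-in for a quantum expected value; any chain of tf ops works
    return tf.cos(theta[0]) * tf.sin(theta[1]) + tf.square(theta[0])

opt = tf.keras.optimizers.SGD(learning_rate=0.1)
for step in range(100):
    with tf.GradientTape() as tape:           # "tape" the forward pass
        loss = cost(theta)
    grads = tape.gradient(loss, [theta])      # replay the tape in reverse
    opt.apply_gradients(zip(grads, [theta]))  # gradient-descent update

print(theta.numpy(), cost(theta).numpy())
```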

These are exciting times for TF:

  • Just last March, Google officially released the alpha of TF version 2.0.

  • In TF 2.0, Eager mode (which is the mode that Qubiter’s TF backend uses) has been elevated to the default. TF Eager uses dynamic graphs, just as TF’s competitor PyTorch (produced by Facebook) does.

  • TF 2.0 also incorporates “Edward”, a software lib by Dustin Tran for doing calculations with Bayesian Networks. Edward in its TF incarnation is called TF Probability. TF 2.0 also incorporates Keras (a library for building layered neural nets), and PyMC (a lib for doing Markov Chain Monte Carlo calculations) plans to use TF as its backend.

  • Anyone can run TF 2.0 on the cloud via Google’s Colab. That is, in fact, how I developed and tested Qubiter’s TF backend.

In theory, all you have to do to convert a numpy simulator to a TensorFlow one is to replace every function that starts with the prefix `np.` by its namesake that starts with the prefix `tf.`. However, the devil is in the details. I, like most good programmers, hate repeating code. I try to avoid doing so at all costs, because I don’t want to have to correct the same bug in multiple places. So I built class `SEO_simulator_tf` by sub-classing the original numpy simulator class `SEO_simulator`. That is one of the beauties of sub-classing in OOP (Object Oriented Programming), of which I am a great fan: sub-classing helps you avoid repeating a lot of code. So `SEO_simulator_tf` ended up being a deceptively short and simple class: all it does, basically, is turn on a bunch of flags that it passes to its parent class `SEO_simulator`. Most of the new code is in the parent class `SEO_simulator` and in other classes like `OneBitGates`, which have been modified to respond to all those flags that `SEO_simulator_tf` sets.
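
Here is a schematic sketch of that sub-classing pattern. The flag name and constructor arguments below are illustrative only, not Qubiter’s actual API; the point is just that the child class does little more than flip a switch that makes the parent use `tf` functions instead of their `np` namesakes.

```python
import numpy as np
import tensorflow as tf

class SEO_simulator:                     # stand-in for the numpy parent class
    def __init__(self, use_tf=False):    # 'use_tf' is a made-up flag name
        self.lib = tf if use_tf else np  # pick the tensor library once

    def apply_gate(self, gate, state_vec):
        # the same call works for both backends because the tf functions
        # mirror their numpy namesakes almost one to one
        return self.lib.matmul(gate, state_vec)

class SEO_simulator_tf(SEO_simulator):   # the child class just sets flags
    def __init__(self):
        super().__init__(use_tf=True)
```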

February 24, 2019

Welcome to the Era of TensorFlow and PyTorch Enabled Quantum Computer Simulators

Filed under: Uncategorized — rrtucci @ 8:40 pm

In my previous blog post, I unveiled a new Jupyter notebook explaining how to use Qubiter (a quantum computing simulator managed by me) to do hybrid quantum-classical (HQC) quantum computing. In that prior blog post, I admitted that even though this meant Qubiter could now do a naive type of HQC, Qubiter could not yet do fully fledged HQC, which I defined as: (1) using distributed computing/back-propagation driven by TensorFlow or PyTorch, and (2) using as a backend a physical qc device such as those already accessible via the cloud, thanks to IBM and Rigetti. I pointed out that the wonderful software PennyLane by Xanadu can now do (1) and (2).

This blog post is to unveil yet another Jupyter notebook, this time showing how to use Qubiter to translate potentially any quantum circuit written in Qubiter’s language to the language of PennyLane, call it Pennylanese. This means Qubiter can now act as a front end to PennyLane, PennyLane can act as an intermediary link which is TensorFlow and PyTorch enabled, and Rigetti’s or IBM’s qc hardware can act as the backend.

So, in effect, Qubiter can now do (1) and (2). Here is the notebook

https://github.com/artiste-qb-net/qubiter/blob/master/qubiter/jupyter_notebooks/Translating_from_Qubiter_to_Xanadu_PennyLane.ipynb

I, Nostradamucci, have been prognosticating the merging of quantum computing and TensorFlow in this blog for a long time.

I, Nostradamucci, foresee that PennyLane will continue to improve and be adopted by many other qc simulators besides Qubiter. Those other qc simulators will be modified by their authors so that they too can act as frontends to PennyLane. Why not do it? It took me just a few days to write the Qubiter2PennyLane translator. You can easily do the same for your qc simulator!

I, Nostradamucci, also foresee that many competitors to PennyLane will crop up in the next year. It would be very naive to expect that everyone will adopt PennyLane as their method of achieving (1) and (2).

In particular, Google will want to write its own (1)(2) tool. Just as Google didn’t adopt someone else’s quantum simulator but started Cirq instead, it would be naive to expect them to adopt PennyLane as their (1)(2) tool, especially since TensorFlow is their prized scepter of power. Just as Google rarely adopts someone else’s app for Android but writes its own, it rarely adopts someone else’s tool for TensorFlow (or Cirq, or OpenFermion); it writes its own.

And of course, the Chinese (and the independence-loving French, Vive La France!) prefer to use software that is not under the control of American monopolies.

I see PennyLane as a brilliant but temporary solution that allows Qubiter to achieve (1) and (2) right now, today. But if Google provides a (1)(2) tool in the future, I will certainly modify Qubiter to support Google’s tool too.

In short, welcome to the era of TensorFlow and PyTorch Enabled Quantum Computer Simulators.

April 7, 2018

TensorFlow Versus TensorLayers (for both classical and quantum cases)

Filed under: Uncategorized — rrtucci @ 4:39 am


April 6, 2018

PyMC and Edward/TensorFlow Merging?

Filed under: Uncategorized — rrtucci @ 9:09 pm

News bulletin: Edward is now officially a part of TensorFlow and PyMC is probably going to merge with Edward.

The python software library Edward enhances TensorFlow so that it can harness both Artificial Neural Nets and Bayesian Networks. The main architect of Edward, Dustin Tran, wrote its initial versions as part of his PhD Thesis at Columbia Univ. (Columbia is the home of the illustrious Andrew Gelman, one of the fathers of hierarchical models, which are a special case of Bayesian networks). Dustin now works at Google as part of a team merging Edward with TensorFlow.

One of the near-term goals of artiste-qb.net is to produce a quantum generalization of Edward. This would not run on a quantum computer, but would simulate, on a distributed classical computer, possible experiments that could in the future be conducted on a qc.

I highly recommend the following two Discourses for Edward and PyMC:

It looks like the python software libs PyMC3 and Edward may soon merge:

https://discourse.pymc.io/t/tensorflow-backend-for-pymc4/409

This is very good news, in my opinion, because I am in love with both programs. It’s interesting to note that the current Numpy is also the result of the fortuitous marriage of two separate complementary software libs.

One can now call PyMC3 and Edward from Quantum Fog, although not very smoothly yet. See here.

February 19, 2018

Elon Musk Predicts Quantum TensorFlow

Filed under: Uncategorized — rrtucci @ 6:22 pm


Just kidding. But it could very well happen, now that I have suggested it on the Internet, whispering it into the ears of a thousand bots.

In a 1926 article in Collier’s magazine, Nikola Tesla, a father of AC power, predicted the smart phone.

“When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole. We shall be able to communicate with one another instantly, irrespective of distance. Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.”

I personally find this prediction and the conviction and clarity with which he uttered it quite impressive.

Elon Musk must be a big fan of Tesla, as evinced by the fact that he named his car brand Tesla. (By the way, when a watermelon rots in your fridge, it smells godawful. I call the smell Melon Musk. So the guy even has his own fragrance.) Even though Musk is not very good at predicting (after all, he hasn’t predicted Quantum TensorFlow…yet), he has conquered many a science nerd’s heart, including my own, by building the Tesla automobile in a “Gigafactory” that looks like a giant spaceship, and by putting, on February 6, 2018, his own Tesla roadster into orbit for billions of years using his SpaceX company’s Falcon Heavy rocket. That scene of the 2 returning side boosters from Falcon Heavy landing side by side, and nearly simultaneously, at Cape Canaveral… is a scene that will be emblazoned forever in the hearts and minds of every science fiction and science fact lover who has seen it or ever will see it, for the rest of human history.

Let me channel for a moment the spirit of Tesla, who wants to communicate with Mr. Musk across the great divide, to tell him that quantum Tensorflow is an inevitability, and that it will need a logo. The ghost of Tesla left a picture of a suitable logo, in my computer, last night, while I slept. It’s the logo at the beginning of this blog post. The quote in the logo is of course due to Richard Feynman.

December 3, 2017

You are invited to the wedding of Quantum Computing and TensorFlow

Filed under: Uncategorized — rrtucci @ 6:42 pm

The quantum computerization of TensorFlow (TF) is a quixotic dream that no doubt has crossed the minds of many people, both technically and not technically savvy. Here at artiste-qb.net, we are fully committed to this goal and well under way toward achieving it. Since our company is fully committed to open source, it doesn’t really matter if we achieve this goal before anyone else. If someone else beats us to it, we will learn from their code and vice versa. That is, as long as they too deliver open source to the world. And if they don’t, we think that their software is doomed…quantum open source rules! How did Google vanquish, or at least de-fang, the Microsoft monopoly? To a large extent, by using open source. Open source rules AI, the cloud and mobile.

So, let me tell you how we are using TF for quantum computing.

When Google first open-sourced TF a mere 2 years ago, I wrote a blog post to mark the occasion. In that post, I called TF a platform for Fakesian networks instead of Bayesian networks. My beef was that TF had only deterministic nodes, which are standard in the field of artificial neural nets. It had no probabilistic nodes, which are standard in the 2 fields of classical Bayesian networks (BNets) and Hierarchical Models (HM). But this past year, the open source community has stepped into the breach and filled the gap with a software library called Edward, built on top of TF, that adds probabilistic nodes (the buzz word is “Probabilistic Deep Learning”) to TF. And Edward has been approved for integration into TF, so soon it will be seamlessly integrated into TF. Thus, soon, TF will combine artificial neural nets and BNets seamlessly. It will have superpowers!

Of course, in quantum mechanics, one must use complex amplitudes instead of probabilities for the nodes, and one must use an L2 norm instead of an L1 one with those amplitudes, so you can’t use Edward to do quantum mechanics just yet. Edward will have to be made “quantum ready”. By that we mean that one will have to rewrite parts of Edward so that it has a “quantum switch”, i.e. a parameter called ‘is_quantum’ which when True gives a quantum BNet and when False gives a classical BNet. That is precisely what artiste-qb.net’s open source program Quantum Fog already does, so our company is uniquely placed to make a quantum version of Edward.
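
To illustrate what such a “quantum switch” might look like, here is a toy sketch of my own (not Quantum Fog’s actual code, apart from the flag name ‘is_quantum’): the same node class holds either probabilities, normalized with an L1 norm, or complex amplitudes, normalized with an L2 norm, and outcome probabilities are recovered from amplitudes by taking absolute values squared.

```python
import numpy as np

class ToyNode:
    """Toy node: a matrix of column-wise transition probabilities (classical)
    or complex transition amplitudes (quantum)."""

    def __init__(self, mat, is_quantum=False):
        self.is_quantum = is_quantum
        mat = np.asarray(mat, dtype=complex if is_quantum else float)
        if is_quantum:
            norms = np.sqrt((np.abs(mat) ** 2).sum(axis=0))  # L2 norm
        else:
            norms = mat.sum(axis=0)                          # L1 norm
        self.mat = mat / norms

    def outcome_probs(self):
        # probabilities of this node's states, one column per parent state
        return np.abs(self.mat) ** 2 if self.is_quantum else self.mat.real

classical = ToyNode([[1., 3.], [1., 1.]], is_quantum=False)
quantum = ToyNode([[1., 1j], [1., 1.]], is_quantum=True)
print(classical.outcome_probs())  # columns sum to 1 (L1)
print(quantum.outcome_probs())    # |amplitude|^2 columns sum to 1 (L2)
```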

Another obstacle to marrying TF and quantum computers is that the quantum BNets will have to be compiled into a sequence of elementary operations (SEO), such as controlled-NOTs and single-qubit rotations. Once again, our company artiste-qb.net is uniquely placed to accomplish this task. Our open source quantum simulator Qubiter is the only one in the business that includes a “quantum CSD compiler”, a tool (based on the cosine-sine decomposition) that will help express quantum BNets as a SEO.
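
For readers who have never seen a CSD, here is a hedged illustration (not Qubiter’s compiler code) of the linear-algebra step at its heart, using scipy’s `cossin` routine (available in recent scipy versions): a unitary is factored into block-diagonal unitaries sandwiching a block of cosines and sines, and a CSD compiler applies this recursively and maps the factors to controlled-NOTs and single-qubit rotations.

```python
import numpy as np
from scipy.linalg import cossin          # cosine-sine decomposition
from scipy.stats import unitary_group

U = unitary_group.rvs(4)                 # a random 2-qubit (4 x 4) unitary
u, cs, vdh = cossin(U, p=2, q=2)         # U = u @ cs @ vdh
print(np.allclose(U, u @ cs @ vdh))      # True: the factorization is exact
```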

HMs are really a subset of BNets. However, the BNet and HM communities have historically grown somewhat independently. The BNet community is centered around pioneers like Judea Pearl (at UCLA), inventor of some of the most important BNet methods, whereas the HM community is centered around pioneers like Andrew Gelman (at Columbia), author of many great books and a great blog in the HM field. The HM tribe only uses continuous distributions for node probabilities, and they are very keen on MCMC (Markov Chain Monte Carlo). The BNet community uses both discrete (expressed as matrices or tensors) and continuous distributions for their node probabilities, and they use MCMC and other methods too, like the junction tree method, to do inferences.

Edward has a distinguished pedigree in both the BNet and HM communities. Edward originated in Columbia Univ. One of its main and original authors is Dustin Tran, currently a PhD student at Columbia. So you can be sure that the Edward people are in close communication and receive useful feedback from the Gelman tribe. Another distinguished author of Edward is Kevin Murphy, who has been working on BNets for more than a decade. Murphy wrote the oldie but goodie Bayes Net toolbox for Matlab. He has also written several books on bnets and machine learning. He previously worked as a prof at the Univ. of British Columbia but he now works at Google. He is one of the main organizers of the young (2 year old) Bayesian Deep Learning conference, which, by the way, will have its annual meeting in less than a week (Dec. 9, 2017).

Classical BNets are a very active field, both in academic research and in commerce. Judea Pearl won a Turing award for them. BNets are very popular in bioinformatics, for example. Whereas no qc company has yet broken even financially, there are classical BNet companies that have lasted and been profitable for almost 2 decades, such as Bayesia, Hugin and Norsys/Netica.

Oh, and one last thing. It’s called TensorFlow, not TensorNetwork, for a very good reason. If you try to use TF to implement the “tensor networks” used in quantum computing, you will fail, unless you start using BNets instead of Tensor Networks and pretend these 2 are the same thing, which is probably what the Tensor Networks people will do. In TF (and BNets), the lines emanating out of a node carry in them a full tensor that they pass along to other nodes. In a Tensor Network, a Tensor does not Flow into the arrows emanating out of its node. The tensor just sits in the node. For more discussion about the important differences between a quantum BNet and a Tensor Network, see this blog post.
https://qbnets.wordpress.com/2015/03/26/tensor-networks-versus-quantum-bayesian-networks-and-the-winner-is/

November 11, 2015

Google Open-sources TensorFlow (A Fakesian Networks Software Library). Microsoft, Tear Down This Infer.NET Wall

Filed under: Uncategorized — rrtucci @ 5:21 pm


On Nov. 10, Google announced with much fanfare that it was open-sourcing TensorFlow, its software library for “Large-Scale Machine Learning on Heterogeneous Distributed Systems”. At this point in time, I know very little about TensorFlow, but I can already see that it is not very Bayesian Networky. Being such an avid BNet fan, I can’t deny that I was a little disappointed by how little BNet stuff it contains. To me, TensorFlow is Fakesian Networks instead of Bayesian Networks 🙂

In my opinion, TensorFlow has VERY LITTLE in common, except for the name, with what quantum information theorists call “quantum tensor networks”, although I’m sure that some sleazy, opportunistic physicists and science journalists will claim that the two are conjoined twins. Unlike the classical, mostly deterministic, highly distributed calculations that TensorFlow performs, quantum computers have to deal mostly with probabilistic rather than deterministic calculations, and distributed computing for QCs would be very different from its classical counterpart. I think that when dealing with probabilistic calculations, either on a classical or a quantum computer, classical BNets and quantum BNets are the most natural and productive framework/strategy, as I’ve explained before.

Despite TensorFlow being Fakesian Networks, I welcome Google’s move to open-source TensorFlow, because it certainly raises the level of visibility, cooperation, competition and tension/suspense in the AI arena. For example, I’m sure that right about now Microsoft is facing a lot of pressure to respond in kind to the news about TensorFlow. So what does Microsoft use instead of TensorFlow to do its Bing AI? Is it Infer.NET? Whatever it is, will MS have to open source part of its Bing AI, to keep up with the Joneses and the Kardashians?

I like Infer.NET. It looks much more Bayesian Networky to me than TensorFlow does. Unfortunately, so far MS has only released to the public Infer.NET’s binary and API, and it has forbidden non-MS people from using Infer.NET for commercial purposes.

Told you so UPDATE1: Ta-tan, Nostradamucci’s predictions have turned into reality again. Nov 12, just 2 days after Google released TensorFlow, Microsoft announces the open-sourcing of DMTK (Distributed Machine Learning ToolKit). And of course, Facebook was the first, with its open-sourcing of Torch enhancements on Jan 16 of this year.

UPDATE2: Related news. After UPDATE1, I learned that IBM has also been busy open-sourcing distributed AI software. It has recently open-sourced SystemML (ML=Machine Learning), a small part of its Watson software. The GitHub repository of SystemML was started on Aug 17, 2015. According to this press article, circa Nov 23, 2015, SystemML was accepted into the Apache incubator (a preliminary step to being declared a saint, where sainthood means you have performed 2 miracles and you are declared officially integrated and compatible with the Apache libraries, especially Spark, which SystemML will rely heavily upon.)

June 17, 2019

Pip Qubiter, Pip Qubiter, Hurray!

Filed under: Uncategorized — rrtucci @ 3:34 am

The purpose of this brief blog post is to announce that the Qubiter team (me and my friend Dr. Tao Yin) is finally getting serious about pip installation of Qubiter. Previously, Tao had uploaded what is, by now, a very old version (0.0.0) of Qubiter onto the PyPI servers that provide the pip installation service. But as of today, you can pip install the latest version 1.0.1 of Qubiter. Just type

pip install qubiter --user

in your shell command line. This new version (unlike 0.0.0) has the canonical, pip-compliant folder structure at its GitHub repository:


folder1/
    setup.py
    folder2/
        app.py
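
For readers who want to do the same for their own project, here is a minimal setup.py sketch consistent with that layout. It is not Qubiter’s actual setup.py; the metadata below is illustrative only.

```python
# setup.py, placed in folder1/
from setuptools import setup, find_packages

setup(
    name="qubiter",               # the name used by 'pip install qubiter'
    version="1.0.1",
    packages=find_packages(),     # picks up folder2/ and any sub-packages
)
```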

V 1.0.1 of Qubiter also includes a myriad of small improvements over the version of just a few weeks ago. Qubiter changes at a very fast pace! Last time I blogged about Qubiter improvements, I touted its new native Tensorflow backend, and its new implementation of a novel multi-threaded algorithm for computing gradients of quantum cost functions.

Since then, I have added to Qubiter two simple yet very useful tools for doing Continuous Integration (CI). Usually, for CI, what the big boys like Rigetti PyQuil, Google Cirq and IBM Qiskit do is write a matching pytest (or unittest) module for each app module and run all those test modules in batch via a service like Travis. What I do for CI of Qubiter is not as industrial strength as the pytest/Travis route, but it is much less onerous to the programmer and almost as effective at catching bugs.

Most Qubiter py files have a main() method at the end that tests the methods defined in that file. Qubiter also has a large library of Jupyter notebooks that put Qubiter through its paces. The new version 1.0.1 of Qubiter includes two py scripts,

`run_all_nb.py`
`run_all_py.py`.

The first script batch runs all the Jupyter notebooks, and the second script batch runs all those py files with a main() method at the end. These 2 py scripts together constitute a homemade tool for doing rudimentary (but pretty effective!) CI.
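
Here is a sketch of the idea behind those two scripts (my own schematic, not the actual `run_all_nb.py` and `run_all_py.py` in the repo): execute every notebook and every py file in batch, and let any exception surface as a failed check.

```python
import glob
import subprocess
import sys

def run_all_notebooks(folder="jupyter_notebooks"):
    for nb in sorted(glob.glob(f"{folder}/*.ipynb")):
        # executes the notebook; raises CalledProcessError if any cell fails
        subprocess.run(["jupyter", "nbconvert", "--to", "notebook",
                        "--execute", "--inplace", nb], check=True)

def run_all_py(folder="."):
    for path in sorted(glob.glob(f"{folder}/*.py")):
        # each file's main() section at the bottom acts as its own test
        subprocess.run([sys.executable, path], check=True)

if __name__ == "__main__":
    run_all_notebooks()
    run_all_py()
```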

May 26, 2019

Tucci’s Work Going “Almost” Viral (LOL)

Filed under: Uncategorized — rrtucci @ 3:36 am

Sometimes you find yourself losing faith in yourself and in your hard work of 20 years. But then some very kind people rise out of damn Twitter, of all God-forsaken places, to reassure you. At such times, you thank God for Twitter. (LOL, I never thought I would say that. 99% of the time, I hate Twitter with a passion. I used to belong to Twitter, but no longer do. On those painful occasions when I peek into it, I access it via a prophylactic incognito tab of my web browser.)

My recent blog post entitled “Quantum simulator Qubiter now has a native TensorFlow Backend” has been shared on Twitter by a small, select group of super kind people. I want to store in this blog post the ID of those tweets before Twitter archives them and the Twitter search engine stops listing them. As of today, upon querying the Twitter search engine with the keywords (“quantum” or “quantumcomputing”) and “tensorflow”, I count 11 Tweets. In my book, even one retweet is like going “almost” viral!🙃

March 22, 2019

Life in the time of the Bayesian Wars: Qubiter can now do Back-Propagation via Autograd

Filed under: Uncategorized — rrtucci @ 12:57 am

The main purpose of this blog post is to announce that the quantum simulator that I manage, Qubiter (https://github.com/artiste-qb-net/qubiter), can now do minimizations of quantum expected values, using Back-Propagation. These minimizations are a staple of Rigetti’s cloud based service (https://rigetti.com/qcs), which I am proud and grateful to have been granted an account to. Woohoo!

Technically, what Rigetti Cloud offers that is relevant to this blog post is

Hybrid Quantum-Classical NISQ computing to implement VQE (Variational Quantum Eigensolver) algorithms.

Phew, quite the mouthful!

The back-prop in Qubiter is currently done automagically by the awesome software Autograd (https://github.com/HIPS/autograd), which is a simpler version of, and one of the primary original inspirations for, PyTorch. In fact, PyTorch contains its own version of Autograd, which is called, somewhat confusingly, PyTorch Autograd, as opposed to the earlier HIPS Autograd that Qubiter currently uses.
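
For readers who haven’t seen HIPS Autograd before, here is a minimal sketch (my own toy example, not Qubiter code): you import Autograd’s drop-in replacement for numpy, write a cost function, and `grad` hands you its gradient, which you can then feed to plain gradient descent.

```python
import autograd.numpy as np   # Autograd's "evil twin" of numpy
from autograd import grad

def cost(thetas):
    # stand-in for a quantum expected value built from numpy-like ops
    return np.cos(thetas[0]) * np.sin(thetas[1]) + thetas[0] ** 2

grad_cost = grad(cost)        # a new function returning d(cost)/d(thetas)

thetas = np.array([0.3, 0.8])
for _ in range(200):          # plain gradient descent
    thetas = thetas - 0.1 * grad_cost(thetas)
print(thetas, cost(thetas))
```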

I also plan in the very near future to retrofit Qubiter so that it can also do back-prop using PyTorch and Tensorflow. This would enable Qubiter to do something that Autograd can’t do, namely to do back-prop with the aid of distributed computing using GPU and TPU. I consider enabling Qubiter to do back-prop via Autograd to be a very instructive intermediate step, the bicycle training wheels step, towards enabling Qubiter to do distributed back-prop via PyTorch and TensorFlow.

AI/Neural Network software libs that do back-propagation are often divided into the build-then-run and the build-as-run types. (What is being built is a DAG, an acronym that stands for directed acyclic graph.) Autograd, which was started before PyTorch, is of the build-as-run type. PyTorch (managed by Facebook, https://en.wikipedia.org/wiki/PyTorch) has always been of the b-as-run type too. TensorFlow (managed by Google, https://en.wikipedia.org/wiki/TensorFlow) was originally of the b-then-run type, but about 1.5 years ago, Google realized that a lot of people preferred b-as-run to b-then-run, so Google added to TensorFlow a b-as-run version called Eager TensorFlow. So now TensorFlow can do both types.

PyTorch and TensorFlow also compete in an area that is near and dear to my heart, Bayesian networks. The original PyTorch and TensorFlow both created a DAG whose nodes were only deterministic. This is a special case of a Bayesian network. In bnets, the nodes are in general probabilistic, but a special case of a probabilistic node is a deterministic one. In recent times, the PyTorch people have also added probabilistic nodes to their DAGs, via an enhancement called Pyro (Pyro is managed mainly by Uber). The TensorFlow people have followed suit by adding probabilistic nodes to their DAGs too, via an enhancement originally called Edward, but now rechristened TensorFlow Probability. (Edward was originally written by Dustin Tran for his PhD at Columbia Univ. He now works for Google.) And, of course, quantum mechanics and quantum computing are all about probabilistic nodes. To paraphrase Richard Feynman, Nature isn’t classical (i.e., based on deterministic nodes), damnit!

In a nutshell, the Bayesian Wars are intensifying.

It’s easy to understand the build-as-run and build-then-run distinction in bnet language. The build-then-run idea is for the user to build a bnet first, then run it to make inferences from it. That is the approach used by my software Quantum Fog. The build-as-run approach is quite different and marvelous in its own right. It builds a bnet automagically, behind the scenes, based on the Python code for a target function with certain inputs and outputs. This behind the scenes bnet is quite fluid. Every time you change the Python code for the target function, the bnet might change.

I believe that quantum simulators that are autograd-pytorch-tensorflow-etc enabled are the future of the quantum simulator field. As documented in my previous blog posts, I got the idea to do this for Qubiter from the Xanadu Inc. software Pennylane, whose main architect is Josh Izaac. So team Qubiter is not the first to do this. But we are the second, which is good enough for me. WooHoo!

PennyLane is already autograd-pytorch-tensorflow enabled, all 3. So far, Qubiter is only autograd enabled. And Pennylane can combine classical, Continuous Variable and gate model quantum nodes. It’s quite general! Qubiter is only for gate model quantum nodes. But Qubiter has many features, especially those related to the gate model, that PennyLane lags behind in. Check us both out!

In Qubiter’s jupyter_notebooks folder at:

https://github.com/artiste-qb-net/qubiter/tree/master/qubiter/jupyter_notebooks

all the notebooks starting with the string “MeanHamilMinimizer” are related to Qubiter back-prop.

March 9, 2019

Current Plans for Qubiter and when is walking backwards a good thing to do?

Filed under: Uncategorized — rrtucci @ 6:21 pm

This post is to keep Qubiter fans abreast of my current plans for it.

As I mentioned in a previous post entitled “Welcome to the Era of TensorFlow and PyTorch Enabled Quantum Computer Simulators”, I have recently become a big fan of the program PennyLane and its main architect, Josh Izaac.

Did you know PennyLane has a Discourse? https://discuss.pennylane.ai/ I love Discourse forums. Lots of other great software packages (PyMC, for instance) have Discourses too.

I am currently working hard to PennyLanate my Qubiter. In other words, I am trying to make Qubiter do what PennyLane already does, to wit: (1) establish a feedback loop between my classical computer and the quantum computer cloud service of Rigetti, and (2) when it’s my computer’s turn to act in the feedback loop, make it do minimization using the method of back-propagation. A glib way of describing this process is: a feedback loop which does forward-propagation in the Rigetti qc, followed by backwards-propagation in my computer, followed by forward-propagation in the Rigetti qc, and on and on, ad nauseam.

I am proud to report that Qubiter’s implementation of (1) is pretty much finished. The deed is done. See my Qubiter module https://github.com/artiste-qb-net/qubiter/blob/master/adv_applications/MeanHamil_rigetti.py This code has not been tested on the Rigetti cloud, so it is most probably still buggy and will change a lot, but I think it is close to working.

To do (1), I am imitating the wonderful Pennylane Rigetti plugin, available at GitHub. I have even filed an issue at that github repo
https://github.com/rigetti/pennylane-forest/issues/7

So far, Qubiter does not do minimization by back-propagation, which is goal (2). Instead, it does minimization using the scipy function scipy.optimize.minimize(). My future plan is to replace this scipy function by back-propagation. Remember why we ultimately want back-propagation: of all the gradient-based minimization methods (another one is conjugate gradient), backprop is the easiest to do in a distributed fashion, which takes advantage of GPUs, TPUs, etc. The first step, the bicycle training wheels step, towards making Qubiter do (2) is to use the wonderful software Autograd (https://github.com/HIPS/autograd). Autograd replaces each numpy function by an autograd evil twin or doppelganger. After I teach Qubiter to harness the power of Autograd to do back-prop, I will replace Autograd by the more powerful tools TensorFlow and PyTorch. (These also replace each numpy function by an evil twin in order to do minimization by back-propagation. They also do many other things.)
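
For concreteness, here is a minimal sketch of the current, pre-back-prop approach (my own toy example): hand scipy.optimize.minimize a cost function and let it pick the angles. In Qubiter the cost would be a mean Hamiltonian computed by running the circuit; the stand-in below is just an ordinary function of two angles.

```python
import numpy as np
from scipy.optimize import minimize

def mean_hamil(angles):
    # placeholder cost; in Qubiter this would run the quantum circuit and
    # return the expected value of the Hamiltonian
    return np.cos(angles[0]) * np.sin(angles[1]) + 0.1 * np.sum(angles ** 2)

result = minimize(mean_hamil, x0=np.array([0.5, 0.5]), method="Nelder-Mead")
print(result.x, result.fun)
```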

In doing back-propagation through a quantum circuit, one has to calculate the derivatives of the quantum gates. Luckily, these are mostly one-qubit gates, so they are 2-dimensional unitaries that can be parametrized as

U=e^{i\theta_0}e^{i\sigma_k\theta_k}

where k ranges over 1, 2, 3 and we are using the Einstein summation convention. \theta_0, \theta_1, \theta_2, \theta_3 are all real, and \sigma_k are the Pauli matrices. As the PennyLane authors have pointed out, the derivative of U can be calculated exactly. The derivative of U with respect to \theta_0 is obvious, so let us concentrate on the derivatives with respect to the \theta_k.

Let
U = e^{i\sigma_3\theta_3} = C + i\sigma_3 S
where
S = \sin\theta_3, C = \cos \theta_3.
Then
\frac{dU}{dt} = \dot{\theta}_3(-S + i\sigma_3 C)

More generally, let
U = e^{i\sigma_k\theta_k} = C + i\sigma_k \frac{\theta_k}{\theta} S
where
\theta = \sqrt{\theta_k\theta_k}, \quad S = \sin\theta, \quad C = \cos\theta.
Then, if I’ve done my algebra correctly,

\frac{dU}{dt} = -S \frac{\theta_k}{\theta} \dot{\theta}_k + i\sigma_k \dot{\theta}_r \left[\frac{\theta_k\theta_r}{\theta^2} C + \frac{S}{\theta}\left(\delta_{k,r} - \frac{\theta_k\theta_r}{\theta^2}\right)\right]
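
Since that formula is easy to get wrong, here is a small numpy sketch (my own check, not Qubiter code) that compares the analytic partial derivatives \partial U/\partial\theta_r (i.e., the formula above with \dot{\theta}_r = 1 and the other \dot{\theta}'s set to 0) against central finite differences.

```python
import numpy as np

sig = [np.array([[0, 1], [1, 0]], dtype=complex),      # sigma_1
       np.array([[0, -1j], [1j, 0]], dtype=complex),   # sigma_2
       np.array([[1, 0], [0, -1]], dtype=complex)]     # sigma_3

def u_of(thetas):
    """U = exp(i sigma_k theta_k) = C + i (sigma . theta/|theta|) S."""
    theta = np.linalg.norm(thetas)
    sig_n = sum(thetas[k] / theta * sig[k] for k in range(3))
    return np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sig_n

def du_dtheta(thetas, r):
    """Analytic partial derivative of U with respect to theta_r."""
    theta = np.linalg.norm(thetas)
    S, C = np.sin(theta), np.cos(theta)
    dU = -S * (thetas[r] / theta) * np.eye(2, dtype=complex)
    for k in range(3):
        bracket = (thetas[k] * thetas[r] / theta**2) * C \
            + (S / theta) * ((k == r) - thetas[k] * thetas[r] / theta**2)
        dU += 1j * sig[k] * bracket
    return dU

thetas = np.array([0.3, -0.7, 1.1])
eps = 1e-6
for r in range(3):
    step = np.zeros(3); step[r] = eps
    numeric = (u_of(thetas + step) - u_of(thetas - step)) / (2 * eps)
    print(r, np.allclose(numeric, du_dtheta(thetas, r), atol=1e-6))  # True
```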

I end this post by answering the simple riddle which I posed in its title. The rise of Trump was definitely a step backwards for humanity, but there are lots of times when stepping backwards is a good thing to do. Minimization by back-propagation is a powerful tool, and it can be described as walking backwards. Also, when one gets lost in a forest or in a city and GPS is not available, I have found that a good strategy for coping with this mishap is to back-track as soon as I notice that I am lost, and return to the place where I think I first made a mistake. Finally, let me include in this brief list the ancient Chinese practice of back-walking. Lots of Chinese still do back-walking in public gardens today, just like they do Tai Chi. Both are healthy, low-impact exercises that are especially popular with the elderly. Back-walking is thought to promote muscular fitness, because one uses muscles that are not used when walking forwards. Back-walking is also thought to promote mental agility, because you have to think a little bit harder to do it than when walking forwards. (Just like counting backwards is a good test for sobriety and for detecting advanced Alzheimer’s.)

February 26, 2019

Seth Lloyd invented PennyLane. It’s a well known fact.

Filed under: Uncategorized — rrtucci @ 6:01 pm

I forgot to mention that Seth Lloyd invented PennyLane. The other people at Xanadu are all identical worker ants faithfully following his brilliant instructions on what to do, according to a Xanadu press release.

Excerpt from press release:

“Deep learning libraries like TensorFlow and PyTorch opened up artificial intelligence to the world by providing an interface to powerful GPU hardware. With PennyLane, Xanadu is now doing the same for machine learning on quantum hardware,” said Seth Lloyd, Xanadu’s chief scientific advisor, MIT professor and a founding figure in both quantum computing and quantum machine learning. “We’re going to see an explosion of ideas, now that everyone can train quantum computers like they would train deep neural networks.”

Xanadu is a company whose “chief scientific advisor, MIT professor and a founding figure in both quantum computing and quantum machine learning”, Prof. Seth Lloyd, has promised to build a “continuous variables” quantum computer device, which is a device invented by Seth Lloyd, according to him. This would be a quantum computer that is more analog and more classical than DWave’s. DWave, a qc company founded in 1999, 20 years ago, has never been able to provide error correction for its qc, so many experts believe that Xanadu’s Lloydian qc will be very difficult to error-correct too. But if anyone can solve this prickly conundrum, it’s Seth Lloyd, who, according to him, is the original inventor of DWave’s device too.

Addendum:

Wow wee! Seth Lloyd’s invention, an optical computer that runs Tensorflow, is really taking off at MIT. I just came across this news report on a company called LightMatter, funded by some heavyweights like Google Ventures, that proposes to do just that.

https://www.cnbc.com/2019/02/25/alphabet-gv-invests-in-lightmatter-optical-ai-chip-startup.html

Excerpts:


Lightmatter just picked up its first backing from a corporate investor: GV, a venture arm of Google parent company Alphabet.

In 2014, Nick Harris and Darius Bunandar were trying to combine optical technology with quantum computing at the Massachusetts Institute of Technology, where they were doing Ph.D. work in the same research group.

But in 2015, Harris and Bunandar began looking at fields beyond quantum computing, including AI. “Our feeling is that there are a huge number of challenges that remain to be solved” for their quantum approach, Harris said.

“There is a lot of effort that goes into making this kind of device plug and play and making it look a lot like the experience of an Nvidia GPU,” Harris said. The team wants to ensure the chips work with popular AI software such as the Google-backed open-source project TensorFlow.

February 21, 2019

Qubiter can now do Hybrid Quantum-Classical Computation, kind of

Filed under: Uncategorized — rrtucci @ 11:33 am

Habemus papam…kind of. So here is the scoop. Qubiter can now do Hybrid Quantum-Classical Computation…kind of. It is not yet of the most general kind, but we are getting there. “The journey of a thousand miles begins with one step.” (a saying attributed to Chinese philosopher Laozi, 600 BC)

The most general kind, what the Brits would call The Full Monty, would be if Qubiter could
(1) use distributed computing and back-propagation supplied by TensorFlow or PyTorch, and

(2) run a hybrid quantum-classical simulation on a physical hardware backend such as those already available to the public via the cloud, thanks to the companies IBM and Rigetti.

At this point, Qubiter cannot do either (1) or (2). Instead of (1), it currently does undistributed computing executed by the Python function scipy.optimize.minimize. Instead of (2), it uses Qubiter’s own built-in simulator as a backend.

Amazingly, the wonderful open-source software PennyLane by Xanadu already does (1) and (2). So far, they are the only ones that have accomplished this feat. None of the big 3 (Google Cirq, IBM Qiskit, and Rigetti PyQuil) can do (1) yet either, so we are in good company. I am sure that eventually the big 3 will succeed in coaxing their own software stacks to do (1) and (2) too. But probably not for a while, because large companies often suffer from infighting between too many generals, so they tend to move more slowly than small ones. They also almost always shamelessly copy the good ideas of the smaller companies.

I too want to eventually add features (1) and (2) to Qubiter, but, for today, I am happy with what I already have. Here is a Jupyter notebook explaining in more detail what Qubiter can currently do in the area of hybrid quantum-classical computation:

https://github.com/artiste-qb-net/qubiter/blob/master/qubiter/jupyter_notebooks/MeanHamilMinimizer_native_scipy.ipynb

January 11, 2019

Qubiter now has Placeholders for gate angles

Filed under: Uncategorized — rrtucci @ 5:38 am


The word “Placeholder” is used in Qubiter (we are in good company, Tensorflow uses this word in the same way) to mean a variable for which we delay/postpone assigning a numerical value (evaluating it) until a later time. In the case of Qubiter, it is useful to define gates with placeholders standing for angles. One can postpone evaluating those placeholders until one is ready to call the circuit simulator, and then pass the values of the placeholders as an argument to the simulator’s constructor. Placeholders of this type can be useful, for example, with quantum neural nets (QNNs): in some QNN algorithms, the circuit gate structure is fixed but the angles of the gates are varied many times, gradually, trying to lower a cost function each time.
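
To make the pattern concrete, here is a toy sketch of the placeholder idea with made-up names (it is not Qubiter’s actual API; Qubiter’s real implementation lives in its PlaceholderManager class): the circuit is written once with symbolic angle names, and the numbers are substituted only when the simulator is about to run.

```python
import numpy as np

# a circuit description with symbolic angles (the "#..." names are made up here)
circuit = [("ROTY", 0, "#theta1"),   # (gate, target qubit, angle or placeholder)
           ("ROTX", 1, "#theta2"),
           ("ROTY", 0, 0.3)]         # this gate's angle is fixed

def resolve(circuit, var_to_rads):
    """Substitute numerical angles for placeholders at simulation time."""
    return [(g, q, var_to_rads.get(ang, ang)) for (g, q, ang) in circuit]

# each optimization step can re-run the same circuit with new angles
print(resolve(circuit, {"#theta1": np.pi / 4, "#theta2": 0.7}))
```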

This brief blog post is to announce that Qubiter now has such placeholders. Hurray! Placeholders are a nice feature that Google’s qc simulator Cirq and Rigetti’s qc simulator PyQuil already have (they call them parametric or symbolic gates), so it has been irking me for some time that Qubiter didn’t have them too. Got to keep up with the Joneses and the Kardashians, you know.

Qubiter implements placeholders via a class called PlaceholderManager. You can already read an example of placeholder usage in the main() method at the end of that class. I also intend to write a Jupyter notebook, probably this weekend, illustrating their use.

December 16, 2018

Galactic Battle now going on in Quantum Space Between our Google OverLords and Dubai Inc.

Filed under: Uncategorized — rrtucci @ 6:41 pm

Recently, the company IonQ announced that they have already built and are operating an ion trap quantum computer with 160 FULLY CONNECTED high fidelity qubits. The device’s 1-qubit and 2-qubit gate fidelities are reported to be 99.97% and 99%, resp. See IonQ’s press release, and Gizmodo article.

The thing that may have escaped the attention of casual readers because it is rarely emphasized, although it isn’t hidden (see Wikipedia article on IonQ), is that IonQ’s main investor is GV=Google Ventures, which is a wholly owned subsidiary of Alphabet, the umbrella company that owns all Googlish subsidiaries. A tree diagram of all Alphabet companies is quite impressive (and scary). In other words, IonQ is really just a branch of Google in disguise. So Google is betting big on both an ion trap and a Josephson junction quantum computer. Don’t know about you, but in the last five years, I have grown highly dependent on Android (a Google product) with its mostly proprietary Google apps, Google search, Chrome, Gmail, Google Maps, Google Drive, YouTube, Google News, TensorFlow, … I seem to find Google under every rock these days. And they are now aiming to control what I call the ‘Holy Trinity’: cloud, AI and quantum computing.

Meanwhile, in another sector of quantum space, Rigetti Computing is getting ready to attack our Google Overlords. Rigetti started in 2014. At that time, it seemed to me to be a joke company started by a $2.5M seed investment from the controversial billionaire Tim Draper. But since then, Rigetti has been funded to the tune of $120M, according to Crunchbase. So where did this flash flood of money come from? According to Wikipedia, the main Rigetti investors are Vy Capital (> $40M) and Andreessen Horowitz (> $24M). Vy Capital is based in Dubai, United Arab Emirates. It is reasonable to assume that Vy Capital’s extravagant spending habits and penchant for high risk are fueled by super rich Arab investors. So Vy Capital, aka Dubai Inc., is the main investor in Rigetti.

So you see, there is indeed a galactic battle going on between 2 Google quantum clones and Dubai Inc. But it gets better:

Like I said before, at first I considered Rigetti to be somewhat of a joke, but now that so much money has been invested in them, it is much harder for me to be dismissive about them. Once Dubai Inc. invested the $40M in Rigetti, many smaller “lemming” VCs followed, because they realized that Rigetti has now become too big to fail as an investment. And Rigetti/Dubai Inc. have promised some very attractive sweeteners this year: a 128-qubit qc by next year, a cloud service for hybrid quantum-classical AI, and a $1M prize for the first proof of a quantum advantage. My suspicion is that Dubai Inc. is prepping Rigetti, making it attractive, so that Microsoft buys it for a few billion dollars. Microsoft might fall for it, because they won’t have a topological qc for many years, if ever, and their obscure quantum language Q# is flopping badly, as I predicted. This is actually a common practice in Silicon Valley: jack up the valuation of a startup, and then, instead of doing an IPO, sell it to one of the giant monopolies like Microsoft, Apple, Facebook, Google, or Amazon (MAFGA, a portmanteau of MAFIA and MAGA).
