Quantum Bayesian Networks

May 24, 2018

Quantum Computing and a new book, “The Book of Why”, by Judea Pearl and Dana Mackenzie

Filed under: Uncategorized — rrtucci @ 5:23 am

“Leaping the Chasm” (1886) by Ashley Bennett, son of photographer Henry Hamilton Bennett, jumping to “Stand Rock”. See http://www.wisconsinhistory.org


Judea Pearl, UCLA professor and winner of the Turing Award in Computer Science, is a hero to all Bayesian network fans like me. Pearl has written several books on B nets, as you can see on his Amazon page. This blog post is to alert my readers to his most recent book, written in collaboration with Dana Mackenzie and released about a week ago, in mid May 2018, entitled “The Book of Why: The New Science of Cause and Effect”.

To commemorate the release of the new book, I also wrote, besides this blog post, a short comment about it at the Edward Forum. Dustin Tran, the main author of Edward, responded with a comment that cites a very nice paper, less than six months old, by Dustin and Prof. Blei, Dustin’s thesis advisor at Columbia University, about the use of Judea Pearl’s causality ‘do-calculus’ within Edward.

I’ve been interested in the do-calculus for a long time, and have written two arxiv papers on the subject:

  1. Introduction to Judea Pearl’s Do-Calculus, by Robert R. Tucci (Submitted on 26 Apr 2013)
  2. An Information Theoretic Measure of Judea Pearl’s Identifiability and Causal Influence, by Robert R. Tucci (Submitted on 21 Jul 2013)
    This paper is for classical Bayesian networks, but it can easily be generalized to quantum Bayesian networks by replacing the probability distributions in the information measure proposed there with density matrices.

More than a dozen R packages implement the do-calculus, at least partially. They are available at CRAN (the Comprehensive R Archive Network, the main R repository). This 2017 paper contains a nice table of the various R packages dealing with the do-calculus.
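
For readers who haven’t seen the do-calculus in action, the canonical example that these packages automate is backdoor adjustment: if a set of variables Z blocks every backdoor path from X to Y, then the interventional query reduces to purely observational quantities:

```latex
% Backdoor adjustment: if Z satisfies the backdoor criterion
% relative to (X, Y), the causal effect is identifiable and
P(y \mid \mathrm{do}(x)) = \sum_{z} P(y \mid x, z)\, P(z)
```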

It’s also interesting to note that BayesiaLab, a commercial software package that I love and recommend, already implements some of Pearl’s do-calculus. (full disclosure: the B net company that I work at, artiste-qb.net, has no business connections with BayesiaLab.)

By the way, artiste-qb.net provides a nice cloud service that allows you to run all these open-source do-calculus R packages in your browser, without any installation hassles. How, you ask? And if you didn’t ask, I’m going to tell you anyway.

***Beep, Beep, Commercial Alert***

artiste-qb.net is a multilingual (R, Python, Java, C++, English, German, Spanish, Chinese, Italian, French, you name it, we speak it) quantum open source software company.

We offer an image on AWS (the Amazon cloud service) called BayesForge.com.

BayesForge.com comes fully loaded with the Python distribution Anaconda, all of R, etc.

BayesForge comes with most major Artificial Intelligence/Bayesian Networks open-source packages installed, both classical ones (e.g., TensorFlow, Edward, PyMC, bnlearn) and quantum ones (e.g., IBM Qiskit, D-Wave tools, Rigetti and Google tools, our own Quantum Fog, Quantum Edward, Qubiter).

BayesForge allows you to run Jupyter notebooks in Python, R, Octave (an open-source MATLAB clone), and Bash. You can also combine Python and R within one notebook using rpy2’s R magic, as sketched below.
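
For instance, here is a minimal two-cell sketch of the Python/R round trip, assuming rpy2 is installed (as it is on the BayesForge image):

```python
# notebook cell 1 (Python): enable rpy2's R cell magic, make some data
%load_ext rpy2.ipython
import numpy as np
x = np.random.normal(size=100)
```

```python
%%R -i x -o m
# notebook cell 2 (R): -i imports x from Python, -o exports m back
m <- mean(x)
```

Back in Python, `m` is then available as an ordinary numpy array.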

We have succeeded in dockerizing the BayesForge image and will very soon be offering it on other cloud services besides AWS, including a non-AWS cloud service in China, where AWS is so slow as to be unusable. One of our co-founders, Dr. Tao Yin, lives in Shenzhen, China, and is in charge of our China branch.


May 17, 2018

Help us program the NYC Matrix

Filed under: Uncategorized — rrtucci @ 1:59 am

Our company Artiste-qb.net does bleeding-edge research into quantum computing. We are so advanced that our researchers think of Palantir and the NSA as a bunch of dotards and little Linux rocketmen.

One of our main projects is to program The NYC Matrix. We are almost done, thank you. In this picture, one of our elite programmers, Simón Bermúdez, is experimenting with quantum cloning of himself. Just like in The Matrix movies, but the real thing. Simón gets quite a lot of work done for us by slipping into this multiverse mode.

Google, IBM, Rigetti, DWave and Alibaba, you have been checkmated. artiste-qb.net is the only quantum computing firm that has mastered quantum cloning of employees.

Simón was born and raised in Venezuela, but now he lives in Toronto, Canada. He used to work at the illustrious Creative Destruction Lab (part of the U of Toronto) (hence the shirt in the photo). Now he works for artiste-qb.net. Thanks to Simón for letting me use this TOP SECRET photo. I lowered the resolution of the photo, the original one is even better.

May 9, 2018

BBVI in quantum computing, classical vs quantum supervised learning (classical vs quantum ELBO)

Filed under: Uncategorized — rrtucci @ 2:18 am

Version 1 of the computer program “Quantum Edward” that I released a few days ago uses BBVI (Black Box Variational Inference, see Ref. 1 below) to train a qc by maximizing, with respect to a parameter lambda, a “classical ELBO” (an ELBO defined in terms of classical probability distributions). I call that “classical supervised learning” by a qc (quantum computer).

But one can easily come up with a BBVI that trains a qc by maximizing, with respect to a parameter lambda, a “quantum ELBO” (one defined by replacing the classical probability distributions of the classical ELBO with density matrices, and the sums with traces). I call this second strategy “quantum supervised learning” by a qc.
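
In symbols (my notation, just a sketch): for data x, hidden variables z, model p, and variational distribution q_lambda, the two objectives look like this, with the quantum version obtained by the distribution-to-density-matrix, sum-to-trace replacement just described:

```latex
% Classical ELBO, maximized over lambda:
\mathrm{ELBO}(\lambda) = \sum_z q_\lambda(z)
    \left[\ln p(x, z) - \ln q_\lambda(z)\right]

% One plausible quantum analogue, with density matrices
% rho_lambda (variational) and sigma_x (model):
\mathrm{ELBO}_Q(\lambda) = \mathrm{Tr}\left[\rho_\lambda
    \left(\ln \sigma_x - \ln \rho_\lambda\right)\right]
```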

One more distinction. In Version 1 of Quantum Edward, we do C. supervised learning by a simulated (on a classical computer, analytical) qc. More generally, one could do (C. or Q.) supervised learning by a (real or simulated) qc.

C. or Q. supervised learning by a simulated qc is immune to the quantum noise that plagues current qc’s, which have almost no quantum error correction. So we should definitely explore that type of learning today.
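
For concreteness, here is a toy sketch of one BBVI ascent step (the score-function estimator of Ref. 1), assuming a factorized Gaussian variational family and a user-supplied log-joint; this is an illustration, not Quantum Edward’s actual code:

```python
import numpy as np

def bbvi_step(mu, log_sigma, log_joint, n_samples=50, lr=0.01,
              rng=np.random.default_rng()):
    """One BBVI ascent step on the ELBO for a factorized Gaussian q.

    log_joint(z) returns ln p(x, z) for the (fixed) training data x.
    Uses the score-function estimator of Ranganath et al. (Ref. 1):
      grad ELBO ~ (1/S) sum_s [grad ln q(z_s)] [ln p(x, z_s) - ln q(z_s)]
    """
    sigma = np.exp(log_sigma)
    grad_mu = np.zeros_like(mu)
    grad_ls = np.zeros_like(log_sigma)
    for _ in range(n_samples):
        z = mu + sigma * rng.standard_normal(mu.shape)      # z ~ q_lambda
        log_q = -0.5 * np.sum(((z - mu) / sigma) ** 2
                              + 2.0 * log_sigma + np.log(2.0 * np.pi))
        weight = log_joint(z) - log_q                       # ELBO integrand
        grad_mu += (z - mu) / sigma ** 2 * weight / n_samples
        grad_ls += (((z - mu) / sigma) ** 2 - 1.0) * weight / n_samples
    return mu + lr * grad_mu, log_sigma + lr * grad_ls
```

In practice one would add Robbins-Monro step sizes and a control variate to tame the estimator’s variance, as Ref. 1 discusses.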

It will be interesting to compare classification performance across various models (either layered or DAG models, with varying amounts of entanglement) for

  1. C. supervised learning by a classical computer (e.g., for Classical Neural Net layered models or for Bayesian network DAG models)
  2. (C. or Q.) supervised learning by (simulated or real) qc (e.g., for Quantum Neural Network models or for Quantum Bayesian Network models)

Nerd Nirvana will only be achieved once we can do Q. Supervised Learning by an error corrected real qc. 🙂

References:
1. R. Ranganath, S. Gerrish, D. M. Blei, “Black Box Variational
Inference”, https://arxiv.org/abs/1401.0118

May 5, 2018

Quantum Edward, First Commit

Filed under: Uncategorized — rrtucci @ 5:43 am

Today, I uploaded to GitHub the first commit of my “Quantum Edward” software. This blog is, among other things, a scrapbook of my quantum computing adventures. In this blog post, I want to save a copy of the first README of Quantum Edward. The software is exploratory and will therefore change a lot in the future, and its README will change to mirror the changes in the software. So this first README will have sentimental and comic value for me in years to come. Here goes:

# Quantum Edward

Quantum Edward at this point is just a small library of Python tools for
doing classical supervised learning on Quantum Neural Networks (QNNs).

An analytical model of the QNN is entered as input into QEdward and the training
is done on a classical computer, using training data already available (e.g.,
MNIST), and using the famous BBVI (Black Box Variational Inference) method
described in Reference 1 below.

The input analytical model of the QNN is given as a sequence of gate
operations for a gate model quantum computer. The hidden variables are
angles by which the qubits are rotated. The observed variables are the input
and output of the quantum circuit. Since it is already expressed in the qc’s
native language, once the QNN has been trained using QEdward, it can be
run immediately on a physical gate model qc such as the ones that IBM and
Google have already built. By running the QNN on a qc and doing
classification with it, we can compare the performance in classification
tasks of QNNs and classical artificial neural nets (ANNs).
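
As a toy illustration of this setup (not the NbTrols or NoNbTrols models
themselves, just a minimal stand-in), here the hidden variables are Ry
rotation angles and the observed variable is the measured output of a
small rotation circuit:

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the y axis."""
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def apply_cnot(state, control, target, n):
    """Apply CNOT(control, target) to an n-qubit state vector
    (qubit 0 is the most significant bit)."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:        # control bit is set
            j = i ^ (1 << (n - 1 - target))     # flip the target bit
            new[j] = state[i]
    return new

def qnn_output_probs(angles, n_qubits=2):
    """Toy 'QNN': a layer of Ry rotations on |0...0>, then a CNOT chain.

    The angles (hidden variables) parameterize the gates; the returned
    Born-rule probabilities model the observed measurement outcomes.
    """
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0                               # start in |0...0>
    layer = np.array([[1.0]])
    for q in range(n_qubits):                    # one rotation per qubit
        layer = np.kron(layer, ry(angles[q]))
    state = layer @ state
    for q in range(n_qubits - 1):                # entangle neighbors
        state = apply_cnot(state, control=q, target=q + 1, n=n_qubits)
    return np.abs(state) ** 2

print(qnn_output_probs(np.array([0.3, 1.1])))   # probs for 00, 01, 10, 11
```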

Other workers have proposed training a QNN on an actual physical qc. But
current qc’s are still fairly noisy. Training an analytical QNN on a
classical computer might yield better results than training it on a qc
because in the first strategy, the qc’s quantum noise does not degrade the
training.

The BBVI method is a mainstay of the “Edward” software library. Edward uses
Google’s TensorFlow lib to implement various inference methods (Monte Carlo
and Variational ones) for Classical Bayesian Networks and for Hierarchical
Models. H.M.s (pioneered by Andrew Gelman) are a subset of C.B. nets
(pioneered by Judea Pearl). Edward is now officially a part of TensorFlow,
and the original author of Edward, Dustin Tran, now works for Google. Before
Edward came along, TensorFlow could only do networks with deterministic
nodes. With the addition of Edward, TensorFlow now can do nets with both
deterministic and non-deterministic (probabilistic) nodes.
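
For readers who haven’t used Edward, here is roughly what a probabilistic
node and a variational fit look like in the Edward 1.x API (a minimal
sketch from memory, not production code):

```python
import numpy as np
import tensorflow as tf
import edward as ed
from edward.models import Normal

# model: y ~ Normal(w * x, 1), with a Normal prior on the
# probabilistic node w
x = tf.constant([0.0, 1.0, 2.0, 3.0])
w = Normal(loc=0.0, scale=1.0)
y = Normal(loc=w * x, scale=1.0)

# variational family q(w); TensorFlow learns its parameters
qw = Normal(loc=tf.Variable(0.0),
            scale=tf.nn.softplus(tf.Variable(1.0)))

# maximize the ELBO to fit q(w) to the posterior
inference = ed.KLqp({w: qw}, data={y: np.array([0.1, 1.1, 1.9, 3.2],
                                               dtype=np.float32)})
inference.run(n_iter=500)
```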

This first baby-step lib does not do distributed computing. The hope is that
it can be used as a kindergarten to learn about these techniques, and that
then the lessons learned can be used to write a library that does the same
thing, classical supervised learning on QNNs, but in a distributed fashion
using Edward/TensorFlow on the cloud.

The first version of Quantum Edward analyzes two QNN models called NbTrols
and NoNbTrols. These two models were chosen because they are interesting to
the author, but the author attempted to make the library general enough so
that it can accommodate other, similar models in the future. The allowable
models are referred to as QNNs because they consist of ‘layers’,
as do classical ANNs (Artificial Neural Nets). TensorFlow can analyze
layered models (e.g., ANN) or more general DAG (directed acyclic graph)
models (e.g., Bayesian networks).

References
———-

1. R. Ranganath, S. Gerrish, D. M. Blei, “Black Box Variational
Inference”, https://arxiv.org/abs/1401.0118

2. https://en.wikipedia.org/wiki/Stochastic_approximation
discusses Robbins-Monro conditions

3. https://github.com/keyonvafa/logistic-reg-bbvi-blog/blob/master/log_reg_bbvi.py

4. http://edwardlib.org/

5. https://discourse.edwardlib.org/
