Quantum Bayesian Networks

August 28, 2016

Toronto is Hosting a Galactic Jamboree: Quantum Computing from a Business Perspective

Filed under: Uncategorized — rrtucci @ 7:28 am

I’m proud to announce that the Toronto quantum computing meetup group will meet again on Sept 7 at a room that belongs to the Creative Destruction Lab, which is part of the Rotman Business School of the Univ. of Toronto. This meetup is being run by Henning Dekant (Henning and I recently founded a quantum computing software startup called artiste-qb.net).

We hope that this Sept 7 meeting, because it will be held at a business school venue, will encourage dialogue between technical and business people, the pocket-protector and suit-and-tie wearers, with a common interest in quantum computing. And of course, rich investors, the Gucci-shoe wearers, are welcome to this humble gathering too. If you are an entrepreneur who wants to start your own quantum computing company, or an investor mulling over investing in one, this meeting is a once-in-a-lifetime opportunity for you.

Let me note that Silicon Valley also has a Quantum Computing meetup, but theirs is of inferior quality compared to the Toronto one. Their group probably loves Uber, the Silicon Valley based company with the worst customer support in the history of mankind, whereas Toronto has been at the forefront of curbing Uber. So you can see why the Toronto Quantum Computing meetup is far superior to the Silicon Valley one. In 200 years, George Bailey will become George Jetson, and he'll be flying down the road observing billboards such as this one.



August 14, 2016

Quantum Fog on the verge of becoming Sentient: it can now distinguish between (the words) “Good” and “Evil”

Filed under: Uncategorized — rrtucci @ 7:29 pm

You have to start somewhere. First those 2 words, then … the Oxford Dictionary?

I am pleased to announce that http://www.artiste-qb.net and I have made a major addition to Quantum Fog. QFog can now learn classical (and quantum) Bayesian networks from data fairly well by today's standards.

As far as I am concerned, the gold standard for software that learns bnets from data is bnlearn, by Marco Scutari. To show my readers how the current Quantum Fog and the current bnlearn compare, I took a snapshot of a portion of the home page of http://www.bnlearn.com, the portion that enumerates the various algorithms that bnlearn can do, and I put a red check-mark next to those that QFog can now do too. As you can see, QFog is still behind bnlearn, but not by too much.
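To give a flavor of what "learning a bnet from data" means, here is a minimal sketch of the score-based idea behind algorithms in that checklist. This is an assumed toy illustration, not Quantum Fog's or bnlearn's actual API: for two discrete columns in a Pandas DataFrame, it compares the BIC score of a model with no edge against a model with an edge a → b, and structure learning keeps whichever scores higher.

```python
import numpy as np
import pandas as pd

def bic_edge_vs_no_edge(df, a, b):
    """Return (BIC with no edge, BIC with edge a -> b) for discrete columns."""
    n = len(df)

    # Model 1: a and b independent; log-likelihood is a sum of marginal terms.
    ll_indep = 0.0
    for col in (a, b):
        counts = df[col].value_counts()
        ll_indep += (counts * np.log(counts / n)).sum()
    k_indep = (df[a].nunique() - 1) + (df[b].nunique() - 1)

    # Model 2: b depends on a; use conditional counts n_{x,y} / n_x.
    joint = df.groupby([a, b]).size()
    marg_a = df[a].value_counts()
    ll_edge = sum(c * np.log(c / marg_a[x]) for (x, _), c in joint.items())
    ll_edge += (marg_a * np.log(marg_a / n)).sum()
    k_edge = (df[a].nunique() - 1) + df[a].nunique() * (df[b].nunique() - 1)

    penalty = 0.5 * np.log(n)  # BIC penalizes each free parameter
    return ll_indep - penalty * k_indep, ll_edge - penalty * k_edge

# On strongly dependent data, the edge model should win despite its penalty.
df = pd.DataFrame({"a": [0, 1] * 50})
df["b"] = df["a"]  # b copies a exactly
no_edge, edge = bic_edge_vs_no_edge(df, "a", "b")
print(edge > no_edge)
```

Real structure-learning code searches over many candidate DAGs with scores like this (or uses conditional-independence tests instead), but the scoring step is the same in spirit.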


So why am I trying to replicate bnlearn? Isn't that silly? Because bnlearn is written in R, whereas I want to write something in Python, using Pandas. Furthermore, I want to write a software library that lets you analyze both classical and quantum bnets alongside each other.

Pandas is a Python library that replicates many of the statistical capabilities of R. R is super popular among statisticians, but Pandas, less than a decade old, has also received many plaudits from that community. The original author of Pandas, Wes McKinney, has written a wonderful book about Pandas, numpy and, more generally, about doing data science with Python.
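As a small illustration of that R-to-Pandas correspondence (my own example, not from McKinney's book): Pandas covers everyday statistical chores like R's `table()`, e.g. building a normalized frequency table from categorical data, which is exactly the kind of count one needs to estimate a bnet's conditional probability tables.

```python
import pandas as pd

# A tiny data frame of coin-flip observations, analogous to an R data.frame.
df = pd.DataFrame({
    "coin":    ["A", "A", "A", "B", "B", "B"],
    "outcome": ["H", "T", "H", "T", "T", "H"],
})

# Cross-tabulation, much like R's table(); normalize each row to get
# empirical conditional probabilities P(outcome | coin).
probs = pd.crosstab(df["coin"], df["outcome"], normalize="index")
print(probs)
```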

There are very close ties between the R and Python communities, and it's fairly easy to call R subroutines from Python and vice versa. Pandas was Wes McKinney's love poem to R. In the future, http://www.artiste-qb.net and I are planning to use bnlearn subroutines often. At first, I'm sure that most bnlearn subroutines will perform better than those of Quantum Fog, and we can improve QFog a lot by comparing its performance, architecture, and output with those of bnlearn.

There are certain aspects of bnlearn that we haven't replicated yet. For example, bnlearn handles continuous (albeit only Gaussian) bnets, whereas we don't yet. In the quantum case, Gaussian continuous distributions would entail coherent and squeezed coherent states. Let the LIGO people worry about that.

On the other hand, at this point, QFog's inference capabilities are better than those of bnlearn. QFog can do the message-passing join tree algorithm, and bnlearn can't. (At present, bnlearn can do inference only via Monte Carlo sampling.)
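To see why exact inference matters, here is a toy two-node bnet A → B, worked by direct summation (an assumed example, not QFog's API). The join tree algorithm returns exactly these posteriors, with no sampling noise, by organizing such sums efficiently on large networks; Monte Carlo only approximates them.

```python
# P(A) and P(B | A) for a two-node bnet A -> B.
p_a = {0: 0.6, 1: 0.4}
p_b_given_a = {0: {0: 0.9, 1: 0.1},   # P(B | A=0)
               1: {0: 0.2, 1: 0.8}}   # P(B | A=1)

def posterior_a_given_b(b):
    """P(A | B=b) via Bayes' rule: exact, no sampling error."""
    joint = {a: p_a[a] * p_b_given_a[a][b] for a in p_a}
    z = sum(joint.values())           # P(B=b), the normalizing constant
    return {a: v / z for a, v in joint.items()}

print(posterior_a_given_b(1))
```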

And then there is the Judea Pearl do-calculus, both for classical and quantum bnets. Neither bnlearn nor QFog can do that yet, but some day soon… BayesiaLab is way ahead of everyone else in that regard. They already have a beautiful graphical implementation of the Judea Pearl do-calculus stuff for classical bnets.
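For readers new to the do-calculus, here is a hedged toy calculation (my own illustration, not BayesiaLab's or QFog's implementation) of its simplest consequence, the backdoor adjustment formula. With a confounder Z pointing at both X and Y, the interventional probability is P(Y | do(X)) = Σ_z P(Y | X, Z=z) P(Z=z), which differs from the observational P(Y | X) because the latter weights by P(Z | X) instead.

```python
# Confounded bnet: Z -> X and Z -> Y, plus X -> Y.  All variables binary.
p_z = {0: 0.5, 1: 0.5}
p_x_given_z = {0: 0.2, 1: 0.8}               # P(X=1 | Z=z)
p_y_given_xz = {(0, 0): 0.1, (0, 1): 0.5,    # P(Y=1 | X=x, Z=z)
                (1, 0): 0.3, (1, 1): 0.7}

# Interventional: backdoor adjustment weights by the marginal P(Z).
p_do = sum(p_y_given_xz[(1, z)] * p_z[z] for z in p_z)

# Observational P(Y=1 | X=1) weights by P(Z | X=1) instead.
pz_given_x1 = {z: p_x_given_z[z] * p_z[z] for z in p_z}
norm = sum(pz_given_x1.values())
p_obs = sum(p_y_given_xz[(1, z)] * pz_given_x1[z] / norm for z in p_z)

print(p_do, p_obs)  # they differ because Z confounds X and Y
```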

Added later: Judea Pearl do-calculus has also been implemented in the following R package. Thanks to M.S. for telling me about this:
