Quantum Bayesian Networks

August 14, 2016

Quantum Fog on the verge of becoming Sentient: it can now distinguish between (the words) “Good” and “Evil”

Filed under: Uncategorized — rrtucci @ 7:29 pm

You have to start somewhere. First those 2 words, then … the Oxford Dictionary?

I am pleased to announce that http://www.artiste-qb.net and I have made a major addition to Quantum Fog: QFog can now learn classical (and quantum) Bayesian networks from data fairly well by today’s standards.

As far as I am concerned, the gold standard for software that learns bnets from data is bnlearn, by Marco Scutari. To show my readers how the current Quantum Fog and the current bnlearn compare, I took a snapshot of the portion of the home page of http://www.bnlearn.com that enumerates the various algorithms bnlearn implements, and I put a red check-mark next to those that QFog now implements too. As you can see, QFog is still behind bnlearn, but not by too much.

[Image: bnlearn’s algorithm list, with red check-marks next to the algorithms QFog now implements]

So why am I trying to replicate bnlearn? Isn’t that silly? Because bnlearn is in R, whereas I want to write something in Python, using Pandas. Furthermore, I want to write a software library that lets you analyze BOTH classical and quantum bnets alongside each other.

Pandas is a Python library that replicates many of the statistical capabilities of R. R is super popular among statisticians, but Pandas, less than a decade old, has also received many plaudits from that community. The original author of Pandas, Wes McKinney, has written a wonderful book about Pandas, NumPy and, more generally, doing data science with Python.
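To make the Pandas connection concrete, here is a minimal sketch (the node names and data are made up for illustration) of the kind of discrete dataset a bnet structure-learning routine consumes: a DataFrame whose columns play the role of bnet nodes and whose rows are observed samples.

```python
import pandas as pd

# Toy discrete dataset; each column plays the role of a bnet node,
# each row is one observed sample. Names and values are hypothetical.
df = pd.DataFrame({
    "Cloudy":    ["yes", "yes", "no",  "no",  "yes"],
    "Sprinkler": ["off", "off", "on",  "on",  "off"],
    "Rain":      ["yes", "no",  "no",  "no",  "yes"],
    "WetGrass":  ["yes", "no",  "yes", "yes", "yes"],
})

# Empirical conditional frequencies, e.g. P(WetGrass | Rain); such
# counts are the raw material of structure and parameter learning.
print(df.groupby("Rain")["WetGrass"].value_counts(normalize=True))
```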

There are very close ties between the R and Python communities, and it’s fairly easy to call R subroutines from Python and vice versa. Pandas was Wes McKinney’s love poem to R. In the future, http://www.artiste-qb.net and I are planning to use bnlearn subroutines often. At first, I’m sure that most bnlearn subroutines will perform better than those of Quantum Fog, and that we can improve QFog a lot by comparing its performance, architecture, and output with bnlearn’s.
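For instance, the rpy2 package lets Python drive an R session directly. Here is a minimal sketch of calling a bnlearn structure-learning routine from Python; it assumes R, bnlearn and rpy2 are all installed, and it glosses over data conversion details.

```python
import rpy2.robjects as ro
from rpy2.robjects.packages import importr

# Import the R bnlearn package into Python.
bnlearn = importr("bnlearn")

# Load one of bnlearn's bundled example datasets and run
# hill-climbing (score-based) structure learning on it.
ro.r("data(learning.test)")
dag = bnlearn.hc(ro.r("learning.test"))
print(dag)
```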

There are certain aspects of bnlearn that we haven’t replicated yet. For example, bnlearn handles continuous (Gaussian only) bnets, whereas we don’t yet. In the quantum case, Gaussian continuous distributions would entail coherent and squeezed coherent states. Let the LIGO people worry about that.
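In the classical case, by the way, a Gaussian bnet is simple to state: each node is a linear combination of its parents plus Gaussian noise. Here is a minimal sketch of sampling one (the structure A → B → C and the coefficients are made up for illustration):

```python
import numpy as np

np.random.seed(0)
n = 10000

# Linear Gaussian bnet with structure A -> B -> C:
# each node = linear function of its parents + Gaussian noise.
A = np.random.normal(0.0, 1.0, n)
B = 2.0 * A + np.random.normal(0.0, 0.5, n)
C = -1.0 * B + np.random.normal(0.0, 0.5, n)

# Learning the parameters reduces to regressing each node on its
# parents; here we recover the A -> B coefficient (should be ~2.0).
print(np.polyfit(A, B, 1)[0])
```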

On the other hand, at this point, QFog’s inference capabilities are better than bnlearn’s. QFog can do the message passing join tree algorithm and bnlearn can’t. (At present, bnlearn can do inference only via Monte Carlo.)
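To see what Monte Carlo inference amounts to, here is a toy rejection-sampling sketch on a hypothetical two-node bnet Rain → WetGrass (the probabilities are made up): forward-sample the network and keep only the samples consistent with the evidence.

```python
import numpy as np

np.random.seed(0)

def sample_bnet():
    """Forward-sample the toy bnet Rain -> WetGrass (made-up CPTs)."""
    rain = np.random.rand() < 0.2
    wet = np.random.rand() < (0.9 if rain else 0.1)
    return rain, wet

# Rejection sampling: estimate P(Rain=yes | WetGrass=yes) by
# discarding every sample that contradicts the evidence.
kept = hits = 0
for _ in range(100000):
    rain, wet = sample_bnet()
    if wet:              # evidence: WetGrass = yes
        kept += 1
        hits += rain
print(hits / kept)       # exact answer is 0.18/0.26, about 0.692
```

The join tree algorithm gets the exact answer without throwing any samples away, which is why having it matters.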

And then there is the Judea Pearl do-calculus, for both classical and quantum bnets. Neither bnlearn nor QFog can do that yet, but some day soon… BayesiaLab is way ahead of everyone else in that regard. They already have a beautiful graphical implementation of the Judea Pearl do-calculus stuff for classical bnets.

Added later: Judea Pearl do-calculus has also been implemented in the following R package. Thanks to M.S. for telling me about this:
https://cran.r-project.org/web/packages/pcalg/index.html
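For readers who haven’t seen do-calculus in action, its workhorse is the back-door adjustment formula: if Z blocks all back-door paths from X to Y, then P(y|do(x)) = Σ_z P(y|x,z) P(z). Here is a minimal Pandas sketch of that formula, computed from empirical frequencies (the function and column names are hypothetical):

```python
import pandas as pd

def backdoor_adjust(df, x_col, x_val, y_col, y_val, z_col):
    """Estimate P(y | do(x)) by back-door adjustment, assuming
    z_col blocks every back-door path from x_col to y_col."""
    total = 0.0
    for z_val, p_z in df[z_col].value_counts(normalize=True).items():
        stratum = df[(df[x_col] == x_val) & (df[z_col] == z_val)]
        if len(stratum) == 0:
            continue  # empty stratum; crude handling for a sketch
        total += (stratum[y_col] == y_val).mean() * p_z
    return total
```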

1 Comment »

  1. That particular kind of knowledge has often been accused of instantiating the fall of mankind… you could as well have tried a more Zen-flavoured one

    http://www.saybrook.edu/newexistentialists/posts/11-14-11/

    Comment by Kuhulcan — August 15, 2016 @ 8:07 am

