Quantum Bayesian Networks

August 15, 2017

Resistance is Futile: Jupyter Borg Threatens to Assimilate Quantum Computing Universe

Filed under: Uncategorized — rrtucci @ 5:00 pm

A week ago, IBM announced at its Quantum Experience user group that it had uploaded to GitHub a large collection of Jupyter notebooks exhibiting the use of its gate model quantum computer (previously 5 qubits, currently 16 qubits). I consider this an excellent addition to the quantum open source and free Jupyter notebook ecosystem. I’ve advocated for quantum open source and Jupyter notebooks many times before in this blog, so it’s a pleasure for me to echo their announcement.

Pow! Right in the kisser of Microsoft’s Liqui|> software. Liqui|> is closed source software.

Google has announced that it will deliver by year’s end a 49 qubit gate model qc with accompanying open source software and cloud service. The jupyter ball is now in your court, Google.

Artiste-qb.net, the company that I work for, already provides a large and ever growing library of jupyter notebooks for both of its quantum open source offerings, Qubiter and Quantum Fog.

Rigetti’s PyQuil and ProjectQ are two other gate model qc simulators analogous to IBM quantum experience. So far these two have very few jupyter notebooks. Wimps! Laggards! Let them eat cake!


Borg Cake


Jupyter Cake


September 20, 2016

In Love With Jupyter Notebooks, Post-Processing Your Lab Notebook

Filed under: Uncategorized — rrtucci @ 6:16 pm

I am pleased to announce on behalf of artiste-qb.net that our open source, BSD-licensed programs Qubiter and Quantum Fog now have some Jupyter Notebooks (JN’s), the first of hopefully many JN’s to come. So far, Qubiter has 2 notebooks, explaining teleportation and the IBM Quantum Experience, whereas Quantum Fog has a notebook testing some ideas on how best to plot a quantum density matrix.

The way I see it, JN’s represent a method of using software that seems better suited for scientific investigations than the older method of “GUI (graphical user interface) rich” software.

Typically, GUI-rich programs allow you to save some files with the fruits of your labor, but there are often several of those files, perhaps written in different formats, some human-readable text formats and some proprietary non-human-readable ones. A JN, on the other hand, merges all those files into a single one that is stored in an open, very common, multimedia, browser-readable format called JSON. The JN also records the commands that led to each of the files being merged, and it allows you to insert rich-text comments between those files. All this makes JN’s, in my opinion, a much more unified, clear and complete way of documenting your thought process, both for yourself and for others who might be interested in following your work.
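Concretely, the JSON that a JN stores can be sketched in a few lines. The field names below follow my reading of the nbformat v4 layout and should be checked against the official nbformat docs; this is only an illustrative sketch, built with Python’s standard json module:

```python
import json

# A minimal sketch of a Jupyter notebook file, which is plain JSON.
# Field names follow my understanding of the nbformat v4 layout;
# consult the nbformat documentation for the authoritative schema.
notebook = {
    "nbformat": 4,
    "nbformat_minor": 2,
    "metadata": {"kernelspec": {"name": "python3", "display_name": "Python 3"}},
    "cells": [
        # a rich-text comment cell...
        {"cell_type": "markdown", "metadata": {},
         "source": ["# My experiment\n", "Rich-text commentary lives here."]},
        # ...followed by a code cell that records both the command and its output
        {"cell_type": "code", "metadata": {}, "execution_count": 1,
         "source": ["print(2 + 2)"],
         "outputs": [{"output_type": "stream", "name": "stdout", "text": ["4\n"]}]},
    ],
}

text = json.dumps(notebook, indent=1)   # roughly what sits on disk in a .ipynb
round_trip = json.loads(text)           # any JSON-aware tool can read it back
print(round_trip["cells"][0]["cell_type"])  # -> markdown
```

The point is that commands, outputs and commentary all live in one open, machine-readable file, rather than being scattered across several files in mixed formats.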

Consider the lab or work notebooks of famous scientists (DaVinci, Darwin, Newton, Feynman, …). I for one find those brain storming and data recording documents endlessly fascinating and hope they continue to be written on paper and by hand till the end of humanity, but even those historic documents would have benefitted from some post-processing using the full panoply of modern computer tools now available to us. Imagine a Leonardo or a Darwin or a Feynman notebook with simulations and some plots and statistical analysis. Raw data can be post-processed using statistical packages. Thorny equations can be post-processed too, with symbolic manipulation programs, numeric algorithms programs and plotting programs. Such post-processing is what JN’s allow us to do.

The idea of writing software for creating such notebooks is not new. Although probably not the first software to use them, Wolfram’s Mathematica did much to popularize them. Even if you have never heard of JN’s, you probably have encountered Mathematica notebooks by now, wonderful multimedia files that can contain and execute Mathematica code, plots, animations, text with Latex equations embedded in it, etc. JN apps do all of that too, but they are open source under BSD license. They are much more adaptable to other platforms. They also rely more on browser and internet software resources (HTML, JavaScript, MathJax for LaTeX rendering, JSON format…) so they are ideally suited for an application running on the cloud, although they can also be run autonomously on a single PC.

JN’s were originally built as an app that ran on top of IPython, a command shell for Python, but the app has been carefully written so that it can be easily assimilated by other computer languages. 30 to 40 computer languages already have JN’s, including many languages that are not interpreted languages. Interpreted languages are languages like Python and Mathematica that are designed to run one line at a time.

Home of project Jupyter:

The original programmer of IPython and JN is Fernando Pérez. Here is a blog post by him describing the history of JN’s.

April 29, 2017

Miss Quantum Computing, may I introduce to you Miss Bayesian Hierarchical Models and Miss MCMC?

Filed under: Uncategorized — rrtucci @ 5:49 pm

Warning: Intense talk about computer software ahead. If you are a theoretical computer scientist, you better stop reading this now. Your weak constitution probably can’t take it.

When you enter the nerd paradise and secret garden that is Bayesforge.com (a free service on the Amazon cloud), you will see one folder named “Classical” and another named “Quantum”. Here is a screenshot of this, taken from Henning Dekant’s excellent post on LinkedIn.

The “Quantum” folder contains some major open source quantum computing programs: Quantum Fog, Qubiter, IBM-QisKit (aka kiss-kit), QuTiP, DWave, ProjectQ, Rigetti.

The “Classical” folder contains some major Bayesian analysis open source programs: Marco Scutari’s bnlearn (R), Kevin Murphy’s BNT (Octave/matlab), OpenPNL (C++/matlab), PyMC, PyStan.

The idea is to promote cross fertilization between “Quantum” and “Classical” Bayesian statisticians.

Today I want to talk mostly about PyMC and PyStan. PyMC and PyStan deal with “Hierarchical Models” (Hmods). The other programs in the “Classical” folder deal with “Bayesian Networks”(Bnets).

Bnets and Hmods are almost the same thing. The community of people working on Bnets has Judea Pearl as one of its distinguished leaders. The community of people working on Hmods has Andrew Gelman as one of its distinguished leaders. You might know Gelman (Prof. at Columbia U.) from his great blog “Statistical Modeling, Causal Inference, and Social Science” or from one of his many books.

Both PyStan and PyMC do MCMC (Markov Chain Monte Carlo) for Hmods. They are sort of competitors but also complementary.

PyStan (its GitHub repo here) is a Python wrapper of a computer program written in C++ called Stan. According to Wikipedia, “Stan is named in honour of Stanislaw Ulam, pioneer of the Monte Carlo method.” Prof. Gelman is one of the fathers of Stan (I mean the program, of course).

PyMC comes in 2 incompatible versions: 2.X and 3.X. Version 3 is more advanced and intends to replace Ver 2. PyMC2’s best sampler is a Metropolis-Hastings (MH) sampler. PyMC3 contains an MH sampler, but it also contains the “No-U-Turn Sampler” (NUTS), which is supposed to be much faster than MH for large networks. Currently, BayesForge contains only PyMC2, but the next version will contain both PyMC2 and PyMC3. As an added bonus, PyMC3 comes with Theano, one of the leading deep neural network frameworks.
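For readers who have never peeked inside these packages, the heart of an MH sampler fits in a few lines of plain Python. This toy version targets a standard normal distribution; it is only a sketch of the idea, not how PyMC or Stan actually implement it:

```python
import math
import random

# Toy Metropolis-Hastings sampler in pure Python, targeting N(0,1).
# PyMC and Stan do vastly more, but the core MCMC loop looks like this.

def log_target(x):
    return -0.5 * x * x  # log-density of N(0,1), up to an additive constant

def metropolis_hastings(n_samples, step=1.0, seed=0):
    rng = random.Random(seed)
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)              # symmetric proposal
        log_ratio = log_target(proposal) - log_target(x)
        if rng.random() < math.exp(min(0.0, log_ratio)):  # accept/reject step
            x = proposal
        samples.append(x)   # note: the old x is recorded again on rejection
    return samples

samples = metropolis_hastings(5000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# for a well-mixed chain, mean should be near 0 and var near 1
```

NUTS earns its speed-up by replacing the blind random-walk proposal above with gradient-guided trajectories, which matters enormously once the model has many parameters.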

Check out this really cool course:

Sta-663 “Statistical Programming” , available at GitHub, taught at Duke U. by Prof. Chi Wei Cliburn Chan.

This wonderful course has some nice Jupyter notebooks illustrating the use of PyMC2, PyMC3 and PyStan. Plus it contains discussions of many other statistical programming topics. I love it. It has a similar philosophy to BayesForge, namely to do statistical programming with Jupyter notebooks, because they are great for communicating your ideas to others and allow you to combine seamlessly various languages like Python, R, Octave, etc.

April 8, 2017

Welcome to BayesForge, a free service on the Amazon cloud

Filed under: Uncategorized — rrtucci @ 7:08 am

Our company artiste-qb.net is proud to announce BayesForge.com (BF), our new service on AWS (Amazon Web Services). BayesForge is not yet open for business but will open in a week or less. The purpose of this blog post is to give an overview and teaser of what is coming.

Henning Dekant will give a tour of BF to those who attend the meeting on April 21 of the Toronto Quantum Computing Meetup.

Why AWS?

The Amazon cloud is one of the largest and Amazon offers an incredible deal to all its customers. Anybody with a credit card can get an AWS Free Tier account. An AWS Free Tier account gives you one full year of free cloud use (there are some upper bounds on usage but very generous ones). Furthermore, you can open a separate AWS Free Tier account in the name of your spouse and each of your children. Amazon doesn’t seem to care!! Only Jeff Bezos could be so crazy generous. Once you have an AWS Free Tier account, you can use it to play with BayesForge for a whole year, free of charge. You don’t have to be a student to do this.

What can you do with BayesForge?

BF allows you to write Jupyter notebooks on your web browser in a wide variety of languages and run/store those notebooks on the cloud. (Besides the web browser, no additional software needs to be installed on your computer.) We have installed Jupyter kernels for Python-3, Python-2, R, Octave (clone of Matlab) and bash. This means you can write/run a notebook in which you use one of those languages exclusively. We have also installed Rmagic and Octavemagic, which allow you to write a notebook in Python but dip temporarily into R or Octave.

What software packages does BayesForge include?

BF comes with 2 folders called “Classical” and “Quantum”. The “Quantum” folder contains a large selection of open source quantum computing software, including our own Quantum Fog and Qubiter, and also open source qc software from DWave, IBM, Rigetti, etc. The “Classical” folder contains a large selection of open source software for doing classical bayesian statistical analysis.

This is the logo of BayesForge (graphics by our in-house artiste and Gimp wizard, Henning Dekant; the BayesForge name was suggested by Gavin Dekant).

“True gold is not afraid of fire.”
(Chinese Proverb)

“Incus robusta malleum non timet”
“A strong anvil need not fear the hammer.”
(Latin Proverb)

April 5, 2017

Tao Yin teleports (the old fashioned way) 2 entangled cats from Germany to China

Filed under: Uncategorized — rrtucci @ 5:11 pm

In this photo you can see Tao Yin at the Frankfurt airport, before boarding an airplane to China, transporting cat #1. Cat #2 was teleported by his wife on the same flight (one small pet per passenger allowed). Cats 1 and 2 are friends that were entangled at a distance during the flight.

Dr. Tao Yin obtained a Ph.D. in Physics at the end of last year from Goethe University in Frankfurt. He started at artiste-qb.net as a long-distance intern last summer, but he is now CTO and part owner of artiste-qb.net. He has quickly become an important part of our company. He has just moved from Frankfurt, Germany to Shenzhen, China, where he will continue to represent us. Shenzhen, population ~11 million, is one of the 5 largest cities in China, a technology powerhouse located immediately north of Hong Kong.

Tao brings to artiste-qb.net excellent computer skills and knowledge of physics. We are also relying on him to translate some of our software docs to Mandarin Chinese. For example, he translated the Jupyter notebook quantum_compiler_intro.ipynb to quantum_compiler_intro_CN.ipynb. These Jupyter notebooks explain how to use the quantum compiler in our open source software Qubiter. Qubiter has had a fully functional quantum compiler since the first week of January 2017. Currently, Qubiter is the only software to offer a quantum compiler of this kind.

March 20, 2017

BNT and PNL, two masterpieces of Bayesian Networks retro-art

Filed under: Uncategorized — rrtucci @ 8:58 pm

An update on the latest adventures of our company artiste-qb.net.

In previous blog posts, I waxed poetic about Marco Scutari’s open source software called bnlearn for learning the structure of bnets (Bayesian networks) from data. bnlearn is written in the language R, whereas Quantum Fog is written in Python. But by using Jupyter notebooks with Rmagic installed, we have been able to write some notebooks running both QFog and bnlearn side by side in the same notebook for the same bnets, and to compare outputs of both programs. That is a good bench-marking exercise for the bnet learning side of QFog, but what about its bnet inference side?

Two open source programs that are very good at doing bnet inference (and many other things too) are BNT (Bayes Net Toolbox, by Kevin Murphy et al.) and OpenPNL (PNL = Probabilistic Networks Library, written by Intel; I like to call it PaNeL to remember the letters quickly).

So our next adventure is to learn how to use BNT and PNL and to compare them to QFog.

BNT is written in Matlab. PNL is written in C++ but it includes Matlab wrappers for most of its functions. Both BNT and PNL are very mature and comprehensive programs. Since its core is written in C++ rather than Matlab, we expect PNL to be much faster than BNT for large networks.

As you already know if you’ve ever checked Matlab’s pricing, the software is very costly for everyone except students. However, this is one case when the open source gods have listened to our prayers. Octave is a free, open source program that can run 99% of a Matlab program and the few differences between Matlab and Octave (mostly in the plotting packages) are well documented. Furthermore, one can run Octave in a Jupyter notebook, either on an Octave kernel or on a Python kernel with octavemagic (oct2py) installed.

So in order to compare QFog to bnlearn, we’ve had to start using Jupyter notebooks on an R kernel or on a Python kernel with Rmagic. And in order to compare QFog with BNT&PNL, we’ve had to start using Jupyter notebooks on an Octave kernel or on a Python kernel with octavemagic. We have seen the light and we are now believers in a holy trinity of computer languages (diversity and open source is our strength, Mr Trump):

Python, R, Octave
(our polyglot notebooks)

Curiously, Duke Univ. offers a course called “Computational Statistics in Python” that also advocates the use of Jupyter notebooks, and the languages Python, R and Matlab intermixed to do statistics. So when two cultures independently come up with the same idea, it’s probably a good one, don’t you think?

Since BNT is written in Matlab, running it does not require any compilation. PNL, on the other hand, is written in C++, so it does. Compiling PNL has proven a difficult task because the software is ten years old, but, after a lot of sweat and tears, our wiz Henning Dekant has managed to compile it (a few issues remain).

BNT was last changed in a bigly way circa 2007 (or even earlier) and PNL in 2006. (bnlearn, by comparison, is still very active.) BNT and PNL belong to what I like to call the first bnet revolution (inference by junction tree), whereas bnlearn belongs to the second revolution (structure learning from Markov blankets). Even though PNL belongs to the first, not the second, revolution, it is a major mystery to me why Intel abandoned it. PNL is a very impressive, large and mature piece of software. A lot of work, love and passion seems to have gone into it. Then, sometime in mid 2006, it seems to have been abandoned in a hurry, and now it looks like a ghost town, or the deserted island in the video game Myst. I already know how the game Myst ends. If anyone knows why Intel stopped PNL development circa 2006, I would appreciate it if you would tell me, either in public on this blog or in private by email. Luckily, Intel had the wisdom to make PNL open source. PNL will go on, because 💍OPEN SOURCE IS FOREVER💍.

Sorry for the length of this blog post (almost as long as a Scott Aaronson blog post or a never ending New Yorker article).


February 22, 2017

Quantum Fog’s weight in bnlearn units

Filed under: Uncategorized — rrtucci @ 2:42 am

In a recent blog post entitled “R are Us. We are all R now”, I expressed my great admiration for the R statistical computer language, and I announced the addition to the Quantum Fog (QFog) GitHub repository of a Jupyter notebook called “Rmagic for dummies” which explains how something called Rmagic allows one to run both Python and R in the same Jupyter notebook.

In 2 other earlier blog posts, I also expressed great admiration for something else, bnlearn, an open source computer program written in R by Marco Scutari for learning classical Bayesian networks (cbnets) from data. I consider bnlearn the gold standard of bnet learning software.

The main purpose of this blog post is to announce that the QFog GitHub repo now has a folder of Jupyter notebooks comparing QFog to bnlearn. This is a perfect application of Rmagic: comparing two applications that can do some of the same things, when one app is written in R while the other is written in Python. Pitting QFog against bnlearn is highly beneficial to us developers of QFog because it shows us what needs to be improved and suggests new features that would be worthwhile to add.

QFog can do certain things that bnlearn can’t (most notably, QFog can do both classical and quantum bnets, whereas bnlearn can only do classical bnets), and vice versa (for instance, bnlearn can do bnets with continuous (Gaussian) node probability distributions, whereas QFog can only handle discrete PDs), but there is much overlap between the 2 software packages in the area of structure and parameter learning of classical bnets from data.

A cool feature of the folder of Jupyter notebooks comparing bnlearn and QFog is that most notebooks in that folder can be spawned and run from a single “master” notebook. This amazing ability of the “master” notebook to create and direct a zombie horde of other notebooks is achieved thanks to an open source Python module called “nbrun” (notebook run).


February 11, 2017

R are us. We are all R now.

Filed under: Uncategorized — rrtucci @ 5:08 pm

I have long been an enthusiastic proponent of R, a computer language designed for doing statistical analysis. In fact, 7 years ago I wrote a post in this blog entitled “Addicted to R”.

Our company (artiste-qb.net) has been publishing software written mostly in Python. The Python ecosystem includes a very nice statistical package called Pandas, whose authors profess much love for R and unabashedly admit that they were trying to copy the best of R’s statistical functionality and bring it to Python users. This is fine and good, but is not enough, is it?

R has been around for a long time (since 1993 according to Wikipedia), and during that time it has managed to accumulate a formidable number of highly useful extension packages and a very large and passionately committed community of fans. As good as Pandas is, it would be a pity and outright foolish if our company and others in the same boat ignored R’s rich libraries and numerous users.

So I was elated when Tao Yin, a member of our company, introduced our company members to rpy2 and its extension Rmagic. Rmagic allows one to invoke R functions inside a Jupyter notebook running with a Python kernel. So in a single Jupyter notebook, you can call both Python functions and R functions in the same cell, or have some cells running just R and others running just Python. And of course, variables can be exchanged easily between R and Python within that notebook. So we are all R now. And Python too.

I’ve only known about Rmagic for about a week, so I’m a newbie at it. Fortunately, even though rpy2/Rmagic is very sophisticated software under the hood, its API (Application Programming Interface) is quite simple and intuitive. I wrote a Jupyter notebook called “Rmagic for dummies” that I hope will convince you that Rmagic is very powerful yet easy to use.

January 31, 2017

Qubiter and IBM-QASM2 can now communicate via sign language

Filed under: Uncategorized — rrtucci @ 5:15 pm

I’ve always liked mime (Marcel Marceau, Charlie Chaplin,…), physical comedy (using body motions as a source of humor, like Italians do) and the closely related sign-language for the deaf. Sign language can be extremely clever, inventive and expressive. For example, this is how to say Donald Trump in sign-language:

But enough about Trump, who threatens to suck the air and joy out of every conversation. The official purpose of this blog post is to advertise the fact that Qubiter (https://github.com/artiste-qb-net/qubiter) can now convert quantum circuits from its native language to that of IBM, so that you can generate quantum circuits using Qubiter and then run them on the IBM hardware (assuming that those circuits have at most 5 qubits and fewer than about 80 gates).

Recently, the folks at IBM Quantum Experience (IBM-QE) have introduced some very nice enhancements to their QC cloud service. The graphical user interface (GUI) of their website has been revamped. They have also opened two new repositories on GitHub,

  1. IBMQuantum/QASM
  2. IBMResearch/python-sdk-quantum-experience

Repo 1 introduces their new “intermediate level language” QASM2.0, with a paper in Latex/pdf that teaches the ins and outs of the language. This repo also includes samples of qasm2 scripts of two types: some that can be run on their current hardware, and some that can’t be but can still be simulated using their numerical simulator.

Repo 2 gives some Python code for accessing the IBM-QE service via a python script or Jupyter notebook.

To keep up with these IBM enhancements, Qubiter now includes a new file called Qubiter_to_IBMqasm.py. This file contains a class of the same name that translates Qubiter “English files” to IBM QASM files. You can write a simple Python script that reads the qasm file produced by the class Qubiter_to_IBMqasm and inputs that string into the code of Repo 2. That way, you don’t even have to visit the IBM-QE website to run your q circuit on their hardware. Alternatively, you can manually copy and paste the qasm file produced by the new Qubiter class into the “QASM Editor” at the IBM-QE website.
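To give a flavor of what such a translation involves (without reproducing Qubiter’s actual English-file syntax, which you should look up in the repo), here is a toy translator from a made-up gate list to an OPENQASM 2.0 string. The gate-list format is invented for illustration; only the emitted QASM follows the real spec:

```python
# Hypothetical sketch: turning a tiny gate list into an OPENQASM 2.0 string.
# The input format here is invented for illustration; Qubiter's real English
# files and its Qubiter_to_IBMqasm class differ in the details.
def to_qasm(num_qubits, gates):
    lines = [
        "OPENQASM 2.0;",
        'include "qelib1.inc";',            # standard gate library
        f"qreg q[{num_qubits}];",
        f"creg c[{num_qubits}];",
    ]
    for gate in gates:
        if gate[0] == "H":                   # Hadamard on one qubit
            lines.append(f"h q[{gate[1]}];")
        elif gate[0] == "CNOT":              # control, then target
            lines.append(f"cx q[{gate[1]}],q[{gate[2]}];")
        elif gate[0] == "MEASURE":           # measure into the matching clbit
            lines.append(f"measure q[{gate[1]}] -> c[{gate[1]}];")
        else:
            raise ValueError(f"unknown gate {gate[0]!r}")
    return "\n".join(lines)

# A Bell-state circuit on IBM-QE's 5-qubit register:
qasm = to_qasm(5, [("H", 0), ("CNOT", 0, 1), ("MEASURE", 0), ("MEASURE", 1)])
print(qasm)
```

The resulting string is exactly the kind of thing you could paste into the QASM Editor or ship to the cloud service via the Repo 2 Python code.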

The current IBM-QE hardware doesn’t allow all possible CNOTs among its 5 qubits. Out of the 5 qubits 0, 1, …, 4, only qubits 1, 2 and 4 can be physical targets of an elementary CNOT. Also, some pairs of qubits cannot be the two ends of an elementary CNOT because they are physically disconnected. The class Qubiter_to_IBMqasm overcomes both of these limitations. It allows CNOTs between any pair of qubits. Every elementary CNOT that is disallowed is replaced by a compound CNOT; i.e., by either 1 or 4 elementary CNOTs (and a bunch of Hadamards) that are equivalent to the original CNOT and are allowed.
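The Hadamard trick behind reversing a CNOT’s direction can be checked numerically: conjugating by a Hadamard on each qubit swaps control and target, i.e., (H⊗H)·CNOT(0→1)·(H⊗H) = CNOT(1→0). A pure-Python verification (this checks the standard identity, not Qubiter’s actual code):

```python
# Check the identity (H(x)H) . CNOT(ctrl=0, trg=1) . (H(x)H) == CNOT(ctrl=1, trg=0)
# using pure-Python 4x4 linear algebra. Basis order: |q0 q1> = |00>,|01>,|10>,|11>.
import math

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def kron(a, b):  # Kronecker product of two square matrices
    n, m = len(a), len(b)
    return [[a[i // m][j // m] * b[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]
HH = kron(H, H)  # Hadamard on both qubits

cnot_01 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]  # control q0
cnot_10 = [[1, 0, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]]  # control q1

reversed_cnot = matmul(HH, matmul(cnot_01, HH))
ok = all(abs(reversed_cnot[i][j] - cnot_10[i][j]) < 1e-12
         for i in range(4) for j in range(4))
print(ok)  # -> True: 4 Hadamards flip a CNOT's direction
```

This is why a CNOT whose target is not one of the physically allowed targets can still be realized, at the cost of those extra Hadamards.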

December 1, 2016

Dumbing Down a Quantum Language, Sequel 1

Filed under: Uncategorized — rrtucci @ 8:04 pm

I am very happy to announce that I have added a class CktExpander to Qubiter at GitHub. The class reads any English file previously written by Qubiter and writes new English and Picture files wherein every line of the original English file is expanded, if possible. A general Qubiter English file can have lines which denote U(2) matrices or swaps with 0, 1 or more controls attached. We say such a line is expanded if it is replaced by a sequence of lines, each consisting of either (1) a qubit rotation or (2) a simple CNOT with only one control. Expander subroutines of this type are useful because quantum computers (for instance, the IBM Quantum Experience) can only perform (1) or (2).
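As a small taste of what expansion means, here is a hypothetical expander for one easy case: a SWAP with no controls, which is equivalent to three one-control CNOTs. Qubiter’s CktExpander handles far more general lines; this sketch just verifies the SWAP identity on all classical basis states:

```python
# Hypothetical sketch of the "expander" idea: rewrite a SWAP(a, b) line into
# the three one-control CNOTs it is equivalent to, then check the expansion
# on every 2-qubit classical basis state.
def cnot(bits, control, target):
    """Apply CNOT(control -> target) to a tuple of classical bits."""
    bits = list(bits)
    if bits[control] == 1:
        bits[target] ^= 1
    return tuple(bits)

def expand_swap(a, b):
    # SWAP(a, b) == CNOT(a->b); CNOT(b->a); CNOT(a->b)
    return [(a, b), (b, a), (a, b)]

for q0 in (0, 1):
    for q1 in (0, 1):
        state = (q0, q1)
        for control, target in expand_swap(0, 1):
            state = cnot(state, control, target)
        assert state == (q1, q0)  # net effect is a swap of the two bits
print("SWAP == 3 CNOTs verified on all basis states")
```

Checking on classical basis states suffices here because SWAP and CNOT are both permutation matrices, so agreeing on every basis state means agreeing as operators.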

I have written a Jupyter Notebook that illustrates how to use this new Qubiter capability.

Actually, in June 2010, I published a blog post where I described a very similar effort: “Dumbing Down A Quantum Language”. Back then, I was using Java instead of Python. But afterwards, I came to the conclusion that Java support for numerics, linear algebra, plotting and statistics is inadequate for the purposes of writing Qubiter, whereas Python, with numpy, scipy, matplotlib, pandas, etc., is almost perfect for the job. So when I was a Java head, I wrote some classes that also expanded the lines of an English file into simpler operations. I am very happy that this is the second time that I have tried to write such subroutines, because practice makes perfect, even in programming. I feel that my Python expander subroutines are far better than my prior Java expander subroutines.
