Quantum Bayesian Networks

February 6, 2016

Quantum Open Source 2016

Filed under: Uncategorized — rrtucci @ 7:31 am

big-bro-anim

Big Brother is saying: Microsoft’s “Liqui|>” is closed source for your own good. Quantum open source is EVIL. Using it will harm you!


Quantum Carly Fiorina

Big Brother is saying: Vote for Quantum Carly Fiorina, my handpicked leader


apple1984-anim


quantum-open-source

In 2016, the quantum open source community will finally prevail, and you will see why future quantum computer software won’t be like “1984”.


May 28, 2016

Don’t tell Microsoft. Qubiter, an open source platform, can now do quantum chemistry too

Filed under: Uncategorized — rrtucci @ 8:10 pm

I’ve uploaded to

https://github.com/rrtucci/my-chemistry

a small collection of classes called “my-chemistry” that allows one to construct a gate-model quantum circuit for calculating the ground state energy of molecules using Kitaev’s Phase Estimation Algorithm (PEA).
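For readers who want a feel for what such a circuit computes, here is a minimal numpy simulation of the PEA readout (a sketch, not the my-chemistry code): if the target register holds an eigenstate of U with eigenvalue e^{2πiφ}, then the ancilla register, after the controlled-U^{2^j} ladder and an inverse quantum Fourier transform, measures the best n-bit approximation of φ. For U = e^{-iEt}, that phase encodes the energy E.

```python
import numpy as np

def pea_probs(phi, n):
    """Measurement probabilities of the n ancilla qubits in Kitaev's PEA,
    assuming the target register holds an eigenstate with eigenvalue
    exp(2*pi*1j*phi). Pure numpy sketch, not the my-chemistry code."""
    N = 2 ** n
    k = np.arange(N)
    # ancilla state after the controlled-U^(2^j) ladder:
    # (1/sqrt(N)) * sum_k exp(2 pi i phi k) |k>
    state = np.exp(2j * np.pi * phi * k) / np.sqrt(N)
    # inverse quantum Fourier transform, written as an explicit matrix
    inv_qft = np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)
    return np.abs(inv_qft @ state) ** 2

probs = pea_probs(phi=6 / 16, n=4)   # phi exactly representable in 4 bits
print(np.argmax(probs))              # → 6, i.e. phi is read out as 6/16
```
With more ancilla qubits the readout resolves the phase, and hence the energy, to more bits of precision.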

Notice that I am releasing my-chemistry under my name only. Although it is an add-on to the artiste-qb.net Qubiter project, the Artiste company is free of any legal liability: I have not merged it into the main code branch, and the company derives no use from it at this time.

The docstrings of each class describe in detail how it works. All my classes have a main method at the end with examples and tests of the class. In addition, I have written a pdf document describing the more technical details of the quantum circuit involved. The pdf document is part of the my-chemistry distribution.

Here is an excerpt from the Introduction section of that pdf document:

This paper describes the particular circuit used by a Python software package called “my-chemistry”, written by R.R.Tucci, and available at GitHub, Ref.[1]. The software can be used in conjunction with “Qubiter”, another Python software package available at GitHub, Ref.[2].

A quantum circuit very similar to the one presented in this paper was first implemented in Ref.[4], and more recently and exhaustively in the closed-source software package Liqui|> produced by Microsoft, with Dave Wecker as main author.

Here is a super brief, by no means exhaustive review of some of the highlights in the history of this quantum computing approach to chemistry.

The person deserving the lion’s share of the credit for this method is A. Kitaev, who, in 1995 (Ref.[3]), was the first to propose the PEA. Also very deserving are Trotter, for his expansion, and Jordan and Wigner, for their transformation.

The first paper to present an actual computer program for calculating the ground state energy of an $H_2$ molecule using PEA appears to be Ref.[4], by Whitfield, Biamonte and Aspuru-Guzik.

Researchers working for Microsoft applied the method to more complicated molecules and found some very clever optimization methods, such as using the identity (CNOT)^2 = 1. Here is their epiphany paper Ref.[5], and here is their most recent paper Ref.[6]. The latter is recommended for what appears to be a very fair and exhaustive list of references of this approach.
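The (CNOT)^2 = 1 identity they exploit is easy to verify numerically: two adjacent CNOTs acting on the same qubit pair multiply to the identity, so a circuit optimizer can simply delete them.

```python
import numpy as np

# CNOT in the computational basis (control = first qubit)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])

# two adjacent CNOTs on the same qubit pair cancel to the identity
print(np.array_equal(CNOT @ CNOT, np.eye(4)))   # → True
```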

Finally, one should mention that Microsoft has several patents on this method, so it is possible that Microsoft will claim in the future that the software described in this paper infringes one of their patents. Going to the USPTO website and using the query IN/wecker AND AN/Microsoft, I located 4 patents, Refs.[7][8][9][10], on Liqui|>. There might be more pending.

Patents alluded to

Quantum gate optimizations
https://patents.google.com/patent/US9064067B2/en

Optimizing quantum simulations by intelligent permutation
https://patents.google.com/patent/US8972237B2/en

Language integration via function redirection
https://patents.google.com/patent/US9292304B2/en

Quantum annealing simulator
https://patents.google.com/patent/US9152746B2/en

May 24, 2018

Quantum Computing and a new book, “The Book of Why”, by Judea Pearl and Dana Mackenzie

Filed under: Uncategorized — rrtucci @ 5:23 am

“Leaping the Chasm” (1886) by Ashley Bennett, son of photographer Henry Hamilton Bennett, jumping to “Stand Rock”. See http://www.wisconsinhistory.org


Judea Pearl, UCLA professor and winner of the Turing Award in Computer Science, is a hero to all Bayesian Network fans like me. Pearl has written several books on B nets, as you can see at his Amazon page. This blog post is to alert my readers to his most recent book, written in collaboration with Dana Mackenzie and released about a week ago, in mid May 2018, entitled “The Book of Why: The New Science of Cause and Effect”.

To commemorate the release of the new book, I also wrote, besides this blog post, a small comment about it at the Edward Forum. Dustin Tran, the main author of Edward, responded with a comment citing a very nice paper, less than six months old, by Dustin and Prof. Blei (Dustin’s thesis advisor at Columbia University) about the use of Judea Pearl’s causality “do-calculus” within Edward.

I’ve been interested in the do-calculus for a long time, and have written two arxiv papers on the subject:

  1. Introduction to Judea Pearl’s Do-Calculus, by Robert R. Tucci (Submitted on 26 Apr 2013)
  2. An Information Theoretic Measure of Judea Pearl’s Identifiability and Causal Influence, by Robert R. Tucci (Submitted on 21 Jul 2013)
    This paper is for classical Bayesian Networks, but it can easily be generalized to quantum Bayesian Networks, by replacing probability distributions by density matrices in the information measure proposed there.

There exist more than a dozen packages written in R that implement, at least partially, the do-calculus. They are available at CRAN (the Comprehensive R Archive Network, the main R repository). This 2017 paper contains a nice table of various R packages dealing with the do-calculus.

It’s also interesting to note that BayesiaLab, a commercial software package that I love and recommend, already implements some of Pearl’s do-calculus. (full disclosure: the B net company that I work at, artiste-qb.net, has no business connections with BayesiaLab.)

By the way, artiste-qb.net provides a nice cloud service that allows you to run all these open-source do-calculus R packages on your browser, without any installation hassles. How? you ask, and if not, I’m going to tell you anyway.

***Beep, Beep, Commercial Alert***

artiste-qb.net is a multilingual (R, Python, Java, C++, English, German, Spanish, Chinese, Italian, French, you name it, we speak it) quantum open source software company.

We offer an image on AWS (the Amazon cloud service) called BayesForge.com.

BayesForge.com comes fully loaded with the Python distribution Anaconda, all of R, etc.

Bayesforge comes with most major Artificial Intelligence/Bayesian Networks open-source packages installed, both classical ones (e.g., TensorFlow, Edward, PyMC, bnlearn) and quantum ones (e.g., IBM Qiskit, the D-Wave, Rigetti and Google offerings, and our own Quantum Fog, Quantum Edward and Qubiter).

BayesForge allows you to run jupyter notebooks in Python, R, Octave (an open-source Matlab clone) and Bash. You can also combine Python and R within one notebook using Rmagic.

We have succeeded in dockerizing the BayesForge image and will very soon be offering it on other cloud services besides AWS, including a non-AWS cloud service in China, where AWS is so slow it is unusable. One of our co-founders, Dr. Tao Yin, lives in ShenZhen, China, and is in charge of our China branch.

February 9, 2018

Today Enrolled our Baby (Quantum Fog) in gambling school taught by famous Monte Carlo gamblers (PyMC3, Edward, Zhusuan)

Filed under: Uncategorized — rrtucci @ 4:57 am

In the beginning, there was Matlab, which grew out of the Fortran lib Lapack (Linear Algebra Package, still one of the main software libs used to benchmark supercomputers). Matlab’s tensor stuff was copied and improved by the Python borgs to produce numpy, which handles tensors really nicely but doesn’t do it in a distributed “parallel” fashion.

Then, starting about 10 years ago, some guys from the University of Montréal had the brilliant idea of writing the Theano Python library (Theano was a Greek mathematician thought to have been the wife of Pythagoras). Theano replaces most numpy functions with namesake Theano functions that do the same thing in a distributed fashion. Then Google came out with the TensorFlow Python lib, which copied Theano and improved on it. TensorFlow can do most numpy operations using multiple CPUs, GPUs and TPUs. But TensorFlow and Theano are much more than tools for doing tensor operations in a distributed fashion. They also do differentiation in a distributed fashion (such differentiation is often used to train neural nets), and they are designed to help you do fast prototyping and distributed running of artificial neural nets.

In the last year, some new Python libraries built on top of TensorFlow and Theano have appeared that allow you to do fast prototyping and distributed running of Bayesian networks. B nets are near and dear to my heart, and I consider them even more powerful than artificial neural networks. And I’m far from being alone in my love of b nets: Judea Pearl won the prestigious Turing Award for his pioneering work on them. The new Python libs that I alluded to are PyMC3 (built on top of Theano), Edward (on top of TensorFlow) and Zhusuan (on top of TensorFlow).

Added later: Forgot to mention that Facebook & Uber have their own Theano equivalent called PyTorch and also an Edward equivalent called Pyro. But I haven’t used them yet.
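TensorFlow and Theano differentiate by propagating derivatives through a computation graph (reverse mode). The core idea, mechanically propagating derivatives alongside values, can be illustrated with a toy forward-mode dual-number class; this is a concept sketch only, not how either library is actually implemented:

```python
class Dual:
    """Toy forward-mode autodiff: carry a value and its derivative together,
    and make arithmetic update both. (Concept only; TF/Theano actually use
    reverse-mode differentiation on a computation graph.)"""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)  # product rule
    __rmul__ = __mul__

x = Dual(2.0, 1.0)     # seed: dx/dx = 1
y = x * x + 3 * x      # y = x^2 + 3x
print(y.val, y.der)    # → 10.0 7.0 (dy/dx = 2x + 3 at x = 2)
```
Every arithmetic operation applies the chain rule as it happens, which is why no symbolic algebra is ever needed.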

The main architect of Edward is Dustin Tran, who wrote Edward as part of his PhD thesis at Columbia Univ. Dustin now works at Google, and the TensorFlow team is working with Dustin to integrate Edward with TensorFlow.

Zhusuan is the art of using an abacus. The word means literally “bead counting” in Chinese. The Zhusuan lib is a fine open-source (under MIT license) product of the Tsinghua University in Beijing, China. It demonstrates that China is already very advanced in AI.

According to Google Trends, “TensorFlow” is at least 10 times more popular than “quantum computing” as a search term, even though TensorFlow has many competitors that started before it did and it was open sourced for the first time only 2 years ago.
qc-tf-interest-April-2018

qc-interest-April-2018

tf-interest-April-2018

One of the aims of artiste-qb.net is to participate in the revolution of extending Edward & TensorFlow so that they can do both classical and quantum Bayesian Networks. Today we took a small, initial step in that direction. We added a folder

https://github.com/artiste-qb-net/quantum-fog/tree/master/jupyter-notebooks/inference_via_ext_software

which contains a file called ModelMaker.py and two jupyter notebooks. Both notebooks do MCMC for the classical Bayesian network WetGrass. One notebook does this by invoking the external software PyMC (a.k.a. PyMC2, the precursor of PyMC3), whereas the other does it via PyMC3. Both notebooks start by loading a .bif file for the WetGrass bnet. From that alone, they construct an X native model and analyze that model using X, where X = PyMC2, PyMC3. In the near future, we will also add a notebook that does the same thing for X=Edward, Zhusuan.
Addendum (Feb. 16, 2018): Added support for Edward.
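For readers who don’t know the WetGrass example, here is a pure-Python sketch of that classical bnet (using the textbook CPT values, which may differ from those in the .bif file shipped with Quantum Fog), with brute-force rejection sampling standing in for MCMC:

```python
import random

# Classic Cloudy -> {Sprinkler, Rain} -> WetGrass network; the CPT numbers
# below are the textbook ones, not necessarily those in the .bif file.
def sample_wetgrass(rng):
    c = rng.random() < 0.5                     # P(Cloudy)
    s = rng.random() < (0.1 if c else 0.5)     # P(Sprinkler | Cloudy)
    r = rng.random() < (0.8 if c else 0.2)     # P(Rain | Cloudy)
    p_w = {(True, True): 0.99, (True, False): 0.90,
           (False, True): 0.90, (False, False): 0.0}[(s, r)]
    w = rng.random() < p_w                     # P(WetGrass | Sprinkler, Rain)
    return c, s, r, w

rng = random.Random(0)
samples = [sample_wetgrass(rng) for _ in range(100_000)]
wet = [smp for smp in samples if smp[3]]
est = sum(smp[2] for smp in wet) / len(wet)    # P(Rain | WetGrass) by rejection
print(round(est, 2))                           # ≈ 0.71 for these tables
```
Rejection sampling wastes every sample that misses the evidence, which is exactly why the notebooks hand the model to MCMC engines like PyMC instead.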

climbing-mt-qc (Image by Henning Dekant)

December 3, 2017

You are invited to the wedding of Quantum Computing and TensorFlow

Filed under: Uncategorized — rrtucci @ 6:42 pm

The quantum computerization of TensorFlow (TF) is a quixotic dream that no doubt has crossed the minds of many, both technically and not technically savvy, people. Here at artiste-qb.net, we are very committed and well underway to achieving this goal. Since our company is fully committed to open source, it doesn’t really matter if we achieve this goal before anyone else. If someone else beats us to it, we will learn from their code and vice versa. That is, as long as they too deliver open source to the world. And if they don’t, we think that their software is doomed…quantum open source rules! How did Google vanquish, or at least de-fang, the Microsoft monopoly? To a large extent, by using Open Source. Open source rules AI, the cloud and mobile.

So, let me tell you how we are using TF for quantum computing.

When Google first open sourced TF a mere 2 years ago, I wrote a blog post to mark the occasion. In that post, I called TF a platform for Fakesian networks instead of Bayesian networks. My beef was that TF had only deterministic nodes, which are standard in the field of artificial neural nets. It had no probabilistic nodes, which are standard in the 2 fields of classical Bayesian networks (BNets) and Hierarchical Models (HMs). But this past year, the open source community has stepped into the breach and filled the gap with a software library called Edward, built on top of TF, that adds probabilistic nodes (the buzzword is “Probabilistic Deep Learning”) to TF. And Edward has been approved for integration into TF, so soon it will be seamlessly integrated into TF. Thus, soon, TF will combine artificial neural nets and BNets seamlessly. It will have superpowers!

Of course, in quantum mechanics, one must use complex amplitudes instead of probabilities for the nodes, and one must use an L2 norm instead of an L1 one with those amplitudes, so you can’t use Edward to do quantum mechanics just yet. Edward will have to be made “quantum ready”. By that we mean that one will have to rewrite parts of Edward so that it has a “quantum switch”, i.e. a parameter called ‘is_quantum’ which when True gives a quantum BNet and when False gives a classical BNet. That is precisely what artiste-qb.net’s open source program Quantum Fog already does, so our company is uniquely placed to make a quantum version of Edward.
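A minimal sketch of what such an ‘is_quantum’ switch amounts to (a hypothetical helper function, not Quantum Fog’s actual API): the same node weights get an L1 normalization in the classical case, yielding probabilities, and an L2 normalization in the quantum case, yielding amplitudes.

```python
import numpy as np

def node_distribution(weights, is_quantum):
    """Normalize a node's transition weights: L1 for classical probabilities,
    L2 for quantum amplitudes. (Illustrative only; not Quantum Fog's API.)"""
    w = np.asarray(weights, dtype=complex if is_quantum else float)
    if is_quantum:
        return w / np.linalg.norm(w)   # L2 norm: sum of |a_k|^2 is 1
    return w / w.sum()                 # L1 norm: sum of p_k is 1

probs = node_distribution([3, 1], is_quantum=False)
amps = node_distribution([1, 1j], is_quantum=True)
print(probs)                # → [0.75 0.25]
print(np.abs(amps) ** 2)    # measurement probabilities → [0.5 0.5]
```
The payoff of a single switch is that all the graph-traversal machinery above the node level can be shared between the classical and quantum cases.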

Another obstacle to marrying TF and quantum computers is that the quantum BNets will have to be compiled into a sequence of elementary operations (SEO), such as CNOTs and single-qubit rotations. Once again, our company artiste-qb.net is uniquely placed to accomplish this task. Our open source quantum simulator Qubiter is the only one in the business that includes a “quantum CSD compiler”, a tool that will help express quantum BNets as a SEO.
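A full CS-decomposition compiler handles arbitrary n-qubit unitaries, which is beyond a blog snippet, but its single-qubit base case, decomposing any 2×2 unitary into rotations (the ZYZ Euler decomposition), can be sketched as follows (illustrative only, not Qubiter’s code):

```python
import numpy as np

def rz(t):
    """Rotation about the z axis by angle t."""
    return np.diag([np.exp(-1j * t / 2), np.exp(1j * t / 2)])

def ry(t):
    """Rotation about the y axis by angle t."""
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def zyz_decompose(U):
    """Write a 2x2 unitary as e^{i a} Rz(b) Ry(g) Rz(d) (generic case)."""
    a = np.angle(np.linalg.det(U)) / 2
    A = U * np.exp(-1j * a)                      # strip phase: A is in SU(2)
    g = 2 * np.arctan2(abs(A[1, 0]), abs(A[0, 0]))
    bpd = 2 * np.angle(A[1, 1])                  # b + d
    bmd = 2 * np.angle(A[1, 0])                  # b - d
    return a, (bpd + bmd) / 2, g, (bpd - bmd) / 2

# round-trip check on a random unitary (QR of a random complex matrix)
rng = np.random.default_rng(0)
M = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
U, _ = np.linalg.qr(M)
a, b, g, d = zyz_decompose(U)
V = np.exp(1j * a) * rz(b) @ ry(g) @ rz(d)
print(np.allclose(U, V))   # → True
```
The CS decomposition plays the analogous role for multi-qubit unitaries, recursively peeling off blocks of rotations and controlled operations.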

HMs are really a subset of BNets. However, the BNet and HM communities have historically grown somewhat independently. The BNet community is centered around pioneers like Judea Pearl (at UCLA), inventor of some of the most important BNet methods, whereas the HM community is centered around pioneers like Andrew Gelman (at Columbia), author of many great books and a great blog in the HM field. The HM tribe only uses continuous distributions for node probabilities, and they are very keen on MCMC (Markov Chain Monte Carlo). The BNet community uses both discrete (expressed as matrices or tensors) and continuous distributions for their node probabilities, and they use MCMC and other methods too, like the junction tree method, to do inferences.

Edward has a distinguished pedigree in both the BNet and HM communities. Edward originated in Columbia Univ. One of its main and original authors is Dustin Tran, currently a PhD student at Columbia. So you can be sure that the Edward people are in close communication and receive useful feedback from the Gelman tribe. Another distinguished author of Edward is Kevin Murphy, who has been working on BNets for more than a decade. Murphy wrote the oldie but goodie Bayes Net toolbox for Matlab. He has also written several books on bnets and machine learning. He previously worked as a prof at the Univ. of British Columbia but he now works at Google. He is one of the main organizers of the young (2 year old) Bayesian Deep Learning conference, which, by the way, will have its annual meeting in less than a week (Dec. 9, 2017).

Classical BNets are a very active field, both in academic research and in commerce. Judea Pearl won a Turing award for them. BNets are very popular in bioinformatics, for example. Whereas no qc company has yet broken even financially, there are classical BNet companies that have lasted and been profitable for almost 2 decades, such as Bayesia, Hugin and Norsys/Netica.

Oh, and one last thing. It’s called TensorFlow, not TensorNetwork, for a very good reason. If you try to use TF to implement the “tensor networks” used in quantum computing, you will fail, unless you start using BNets instead of Tensor Networks and pretend these 2 are the same thing, which is probably what the Tensor Networks people will do. In TF (and BNets), the lines emanating out of a node carry in them a full tensor that they pass along to other nodes. In a Tensor Network, a Tensor does not Flow into the arrows emanating out of its node. The tensor just sits in the node. For more discussion about the important differences between a quantum BNet and a Tensor Network, see this blog post.
https://qbnets.wordpress.com/2015/03/26/tensor-networks-versus-quantum-bayesian-networks-and-the-winner-is/

November 18, 2017

Strange Connection Between ATOS corporation and Oak Ridge National Laboratory (ORNL) Quantum Computing Division

Filed under: Uncategorized — rrtucci @ 2:30 pm

The ATOS corporation, based in Bezons, France, has a really bad reputation in some countries like the UK. To see this, check out, for example, the “Controversy” section of the Wikipedia entry on ATOS.

On October 20, 2017, ORNL put out a press release entitled:
“Two ORNL-led research teams receive $10.5 million to advance quantum computing for scientific applications”

On Nov 13, 2017, ATOS put out the following press release: click here

This is very odd because a quantum simulator of 30 qubits is not considered leading edge nowadays. Google/ProjectQ have simulated 45 qubits, and IBM has simulated 56 qubits. Furthermore, ORNL owns the Titan supercomputer, one of the largest supercomputers in the world. ORNL is at the forefront of HPC, so why did they buy an expensive albeit not very powerful French machine to do quantum simulations? Like selling coals to Newcastle, isn’t it? ATOS claims to offer a “Quantum Learning Machine” that runs a quantum language called aQasm, but their language is closed source. I haven’t found any tutorial, examples, or open source software for it, no matter how hard I’ve looked. Why did ORNL, a DOE weapons lab in the very red #MAGA state of Tennessee, do something so un-MAGA and unpatriotic by MAGA standards, namely, buy expensive French closed-source software and hardware, when American companies like IBM and Google already provide an open-source quantum language and development kit, plus cloud usage of it, for quantum simulations? Is this evidence of government waste, abuse or corruption, and does the government care?

October 23, 2017

Vicious Game of Battleship Currently Being Played in Quantum Computing Software World Between IBM and Google

Filed under: Uncategorized — rrtucci @ 4:48 pm

The competition between IBM and Google to hog the quantum computing limelight has been fairly intense this year. The following 2 high stakes games of Battleship are currently being played. Yikes. (Footnote: Google seems to have decided to marry the opaque, poorly designed software of ProjectQ so I lump them together below.)

original-battleship-game

Thanks son. You have crushed my spirit and now I will divorce your mom and leave you to her with no alimony.

August 15, 2017

Resistance is Futile: Jupyter Borg Threatens to Assimilate Quantum Computing Universe

Filed under: Uncategorized — rrtucci @ 5:00 pm

A week ago, IBM announced at its Quantum Experience user group that it had uploaded to GitHub a large collection of jupyter notebooks exhibiting the use of their gate model quantum computer (previously 5 qubits, currently 16 qubits). I consider this an excellent addition to the quantum open source and free jupyter notebook universe and ecosystem. I’ve advocated for quantum open source and jupyter notebooks many times before in this blog, so it’s a pleasure for me to echo their announcement.

Pow! Right in the kisser of Microsoft’s Liqui|> software. Liqui|> is closed source software.

Google has announced that it will deliver by year’s end a 49 qubit gate model qc with accompanying open source software and cloud service. The jupyter ball is now in your court, Google.

Artiste-qb.net, the company that I work for, already provides a large and ever growing library of jupyter notebooks for both of its quantum open source offerings, Qubiter and Quantum Fog.

Rigetti’s PyQuil and ProjectQ are two other gate model qc simulators analogous to IBM quantum experience. So far these two have very few jupyter notebooks. Wimps! Laggards! Let them eat cake!

borg-cake

Borg Cake

jupyter-cake

Jupyter Cake

April 29, 2017

Miss Quantum Computing, may I introduce to you Miss Bayesian Hierarchical Models and Miss MCMC?

Filed under: Uncategorized — rrtucci @ 5:49 pm

Warning: Intense talk about computer software ahead. If you are a theoretical computer scientist, you better stop reading this now. Your weak constitution probably can’t take it.

When you enter the nerd paradise and secret garden that is Bayesforge.com (a free service on the Amazon cloud), you will see one folder named “Classical” and another named “Quantum”. Here is a screenshot of this, taken from Henning Dekant’s excellent post on LinkedIn.

The “Quantum” folder contains some major open source quantum computing programs: Quantum Fog, Qubiter, IBM-QisKit (aka kiss-kit), QuTip, DWave, ProjectQ, Rigetti

The “Classical” folder contains some major Bayesian analysis open source programs: Marco Scutari’s bnlearn (R), Kevin Murphy’s BNT (Octave/matlab), OpenPNL (C++/matlab), PyMC, PyStan.

The idea is to promote cross fertilization between “Quantum” and “Classical” Bayesian statisticians.

Today I want to talk mostly about PyMC and PyStan. PyMC and PyStan deal with “Hierarchical Models” (Hmods). The other programs in the “Classical” folder deal with “Bayesian Networks”(Bnets).

Bnets and Hmods are almost the same thing. The community of people working on Bnets has Judea Pearl as one of its distinguished leaders. The community of people working on Hmods has Andrew Gelman as one of its distinguished leaders. You might know Gelman (Prof. at Columbia U.) from his great blog “Statistical Modeling, Causal Inference, and Social Science” or from one of his many books.

Both PyStan and PyMC do MCMC (Markov Chain Monte Carlo) for Hmods. They are sort of competitors but also complementary.

PyStan (its GitHub repo here) is a Python wrapper of a computer program written in C++ called Stan. According to Wikipedia, “Stan is named in honour of Stanislaw Ulam, pioneer of the Monte Carlo method.” Prof. Gelman is one of the fathers of Stan (I mean the program, of course).

PyMC comes in 2 incompatible versions: 2.X and 3.X. Version 3 is more advanced and intends to replace version 2. PyMC2’s best sampler is a Metropolis-Hastings (MH) sampler. PyMC3 contains an MH sampler too, but it also contains the “No-U-Turn” or “NUTS” sampler, which is supposed to be much faster than MH for large networks. Currently, Bayesforge contains only PyMC2, but the next version will contain both PyMC2 and PyMC3. As an added bonus, PyMC3 comes with Theano, one of the leading deep neural network frameworks.
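To make the MH/NUTS distinction concrete, here is a minimal random-walk Metropolis-Hastings sampler of the kind PyMC2 relies on (a toy sketch; NUTS, used by PyMC3, additionally exploits gradients of log p to propose much longer, smarter moves):

```python
import math
import random

def metropolis(log_p, x0, steps, scale=1.0, seed=0):
    """Minimal random-walk Metropolis-Hastings sampler for a 1-D target
    given by its unnormalized log density log_p."""
    rng = random.Random(seed)
    x, lp = x0, log_p(x0)
    chain = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)        # symmetric proposal
        lp_prop = log_p(prop)
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop               # accept; else keep old x
        chain.append(x)
    return chain

# target: standard normal, log p(x) = -x^2/2 up to an additive constant
chain = metropolis(lambda x: -x * x / 2, x0=0.0, steps=50_000)
mean = sum(chain) / len(chain)
var = sum((c - mean) ** 2 for c in chain) / len(chain)
print(round(mean, 2), round(var, 2))   # ≈ 0.0 and 1.0
```
Note that the normalizing constant of the target cancels in the accept ratio, which is what makes MH usable for Bayesian posteriors.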

Check out this really cool course:

Sta-663 “Statistical Programming” , available at GitHub, taught at Duke U. by Prof. Chi Wei Cliburn Chan.

This wonderful course has some nice jupyter notebooks illustrating the use of PyMC2, PyMC3 and PyStan. Plus, it contains discussions of many other statistical programming topics. I love it. It has a similar philosophy to BayesForge, namely to do statistical programming with jupyter notebooks, because they are great for communicating your ideas to others and allow you to combine seamlessly various languages like Python, R, Octave, etc.

February 22, 2017

Quantum Fog’s weight in bnlearn units

Filed under: Uncategorized — rrtucci @ 2:42 am

In a recent blog post entitled “R are Us. We are all R now”, I expressed my great admiration for the R statistical computer language, and I announced the addition to the Quantum Fog (QFog) GitHub repository of a Jupyter notebook called “Rmagic for dummies” which explains how something called Rmagic allows one to run both Python and R in the same Jupyter notebook.

In 2 other earlier blog posts, I also expressed great admiration for something else, bnlearn, an open source computer program written in R by Marco Scutari for learning classical Bayesian networks (cbnets) from data. I consider bnlearn the gold standard of bnet learning software.

The main purpose of this blog post is to announce that the QFog GitHub repo now has a folder of Jupyter notebooks comparing QFog to bnlearn. This is a perfect application of Rmagic to comparing two applications that can do some of the same things but one app is written in R while the other is written in Python. Pitting QFog against bnlearn is highly beneficial to us developers of QFog because it shows us what needs to be improved and suggests new features that would be worthwhile to add.

QFog can do certain things that bnlearn can’t (most notably, QFog can do both classical and quantum bnets, whereas bnlearn can only do classical bnets), and vice versa (for instance, bnlearn can do bnets with continuous (Gaussian) node probability distributions, whereas QFog can only handle discrete PDs), but there is much overlap between the 2 software packages in the area of structure and parameter learning of classical bnets from data.

A cool feature of the folder of Jupyter notebooks comparing bnlearn and QFog is that most notebooks in that folder can be spawned and run from a single “master” notebook. This amazing ability of the “master” notebook to create and direct a zombie horde of other notebooks is achieved thanks to an open source Python module called “nbrun” (notebook run).

qfog-bnlearn-scales

January 6, 2017

Pythonic Qubiter now has a quantum compiler. The Death Star doesn’t have one yet.

Filed under: Uncategorized — rrtucci @ 12:33 am

quantumsoftwaretwotypes

As I’ve mentioned many times before in this blog, Henning Dekant (in Toronto) and I (in Boston) founded, about a year ago, a quantum computer software company called Artiste-qb.net. Our two main products so far are the open source Python programs Quantum Fog and Qubiter. Both programs were originally written in C++, but they have now been rewritten in Python and improved in many ways. This blog post is to announce that the Pythonic version of Qubiter now has a quantum compiler. Hurray for the Rebel Alliance!

Quantum compilers have long been an interest of mine. As far as I know, I was the first one to use the term “quantum compiler” in a paper in the field of quantum computing. I did so in the following 1999 paper

https://arxiv.org/abs/quant-ph/9902062

By a quantum compiler I mean a software program that takes an ARBITRARY unitary matrix as input and decomposes it into a sequence of qubit rotations and CNOTs. In my opinion, quantum compilers that fit this description are useful, even necessary, if one wants to use quantum computers to do artificial intelligence. Because, whereas for most physics applications one can assume unitaries of the form U = e^{-itH} where H, the Hamiltonian, has a very special structure, in AI, the unitaries can have an ARBITRARY structure (not known a priori) that doesn’t come from a Hamiltonian with an a priori known structure.
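For concreteness, the physics-style unitaries U = e^{-itH} mentioned above can be built numerically from a Hermitian H via the spectral theorem: diagonalize H, exponentiate the eigenvalues, and transform back. The point of the paragraph is that such U inherit structure from H, whereas a quantum compiler must accept any unitary.

```python
import numpy as np

def evolution(H, t):
    """U = exp(-1j * t * H) for Hermitian H, via the spectral theorem."""
    evals, V = np.linalg.eigh(H)
    return V @ np.diag(np.exp(-1j * t * evals)) @ V.conj().T

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])   # a toy Hermitian Hamiltonian
U = evolution(H, t=0.7)
print(np.allclose(U @ U.conj().T, np.eye(2)))   # → True, U is unitary
```
A nice sanity check is the group property e^{-it1 H} e^{-it2 H} = e^{-i(t1+t2)H}, which holds because all these exponentials share the same eigenbasis.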

That 1999 paper was the first one to propose using the CS decomposition of Linear Algebra to do quantum compiling. The C++ program Qubiter, which was first released open source simultaneously with the 1999 paper, was the first computer program that used the CS decomposition to do quantum compiling.

Since that 1999 paper, many papers have been written using the same CS decomposition algorithm as my paper, but not citing my work, because, truth be told, dishonesty is rampant among academic researchers.

Other papers have copied the term “quantum compiling” to refer to a task that is related but much less general than what I call quantum compiling, namely, the task of decomposing a single qubit rotation into a sequence of operations from a finite set of gates. The latter task is necessary for fault tolerant quantum computing and was first tackled by Solovay and Kitaev, but they did not refer to it as “quantum compiling”. Nobody did prior to my 1999 paper.

This blog, which is now 8-9 years old, has featured several posts about quantum compilers. These are my favorites:

And now, the latest gossip about quantum compilers.

Up to now, Microsoft’s main and only quantum computer program has been Liqui|>, a heavily patented, closed source computer program written in an unpopular language called F#. For many years now, the main writers of Liqui|>, Dave Wecker and Krysta Svore, have been promising to be THE FIRST to provide a quantum compiler, and, of course, never mentioning my work. In some papers, KS uses the term quantum compiler to mean the same thing as me; in other papers she uses it to mean software that does the Solovay-Kitaev decomposition. But I can state unequivocally that Liqui|> has no CS decomposition quantum compiler at the present time. So Qubiter is way ahead of them there.

But the plot thickens. Matthias Troyer is a professor at ETH Zurich who works part time, for big bucks, for Microsoft. He has written papers about Liqui|> with Dave Wecker and Krysta Svore. Hence many people were extremely surprised that on Dec. 23, 2016, Matthias and two of his students at ETH Zurich, Minion A and Minion B, released a computer program called ProjectQ that duplicates most of what Liqui|> does and is therefore in direct competition with it. However, ProjectQ is open source, written in Python, and under the Apache License. The suspense is unbearable. Does this mean that Liqui|> will be laid to rest, R.I.P., meaning Dave Wecker’s and Krysta Svore’s work for the last 5 years will be completely ditched? Stay tuned. Microsoft will be holding QIP 2017 on Jan 14-20. That QIP is going to be Microsoft’s 1936 Nazi Olympics, where they intend to dazzle the world with their superiority in quantum computing. Maybe then we will find out, by watching carefully the placement of the people on the main dais during the weapons parade, who is up and who is down.

Henning and I welcome all open source code, even if it is in competition with Qubiter.

IMHO, Qubiter is currently much better than ProjectQ. For one thing, they claim they have a quantum compiler, but they don’t, at least not currently. Not in the sense that I defined it at the beginning of this post. What they do seem to have and call a quantum compiler are some subroutines that expand a single qubit gate with multiple controls attached into a sequence of single qubit rotations and CNOTs. But Qubiter has that already too. Look at its CktExpander.py file.

Duel Between Microsoft’s ProjectQ and Liqui|> software

duello_allultimo_sangue

July 14, 2016

Quantum Fog, a quantum computer simulator based on quantum Bayesian networks, can now Think (at least better than a rock)

Filed under: Uncategorized — rrtucci @ 6:51 pm

Today, I added a folder called “learning” to Quantum Fog. QFog is a quantum computer simulator based on Bayesian Networks (bnets). Classical Bayesian networks are what earned Judea Pearl a Turing Award. Quantum Fog implements seamlessly both classical Bayesian networks and their quantum generalization, quantum Bayesian networks.

The way I see it, the field of classical Bayesian networks has had 2 Springs.

The first Spring was about 20 years ago, and it was motivated by the discovery of the join tree message passing algo, which significantly decreased the complexity of doing inference with bnets. That complexity is exponential regardless, but the join tree algo makes it exponential in the size of the largest clique of the graph.

The second Spring is occurring right now, and it is motivated by the discovery of various algorithms for learning bnets from data. Immediately after the first Spring, bnet inference could be done fairly quickly, but the bnet itself had to be divined manually by the user, a formidable task for bnets with more than a handful of nodes. Nowadays, that situation has improved considerably, as you can see by looking at my 2 favorite open source libraries for learning bnets from data:

  1. bnlearn. Very polished, R language. Written by Marco Scutari
  2. neuroBN. Less polished, but pedagogically very helpful to me. Python language. Written by Nicholas Cullen

Its new “learning” folder gives Quantum Fog a rudimentary capability for learning both classical bnets and quantum bnets from data. (So far QFog can only do the Naive Bayes and Chow Liu Tree algos. Soon we will add the Hill Climbing, Tabu, GrowShrink, IAMB and PC algos.) Previous workers like Scutari and Cullen only consider cbnets. Quantum Fog aims to cover both cbnets and qbnets seamlessly. We hope QFog can eventually generate “programs” (instruction sequences) that can be run on real quantum computer hardware.
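For the curious, the Chow Liu Tree algo mentioned above is short enough to sketch: weight every pair of variables by its empirical mutual information, then keep a maximum spanning tree. Here is a toy sketch of my own (not QFog’s implementation), using numpy and networkx:

```python
import numpy as np
import networkx as nx
from itertools import combinations

def mutual_info(x, y):
    """Empirical mutual information (in nats) of two discrete 1-D arrays."""
    mi = 0.0
    for a in np.unique(x):
        for b in np.unique(y):
            pxy = np.mean((x == a) & (y == b))
            if pxy > 0:
                mi += pxy * np.log(pxy / (np.mean(x == a) * np.mean(y == b)))
    return mi

def chow_liu_tree(data):
    """data: (n_samples, n_vars) discrete array -> max spanning tree on MI weights."""
    n_vars = data.shape[1]
    g = nx.Graph()
    for i, j in combinations(range(n_vars), 2):
        g.add_edge(i, j, weight=mutual_info(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(g)

# toy data sampled along the chain X0 -> X1 -> X2
# (each child flips its parent's bit about 10% of the time)
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 2000)
x1 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)
x2 = (x1 ^ (rng.random(2000) < 0.1)).astype(int)
tree = chow_liu_tree(np.column_stack([x0, x1, x2]))
print(sorted(tree.edges()))
```

With enough samples, the recovered tree is the chain 0-1-2, since the direct links carry more mutual information than the weaker 0-2 dependence.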

Quantum Fog goes to pet school

Canine-Cognition-Center3

March 29, 2016

Qubiter on the brink of doing Quantum Chemistry

Filed under: Uncategorized — rrtucci @ 6:35 pm

On behalf of the artiste-qb.net company, I am pleased to announce that Qubiter now has a new class called PhaseEstSEO_writer.py that endows it with the superpower of being able to generate and simulate quantum circuits for quantum phase estimation.

The quantum Phase Estimation circuit (PEC) was invented in 1995 by Kitaev. Since then, it has found many applications in quantum computing. Microsoft Liqui|>, the main competitor of Qubiter, uses PEC to find the ground state of molecules.
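To get a feel for what the PEC computes, here is a minimal numpy sketch (my own illustration, not Qubiter’s PhaseEstSEO_writer class) of the readout statistics of phase estimation, assuming the target register holds an exact eigenstate of U with eigenvalue e^{2 pi i phi}:

```python
import numpy as np

def qpe_distribution(phi, n_bits):
    """Probability distribution over ancilla readouts k for phase phi in [0, 1),
    assuming the target register is an exact eigenstate of U."""
    N = 2 ** n_bits
    j = np.arange(N)
    # state of the ancillas after the controlled-U^j gates:
    # uniform superposition with phases e^{2*pi*i*j*phi}
    v = np.exp(2j * np.pi * j * phi) / np.sqrt(N)
    # inverse QFT: amplitude at k is (1/sqrt(N)) * sum_j v_j e^{-2*pi*i*j*k/N},
    # which is exactly what np.fft.fft computes (up to the 1/sqrt(N) factor)
    amps = np.fft.fft(v) / np.sqrt(N)
    return np.abs(amps) ** 2

probs = qpe_distribution(phi=5 / 16, n_bits=4)
print(int(np.argmax(probs)))  # → 5, i.e. phi is estimated as 5/2^4
```

With n ancilla qubits, the most likely readout k satisfies phi ≈ k/2^n; when phi is an exact multiple of 1/2^n, as here, the distribution is a delta at that k.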

In fact, Liqui|> can generate many other circuits besides PEC, but most of them are not very commercially viable. For example, Liqui|> can generate quantum error correction circuits, but error correction will most likely be done by the hardware manufacturers, so QC programmers won’t need to implement it via Liqui|> in their circuits. Liqui|> also does Shor’s algo, but I don’t think Shor’s algo will be a big money maker, because it requires thousands of qubits, so it will be one of the last applications to be implemented on a QC. By the time it is implemented, post quantum crypto, which is impervious to it, will have been in general use for many years.

So most of the circuits that Liqui|> can generate & simulate will never be commercially viable, but those for doing quantum chemistry probably will be.

The Liqui|> team claims to have used experimentation with Liqui|> to reduce by a factor of a thousand the size of their QC circuit for finding the ground state of molecules with PEC. They claim that their new, optimized circuit will give an answer in an hour of running on a gate model QC, once those beasts arrive. Such calculations would take billions of years for a classical computer to perform, they claim.

The Liqui|> authors like Krysta Svore seem well aware that Quantum Chemistry is one of the most commercially viable and potentially lucrative applications of Liqui|>. You can almost see the dollar signs in their beady eyes when they speak about quantum chemistry.

But Qubiter is throwing a spanner into their nefarious plans.

Microsoft’s Liqui|> is closed source and heavily patented. Also, it is written in a very unpopular language, F#. Qubiter, on the other hand, is open source and written in the super popular language Python.

And now Qubiter allows everyone, not just Microsoft egg heads, to do QC quantum chemistry too.

March 22, 2016

First version of Qubiter (a quantum computer simulator) is out. It says: “Welcome, my navigator. Where do you want me to take you next?”

Filed under: Uncategorized — rrtucci @ 4:44 pm

artiste-qb.net has just uploaded at GitHub its first version of Qubiter, a quantum computer simulator, under the BSD license and written in Python.

It’s saying to you: “Welcome, Navigator. Where do you want me to take you next?”

dog-amazed

nerd-destiny

Over the last 20 years, dozens of quantum computer simulators (for the gate model, a.k.a. quantum circuit, type of QC) have been released. Here is a partial list. So what makes this one special?

Let me compare Qubiter with Microsoft’s very famous quantum simulator called Liqui|>.

  • Liqui|> is closed source (and heavily patented), Qubiter is open source under BSD license
  • Liqui|> is written in F#, Qubiter is written in Python. User base of Python is ~ 100 times bigger than that of F#. And believe me, programmers are very savvy consumers of programming languages.
  • Dave Wecker (rhymes with wrecker), chief architect of Liqui|>, has been quoted as saying that he estimates Liqui|> has about 30,000 lines of code. Qubiter currently has fewer than 1,000, and does all the basics and much more. So tell me, which would you prefer to have to grok, 30,000 or 1,000 lines of code?

I hope you enjoy it. Or as my grandma used to say,

Stai zitto e mangia!! (Shut up and eat!!)

February 17, 2016

Our baby (Quantum Fog) can now read, write and draw

Filed under: Uncategorized — rrtucci @ 12:06 am

In a special Feb. 2, 2016, Groundhog Day blog post, I announced the first large release by our company, artiste-qb.net, of Pythonic open-source Quantum Fog. As you may or may not know, Quantum Fog was originally (almost 20 years ago) a Mac-only application written in C++. The GitHub page for Quantum Fog contains the legacy C++ code of ye olde Quantum Fog, plus the shining new Python code, which will attempt to reproduce all the functionality of the old application plus much more. So are we there yet, you ask. Not quite, but we are making steady progress.

These are some of the things the new QFog can do so far:

  • It can do inference with evidence using 3 methods: Join Tree, MCMC (Monte Carlo) and brute force (enumeration of all Feynman paths).
  • It can do inference by those 3 methods for both quantum Bayesian networks (QBnets) and classical Bayesian networks (CBnets).
  • It can read and write CBnets and QBnets in two formats, .dot and .bif (bif stands for Bayesian Interchange Format).
  • It can draw the CBnet and QBnet graphs using only matplotlib and networkx, included in the usual Python installation. This is fine for most purposes, but if you want a super high quality plot of your graph, you should use the .dot file that Quantum Fog generates and fine-tune that with GraphViz.
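To give a taste of the .dot format, here is a hand-rolled sketch (generic GraphViz text, not QFog’s actual writer) that emits a tiny hypothetical bnet graph, which GraphViz can then render as a high quality plot:

```python
# a tiny hypothetical Bayesian-network DAG as a list of (parent, child) edges
edges = [("Cloudy", "Rain"), ("Cloudy", "Sprinkler"),
         ("Rain", "WetGrass"), ("Sprinkler", "WetGrass")]

# emit GraphViz .dot text by hand; no graphviz/pydot dependency needed to write it
dot_text = "\n".join(
    ["digraph bnet {"]
    + [f'  "{u}" -> "{v}";' for u, v in edges]
    + ["}"])
print(dot_text)
```

Note that the .dot text stores only the graph layout (nodes and arrows); as the footnote below explains, the numerical tables for each node live better in a .bif file.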

QFog integrates CBnets and QBnets seamlessly. You can use all subroutines for either classical or quantum analysis simply by changing a Boolean input parameter called is_quantum.

Footnotes

  • .dot and .bif files are both just .txt files. The .dot format is used by a really wonderful, free software called GraphViz. There are several very helpful Bayesian network repositories that store Bnets in the .bif format. The .dot and .bif formats are complementary. The .dot format is good for storing visual layout info, not good for storing the numerical tables associated with each node. The .bif format is good in the opposite way.

  • The “join tree” (or junction tree, clique tree, or belief propagation) method is an exact method (other methods, like MCMC, are approximate). The Join Tree method caused a mini revolution in the Bayesian networks field starting around 1990. Before then, people had been discouraged by a proof that calculating probabilities from a Bnet by brute force is NP hard. But the Join Tree method takes advantage of the graph structure. If I understand it correctly, its complexity is exp(k) whereas brute force is exp(n), where n is the total number of nodes and k is the number of nodes in the fattest clique (k is called the width of the join tree). The Join Tree algorithm still has exponential complexity, but it is much better than the brute force algo. The join tree algo used by the new QFog is the one described in the following very detailed and clear cookbook paper:

    Inference in Belief Networks, A Procedural Guide, by C. Huang and A. Darwiche (1996)
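The exp(n) versus exp(k) contrast above is easy to see on a chain-shaped bnet, where every clique has just 2 nodes. Here is a toy sketch of my own (not QFog’s join tree code): marginalizing the last node of a binary chain by brute force sums 2^n terms, while passing messages along the chain costs only n small matrix-vector products.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
n = 10  # binary nodes in a chain x0 -> x1 -> ... -> x9
prior = rng.dirichlet([1, 1])                   # P(x0)
trans = rng.dirichlet([1, 1], size=(n - 1, 2))  # trans[i, a] = P(x_{i+1} | x_i = a)

def brute_force():
    """Sum the joint over all 2^n assignments to get P(x_{n-1}): exp(n) terms."""
    p_last = np.zeros(2)
    for assign in product([0, 1], repeat=n):
        p = prior[assign[0]]
        for i in range(n - 1):
            p *= trans[i, assign[i], assign[i + 1]]
        p_last[assign[-1]] += p
    return p_last

def eliminate():
    """Sum out the nodes one by one along the chain: n matrix-vector products."""
    msg = prior
    for i in range(n - 1):
        msg = msg @ trans[i]  # sum_a msg[a] * P(x_{i+1} | x_i = a)
    return msg

print(np.allclose(brute_force(), eliminate()))  # → True
```

Both routes give the same marginal, but the elimination order exploits the chain structure, which is exactly the kind of saving the join tree algo generalizes to arbitrary graphs.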

I end by waxing poetic in a nerdy way. Here are 3 things that remind me of quantum fog:


fog-machine
Fog Machines, very cool. An essential prop in the shooting of moody films, in rock concerts, in serious Halloween home decorations, and in nerd experiments. They work by either (1) pushing a mixture of water and glycol (or glycerin or mineral oil) over a heated surface, or (2) dropping dry ice, i.e., solid CO2, into water heated to near its boiling point. Some use liquid N2 instead to get a different kind of fog.


Aerogel.
Aerogels, very cool too.

They are very good thermal insulators. You can put your hand on one side and a Bunsen Burner flame on the other side of a ½ inch thick layer of aerogel and not feel the heat.

The ones that are almost transparent and ethereal looking are the silica aerogels. They are kind of expensive though, like $50 for a 1 x 1 x 0.5 in. specimen.

Wikipedia quotes:
“Aerogel was first created by Samuel Stephens Kistler in 1931, as a result of a bet with Charles Learned over who could replace the liquid in “jellies” with gas without causing shrinkage.”

“The first aerogels were produced from silica gels. Kistler’s later work involved aerogels based on alumina, chromia and tin dioxide. Carbon aerogels were first developed in the late 1980s.”


GoldenGateFog
And of course, the Golden Gate Bridge shrouded in fog.
