Quantum Bayesian Networks

July 26, 2016

Toronto, the new Rome of quantum computing

Filed under: Uncategorized — rrtucci @ 6:40 am

I am a beginner in Italian, but I love it.

Toronto is Canada’s most populous city, with 2.6 million people. It is a 1.5-hour drive from Waterloo, Canada, home of the IQC (Institute for Quantum Computing, a misnomer; it should be the Institute for Quantum Cryptography) and PI (Perimeter Institute, where Justin Trudeau works). The University of Toronto, Canada’s largest university, is also well known for its expertise in AI and Bayesian networks. Geoffrey Hinton, a famous deep learning and neural networks researcher, worked at the Univ. of Toronto before he was hired by Google in 2013. Also, IBM, a major competitor in the race to build a gate model qc, has substantial research facilities in the Toronto area.

Henning Dekant (in Toronto) and I (in Boston) recently started a quantum computer software company called artiste-qb.net.

I am writing this blog post to announce that

  • artiste-qb.net is now hiring 2 student interns (at almost minimum wage, sorry) to write software (open source, of course)
  • Henning is starting a quantum computing Meetup in Toronto

    Quantum Computing and Data Science
    Toronto, ON

    “The future of computing has come out of the labs. Software development for quantum computing is happening in the GTA, and this meetup aims at bringing people from this fledgli…”

    Check out this Meetup Group →

Some say the most powerful weapon in the Galaxy is the Death Star, but this is better.

July 14, 2016

Quantum Fog, a quantum computer simulator based on quantum Bayesian networks, can now Think (at least better than a rock)

Filed under: Uncategorized — rrtucci @ 6:51 pm

Today, I added a folder called “learning” to Quantum Fog. QFog is a quantum computer simulator based on Bayesian networks (bnets). Classical Bayesian networks are what earned Judea Pearl a Turing Award. Quantum Fog seamlessly implements both classical Bayesian networks and their quantum generalization, quantum Bayesian networks.
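To make the contrast concrete, here is a toy numerical illustration of my own (it does not use Quantum Fog's actual classes or API): in a classical bnet, a node B with parent A stores the conditional probabilities P(B|A); in a quantum bnet, it stores complex transition amplitudes, and probabilities come from squared magnitudes.

```python
# Toy contrast between a classical and a quantum Bayesian network for A -> B.
# This is only an illustration of the idea, not Quantum Fog's actual API.
import numpy as np

# Classical node: conditional probability table; each row sums to 1.
P_A = np.array([0.7, 0.3])                    # P(A)
P_B_given_A = np.array([[0.9, 0.1],           # P(B | A=0)
                        [0.2, 0.8]])          # P(B | A=1)
joint_P = P_A[:, None] * P_B_given_A          # P(A, B) = P(A) P(B | A)
print("classical joint:\n", joint_P, "\nsums to", joint_P.sum())

# Quantum node: complex amplitude table; squared magnitudes of each row sum to 1.
amp_A = np.array([np.sqrt(0.7), 1j * np.sqrt(0.3)])
amp_B_given_A = np.array([[np.sqrt(0.9), np.sqrt(0.1)],
                          [1j * np.sqrt(0.2), np.sqrt(0.8)]])
joint_amp = amp_A[:, None] * amp_B_given_A    # amplitude A(a, b) = A(a) A(b | a)
joint_prob = np.abs(joint_amp) ** 2           # Born rule: probability = |amplitude|^2
print("quantum joint:\n", joint_prob, "\nsums to", joint_prob.sum())

# The quantum case differs once an unmeasured node is summed out: amplitudes,
# not probabilities, are added over its states, which allows interference.
# (The result below is unnormalized; it is shown only to make the contrast visible.)
print("coherently summing out A:", np.abs(joint_amp.sum(axis=0)) ** 2)
```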

The way I see it, the field of classical Bayesian networks has had 2 Springs.

The first Spring was about 20 years ago, and it was motivated by the discovery of the join tree (junction tree) message-passing algorithm, which significantly decreased the complexity of doing inference with bnets. That complexity is still exponential in the worst case, but the join tree algo makes it exponential in the size of the largest clique of the graph rather than in the total number of nodes.
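To see why the largest clique, and not the total number of nodes, sets the cost, here is a tiny sketch of my own (plain variable elimination on a chain; not code from Quantum Fog or from any particular library):

```python
# Toy exact inference on a chain bnet X1 -> X2 -> ... -> Xn of binary nodes,
# illustrating why cost scales with the largest clique (here of size 2) rather
# than with the full joint of size 2**n.  Plain variable elimination; this is
# not Quantum Fog's code.
import numpy as np

rng = np.random.default_rng(0)
n = 20                                                       # 2**20 joint states, never built

prior = rng.dirichlet([1, 1])                                # P(X1)
cpts = [rng.dirichlet([1, 1], size=2) for _ in range(n - 1)] # P(X_{k+1} | X_k), rows sum to 1

# Eliminate X1, X2, ... in order.  Each intermediate factor ("message") has
# at most 2 entries, so the whole run costs about n * 2 * 2 multiplications
# instead of the 2**n needed to tabulate the full joint.
message = prior
for cpt in cpts:
    message = message @ cpt                                  # sum_x P(x) P(next | x)

print("P(X_n):", message)                                    # marginal of the last node
```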

The second Spring is occurring right now, and it is motivated by the discovery of various algorithms for learning bnets from data automatically. Immediately after the first Spring, bnet inference could be done fairly quickly, but the bnet itself had to be divined manually by the user, a formidable task for bnets with more than a handful of nodes. Nowadays that situation has improved considerably, as you can see by looking at my 2 favorite open source libraries for learning bnets from data (a toy sketch of the underlying idea follows the list):

  1. bnlearn: very polished; R language; written by Marco Scutari
  2. neuroBN: less polished, but pedagogically very helpful to me; Python language; written by Nicholas Cullen
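To illustrate what learning a bnet from data amounts to in the simplest case, here is a toy Python sketch of my own (it shows the generic score-and-compare idea, not how bnlearn or neuroBN actually implement their algorithms): generate data from A -> B, then let a BIC score decide between the graph with that edge and the graph without it.

```python
# Toy score-based structure learning for two binary variables A and B
# (an illustration of the idea, not the bnlearn or neuroBN implementation):
# fit both candidate graphs by counting, then compare BIC scores.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
A = rng.random(N) < 0.6                        # sample A ~ Bernoulli(0.6)
B = np.where(A, rng.random(N) < 0.9,           # B depends on A, so the graph
             rng.random(N) < 0.2)              # with the edge A -> B should win

def log_lik(probs, counts):
    """Log-likelihood of the observed counts under the estimated joint probs."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(np.sum(counts * np.log(probs)))

counts = np.array([[np.sum(~A & ~B), np.sum(~A & B)],
                   [np.sum(A & ~B),  np.sum(A & B)]], dtype=float)

# Model 1: no edge.  P(a, b) = P(a) P(b); 2 free parameters.
pa = counts.sum(axis=1) / N
pb = counts.sum(axis=0) / N
bic1 = log_lik(np.outer(pa, pb), counts) - 0.5 * 2 * np.log(N)

# Model 2: edge A -> B.  P(a, b) = P(a) P(b | a); 3 free parameters.
pb_given_a = counts / counts.sum(axis=1, keepdims=True)
bic2 = log_lik(pa[:, None] * pb_given_a, counts) - 0.5 * 3 * np.log(N)

print("BIC without edge:", round(bic1, 1))
print("BIC with A -> B :", round(bic2, 1))
print("learned structure:", "A -> B" if bic2 > bic1 else "no edge")
```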

Its new “learning” folder gives Quantum Fog a rudimentary capability for learning both classical bnets and quantum bnets from data. (So far QFog can only do the Naive Bayes and Chow-Liu Tree algos. Soon we will add the Hill Climbing, Tabu, GrowShrink, IAMB and PC algos.) Previous workers like Scutari and Cullen only consider cbnets. Quantum Fog aims to cover both cbnets and qbnets seamlessly. We hope QFog can eventually generate “programs” (instruction sequences) that can be run on real quantum computer hardware.
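And here is a compact sketch of the Chow-Liu Tree idea itself (again my own toy version, not the code that now sits in the “learning” folder): weight every pair of variables by its empirical mutual information, keep a maximum-weight spanning tree, and orient the edges away from any chosen root to get a tree bnet.

```python
# Minimal sketch of the Chow-Liu Tree algorithm for binary data (a toy version
# of the idea, not Quantum Fog's implementation): weight every pair of variables
# by empirical mutual information, then keep a maximum-weight spanning tree.
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 5000
# Synthetic data with a chain-like dependence A -> B -> C, plus an independent D.
A = rng.random(N) < 0.5
B = np.where(A, rng.random(N) < 0.8, rng.random(N) < 0.2)
C = np.where(B, rng.random(N) < 0.7, rng.random(N) < 0.3)
D = rng.random(N) < 0.5
data = {"A": A, "B": B, "C": C, "D": D}

def mutual_info(x, y):
    """Empirical mutual information (in nats) between two binary columns."""
    mi = 0.0
    for vx, vy in itertools.product([False, True], repeat=2):
        pxy = np.mean((x == vx) & (y == vy))
        px, py = np.mean(x == vx), np.mean(y == vy)
        if pxy > 0:
            mi += pxy * np.log(pxy / (px * py))
    return mi

# Sort all pairs by mutual information, then greedily add the heaviest edges
# that do not close a cycle (Kruskal's algorithm).
pairs = sorted(itertools.combinations(data, 2),
               key=lambda p: mutual_info(data[p[0]], data[p[1]]),
               reverse=True)
parent = {v: v for v in data}
def find(v):                       # union-find root lookup
    while parent[v] != v:
        v = parent[v]
    return v

tree = []
for u, v in pairs:
    ru, rv = find(u), find(v)
    if ru != rv:                   # adding this edge keeps the graph a forest
        parent[ru] = rv
        tree.append((u, v))

# Expected: the A-B and B-C edges are recovered; D attaches by a near-zero edge.
print("Chow-Liu skeleton:", tree)
```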

Quantum Fog goes to pet school

