I am a beginner at Italian, but I love it.
Toronto is Canada’s most populous city, with 2.6 million people. It is a 1.5-hour drive from Waterloo, Canada, home of the IQC (Institute for Quantum Computing, a misnomer, should be Institute for Quantum Cryptography) and PI (Perimeter Institute, where Justin Trudeau works). The University of Toronto, Canada’s largest university, is also well known for its expertise in AI and Bayesian networks. Geoffrey Hinton, a famous deep learning and neural networks researcher, worked at the University of Toronto before he was hired by Google in 2013. Also, IBM, a major competitor in the race to build a gate-model quantum computer, has substantial research facilities in the Toronto area.
Henning Dekant (in Toronto) and I (in Boston) recently started a quantum computer software company called artiste-qb.net (our company logo above).
I am writing this blog post to announce that:
- artiste-qb.net is now hiring 2 student interns (at almost minimum wage, sorry) to write software (open source, of course)
- Henning is starting a quantum computing Meetup in Toronto
Some say the most powerful weapon in the Galaxy is the Death Star, but this is better.
Today, I added a folder called “learning” to Quantum Fog. QFog is a quantum computer simulator based on Bayesian networks (bnets). Classical Bayesian networks are what earned Judea Pearl a Turing Award. Quantum Fog seamlessly implements both classical Bayesian networks and their quantum generalization, quantum Bayesian networks.
The way I see it, the field of classical Bayesian networks has had 2 Springs.
The first Spring occurred about 20 years ago, and it was motivated by the discovery of the join tree message-passing algorithm, which significantly decreased the complexity of doing inference with bnets. That complexity is still exponential in the worst case, but the join tree algo makes it exponential only in the size of the largest clique of the graph.
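To make the clique-size point concrete, here is a toy sketch of exact inference on a binary chain bnet A → B → C (my own illustration, not QFog's code): eliminating one variable at a time means no intermediate factor ever involves more than two variables, which is exactly why small cliques keep inference cheap.

```python
# Toy exact inference on a chain bnet A -> B -> C (all binary).
# A sketch only -- the CPT numbers below are made up for illustration.
import numpy as np

# Conditional probability tables (each row sums to 1).
P_A = np.array([0.6, 0.4])                 # P(A)
P_B_given_A = np.array([[0.7, 0.3],        # P(B | A=0)
                        [0.2, 0.8]])       # P(B | A=1)
P_C_given_B = np.array([[0.9, 0.1],        # P(C | B=0)
                        [0.5, 0.5]])       # P(C | B=1)

# Eliminate A, then B; each step only ever builds a 2-variable factor.
P_B = P_A @ P_B_given_A        # sum_a P(a) P(b|a)
P_C = P_B @ P_C_given_B        # sum_b P(b) P(c|b)
print(P_C)                     # -> [0.7 0.3]
```

If the graph were densely connected instead of a chain, the elimination steps would create factors over many variables at once, and the table sizes (hence the cost) would blow up exponentially in the largest such factor.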
The second Spring is occurring right now, and it is motivated by the discovery of various algorithms for learning bnets from data automatically. Immediately after the first Spring, bnet inference could be done fairly quickly, but the bnet itself had to be divined manually by the user, a formidable task for bnets with more than a handful of nodes. Nowadays, that situation has improved considerably, as you can see by looking at my 2 favorite open source libraries for learning bnets from data:
- bnlearn. Very polished, R language. Written by Marco Scutari
- neuroBN. Less polished, but pedagogically very helpful to me. Python language. Written by Nicholas Cullen
The new “learning” folder gives Quantum Fog a rudimentary capability for learning both classical bnets and quantum bnets from data. (So far QFog can only do the Naive Bayes and Chow-Liu tree algos. Soon we will add the Hill Climbing, Tabu, GrowShrink, IAMB and PC algos.) Previous workers like Scutari and Cullen only consider cbnets. Quantum Fog aims to cover both cbnets and qbnets seamlessly. We hope QFog can eventually generate “programs” (instruction sequences) that can be run on real quantum computer hardware.
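To give a flavor of the kind of structure learning involved, here is a hedged, from-scratch sketch of the classical Chow-Liu tree algorithm (function names and toy data are mine, not QFog's API): estimate the mutual information between every pair of variables from the data, then keep a maximum-weight spanning tree over the variables.

```python
# Hedged sketch of the classical Chow-Liu tree algorithm; not QFog's code.
import itertools
import numpy as np

def mutual_information(x, y):
    """Empirical mutual information (in nats) between two discrete columns."""
    mi = 0.0
    for vx in np.unique(x):
        for vy in np.unique(y):
            pxy = np.mean((x == vx) & (y == vy))
            px, py = np.mean(x == vx), np.mean(y == vy)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_edges(data):
    """data: (n_samples, n_vars) int array. Returns the tree's edge list."""
    n_vars = data.shape[1]
    # Weight every pair of variables by their mutual information.
    weights = {(i, j): mutual_information(data[:, i], data[:, j])
               for i, j in itertools.combinations(range(n_vars), 2)}
    # Kruskal-style maximum spanning tree with union-find.
    parent = list(range(n_vars))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    edges = []
    for (i, j), w in sorted(weights.items(), key=lambda kv: -kv[1]):
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            edges.append((i, j))
    return edges

# Toy data: X1 copies X0 with 10% noise; X2 is an independent coin flip.
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, 2000)
x1 = (x0 ^ (rng.random(2000) < 0.1)).astype(int)
x2 = rng.integers(0, 2, 2000)
data = np.column_stack([x0, x1, x2])
print(chow_liu_edges(data))   # edge (0, 1) comes out first, as expected
```

The tree keeps the strongly dependent pair (X0, X1) and attaches the independent X2 by whichever of its near-zero-weight edges happens to rank higher. The fancier algorithms the post mentions (Hill Climbing, Tabu, etc.) drop the tree restriction and search over general DAGs instead.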
Quantum Fog goes to pet school