Quantum Bayesian Networks

February 19, 2018

Elon Musk Predicts Quantum TensorFlow

Filed under: Uncategorized — rrtucci @ 6:22 pm

Just kidding. But it could very well happen, now that I have suggested it on the Internet, whispering it into the ears of a thousand bots.

In a 1926 article in Collier's magazine, Nikola Tesla, a pioneer of AC power, predicted the smartphone.

“When wireless is perfectly applied the whole earth will be converted into a huge brain, which in fact it is, all things being particles of a real and rhythmic whole. We shall be able to communicate with one another instantly, irrespective of distance. Not only this, but through television and telephony we shall see and hear one another as perfectly as though we were face to face, despite intervening distances of thousands of miles; and the instruments through which we shall be able to do this will be amazingly simple compared with our present telephone. A man will be able to carry one in his vest pocket.”

I personally find this prediction and the conviction and clarity with which he uttered it quite impressive.

Elon Musk must be a big fan of Tesla, as evinced by the fact that he named his car brand Tesla. (By the way, when a watermelon rots in your fridge, it smells godawful. I call the smell Melon Musk. So the guy even has his own fragrance.) Even though Musk is not very good at predicting (after all, he hasn’t predicted Quantum TensorFlow… yet), he has conquered many a science nerd’s heart, including my own, by building the Tesla automobile, in a “Gigafactory” that looks like a giant spaceship, and by putting, on February 6, 2018, his own Tesla roadster into orbit for billions of years using his SpaceX company’s Falcon Heavy rocket. That scene of the two returning side boosters from Falcon Heavy landing side by side, and nearly simultaneously, at Landing Zones 1 and 2 in Cape Canaveral… is a scene that will be emblazoned forever in the hearts and minds of every science fiction and science fact lover who has seen it or ever will see it, for the rest of human history.

Let me channel for a moment the spirit of Tesla, who wants to communicate with Mr. Musk across the great divide, to tell him that quantum TensorFlow is an inevitability, and that it will need a logo. The ghost of Tesla left a picture of a suitable logo in my computer last night, while I slept. It’s the logo at the beginning of this blog post. The quote in the logo is of course due to Richard Feynman.


February 9, 2018

Today We Enrolled our Baby (Quantum Fog) in a gambling school taught by famous Monte Carlo gamblers (PyMC3, Edward, Zhusuan)

Filed under: Uncategorized — rrtucci @ 4:57 am

In the beginning, there was Matlab, which grew out of the Fortran libs LINPACK and EISPACK (LINPACK is still the basis of the main benchmark used to rank supercomputers; its descendant LAPACK, the Linear Algebra Package, is everywhere in numerical software). Matlab’s tensor stuff was copied and improved by the Python borgs to produce numpy, which handles tensors really nicely but doesn’t do it in a distributed “parallel” fashion.

Then, starting about 10 years ago, some guys from the University of Montréal had the brilliant idea of writing the Theano Python library. (Theano was a Greek mathematician thought to have been the wife of Pythagoras.) Theano replaces most numpy functions with Theano functions that are namesakes of the numpy ones and do the same thing in a distributed fashion. Then Google came out with the TensorFlow Python lib, which copied Theano and improved on it. TensorFlow can do most numpy operations using multiple CPUs, GPUs and TPUs. But TensorFlow and Theano are much more than tools for doing tensor operations in a distributed fashion. They also do differentiation in a distributed fashion (such differentiation is often used to train neural nets), and they are designed to help you do fast prototyping and distributed running of artificial neural nets.

In the last year, some new Python libraries built on top of TensorFlow and Theano have appeared that allow you to do fast prototyping and distributed running of Bayesian networks. B nets are near and dear to my heart, and I consider them even more powerful than artificial neural networks. And I’m far from being alone in my love of b nets: Judea Pearl won the prestigious Turing Award for his pioneering work on them. The new Python libs that I alluded to are PyMC3 (built on top of Theano), Edward (built on top of TensorFlow) and Zhusuan (also built on top of TensorFlow).

Added later: Forgot to mention that Facebook has its own Theano equivalent called PyTorch, and Uber has an Edward equivalent, built on PyTorch, called Pyro. But I haven’t used them yet.
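
To make the numpy versus TensorFlow point concrete, here is a tiny sketch of mine (using the TF 1.x graph API, the one current as of this writing): the same sum of squares written in numpy and in TF, with TF also handing you the gradient for free.

```python
import numpy as np
import tensorflow as tf  # TF 1.x graph API

# numpy: computes the value eagerly, but no gradients, no multi-device execution
x_np = np.array([1.0, 2.0, 3.0])
y_np = np.sum(x_np ** 2)

# TensorFlow: the same computation, but as a graph that TF can place on
# CPUs/GPUs/TPUs, plus automatic differentiation thrown in for free
x = tf.constant([1.0, 2.0, 3.0])
y = tf.reduce_sum(x ** 2)
grad = tf.gradients(y, [x])[0]   # dy/dx = 2x

with tf.Session() as sess:
    print(y_np, sess.run([y, grad]))  # 14.0 [14.0, array([2., 4., 6.])]
```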

The main architect of Edward is Dustin Tran, who wrote Edward as part of his PhD thesis at Columbia Univ. Dustin now works at Google, and the TensorFlow team is working with Dustin to integrate Edward with TensorFlow.

Zhusuan is the art of using an abacus. The word literally means “bead counting” in Chinese. The Zhusuan lib is a fine open-source (MIT license) product of Tsinghua University in Beijing, China. It demonstrates that China is already very advanced in AI.

According to Google Trends, “TensorFlow” is at least 10 times more popular than “quantum computing” as a search term, even though TensorFlow has many competitors that started before it did, and even though it was open sourced for the first time only 2 years ago.

One of the aims of artiste-qb.net is to participate in the revolution of extending Edward & TensorFlow so that they can do both classical and quantum Bayesian networks. Today we took a small, initial step in that direction. We added a folder


which contains a file called ModelMaker.py and two Jupyter notebooks. Both notebooks do MCMC for the classical Bayesian network WetGrass. One notebook does this by invoking the external software PyMC (a.k.a. PyMC2, the precursor of PyMC3), whereas the other does it via PyMC3. Both notebooks start by loading a .bif file for the WetGrass bnet. From that alone, they construct a native X model and analyze that model using X, for X = PyMC2, PyMC3. In the near future, we will also add a notebook that does the same thing for X = Edward, Zhusuan.
Addendum (Feb. 16, 2018): Added support for Edward.
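
To give a flavor of what the notebooks end up building, here is a hand-written PyMC3 sketch of a WetGrass-style bnet. The CPT numbers are made up, and this is NOT the model that ModelMaker.py generates from the .bif file; it is just my illustration of the idea.

```python
import pymc3 as pm

with pm.Model():
    cloudy = pm.Bernoulli('Cloudy', p=0.5)
    sprinkler = pm.Bernoulli('Sprinkler', p=pm.math.switch(cloudy, 0.1, 0.5))
    rain = pm.Bernoulli('Rain', p=pm.math.switch(cloudy, 0.8, 0.2))
    # P(WetGrass=1 | Sprinkler, Rain); the grass is observed to be wet
    p_wet = pm.math.switch(sprinkler,
                           pm.math.switch(rain, 0.99, 0.9),
                           pm.math.switch(rain, 0.9, 0.01))
    pm.Bernoulli('WetGrass', p=p_wet, observed=1)
    trace = pm.sample(2000)  # MCMC, i.e. the gambling school

print('P(Rain=1 | WetGrass=1) is roughly', trace['Rain'].mean())
```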

(Image: climbing-mt-qc, by Henning Dekant)

February 6, 2018

The Toronto Quantum Meetup Supremos invite you to our next Meetup entitled “Blockchain & Quantum Computing”

Filed under: Uncategorized — rrtucci @ 6:56 pm

The Toronto Quantum Computing Meetup is the largest meetup in the world dedicated to quantum computing, so we claim the title of Quantum Meetup Supremacy, at least for now. (Currently we have 997 Supremos as members. The second biggest club is in London, with a paltry 706 Brexiter members.)

We cordially invite you to our next meeting on Monday, February 26, 2018 at the “Bergeron Centre for Engineering Excellence”, a beautiful building that belongs to York Univ.

This meeting is being held jointly with the “Blockchain Hub”, another Toronto Meetup (1604 members currently).

The speakers will be Henning Dekant, the CEO of artiste-qb.net (the company I work for), and 3 other distinguished panelists.

Depressed Bitcoin investors are especially welcome. Let us cheer you up! We think we are funny. And we have good taste in music too! Supremos is Spanish for Supremes. Ah, The Supremes, the better angels of our (American) nature.

Try singing that when the Bitcoin price is going down!

Our company artiste-qb.net is cultivating relationships with standard VC firms via the CDL (Creative Destruction Lab, part of the Rotman School of Business of the Univ. of Toronto), which is an incubator that we are currently participating in, and also with Chinese investors via networking done by our intrepid CTO and part owner, Dr. Tao Yin, who lives in ShenZhen, China.

However, as a side project, we are also contemplating a less traditional source of funding, an ICO. To quote Henning, “There are pitfalls with ICOs as well, the SEC has been clamping down on some, arguing that the tokens represent securities. China outlawed them completely etc. But this company aims at bringing them into regulatory compliance within all of North America, and they are the first to launch a regulator-approved ICO. I know the founders,..” The company Henning was referring to is TokenFunders, another proud Canadian company like us.

January 21, 2018

Life in the time of the Bayesian Wars: Clippy Strikes Back

Filed under: Uncategorized — rrtucci @ 7:42 pm

You can now add Clippy to any website, with his full panoply of adorable responses.


(Thanks to Gavin Dekant for pointing this website out to us.) First lines of the website:

Add Clippy or his friends to any website for instant nostalgia. Our research shows that people love two things: failed Microsoft technologies and obscure Javascript libraries. Naturally, we decided to combine the two.

So what does this have to do with Bayesian Networks?

In a previous blog post, I linked to a 1996 newspaper article that quotes Bill Gates revealing his strong interest in Bayesian networks. But I didn’t mention in that blog post that this was not just a platonic love affair with B nets that never went any further. By 1996, Gates was already actively channeling his love of B nets into backing real MS products, such as the Lumiere project, which brought to the world the first office assistant, in Office 97. Nowadays, Alexa, Siri, Google Assistant and Cortana are well known, commonplace assistants, but MS was there first. Sadly, but characteristically, MS has fumbled the OA ball since then, and today, Cortana ranks last in usage among the big 4 OA’s. In fact, Cortana is almost dead today, now that MS has all but pulled out of the mobile phone OS and hardware businesses.

Next, I want to say more about the Lumiere project, a project remarkable for its great foresight, technical prowess and creativity, and for its instructive, amusing history with a dramatic, poignant ending.

Here is a nice article which backs up much of the history that I will recount next:

The Lumiere project: The origins and science behind Microsoft’s Office Assistant By Awesome-o | August 31, 2009

The office assistant created by the Lumiere project and first offered by MS in Office 97 was called Clippy. Clippy predicted its user’s next moves and needs so poorly that Office users soon started to hate and dread it, so much so that MS turned it off by default in Office XP and removed it entirely in Office 2007. So poor Clippy was born about 20 years ago, and he was killed when he was 10 years old.

Microsoft’s Lumiere project, headed by Eric Horvitz, produced the first in-house versions of Clippy, based on Bayesian networks. This original Clippy learned from the user; it was trainable, so it was truly Bayesian. By all accounts, it worked really well. However, for the commercial version that appeared in Office 97 and thereafter, upper management insisted that the Bayesian heart of Clippy be replaced by a rule-based system that could not learn from the user. The reason Clippy was crippled was not palace intrigue or corporate malice, but simply that Office 97 already occupied too much space, and the Office designers had to choose between including a full-fledged Clippy or including some new, mundane word processing features, but not both. They chose the latter. Hence the original, by many accounts brilliant, Clippy was lobotomized before it was first released to the public.

But it gets better. Because of the Clippy fiasco, many people in 2007 considered the idea of an office assistant that could be trained by the user to be an impossible dream. How wrong they were! Look at the present. OA’s have not gone away. They keep getting better and more ubiquitous.

And, Clippy is back!

Bayesianism will never die. Author Sharon McGrayne has the same opinion and has written a wonderful popular science book explaining why:

“The Theory That Would Not Die: How Bayes’ Rule Cracked the Enigma Code, Hunted Down Russian Submarines, and Emerged Triumphant from Two Centuries of Controversy”, by Sharon Bertsch McGrayne

January 10, 2018

Trend Graphs for quantum computer programming related topics

Filed under: Uncategorized — rrtucci @ 3:45 am

Programmers like me often fret over whether they are learning the programming language du jour, because employers usually only want to hire those who already know a specific language. And the more popular a language is, the more jobs are available using it. It’s an annoying fact of life for programmers that there are numerous languages available, and that they tend to peak in popularity and then become less and less popular over time. In the old days, I used to go to a Barnes & Noble book store and there gauge the popularity of each language by the relative number of books available for it on their shelves. Nowadays, to gauge such popularity, I go instead to the webpages at Google and StackOverflow that generate trend graphs.

The purpose of this blog post is to gather in one spot links to trend graphs comparing topics that I personally think are important and related to quantum computer programming.

StackOverflow trend graph for various computer languages used in quantum computing, big data, AI and statistics (JavaScript > Python > Java > C#… Haskell, used by Quipper, is insignificant. C# is getting less popular over time.)

Google trend graph comparing: Cortana, Alexa, Siri, Google Assistant (Alexa peaking, Microsoft Cortana lowest)

Google trend graph comparing: TensorFlow, PyTorch, quantum computing, quantum computer, Bayesian (TensorFlow is super popular, especially in China)

January 1, 2018

New tools for dealing with limited couplings of a quantum computer chip

Filed under: Uncategorized — rrtucci @ 11:37 pm

The main purpose of this blog post is to announce that Qubiter now has two simple, but hopefully useful, new classes that facilitate writing qc programs for the new crop of qc’s with 20-50 qubits whose qubits are not fully connected (coupled):

  1. Class ForbiddenCNotExpander reads an English file and writes a new English file. Any CNot in the old file that is not physically allowed by a pre-specified qc chip is replaced in the new file by an expansion (sequence) of allowed CNots. (A toy sketch of this idea follows the list.)

  2. Class ChipCouplingsFitter reads an English file and writes a new English file. The new file permutes the qubits of the old file so that the old CNots are mapped into new ones that are allowed for a pre-specified qc chip. Of course, this is not always possible, so the class only promises to find such a mapping and to tell you what it is if it finds one.
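
Here is the toy sketch promised above. It is NOT the code inside ForbiddenCNotExpander (read that class in the Qubiter repo for the real algorithm and API); it just illustrates the idea on the simplest possible chip, a line of qubits where only nearest-neighbor CNots are allowed: drag the control next to the target with SWAPs (each SWAP being 3 CNots), do the CNot, then undo the SWAPs.

```python
def expand_forbidden_cnot(control, target):
    """Toy version of the idea behind ForbiddenCNotExpander (not Qubiter code).

    Assumes qubits 0, 1, 2, ... on a line where CNots are only allowed between
    nearest neighbors. Returns a list of allowed (control, target) CNots that
    implements CNOT(control, target). A SWAP(a, b) is written as 3 CNots.
    """
    def swap(a, b):
        return [(a, b), (b, a), (a, b)]

    gates = []
    step = 1 if target > control else -1
    pos = control
    # drag the control toward the target, one nearest-neighbor SWAP at a time
    while abs(target - pos) > 1:
        gates += swap(pos, pos + step)
        pos += step
    gates.append((pos, target))          # the now-allowed CNot
    # undo the SWAPs so every other qubit ends up where it started
    while pos != control:
        gates += swap(pos - step, pos)
        pos -= step
    return gates

print(expand_forbidden_cnot(0, 3))
# SWAP(0,1), SWAP(1,2), CNOT(2,3), SWAP(1,2), SWAP(0,1), each SWAP as 3 CNots
```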

I hope these 2 new classes will make it easier for Qubiter users to interact with real qc hardware of any brand.

We are striving to make Qubiter interact with as many qc hardware brands as possible. So far, we have targeted the IBM chips with a class called Qubiter_to_IBMqasm2 that translates a Qubiter English file (i.e., Qubiter’s version of qasm) to IBM qasm2. As soon as Google comes out with its 49 qubit qc and accompanying cloud service, which Martinis has promised for Jan 2018, we will target Google devices too, by writing a translator from Qubiter English files to Google’s version of qasm. This IBM or Google qasm can be sent by a Jupyter notebook via the internet to the IBM or Google quantum computers sitting in the cloud, which will then run the program and send back the results to the original Jupyter notebook.
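
In case you have never seen IBM qasm2, here is a toy translator of my own. The gate-tuple input format below is made up for this illustration and is NOT Qubiter's English-file syntax; Qubiter_to_IBMqasm2 in the repo does the real job.

```python
def to_qasm2(num_qubits, gates):
    """Toy gate-list -> OpenQASM 2 translator (illustrative, not Qubiter's).

    `gates` is a list of tuples like ('h', 0) or ('cx', 0, 1).
    """
    lines = ['OPENQASM 2.0;',
             'include "qelib1.inc";',
             'qreg q[%d];' % num_qubits,
             'creg c[%d];' % num_qubits]
    for g in gates:
        if g[0] == 'h':
            lines.append('h q[%d];' % g[1])
        elif g[0] == 'cx':
            lines.append('cx q[%d],q[%d];' % (g[1], g[2]))
    lines.append('measure q -> c;')
    return '\n'.join(lines)

print(to_qasm2(2, [('h', 0), ('cx', 0, 1)]))  # a Bell-state circuit
```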

We are trying to design Qubiter so that it facilitates communication not only between a user and multiple hardware devices, but also among multiple users. Indeed, Qubiter already makes it easy for users of different qc devices, for example Alice who uses an IBM chip and Bob who uses a Google chip, to be on friendly terms with each other, maybe even go out on a date. Qubiter facilitates such dalliances because it saves quantum circuits as simple yet clear text files. This makes it as easy as pie to modify those text files, save them to improve them at a later date, and share them with other qc engineers. If you want to collect quantum circuits like you would collect baseball cards, comic books, stamps, butterflies or whatever, Qubiter’s English and Picture files, found in Qubiter’s io_folder, are a great way of doing it.

December 17, 2017

I Left My Heart in Nanjing, at the Quantum AI Conference, AIQP-2017

Filed under: Uncategorized — rrtucci @ 8:31 pm

Nanjing University will be holding a Quantum AI conference AIQP-2017 on Dec 20-22, 2017.

Our company artiste-qb.net has its headquarters in Toronto-Canada and is currently a participant in the CDL (Creative Destruction Lab of the Univ. of Toronto) quantum AI incubator.

Dr. Tao Yin, currently living in ShenZhen, China, is the CTO of our company artiste-qb.net. Our man Tao will not be speaking at the conference, but he will be attending and meeting people. Also attending, and speaking (!), will be two other persons associated with CDL: Dr. Jacob Biamonte and Dr. Peter Wittek. Not speaking (whew!) are any IBM, Google or Microsoft representatives.

Quote from conference website:

The workshop is sponsored by the Nanjing University AIQ fund. The AIQ fund is donated by the Founder and President of ECOVACS Robotics, Mr. Qian Dongqi, who is also an alumni of NJU (Bachelor in Physics 1981, and Master in Philosophy 1987).

The Ecovacs company sells excellent robotics vacuum cleaners all over the world.

I end with a “Make China Great Again” Tweet that I have been using recently to advertise our company:

December 16, 2017

In Love With Jupyter Widgets

Filed under: Uncategorized — rrtucci @ 11:56 pm

Many old-fashioned computer programs that allow a lot of pointing and clicking, what I like to call “GUI-rich” software (GUI = Graphical User Interface), provide you with only a limited “scripting and documenting” ability, i.e., the ability to save and restore your work history (including the buttons you clicked and in what order, the plots you generated, your comments, etc.). Before the advent of Jupyter widgets, Jupyter notebooks got an A++ grade in scripting ability, but an F in GUI. Old-fashioned, GUI-rich software got the opposite grades. So it might have been hard back then to decide which of these two paths, GUI-rich or Jupyter, to choose. But this is no longer a hard choice. With Jupyter widgets, Jupyter notebooks get an A++ in both categories.

In my opinion, scripting ability is very desirable when doing work related to statistics, Bayesian networks, Big Data, AI, numerical research, etc. A GUI is desirable too, especially if you want normal people, i.e. non-geeks, ever to use your software! So I am glad to announce that Quantum Fog now has some Jupyter notebooks with widgets.
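
If you have never seen a Jupyter widget, here is about the smallest possible example (generic ipywidgets, not the actual Quantum Fog notebooks): in a live notebook, a slider appears and the function is re-run every time you drag it.

```python
from ipywidgets import interact

def query(prob_rain=0.5):
    # stand-in for a real bnet inference call
    return 'Pretend this is P(WetGrass=1) computed with P(Rain=1)=%.2f' % prob_rain

# renders a slider in a live notebook; GitHub's static viewer won't show it
interact(query, prob_rain=(0.0, 1.0, 0.05))
```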

In the following folder, you will find notebooks for doing inference with the (classical, not quantum) WetGrass and Asia networks. This is a link to GitHub, so of course the GUI is displayed there but doesn’t work properly. It only works properly if you are running the notebook in a live Python environment.

December 5, 2017

The Toronto Quantum Meetup Supremos invite you to their next Meetup on Bill Gates and Bayesian networks

Filed under: Uncategorized — rrtucci @ 4:30 pm

The Toronto Quantum Computing Meetup is the largest meetup in the world dedicated to quantum computing, so we claim the title of Quantum Meetup Supremacy, at least for now. (Currently we have 873 Supremos as members. The second biggest club is in Austin, with a paltry 630 Fajita members.)

We cordially invite you to our next meeting on Dec 15. This meeting will start by addressing the burning question: “Was Bill Gates lying to the American people when he stated to the press that he doesn’t understand quantum computing?” This question has cruelly split the American nation along party lines.

Consider the evidence:

  • Here are the beginning lines of an Oct 1996 article in the Los Angeles Times

    When Microsoft Senior Vice President Steve Ballmer first heard his company was planning to make a huge investment in an Internet service offering movie reviews and local entertainment information in major cities across the nation, he went to Chairman Bill Gates with his concerns.

    After all, Ballmer has billions of dollars of his own money in Microsoft stock, and entertainment isn’t exactly the company’s strong point.

    But Gates dismissed such reservations. Microsoft’s competitive advantage, he responded, was its expertise in “Bayesian networks.”

    Asked recently when computers would finally begin to understand human speech, Gates began discussing the critical role of “Bayesian” systems.

    Ask any other software executive about anything “Bayesian” and you’re liable to get a blank stare.

    Is Gates onto something? Is this alien-sounding technology Microsoft’s new secret weapon?

    Quite possibly.

  • But here are the beginning lines of a Nov 2017 article in MSN.com

    Bill Gates doesn’t completely understand quantum computing, the Microsoft Corp. co-founder and verified computer whiz told the Wall Street Journal Monday.

    “I know a lot of physics and a lot of math. But the one place where they put up slides and it is hieroglyphics, it’s quantum,” Gates said.

The host of the meeting, Henning Dekant, CEO of the company artiste-qb.net that I work for, will try to argue that Bill Gates is wrong. If only Gates realized that quantum computer programs can be written very intuitively as Bayesian networks, the scales would fall from his eyes and he would recognize that the countenance of Quantum Computing is the same as that of his old love, Bayesian Networks.

Jesting aside, Henning will also speak about artiste-qb.net’s ongoing work to use quantum Bayesian networks in quantum computing.

Mr. Gates, I never met Steve Jobs but I read his fricking biography, and you, Mr. Gates, are no Steve Jobs, sir!

December 3, 2017

You are invited to the wedding of Quantum Computing and TensorFlow

Filed under: Uncategorized — rrtucci @ 6:42 pm

The quantum computerization of TensorFlow (TF) is a quixotic dream that no doubt has crossed the minds of many people, both technically savvy and not. Here at artiste-qb.net, we are fully committed to this goal and well on our way to achieving it. Since our company is fully committed to open source, it doesn’t really matter if we achieve this goal before anyone else. If someone else beats us to it, we will learn from their code and vice versa. That is, as long as they too deliver open source to the world. And if they don’t, we think their software is doomed… quantum open source rules! How did Google vanquish, or at least de-fang, the Microsoft monopoly? To a large extent, by using open source. Open source rules AI, the cloud and mobile.

So, let me tell you how we are using TF for quantum computing.

When Google first open sourced TF a mere 2 years ago, I wrote a blog post to mark the occasion. In that post, I called TF a platform for Fakesian networks instead of Bayesian networks. My beef was that TF had only deterministic nodes, which are standard in the field of artificial neural nets. It had no probabilistic nodes, which are standard in the 2 fields of classical Bayesian networks (BNets) and hierarchical models (HMs). But this past year, the open source community has stepped into the breach and filled the gap with a software library called Edward, built on top of TF, that adds probabilistic nodes to TF (the buzzword is “deep probabilistic programming”). And Edward has been approved for integration into TF, so soon it will be seamlessly integrated into TF. Thus, soon, TF will combine artificial neural nets and BNets seamlessly. It will have superpowers!
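
To show what “probabilistic nodes on top of TF” looks like, here is a minimal sketch assuming Edward 1.x, where random variables from edward.models plug straight into the TF graph (argument names follow the tf.distributions conventions; check the Edward docs for the exact current API):

```python
import tensorflow as tf
from edward.models import Normal, Bernoulli  # Edward 1.x on top of TF 1.x

# a probabilistic node, something plain TF did not give you
weight = Normal(loc=tf.zeros(1), scale=tf.ones(1))
# an ordinary deterministic TF node downstream of it
score = 3.0 * weight + 1.0
# another probabilistic node whose distribution depends on the graph above
click = Bernoulli(logits=score)
```

Inference over such a graph is then run with Edward’s inference classes (variational ones such as ed.KLqp, if I remember the API correctly).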

Of course, in quantum mechanics, one must use complex amplitudes instead of probabilities for the nodes, and one must use an L2 norm instead of an L1 norm with those amplitudes, so you can’t use Edward to do quantum mechanics just yet. Edward will have to be made “quantum ready”. By that we mean that one will have to rewrite parts of Edward so that it has a “quantum switch”, i.e., a parameter called ‘is_quantum’ which, when True, gives a quantum BNet and, when False, gives a classical BNet. That is precisely what artiste-qb.net’s open source program Quantum Fog already does, so our company is uniquely placed to make a quantum version of Edward.
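
Here is a toy numpy sketch of mine (not Quantum Fog’s actual code) of what such an ‘is_quantum’ switch boils down to at a single node: classical node weights are probabilities normalized in the L1 sense, quantum node weights are complex amplitudes normalized in the L2 sense, and measurement probabilities are the squared magnitudes of the amplitudes.

```python
import numpy as np

def node_distribution(weights, is_quantum):
    """Toy classical/quantum switch for one node (my sketch, not Quantum Fog).

    `weights` are probabilities (classical) or complex amplitudes (quantum).
    Returns the probabilities of the node's states.
    """
    weights = np.asarray(weights)
    if is_quantum:
        # amplitudes: normalize in the L2 sense, probabilities = |amplitude|^2
        amps = weights / np.linalg.norm(weights)
        return np.abs(amps) ** 2
    # classical: normalize in the L1 sense
    return weights / weights.sum()

print(node_distribution([0.2, 0.6], is_quantum=False))   # [0.25 0.75]
print(node_distribution([1.0, 1.0j], is_quantum=True))   # [0.5 0.5]
```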

Another obstacle to marrying TF and quantum computers is that the quantum BNets will have to be compiled into a sequence of elementary operations (SEO) such as controlled-NOTs and single-qubit rotations. Once again, our company artiste-qb.net is uniquely placed to accomplish this task. Our open source quantum simulator Qubiter is the only one in the business that includes a “quantum CSD compiler”, which is a tool that will help express quantum BNets as a SEO.
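
For the mathematically curious, the “CSD” in quantum CSD compiler stands for the cosine-sine decomposition. Here is one step of it done with SciPy (this is just the underlying linear algebra, not Qubiter’s compiler code; scipy.linalg.cossin requires a recent SciPy):

```python
import numpy as np
from scipy.linalg import cossin        # SciPy >= 1.5
from scipy.stats import unitary_group

U = unitary_group.rvs(4)               # a random 2-qubit gate (4x4 unitary)
# U = u @ cs @ vdh, with u and vdh block-diagonal (two 2x2 blocks) and cs a
# matrix of cosines and sines (a multiplexed rotation). A CSD compiler
# recurses on the blocks until only 1-qubit rotations and CNots remain.
u, cs, vdh = cossin(U, p=2, q=2)
print(np.allclose(U, u @ cs @ vdh))    # True
```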

HMs are really a subset of BNets. However, the BNet and HM communities have historically grown somewhat independently. The BNet community is centered around pioneers like Judea Pearl (at UCLA), inventor of some of the most important BNet methods, whereas the HM community is centered around pioneers like Andrew Gelman (at Columbia), author of many great books and a great blog in the HM field. The HM tribe only uses continuous distributions for node probabilities, and they are very keen on MCMC (Markov Chain Monte Carlo). The BNet community uses both discrete (expressed as matrices or tensors) and continuous distributions for their node probabilities, and they use MCMC and other methods too, like the junction tree method, to do inferences.

Edward has a distinguished pedigree in both the BNet and HM communities. Edward originated at Columbia Univ. One of its main and original authors is Dustin Tran, currently a PhD student at Columbia. So you can be sure that the Edward people are in close communication with, and receive useful feedback from, the Gelman tribe. Another distinguished author of Edward is Kevin Murphy, who has been working on BNets for more than a decade. Murphy wrote the oldie-but-goodie Bayes Net Toolbox for Matlab. He has also written several books on bnets and machine learning. He previously worked as a prof at the Univ. of British Columbia, but he now works at Google. He is one of the main organizers of the young (2-year-old) Bayesian Deep Learning conference, which, by the way, will have its annual meeting in less than a week (Dec. 9, 2017).

Classical BNets are a very active field, both in academic research and in commerce. Judea Pearl won a Turing Award for them. BNets are very popular in bioinformatics, for example. Whereas no qc company has yet broken even financially, there are classical BNet companies that have lasted and been profitable for almost 2 decades, such as Bayesia, Hugin and Norsys/Netica.

Oh, and one last thing. It’s called TensorFlow, not TensorNetwork, for a very good reason. If you try to use TF to implement the “tensor networks” used in quantum computing, you will fail, unless you start using BNets instead of Tensor Networks and pretend these 2 are the same thing, which is probably what the Tensor Networks people will do. In TF (and BNets), the lines emanating out of a node carry in them a full tensor that they pass along to other nodes. In a Tensor Network, a Tensor does not Flow into the arrows emanating out of its node. The tensor just sits in the node. For more discussion about the important differences between a quantum BNet and a Tensor Network, see this blog post.

November 24, 2017

The Era of the Gay Particle (a.k.a. Majorana Quasi Particle and Anyon) Begins

Filed under: Uncategorized — rrtucci @ 12:06 am

I first reported in this blog about a possible sighting of a Majorana quasiparticle in Oct 2014. In that blog post, I referred to them as gay particles because they are neither bosons nor fermions but something in between**. (I was making fun of Leon Lederman, who is credited with being the first person to refer to the Higgs boson as the God particle.)

Check out this recent, excellent (very knowledgeable, clear and well written) article

Majorana Update by Stevan Nadj-Perge

recounting the amazing progress that has been made in the last 5 years on the gay particle front, going from an uncertain sighting of the elusive particle in 2012, to almost certain evidence of its existence just 2 weeks ago.

Most scientists believe that (gate model) quantum computing done using gay particles will have much lower error rates than quantum computing done using transmon superconducting circuits (the latter is the strategy being pursued by IBM, Google and many others).

The bad news is that Microsoft is the only major company in the world that has invested bigly in this technology and they have patented it to the hilt. The good news is that the MS quantum software is really crappy and closed source. Quantum open source will crush them like Linux and Android did before. As to the hardware, I predict that MS will try to use its patents to monopolize this technology in the US and Europe, but China will quickly duplicate the MS hardware, ignore the patents to a large extent, and offer this technology worldwide via the cloud.

**In case you are wondering, I am not gay. And the term “gay particle” is definitely not meant to be a homophobic slur. As I have said before in this blog, I have a lot of respect and admiration for many gay people, including two of mankind’s greatest heroes, Leonardo da Vinci and Alan Turing. And maybe Ettore Majorana too?

November 22, 2017

Dueling Quantum Computing Conferences, not to be missed by anyone who is anyone

Filed under: Uncategorized — rrtucci @ 1:41 am

These two “beauty contest” conferences are more exclusive than Davos. John Preskill, Paris Hilton, Kim Kardashian and Ivanka Trump are expected to fly in by helicopter. (We, the commoners, will be permitted to observe the proceedings from behind a cyclone fence.)

West Coast – Dec 4-6, q2b Conference, http://www.q2b.us

East Coast – Dec 6-8, ThinkQ Conference,

November 18, 2017

Strange Connection Between ATOS corporation and Oak Ridge National Laboratory (ORNL) Quantum Computing Division

Filed under: Uncategorized — rrtucci @ 2:30 pm

The ATOS corporation, based in Bezons, France, has a really bad reputation in some countries like the UK. To see this, check out, for example, the “Controversy” section of the Wikipedia entry on ATOS.

On October 20, 2017, ORNL put out a press release entitled:
“Two ORNL-led research teams receive $10.5 million to advance quantum computing for scientific applications”

On Nov 13, 2017, ATOS put out the following press release: click here

This is very odd, because a quantum simulator of 30 qubits is not considered leading edge nowadays. Google/ProjectQ have simulated 45 qubits and IBM has simulated 56 qubits. Furthermore, ORNL owns the Titan supercomputer, one of the largest supercomputers in the world. ORNL is at the forefront of HPC, so why did they buy an expensive albeit not very powerful French machine to do quantum simulations? Like carrying coals to Newcastle, isn’t it?

ATOS claims to offer a “Quantum Learning Machine” that runs a quantum language called aQasm, but their language is closed source. I haven’t found any tutorial, or examples, or open source software for it, no matter how hard I’ve looked. Why did ORNL, a DOE national lab in the very red #MAGA state of Tennessee, do something so un-MAGA and unpatriotic by MAGA standards, namely, buy expensive French closed-source software and hardware when American companies like IBM and Google already provide an open-source quantum language, a development kit, and cloud usage of it for quantum simulations? Is this evidence of government waste, abuse or corruption, and does the government care?

November 8, 2017

Volkswagen’s Entangled Quantum State

Filed under: Uncategorized — rrtucci @ 5:26 am

Long, long ago (May 2013), in a galaxy far away (Silicon Valley), Google together with NASA bought a D-Wave quantum computer.

And then, just 7 months ago (Mar 2017), Volkswagen proudly announced that it was using a D-Wave quantum computer for traffic flow optimization in Beijing.

But yesterday, Volkswagen put out a press release announcing a grand partnership between Volkswagen and Google’s quantum computer group.

Some excerpts from yesterday’s Volkswagen press release (D-Wave was not mentioned directly):

Neven: Google already has a high-performance quantum computer and the software it takes to run it.

Hofmann: One key focal point of our partnership will be optimizing traffic with the help of quantum computing. In this work, we will draw on the findings of our first research project with 10,000 taxis in China’s capital, Beijing.


Hmm, has Google swiped their friend’s biggest client?

(Art by Frankes)

November 6, 2017

Artiste-qb.net CEO will be a Featured Speaker at mini-summit to be held on Tuesday by the Toronto AI Meetup

Filed under: Uncategorized — rrtucci @ 5:33 am

The Toronto AI meetup, which currently has a whopping 1,694 members, will be holding a ‘mini-summit’ on Tuesday (1 day from now).

AI Summit 2017 – a micro-conference
Tuesday, Nov 7, 2017, 6:45 PM
5th floor, SAS Institute (Canada) Inc., 280 King Street East, Toronto, ON
75 Manifolds attending

Welcome to the AI Summit. This is a micro-conference on AI featuring some really great minds. We are partnering with SAS on this event and co-hosting with the Toronto Women’s Data Group. Hope to see you there – we may even give away a t-shirt at the event, maybe :) Helen Ngo – Ensemble methods; Jane Illarionova – Deep Learning applied to brain …

The Univ. of Toronto, may I remind you, is the home of Geoffrey Hinton, one of the world’s leading researchers in the field of artificial neural networks (he and two co-authors popularized the backpropagation method in a famous 1986 paper).

Tuesday’s summit will feature 5 speakers, 2 women and 3 men. One of the men will be Henning Dekant, the CEO of artiste-qb.net (the company I work for). Henning will be speaking on quantum computing and AI. The venue for the meeting will be the 5th floor of SAS Institute (Canada) Inc. Henning is scary ugly (he dresses as himself for Halloween). He isn’t as beautiful and brainy as the two women speakers, but he does have the home court advantage, having worked at the SAS Institute for many years before leaving it to lead artiste-qb.net.
