Quantum Bayesian Networks

June 27, 2020

My Pinned Tweet at Twitter

Filed under: Uncategorized — rrtucci @ 9:28 pm

This is the pinned Tweet on my company’s (www.ar-tiste.xyz) Twitter account

November 23, 2020

Scam Alert. Zapata Computing and the Quantum Ponzi Era.

Filed under: Uncategorized — rrtucci @ 2:25 am

zapatillas-zapata

According to Crunchbase, Zapata Computing has just raised another $31M in Series B financing. It has now raised a total of $91.3M in financing.

This software-only company has yet to make a net profit, and won’t be able to make one for the foreseeable future, because fault tolerant quantum error correction is nowhere in sight. I joined Zapata’s cloud service this year to see what it was all about, and what I found was an impenetrable thicket of software for producing a quantum disadvantage, instead of an advantage. I can’t wait to see what Zapata is going to do with $90M. Pay themselves and their VCs $300,000/yr salaries? Write 90 slight variations of the same NISQ quantum algorithm that everyone else is writing, and charge $1M per algorithm clone? 

Welcome to the Quantum Ponzi Era. The NISQ quantum disadvantage Era. The Quantum Winter.

Mind you, I have a PhD in physics, and have worked in quantum computing for almost 2 decades, so when I say that there is no quantum advantage, I know what I am talking about. But just in case you doubt my word, here is some supporting evidence. First, note that in October 2018, Rigetti Computing offered a $1M prize for achieving a quantum advantage, and nobody has ever claimed that prize. Second, check out the low opinion that the famous quantum scientist Scott Aaronson had about Zapata Computing's algorithms about one year ago. And I don't see Zapata having changed course since last year. If anything, they have doubled down on the hype.

Zapata's main source of revenue is a quantum cloud service. I see no benefit to using their service compared with using any of the MANY other proxy quantum cloud services. If you want a proxy quantum cloud service, I advise you to set up your own using Berkeley Univ.'s free open source software z2jk.

Zapata claims that they also get some revenue from qc consulting. I doubt that that revenue is enough to cover the cost of Zapata's office rent, let alone pay the salaries of its ~60 employees. Zapata is far from being a monopoly. There are precious few companies that currently pay for qc consulting, and Zapata has to share those few with 50 other qc startups, and with giant companies like Google, Microsoft, Intel, Honeywell, etc. Currently, there is a huge over-supply of, and infinitesimal demand for, qc consulting services, and I don't see that changing in the foreseeable future.

Every Sunday, I listen to an NPR show called How I Built This, with Guy Raz. All these shows are available online as podcasts. In that radio show, Guy Raz interviews entrepreneurs who have built highly profitable multi-million dollar companies from scratch. I find the entrepreneurs interviewed by Raz to be fascinating people, and in a different league from Zapata's flaky founders. Time and time again, you hear the crème de la crème of entrepreneurs on that radio show say what I will say next.

A startup should only look for VC funds after it has a product with a clear path to profitability in 2-3 years, or if it is already making a profit and wants to scale up. Anything else leads to the 90% of startups that are Ponzi schemes, like Zapata: startups that never make any profit and go broke after they burn their VC money on high salaries and ridiculous, nonsensical products. Startups with CEOs like Chad Thunderrock, CEO of Zapata-Mail Industries, LLC.

October 30, 2020

Structure Learning for Bayesian Networks and the myth of ARACNE/Arachne, as told by the volcano book

Filed under: Uncategorized — rrtucci @ 10:15 am

arachne

My FREE book on Bayesian Networks, Bayesuvius, the volcano book, now has 45 chapters. The most recent batch of chapters is on the subject of learning the structure of a Bayesian Network (bnet) from data. Some of the titles in that batch include:

  • ARACNE-structure learning
  • Chow-Liu Trees and Tree Augmented Naive Bayes (TAN)
  • Scoring the nodes of a learned bnet
  • Structure and Parameter Learning for Bnets

Structure learning is a computationally intensive, NP-complete task, so the best one can hope for is heuristic algorithms that solve the problem approximately. A huge number of such algorithms have been tried and continue to be tried. Luckily, there exists a free open source software library called bnlearn that covers many of them. My goal in writing these chapters on structure learning was to give a brief overview of the subject, after which I recommend that those who want to pursue the subject further learn bnlearn.
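Since the Chow-Liu chapter is in that batch, here is a minimal sketch of the Chow-Liu idea in Python: score every pair of variables by empirical mutual information, then keep a maximum-weight spanning tree over those pairwise scores. This is an illustrative toy under my own function names, not bnlearn's implementation.

```python
import math
from collections import Counter
from itertools import combinations

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete columns."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

def chow_liu_tree(data):
    """data: list of rows of discrete values. Returns the edges of a
    maximum-weight spanning tree over pairwise mutual information."""
    ncols = len(data[0])
    cols = [[row[i] for row in data] for i in range(ncols)]
    edges = sorted(((mutual_information(cols[i], cols[j]), i, j)
                    for i, j in combinations(range(ncols), 2)), reverse=True)
    parent = list(range(ncols))   # union-find roots, for Kruskal's algorithm
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for _, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
    return tree

# toy data: columns 0 and 1 are perfectly dependent, column 2 is independent
data = [(0, 0, 0), (0, 0, 1), (1, 1, 0), (1, 1, 1)] * 5
print(chow_liu_tree(data))   # the strong edge (0, 1) must be in the tree
```

The resulting tree can then be oriented from any root to get a tree bnet, which is what TAN builds on.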

bnlearn (free, open source) is very comprehensive and well maintained. It is written mostly in C with an R front-end. It was developed by Marco Scutari and collaborators over a time period of more than 10 years, and is still under active development.

Our company http://www.ar-tiste.xyz, the first COOP of Bayesian network programmers, is a big fan and user of bnlearn. All the other large bnet companies use their own proprietary software for structure learning, but their software is much less powerful. That is one of the reasons why we think http://www.ar-tiste.xyz is different and better than those other companies. We leverage free open source software like bnlearn whereas they don’t.

The Bayesuvius chapter entitled “ARACNE-structure learning” is very short, but was a lot of fun to write. ARACNE is a structure learning algorithm, but Arachne also means “spider” in Greek, and the word has a very nice Greek myth behind it. Arachne was a human who was turned into a spider by the Greek goddess Athena for the sin of hubris. In my chapter on the ARACNE algorithm, I too commit the sin of hubris, by daring to propose a modification of the standard ARACNE algorithm so that it breaks 4-node cliques instead of 3-node ones. Read the chapter if you want to understand better what I am talking about. I end with a YouTube video that tells the myth of Arachne with some beautiful animation.
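For the curious, the standard 3-node-clique step of ARACNE can be sketched in a few lines: in every fully connected triple of nodes, the data processing inequality suggests the weakest mutual-information edge is indirect, so it gets pruned. A toy sketch under my own function names, not the reference implementation:

```python
from itertools import combinations

def aracne_prune(mi, eps=0.0):
    """mi: dict {(i, j): mutual information} with i < j for each edge.
    Standard ARACNE step: in every fully connected 3-node clique, mark the
    weakest edge for removal (with tolerance eps). Returns the kept edges."""
    nodes = sorted({a for edge in mi for a in edge})
    doomed = set()
    for i, j, k in combinations(nodes, 3):
        tri = [(i, j), (i, k), (j, k)]
        if all(e in mi for e in tri):   # a fully connected triple
            weakest = min(tri, key=lambda e: mi[e])
            if mi[weakest] < min(mi[e] for e in tri if e != weakest) - eps:
                doomed.add(weakest)
    return {e: w for e, w in mi.items() if e not in doomed}

# toy chain x - y - z: the direct edges are strong, the x-z edge is indirect
mi = {(0, 1): 1.0, (1, 2): 0.8, (0, 2): 0.5}
print(sorted(aracne_prune(mi)))   # the weak indirect edge (0, 2) gets pruned
```

The 4-node-clique variant proposed in the chapter would iterate over quadruples instead of triples.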

 

October 29, 2020

Portrayals in the movies of Quantum Computing Physicists

Filed under: Uncategorized — rrtucci @ 6:08 pm

For my money, the definitive portrayal in the movies of quantum computing physicists was given in the first Jurassic Park movie by the character Dennis Nedry (the name looks a lot like Nerdy, if you have trouble remembering it), played by actor Wayne Knight. https://www.youtube.com/watch?v=me8ICrOghQw

Emory Univ. Biostatistics Course

Filed under: Uncategorized — rrtucci @ 5:39 pm

I recently came across a course in applied biostatistics taught at Emory Univ. (Atlanta, Georgia) with  course notes that I really like. This blog post serves the multi-purpose of: saving the link for me so that I can find it in the future, sharing my treasure find with my readers, and praising  the teachers of the course for a job well done.

BIOS731: Advanced Statistical Computing

Instructors: Hao Wu (hao.wu at emory dot edu), Zhaohui Steve Qin (zhaohui.qin at emory dot edu).

Emory Univ. is a world class institution in Public Health (which includes biostatistics). The legendary CDC (Centers for Disease Control and Prevention) was built on land donated by Emory, and is adjacent to the university campus, as you can see from this aerial photo taken from the Wikipedia article on Emory.

Emory_Campus_Aerial_Image

Aerial view of Emory University’s main campus (bottom) and the Centers for Disease Control and Prevention (top), Atlanta, Georgia

October 19, 2020

Causally Correct Bayesian Network

Filed under: Uncategorized — rrtucci @ 1:35 am

Given an arbitrarily large dataset of samples for the random variables (\underline{x}_i)_{i=0, 1, \ldots, N-1}, there may be several Bayesian Networks (bnets) that fit the data well, but only one is used by Nature. That single one is called a causal (or causally correct) Bayesian Network. Whenever I speak of causal issues in my FREE book Bayesuvius, I assume, often without mentioning it, that the correct bnet is being used.
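A two-node toy example makes the point concrete: the bnets x→y and y→x can fit the same joint distribution exactly, yet they disagree about what a do-intervention predicts, so at most one of them can be the causal bnet. A minimal sketch, with made-up numbers and my own notation:

```python
# A joint distribution over two binary variables, x and y (made-up numbers)
joint = {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.4}

px = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}
py_given_x = {(x, y): joint[(x, y)] / px[x] for x in (0, 1) for y in (0, 1)}
px_given_y = {(x, y): joint[(x, y)] / py[y] for x in (0, 1) for y in (0, 1)}

# Both factorizations reproduce the observational joint exactly ...
for x in (0, 1):
    for y in (0, 1):
        assert abs(px[x] * py_given_x[(x, y)] - joint[(x, y)]) < 1e-12  # bnet x -> y
        assert abs(py[y] * px_given_y[(x, y)] - joint[(x, y)]) < 1e-12  # bnet y -> x

# ... but they disagree about the intervention do(x=1):
p_do_xy = {y: py_given_x[(1, y)] for y in (0, 1)}  # x -> y: P(y|do(x=1)) = P(y|x=1)
p_do_yx = dict(py)                                 # y -> x: P(y|do(x=1)) = P(y)
print(p_do_xy, p_do_yx)   # different answers; only one bnet can be causally correct
```

This is exactly why observational data alone cannot always identify the causal bnet, and why do-interventions are needed.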

Identifying the correct bnet is a difficult task. Doing so might require some causality probing experiments (“do interventions”) such as those envisioned by Judea Pearl. Designing causality probing experiments is itself a delicate task, as the following Russian joke, told to me by my friend Anna M., illustrates:

Two scientists caught a cockroach. They tore off one of its legs and clapped their hands – the cockroach ran away. They caught it again. They tore off another leg and clapped their hands – the cockroach ran away. Another leg – the cockroach went away. Another leg – the cockroach crawled away. Another leg – the cockroach stayed still.
Conclusion: a cockroach cannot hear without legs.

If you are afraid of making mistakes in your causal AI programming, hire us at http://www.ar-tiste.xyz. We are the first COOP of Bayesian Network programmers. We are trained professionals who never ever make causal mistakes. Except that one time when we concluded that cockroaches cannot bite without legs.

October 18, 2020

Expectation Maximization Algorithm in Pictures

Filed under: Uncategorized — rrtucci @ 7:59 pm

emax-square

The Expectation Maximization (EM) algorithm is commonly used in Data Science to find the maximum of a likelihood function that depends on some hidden variables (e.g., missing data). The EM algorithm alternates between steps that change the maximum of a curve and steps that change its expected value, keeping the other one fixed. Here is a simplified geometrical picture of such a process. Suppose you draw a Gaussian probability distribution y=f(x) in the xy-plane with unit underlying area. For simplicity, let us replace the Gaussian by a rectangular graph with unit underlying area. Let

  • MY = the height of the rectangle and
  • EX = the position on the X axis of the center of the rectangle

shape-changing step: This step changes MY at fixed EX. It changes the height of the rectangle, making it fatter or skinnier,  but keeping its EX fixed.  This step changes shape without doing an x-translation.

x-translation step: This step changes EX at fixed MY. It  translates the rectangle in the x direction without changing its shape. This step does an x-translation without changing shape.

By alternating incremental shape-changing and x-translation steps, one can move a rectangle from any initial values of EX, MY, to any final values thereof.
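The rectangle picture above is only an analogy; for readers who prefer code, here is a minimal EM sketch for a two-component 1-D Gaussian mixture, with the usual alternation of an E step (responsibilities) and an M step (re-estimated weights, means and variances). A toy illustration with my own function names, not production code:

```python
import math
import random

def em_gmm_1d(data, n_iter=50):
    """Minimal EM for a two-component 1-D Gaussian mixture.
    Returns (means, variances, mixture weights)."""
    mu = [min(data), max(data)]   # crude but effective initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E step: responsibility r[k][i] of component k for data point i
        r = [[0.0] * len(data) for _ in range(2)]
        for i, x in enumerate(data):
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k]) *
                    math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(dens)
            for k in range(2):
                r[k][i] = dens[k] / s
        # M step: re-estimate weights, means and variances from responsibilities
        for k in range(2):
            nk = sum(r[k])
            pi[k] = nk / len(data)
            mu[k] = sum(ri * x for ri, x in zip(r[k], data)) / nk
            var[k] = sum(ri * (x - mu[k]) ** 2 for ri, x in zip(r[k], data)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
data = ([random.gauss(-3, 1) for _ in range(300)] +
        [random.gauss(4, 1) for _ in range(300)])
mu, var, pi = em_gmm_1d(data)
print(sorted(mu))   # the two estimated means land near -3 and 4
```

Each E step fixes the parameters and recomputes expectations; each M step fixes the expectations and re-maximizes the parameters, just like the alternation described above.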

October 9, 2020

No Causation without representation!

Filed under: Uncategorized — rrtucci @ 4:06 pm

My FREE book Bayesuvius now has 39 chapters (like the Hitchcock film, The 39 Steps). I had been postponing writing the chapters on Pearl causality until now, because I consider them to be the most important chapters in the whole book, and I wanted to nail them, to the best of my limited abilities. Well, I finally bit the bullet and wrote them. Please check them out, and send me feedback. I would especially like your opinion on  the chapters entitled

  • D-Separation
  • Do-Calculus
  • Counterfactual Reasoning

Here is a picture from the Counterfactual Reasoning chapter to pique your interest:

do-imagine-ops

Please keep in mind that this is the **first** released version of these chapters. I intend to improve and expand them in the future.

I truly believe that causal AI is an essential, fundamental part of AI, and that current AI will never progress beyond dumb curve fitting unless it embraces causal AI fully. That is why I am so delighted to see the recent meteoric rise of  the mega startup Insitro, which is applying (causal) AI to finding and testing new drugs. Causal reasoning is ubiquitous in human thinking. Even cats do it when they investigate a water faucet. And yet current AI doesn’t use it! I end this post with my causal chain of the day:

No causal AI without Bayesian Networks representation. No Bayesuvius without causal AI.

bnet representation ——> causal AI ——> Bayesuvius

September 16, 2020

Time travel & the importance of being Causal AI Earnest

Filed under: Uncategorized — rrtucci @ 3:41 pm

September 10, 2020

Amazon Braket, ka-ching, ka-ching

Filed under: Uncategorized — rrtucci @ 8:55 pm

About a month ago, Amazon announced the opening of their much anticipated quantum cloud service called Braket. It’s quite funny. Amazon has hired a bunch of greedy, selfish, narcissistic, dishonest, amoral quantum physicists from Caltech, who know nothing about programming or business, to make their quantum service profitable. Good luck with them, Jeff Bezos!

In a previous blog post about 9 months ago, I listed 17 quantum clouds (including the new ones by Google and Microsoft, and the longstanding one by IBM). By now there are probably a few more. So the quantum cloud field is super-saturated already.

Add to that the fact that proxy-quantum clouds like Amazon’s and Microsoft’s introduce a middle-man in the exchange of information. Hence, they are certain to slow things down at the user end compared with non-proxy, native quantum clouds like IBM’s and Google’s.

If I wanted to use a proxy quantum cloud, I would install my own private one. It would be cheaper, more flexible and private. There is already available, excellent, well maintained, free, open-source software produced by Berkeley Univ. that allows a computer ignoramus to install, in minutes, a private, Kubernetes driven, highly scalable, proxy quantum cloud.

Oh, and one more thing… user fees.

At Amazon-Braket, they do give you quantum simulator usage with <25 qubits for free, but you can do that on your own computer. Furthermore, AWS cloud fees seem to be charged separately from AWS-Braket fees. AWS cloud fees begin to be charged after your free-tier year is over, and these can be quite substantial compared to using your own computer for free.

At Amazon-Braket, none of the usage of the qc hardware is for free.

  • 30 cents per task plus a per shot fee. ka-ching.
  • IonQ's fees are about 50 times D-Wave's per shot. IonQ wants to charge 1 penny per shot. Ka-ching, ka-ching. Good luck, guys! IBM salesman Dario Gil claims (*) that in the Quantum Challenge that IBM held in May, users ran 10^9 shots per day. At IonQ/AWS prices, that affair would have cost users $10^7/day.
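A back-of-the-envelope check of that last figure, under the assumed prices quoted above:

```python
# Back-of-the-envelope check of the $10^7/day figure (assumed prices from above)
per_shot_fee = 0.01       # IonQ's assumed 1 cent per shot
shots_per_day = 10 ** 9   # shot volume of IBM's May Quantum Challenge, per Gil
daily_cost = per_shot_fee * shots_per_day
print(f"${daily_cost:,.0f} per day")   # ten million dollars a day
```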

AWS-Braket is going to be an interesting quantum physics experiment. It will answer the age-old physics question: how much are qc hobbyists who are not being funded by a company willing to pay for their vice? I suspect the answer is: not much. As for a large number of companies paying for qc services, I doubt that will happen either, because qc technology won't yield a quantum-over-classical advantage for many years. What I believe is going to happen is that a lot of people will join out of curiosity, but will leave after they spend their first $10. I just don't think that Amazon-Braket has any stickiness. We'll see if I'm right.

Brought to you by http://www.ar-tiste.xyz

(*)Quote from Dario Gil article in Scientific American: “In early May, during IBM’s Digital Think conference, nearly 2,000 people from 45 countries took part in our Quantum Challenge—and using 18 IBM Quantum systems through the IBM Cloud, ran more than a billion circuits a day on real quantum hardware.”

September 8, 2020

The plagiarist J. Ignacio Cirac (El plagiario J. Ignacio Cirac)

Filed under: Uncategorized — rrtucci @ 9:51 pm

The following paper has been called to my attention today. Somehow, I had totally missed it:

From probabilistic graphical models to generalized tensor networks for supervised learning, by Ivan Glasser, Nicola Pancotti, J. Ignacio Cirac

J. Ignacio Cirac (the senior author and a very famous man in quantum computing) and his unethical associates first used Tensor Networks in 2005. I first used Bayesian Networks in my 1995 paper entitled “Quantum Bayesian Nets”, and since then I've written 50 papers in arXiv, this 12-year-old blog called “Quantum Bayesian Networks”, several patents, and a ton of open source software about quantum Bayesian Networks. And yet they write a paper in 2019 about quantum Bayesian networks and never mention my work. Do they claim they missed 20 years of my work, and 40 years of work by Judea Pearl, who won the Turing Award, widely considered the Nobel Prize of CS, for his work on Bayesian Networks (not tensor networks)? It would be very funny if they did. The truth is that they are unethical plagiarists; there is no other possible explanation.

P.S.: Note that I am not claiming that Cirac and co-thieves copied part of my work verbatim. What I am claiming is that it is clear, beyond a doubt, that sleazy Cirac and co-workers intentionally failed to mention highly relevant, widely available, and copious prior art. This is considered unethical and illegal in Science, and in the patent office too.

The shaking table caused the vase to break!

Filed under: Uncategorized — rrtucci @ 12:54 am

I love this picture. It makes the importance of Causal AI clear to me. The importance of being Causal AI Earnest 🙂 And it touches on so many topics: Bayesian networks, Pearl d-separation, Pearl causality, and squashed quantum entanglement. It comes from the wonderful book:

Learning Bayesian Networks, by Richard E. Neapolitan

September 6, 2020

Neven’s talk at 2020 Summer Symposium

Filed under: Uncategorized — rrtucci @ 9:59 pm

Check out this talk that Hartmut Neven gave 2 days ago

https://t.co/sQgSI8HRaT?amp=1

So, Neven’s future plans for Google Quantum AI are 2 fold-

  1. New Google quantum cloud
  2. 10 year plan to build an error corrected machine.

It would be interesting to hear Martinis' opinion about item 2. I suspect that Neven's plan for item 2 is not very feasible, and that Martinis' plan for it was quite different, and that this is a large part of why he left. Martinis said as much as he was leaving Google. He said that his plans for building an error corrected qc were very different from those of Neven and Neven's trusted advisors.

So whose opinion do you believe, Neven’s or Martinis’s?

This picture, taken from the above video, is Neven's plan for an error corrected qc. Does it look like it would work to you? It doesn't to me. To me, it looks like a picture taken from a superhero comic book: a very large turbine, or rather half of one, a turbine which is definitely not at micro-kelvin temperatures. Miniaturization and refrigeration do not seem to be part of Neven's folly.

Martinis has 40 years of experience building superconducting devices, and he has an impeccable track record of delivering what he promises. As far as hands-on experience building superconducting devices goes, Neven has zero. As far as intuition for which physical theories will and will not ultimately work, Neven has almost zero: he has a firm belief in the nonsensical, pseudo-scientific Everett Many-Worlds interpretation of quantum mechanics and in the idea that qc's work in many worlds at the same time; a firm early belief in the unrealistic DWave and Geordie Rose, who promised us Rose's Law (qc's “faster than the universe” by 2015); a firm belief in the utterly unrealistic, surreal Neven double exponential law, … Some will say, but Bob, Neven has a staff of 100 bright people working for him. Well, yes, but I think Neven runs that outfit like a king, surrounded by 100 handpicked yes-men. He has the final word about everything. He ultimately rebelled against sharing power with Martinis, thus proving, to my mind, that he doesn't share power gladly.

Good News, Xanadu AI jumps the shark

Filed under: Uncategorized — rrtucci @ 4:30 pm

Check out Xanadu AI’s latest press release:

Xanadu Releases World’s First Photonic Quantum Computer in the Cloud

Some very bold claims are made by Xanadu in that article.  Here are some excerpts:

Photonics based quantum computers have many advantages over older platforms. Xanadu’s quantum processors operate at room temperature. They can easily integrate into existing fiber optic-based telecommunication infrastructure, enabling a future where quantum computers are networked.

“We believe that photonics offers the most viable approach towards universal fault-tolerant quantum computing with Xanadu’s ability to network a large number of quantum processors together. “

“We believe we can roughly double the number of qubits in our cloud systems every six months,”

“In addition to the computing market, the company is also targeting secure communication and quantum networking, an area that photonics is poised to dominate. “We are laying the groundwork for our vision of the future: a global array of photonic quantum computers, networked over a quantum internet.”

Just because Xanadu's pseudo-qc is photonic does not mean that it can be networked with other computers, over the existing networks, any better than a non-photonic qc can. So that is a misrepresentation. Besides, the quantum internet is a boondoggle that is highly unlikely to ever be built.

Up to now, Xanadu has provided little or no experimental evidence to support any of the claims of that article. Here is what Scott Aaronson had to say about Xanadu 9 months ago, and note that no one from Xanadu defended the company in the comments; they totally ignored Scott’s criticism.

https://qbnets.wordpress.com/2019/12/30/scott-aaronson-excoriates-two-quantum-startups-xanadu-and-zapata/

I think that this time, Elizabeth Holmes, or whatever is the name of Xanadu’s CEO, has jumped the shark. This is good news for all truth loving people, because it makes it easier to expose Xanadu’s false promises now that those promises are clearly stated and now that the performance of their pseudo-qc can be analyzed by impartial observers. Let’s see how the quality of their qubits and error correction compares with that of other qc’s such as IBM’s. Let’s see if they can really double the number of **high quality** qubits every 6 months (low quality qubits are practically useless. DWave has 5,600 low quality qubits already and yet it is probably on the verge of bankruptcy after 20 profitless years.)

I can’t wait to see what the people at IBM, Google, Microsoft, Intel,  Rigetti, DWave, IonQ, Honeywell & PsiQuantum have to say about this. If they don’t say anything, it’s as if they were conceding that they’ve lost the qc race, and investors will flock en masse to Xanadu. Is funding of qc, a zero-sum game? I suspect it is.

Recently, I read another press release where the CEO of Xanadu was quoted as saying that he and his Olmers accomplices are seeking $100 million for Xanadu's next funding round, on top of about $40 million received in previous rounds!! Investors, beware of this Canadian Ponzi scheme! Xanadu used to refer in press releases to ex-MIT professor Seth Lloyd as their “main scientific advisor”. Now that Seth Lloyd has been put on leave by MIT for accepting almost $300,000 from Jeffrey Epstein, the pedophile and owner of a child prostitution ring, you would think that investors would have some misgivings about Xanadu's credibility and ability to judge character. They say that birds of a feather, like Xanadu and Lloyd, flock together.

Xanadu AI CEO, Elizabeth Holmes (aka Christian Weedbrook) and Xanadu’s main scientific advisor and pedophile enabler, Seth Lloyd.

September 1, 2020

Belief Propagation (Message Passing) for Classical and Quantum Bayesian Networks

Filed under: Uncategorized — rrtucci @ 7:22 pm

My FREE book about Bayesian Networks, Bayesuvius, continues to grow. It currently has 33 chapters. The purpose of this blog post is to announce the release of a new Bayesuvius chapter on Belief Propagation (BP).

Belief Propagation (BP) (aka Message Passing) was first proposed in 1982 by Judea Pearl to simplify the exact evaluation of probability marginals of Bayesian Networks (bnets). It gives exact results for trees and polytrees (i.e., for bnets whose underlying undirected graph is connected and has no cycles). For bnets with loops, it gives approximate results (loopy belief propagation), and it has been generalized to the junction tree (JT) algorithm, which gives exact results for general bnets with loops.

The JT algo starts by clustering the loops of a bnet into bigger nodes so as to transform the bnet into a tree bnet. Then it applies BP to the ensuing tree. The first breakthrough paper to achieve this agenda in full was by Lauritzen and Spiegelhalter (LS) in 1988. When it first came out, the LS algorithm caused quite a stir, and led to the creation of many bnet companies, many of which continue to exist and flourish today.

So why is BP important?

BP yields a huge reduction in the number of operations (additions and multiplications) necessary to calculate the marginals P(x_i)=\sum_{x_j: j\neq i}P(x_1, x_2, \ldots, x_N) of the probability distribution P(x_1, x_2, \ldots, x_N) associated with a classical bnet with N nodes. BP also works for quantum bnets. In the quantum case, quantum bnets have a complex probability amplitude A(x_1, x_2, \ldots, x_N) associated with them, and one seeks to calculate the coherent sums A(x_i)=\sum_{x_j: j\neq i}A(x_1, x_2, \ldots, x_N).

My open source program Quantum Fog has implemented BP for both classical and quantum bnets since its first release at github in Dec. 2015. Quantum Fog implements the junction tree algorithm (which uses BP) for both probabilities and probability amplitudes.

It is also possible, but so far Quantum Fog doesn’t do it, to implement BP directly from the message passing equations invented by Judea Pearl, and then to use that approach to do loopy belief propagation for both classical and quantum bnets. This should be feasible because the message passing recursive equations of BP do not care if the messages being passed are complex valued (quantum bnet case) or real valued (classical bnet case). They only care about the graph structure of the bnet.
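To make that last point concrete, here is a toy sketch of message passing on a chain bnet of binary nodes. The identical recursions compute classical marginals when the entries are probabilities and coherent sums when they are complex amplitudes; note that for classical CPTs the backward messages are trivially 1, but for amplitudes they are not. These are my own function names, not Quantum Fog's API:

```python
def chain_marginal(factors, i):
    """Marginal of node i in a chain bnet x_0 -> x_1 -> ... -> x_{N-1}
    of binary nodes, computed by passing messages instead of summing
    over all 2^N configurations.
    factors[0][x0] = P(x0); factors[n][xp][x] = P(x_n = x | x_{n-1} = xp).
    The identical recursions work whether the entries are probabilities
    (classical bnet) or complex amplitudes (quantum bnet): the messages
    only care about the graph structure."""
    n = len(factors)
    fwds = [factors[0][:]]            # downstream (forward) messages
    for k in range(1, n):
        fwds.append([sum(fwds[-1][xp] * factors[k][xp][x] for xp in (0, 1))
                     for x in (0, 1)])
    bwd = [1, 1]                      # upstream (backward) messages
    for k in range(n - 1, i, -1):
        bwd = [sum(factors[k][xp][x] * bwd[x] for x in (0, 1)) for xp in (0, 1)]
    return [fwds[i][x] * bwd[x] for x in (0, 1)]

# classical example: for probability CPTs the backward messages are trivially 1
p = chain_marginal([[0.6, 0.4],
                    [[0.9, 0.1], [0.2, 0.8]],
                    [[0.9, 0.1], [0.2, 0.8]]], 1)
print(p)   # marginal of the middle node, approximately [0.62, 0.38]
```

The brute-force sum costs O(2^N) operations, while the messages cost O(N); that is the "huge reduction" mentioned above.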

I won’t encumber the reader of this blog post with an exact statement of those recursive equations. For that level of technicality, I refer the reader to Bayesuvius’s chapter on BP. What I will do is to show the graphic that I give in that chapter to motivate those equations. Here it is, with its caption, which reads like a short story:

The yellow node is a gossip monger. It receives messages from all the green nodes, and then it relays a joint message to the red node. Union of green nodes and the red node = full neighborhood of yellow node. There are two possible cases: the red node is either a parent or a child of the yellow one. As usual, we use arrows with dashed (resp., dotted) shafts for downstream (resp., upstream) messages.

Examples of Causal Thinking, from Judea Pearl’s “The Book of Why”

Filed under: Uncategorized — rrtucci @ 2:37 am

BayesiaLab is putting out an excellent series of blog posts featuring examples of Causal AI taken directly from Judea Pearl’s “The Book of Why”. Check it out! Extraordinarily beautiful stuff. 4 examples so far (Breast Cancer, Firing Squad, Smallpox Vaccine, Tea House).

Brought to you by http://www.ar-tiste.xyz, a COOP of purveyors of high quality and low cost Bayesian Network Software & Services
