Quantum Bayesian Networks

November 30, 2020

My Free Open Source Book “Bayesuvius” on Bayesian Networks and Causal Inference

Filed under: Uncategorized — rrtucci @ 3:08 pm

THIS BOOK IS CONTINUOUSLY BEING IMPROVED AND EXPANDED. MAKE SURE YOU HAVE THE LATEST VERSION FROM GITHUB FOR MAXIMUM SATISFACTION.

See also my software “JudeasRx”, which implements many ideas in causal inference: https://github.com/rrtucci/JudeasRx

See also “Famous uses of Bayesian Networks”.

June 27, 2020

My Pinned Tweet at Twitter

Filed under: Uncategorized — rrtucci @ 9:28 pm

This is the pinned Tweet on my company’s (www.ar-tiste.xyz) Twitter account.

November 28, 2022

Democritus and Causal Inference

Filed under: Uncategorized — rrtucci @ 9:26 am

“I would rather discover a single causal relationship than be king of Persia.”
Democritus

November 21, 2022

Control Theorists Have Been Using SCMs with Feedback Loops Since God Knows When

Filed under: Uncategorized — rrtucci @ 7:35 am

A few days ago, I decided to write a chapter on Control Theory (CT) for my free, open source book Bayesuvius (700 pages). My reason for doing this is that CT studies feedback, and feedback is highly relevant to causal inference. Besides, feedback is just plain cool and widely applicable in many areas of engineering. For example, I’m sure that the designers of the Boston Dynamics Atlas robot used CT numerous times in designing Atlas. How much cooler can you get than robots that can do gymnastics?

Note that feedback can be represented in two ways:

  1. a “DAG” (directed acyclic graph) to which extra arrows have been added, creating cycles, so that strictly speaking it is no longer acyclic;
  2. a DAG that repeats a “slice” over and over again (i.e., a “dynamic Bayesian network”).

(2) is an “unrolled” version of (1); see the sketch below.
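Here is a minimal sketch of that unrolling in Python (my own toy illustration; the node names, the convention that every arrow crosses one time step, and the use of networkx are all assumptions, not anything from the CT book):

```python
# Toy unrolling of a cyclic "DAG with feedback" into a dynamic Bayesian
# network, i.e., a true DAG that repeats a time slice.
import networkx as nx

# Cyclic graph: controller output U drives plant state X, which feeds back to U.
cyclic = nx.DiGraph([("U", "X"), ("X", "U")])
assert not nx.is_directed_acyclic_graph(cyclic)

def unroll(g, n_slices):
    """Copy the nodes once per time slice; each edge A->B of the cyclic
    graph becomes an edge A[t] -> B[t+1] between adjacent slices."""
    dbn = nx.DiGraph()
    for t in range(n_slices - 1):
        for a, b in g.edges:
            dbn.add_edge(f"{a}[{t}]", f"{b}[{t + 1}]")
    return dbn

dbn = unroll(cyclic, n_slices=4)
assert nx.is_directed_acyclic_graph(dbn)  # the unrolled version is acyclic
print(sorted(dbn.edges))
```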

After a long search for a nice online book on CT, I found one that I like very much (a wikibook, 367 pages).

Click to access Control_Systems.pdf

So far, I have read half of the book, and I am enjoying it very much. One of the many fascinating things I have learned that is very relevant to causal inference is that control theorists have been using Judea Pearl’s beloved SCMs (structural causal models) since God knows when (since Laplace, Fourier, Bode, Nyquist, Kalman, Wiener, Shannon?), and their SCMs are very sophisticated, often containing feedback loops. Here is a jpg of page 170 of the book to prove my point. It sure looks familiar to, and warms the heart of, a Causal Inference, BookOfWhy fan like me.

ADDENDUM: The SCMs in question appear to have been invented by Shannon in 1942. I googled the term “signal flow diagram” (i.e., the heading of page 170) and was pleasantly surprised to find the following Wikipedia entry with history and theory:

https://en.wikipedia.org/wiki/Signal-flow_graph

[Figure: signal flow graph from page 170 of the Control Systems wikibook]

November 14, 2022

Causal Inference and Storyline Methods, my new find

Filed under: Uncategorized — rrtucci @ 4:13 pm

I just discovered on Twitter some work that predates mine but is similar. Here is the Tweet:

Random Medical News

Filed under: Uncategorized — rrtucci @ 3:36 pm

I dream of using Causal Inference to find the causal pathways and personalized cures for Cancer, Alzheimer’s, Arthritis and many other human and animal diseases.


November 13, 2022

Napkin Problem in Judea Pearl’s BookOfWhy

Filed under: Uncategorized — rrtucci @ 11:57 pm

The widely accepted Adjustment Formula (AF) for the Napkin Problem in the BookOfWhy is as follows:

P(y|do(x)) = [Σ_w P(x,y|z,w) P(w)] / [Σ_w P(x|z,w) P(w)],  for any value z

See here for a derivation using Do-Calculus.

We evaluate this AF for random probabilities and find the not widely known result that it simplifies to

P(y|do(x)) = P(y|x)

So the accepted AF seems overly complicated. See the following jupyter notebook for the details:

https://github.com/rrtucci/napkin-do-calc/blob/master/napkin1.ipynb
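For concreteness, here is a minimal sketch of such a numerical check (my own toy version, assuming all four variables w, z, x, y are binary and drawing a random observed joint distribution; the notebook above is the definitive calculation):

```python
# Evaluate the napkin Adjustment Formula for a random joint P(w, z, x, y)
# and compare it with the plain conditional P(y|x).
import numpy as np

rng = np.random.default_rng(0)
joint = rng.random((2, 2, 2, 2))  # indices: [w, z, x, y], binary variables
joint /= joint.sum()              # normalize to a probability distribution

def napkin_af(x, y, z):
    """AF = sum_w P(x,y|z,w) P(w) / sum_w P(x|z,w) P(w)."""
    num = den = 0.0
    for w in range(2):
        p_w = joint[w].sum()                          # P(w)
        p_zw = joint[w, z].sum()                      # P(w, z)
        num += (joint[w, z, x, y] / p_zw) * p_w       # P(x,y|z,w) P(w)
        den += (joint[w, z, x].sum() / p_zw) * p_w    # P(x|z,w) P(w)
    return num / den

def p_y_given_x(x, y):
    return joint[:, :, x, y].sum() / joint[:, :, x, :].sum()

print("AF:     ", napkin_af(x=1, y=1, z=0))
print("P(y|x): ", p_y_given_x(x=1, y=1))
```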

DAG Lie Detector

Filed under: Uncategorized — rrtucci @ 11:50 pm

Check out my latest Python app, called “DAG_Lie_Detector”. It evaluates a causal fitness score (a.k.a. Goodness of Causal Fit, GCF) for every DAG (directed acyclic graph) in a set of DAGs. It requires as input, for every edge A—B whose direction is unknown, P(a), P(b), P(a|do(b)) and P(b|do(a)).

https://github.com/rrtucci/DAG_Lie_Detector
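To give a flavor of why those inputs can settle an edge’s direction, here is a toy check (my own illustration, not the GCF formula the repo actually uses): if the true edge is A→B, then intervening on the effect B cannot change the cause A, so P(a|do(b)) = P(a), while in general P(b|do(a)) ≠ P(b).

```python
# Toy edge-orientation hint (not the repo's GCF): if A -> B, then do(b)
# leaves P(a) unchanged, while do(a) generally shifts P(b).
def edge_direction_hint(p_a, p_b, p_a_do_b, p_b_do_a, tol=1e-6):
    a_untouched = abs(p_a_do_b - p_a) < tol  # evidence for A -> B
    b_untouched = abs(p_b_do_a - p_b) < tol  # evidence for B -> A
    if a_untouched and not b_untouched:
        return "A -> B"
    if b_untouched and not a_untouched:
        return "B -> A"
    return "undecided"

# Example: do(b) leaves P(a) at 0.3, but do(a) shifts P(b) from 0.6 to 0.8,
# so the data point toward A -> B.
print(edge_direction_hint(p_a=0.3, p_b=0.6, p_a_do_b=0.3, p_b_do_a=0.8))
```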

A DAG is like a beautiful woman that might be lying 🙂


October 16, 2022

My new paper entitled: Causal DAG extraction from a library of books or videos/movies

Filed under: Uncategorized — rrtucci @ 12:56 am

Abstract:

Determining a causal DAG (directed acyclic graph) for a problem under consideration is a major roadblock when doing Judea Pearl’s Causal Inference (CI) in Statistics. The same problem arises when doing CI in Artificial Intelligence (AI) and Machine Learning (ML). As with many problems in Science, we think Nature has found an effective solution to this problem. We argue that human and animal brains contain an explicit engine for doing Causal Inference, and that such an engine uses as input an atlas (i.e., collection) of causal DAGs. We propose a simple algorithm for constructing such an atlas from a library of books or videos/movies. We illustrate our method by applying it to a database of randomly generated Tic-Tac-Toe games. The software used to generate this Tic-Tac-Toe example is open source and available at GitHub.

I’ve created a public github repo with a pdf of the paper:

Click to access deft1.pdf

I haven’t uploaded the paper to arXiv yet, but I intend to do so soon. Before uploading it to arXiv, I am posting this blog on Twitter, seeking advice and criticism.

UPDATE: paper now available at arXiv.

[Image: US Library of Congress]

October 13, 2022

Discoball ion trap quantum computer

Filed under: Uncategorized — rrtucci @ 3:20 pm

Just had a crazy idea that I would like to share.

It seems to me that a good geometry for an ion trap quantum computer would be to constrain the ions to lie on the surface of a sphere. Their mutual repulsion would space them out evenly. If the ions were positively charged, a negative charge at the center of the sphere could put them all in a “spherical well potential”. The force on each ion would be the outward radial force of the sphere substrate on the ion plus the inward radial force produced by the Coulomb force between the ion and the central charge. One could achieve full connectivity between the qubits by putting them all in the same energy level of that spherical well potential. For each ion, one could place a microlaser at the same polar angles but at a slightly larger radius than the ion. This way, one could address each ion separately.
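The “mutual repulsion would space them out evenly” intuition is the classic Thomson problem. Here is a minimal numerical sketch of it (my own toy illustration; N = 12 ions and all physical constants set to 1 are arbitrary assumptions, and this is geometry only, not a trap simulation):

```python
# Thomson problem: N unit charges confined to a unit sphere minimize their
# mutual Coulomb energy, which spreads them out evenly over the surface.
import numpy as np
from scipy.optimize import minimize

N = 12  # 12 charges settle into a roughly icosahedral arrangement

def to_xyz(angles):
    theta, phi = angles[:N], angles[N:]
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=1)

def coulomb_energy(angles):
    xyz = to_xyz(angles)
    diff = xyz[:, None, :] - xyz[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    iu = np.triu_indices(N, k=1)      # count each pair once
    return (1.0 / dist[iu]).sum()

rng = np.random.default_rng(0)
x0 = np.concatenate([rng.uniform(0.1, np.pi - 0.1, N),
                     rng.uniform(0.0, 2 * np.pi, N)])
res = minimize(coulomb_energy, x0, method="L-BFGS-B")

# Even spacing shows up as nearly equal nearest-neighbor distances.
xyz = to_xyz(res.x)
d = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)
np.fill_diagonal(d, np.inf)
print(np.round(np.sort(d.min(axis=1)), 3))
```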


October 12, 2022

Serious Causal Inference error being made by Uber’s “CausalML” software and Uplift Marketers: Using Tree methods to calculate CATE/ATE

Filed under: Uncategorized — rrtucci @ 3:50 pm

[Image: SHAP diagram taken from the home page at github of Uber’s CausalML]

In this brief blog post, I want to address a fundamental and glaring error being made by many Causal Inference (CI) practitioners, including Uber, Uplift Marketers, and many economists.

The error I am referring to is using decision tree methods (e.g., random forests, XGBoost, etc.) to calculate CATE/ATE without ever drawing a DAG. To make the error even greater, this is followed by using SHAP to allegedly endow the results with explainability.

So why is this an error? Sidestepping and deprecating the use of a DAG, and conditioning on every covariate in sight, as is advocated by Rubin and his economist acolytes (Joshua Angrist, Guido Imbens, Susan Athey, …), is a terrible CI doctrine. It sweeps the problem of good and bad controls under the rug. But hiding and ignoring this problem does not make it go away. Chances are that if you condition on everything in sight, you will condition on a collider and thereby introduce a bias into your ATE calculation. It also makes you condition on many more covariates than necessary. That is why I consider all CATE/ATE values calculated without a DAG to be highly suspect. The tiny simulation below illustrates the collider point.
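Here is a minimal simulation of collider bias (my own toy example, not taken from CausalML): X and Y are independent by construction, yet conditioning on their common effect C manufactures a strong spurious correlation.

```python
# Collider bias in a nutshell: X and Y are independent, but become strongly
# (anti-)correlated once we condition on their common effect C = X + Y + noise.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
y = rng.standard_normal(n)                 # independent of x by construction
c = x + y + 0.1 * rng.standard_normal(n)   # collider: common effect of x and y

print("corr(X, Y), all samples:  %+.3f" % np.corrcoef(x, y)[0, 1])
mask = np.abs(c) < 0.1                     # "conditioning" on C being near 0
print("corr(X, Y), given C ~ 0:  %+.3f" % np.corrcoef(x[mask], y[mask])[0, 1])
```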

So what is wrong with SHAP? SHAP is a game theory inspired software app that produces graphs that look like red and blue gummy worms on a skewer. These graphs are supposed to add explainability to the results of any machine learning algorithm. For me, SHAP is fake explainability. True explainability is given by the DAG.

So who has been doing this?

  1. Uber’s “CausalML” software, which has 3.5K stars at github. All you have to do is skim CausalML’s home page at github to verify that almighty Uber has been promoting this error for the last 2 years.
  2. Famous economist Susan Athey et al. (see https://arxiv.org/abs/1610.01271 and https://grf-labs.github.io/grf/).
  3. Uplift Marketers. Besides CausalML, which is being promoted at its github home page as an Uplift Marketing tool, see, for example, UpliftML by booking.com and PyLift by Wayfair.com.

October 7, 2022

Simpson’s Paradox Video

Filed under: Uncategorized — rrtucci @ 3:39 am

Nice video. If you prefer to learn from a book and equations, check out the chapter entitled “Simpson’s Paradox” in my book Bayesuvius.
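For readers who want the paradox at a glance, here is a quick check using the standard toy numbers that Pearl uses in his writings on the subject (a worked example, not a substitute for the video or the chapter):

```python
# Simpson reversal: the drug has a higher recovery rate within each sex,
# yet a lower recovery rate in the pooled data.
data = {  # (recovered, total)
    ("men", "drug"): (81, 87),
    ("men", "no drug"): (234, 270),
    ("women", "drug"): (192, 263),
    ("women", "no drug"): (55, 80),
}

def rate(recovered, total):
    return recovered / total

for sex in ("men", "women"):
    print(sex, rate(*data[(sex, "drug")]) > rate(*data[(sex, "no drug")]))

pooled = {t: tuple(map(sum, zip(data[("men", t)], data[("women", t)])))
          for t in ("drug", "no drug")}
print("overall", rate(*pooled["drug"]) > rate(*pooled["no drug"]))
# prints: men True / women True / overall False
```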

October 5, 2022

Feynman’s Thesis and Causality at the Microscopic Level

Filed under: Uncategorized — rrtucci @ 9:05 pm

[Image of Feynman; caption: way of thinking = Feynman diagrams and their close cousins, DAGs]

In a recent Tweet, Yann LeCun asserted that

“All of microphysics is time reversible (under CPT symmetry). So technically, causality is an illusion”

This is totally incorrect. Yann has no clue what he is talking about. Check out the book

Feynman’s Thesis: A New Approach to Quantum Theory, edited by Laurie Brown (freely available on the internet)

This book contains Feynman’s PhD thesis, with a prelude by Laurie Brown. I invite you to download it and search for the keyword “Causality”. If you do, you will find that the need to preserve Causality at the microscopic level was one of the primary motivations, the seed if you will, for Feynman’s PhD thesis at Princeton under Wheeler. His thesis led, shortly thereafter, to his invention of QED (Quantum Electrodynamics), for which he earned a Nobel Prize. It is no exaggeration to say that the need to preserve Causality is one of the most sacred goals of microscopic quantum physics. If you are curious to see how physicists define Causality, see this blog post.

September 3, 2022

2017 Video of Donald Rubin on History of Causality (especially, missing data approach)

Filed under: Uncategorized — rrtucci @ 9:00 pm

It doesn’t mention Pearl’s towering contributions to Causal Inference, but it is nonetheless fascinating. Rubin explains how he arrived at his missing data approach, and recounts some interesting highlights of the history of RCTs.

September 1, 2022

Bayesuvius now has a chapter on Variational Bayesian Approximation for Medical Diagnosis

Filed under: Uncategorized — rrtucci @ 8:56 pm


This blog post is to announce that my free open source book Bayesuvius now has a chapter on “Variational Bayesian Approximation for Medical Diagnosis”. Medical diagnosis belongs entirely to rung 1 of Judea Pearl’s ladder of causality, but rung 1 is cool too, and Bayesuvius covers all 3 rungs.

The new chapter is based solely on this paper (https://arxiv.org/abs/1105.5462), which I’ve liked very much for a long time. It was written by the legendary UC Berkeley Prof. Michael I. Jordan and one of his students.

A Variational Bayesian Approximation (VBA) approximates a probability distribution by another probability distribution that depends on a continuous “variational parameter”. This parameter is adjusted within its range of possible values to make the approximation as good as possible. There are many VBA methods. They are inspired by ancient methods from the Calculus of Variations applied to Physics and Engineering problems. The Calculus of Variations was arguably started by Sir Isaac Newton, when he solved the Brachistochrone problem. (It took Newton just one sleepless night to solve it after the problem was suggested to him by one of the Bernoullis.)
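Here is a minimal sketch of the variational idea (my own toy example, not the algorithm of the paper or the chapter): we approximate a fixed target distribution p by a one-parameter family q_theta, tuning theta to minimize the KL divergence.

```python
# Toy variational approximation: pick theta so that q_theta, a one-parameter
# family of distributions, is as close as possible (in KL) to a target p.
import numpy as np
from scipy.optimize import minimize_scalar

p = np.array([0.1, 0.2, 0.4, 0.3])   # fixed target distribution
x = np.arange(len(p))

def q(theta):
    w = np.exp(-theta * x)            # one-parameter exponential family
    return w / w.sum()

def kl(theta):                        # KL(q_theta || p), the thing to minimize
    qt = q(theta)
    return float(np.sum(qt * np.log(qt / p)))

res = minimize_scalar(kl, bounds=(-5.0, 5.0), method="bounded")
print("best theta:", round(res.x, 3), " KL:", round(res.fun, 4),
      " q:", np.round(q(res.x), 3))
```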

Bayesuvius now has a chapter on Diffusion Models (a small part of DALL-E)

Filed under: Uncategorized — rrtucci @ 7:51 pm


This blog post is to announce that my free open source book Bayesuvius now has a chapter on “Diffusion Models” (DMs). DMs belong entirely to rung 1 of Judea Pearl’s ladder of causality, but rung 1 is cool too, and Bayesuvius covers all 3 rungs.

DMs are a way of generating fake images from an original image. They are a competitor to GANs (Generative Adversarial Networks) and are used in DALL-E (OpenAI’s computer program that generates images from text).
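To make the DM idea concrete, here is a minimal sketch of the forward “noising” process at its core (standard DDPM-style math, my own toy code, and certainly not DALL-E’s actual implementation; the linear noise schedule is an assumption):

```python
# Forward diffusion: an image x0 is gradually corrupted toward pure Gaussian
# noise; a diffusion model is trained to reverse this corruption step by step.
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # linear noise schedule
alpha_bars = np.cumprod(1.0 - betas)     # cumulative "signal survival" factors

def noisy_sample(x0, t, rng):
    """Jump straight to step t: x_t = sqrt(a_bar_t)*x0 + sqrt(1-a_bar_t)*eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
x0 = rng.random((8, 8))                  # stand-in for an image
for t in (0, 250, 999):
    xt = noisy_sample(x0, t, rng)
    print(f"t={t:4d}  signal={np.sqrt(alpha_bars[t]):.3f}  std(x_t)={xt.std():.3f}")
```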

Of course, a DM is only a small part of the magic of DALL-E. I haven’t studied DALL-E’s algorithm, but my guess is that it works roughly as follows. Given a text description of an image, such as “A hedgehog using a calculator painted in the style of Vincent van Gogh”, it uses a neural net trained on a vast corpus of words and images to match separate words in the description with an image for each word. Then it uses a second neural net to create a pastiche from the set of images created in the first stage. Then, it probably modifies that pastiche by passing it through a stylistic filter that can be specified in the initial description. Finally, it uses a DM to smooth the transitions of the pastiche.

August 24, 2022

AI Can’t Reason Why

Filed under: Uncategorized — rrtucci @ 3:25 pm

I love this graphic. It comes from this Wall Street Journal article.

[Graphic: “AI Can’t Reason Why”, Wall Street Journal, 2018]
