Quantum Bayesian Networks

September 29, 2021

Defensive Wave in a Bee Swarm

Filed under: Uncategorized — rrtucci @ 3:37 am

This defensive wave reminds me of message passing in a Bayesian Network and of Granger Causality. Maybe the human brain works like this.

September 27, 2021

Time Series Analysis is more Fun with Bayesian Networks

Filed under: Uncategorized — rrtucci @ 8:10 am

I just finished a new chapter entitled “Time Series Analysis: ARMA and VAR” for my free, open-source book “Bayesuvius” (482 pages) about Bayesian Networks and Causal Inference.

So why did I write a chapter on time series for Bayesuvius?

  1. This stuff is really useful. Data in the form of time series arises very frequently in all kinds of scientific fields and in economics.
  2. None of the treatments of time series that I found in books or on the web explores the obvious connection between time series and Bayesian Networks (bnets). I think that is a pity because, IMHO, time series become much more intuitive and fun when represented as bnets (see the short sketch after this list). To my chagrin, I found that the bible of time series analysis, namely the book by the econometrician James D. Hamilton, has nary a DAG in any of its 800 pages. In my chapter on time series in Bayesuvius, I present a DAG on almost every page. I’m such a contrarian 🙂
  3. I wanted to write a chapter for Bayesuvius on Granger Causality (GC), and I felt that a chapter on GC would work best if presented in conjunction with a chapter on time series. I’m happy to report that Bayesuvius now contains 2 new chapters, entitled (1) “Time Series Analysis: ARMA and VAR” and (2) “Granger Causality”. Here is a figure from the chapter on GC: [image: granger-c-bnet]
  4. I wanted to please my rich economist readers. It appears that time series were very popular in Economics about 15-25 years ago. Hamilton’s book was first published in 1994. I did a keyword search for “time series” in a list of all the Economics Nobel prizes and found one hit, in 2003: [image: economics-nobel-2003]
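To make the bnet view of a time series concrete, here is a minimal sketch (my own, not from Bayesuvius) of an AR(1) process seen as a chain-shaped bnet: each node x_t has a single parent x_{t-1}, and the node’s “TPM” is a Gaussian conditional distribution. The coefficient values are illustrative assumptions.

```python
# A minimal sketch (mine, not from Bayesuvius): an AR(1) time series viewed
# as a chain bnet. Each node x_t has one parent, x_{t-1}, and the node's
# conditional distribution is P(x_t | x_{t-1}) = N(phi * x_{t-1}, sigma^2).
# phi and sigma are illustrative values, not from the post.
import numpy as np

rng = np.random.default_rng(0)
phi, sigma, T = 0.8, 1.0, 200   # AR coefficient, noise scale, series length

x = np.zeros(T)
for t in range(1, T):
    # Sampling node x_t given its parent x_{t-1}: the node's "TPM".
    x[t] = phi * x[t - 1] + sigma * rng.normal()

# The DAG is simply x_0 -> x_1 -> x_2 -> ...; a VAR(p) just gives each node
# more parents (the p most recent lags of every component series).
```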

September 26, 2021

Does the human brain perform do-calculus in its sleep, and have neuroscientists been studying this for years under the rubric of Granger Causality?

Filed under: Uncategorized — rrtucci @ 4:59 pm

[image: DoAndroidsDream]

Do Androids dream of electric sheep and causal DAGs?

I just wrote a chapter on “Granger Causality” (GC) for my book Bayesuvius. I felt compelled to include a chapter about this ever since Prof. Pearl mentioned in a Tweet that GC was an early, slightly flawed attempt to define Causality. I wanted to explain in my book what’s right and wrong about this definition of Causality, and how Pearl’s Causal Inference (CI) improves upon it. I hope I have succeeded. You be the judge.

Clive Granger won the 2003 Nobel prize in Economics for his work in time series analysis. Among his famous contributions is the concept of GC, which is nicely covered in a Wikipedia article, and in a Scholarpedia article. The Scholarpedia article has a fascinating section entitled “Applications to Neuroscience”. The section is short (just two paragraphs and one image). I recommend you read it. The section includes the following image and caption. Maybe I am reading too much into it, but does it look to you like they are mapping an Artificial Neural Network or a DAG?

[image: granger-c-brain]
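If you want to try a GC test yourself, here is a minimal sketch, assuming synthetic data and the statsmodels package; the coefficients below are made up for illustration.

```python
# A minimal sketch (assumptions: synthetic data, statsmodels installed) of a
# Granger-causality test: does the past of y help predict x beyond what the
# past of x alone predicts?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
T = 500
y = rng.normal(size=T)
x = np.zeros(T)
for t in range(1, T):
    # x is driven by its own past AND by the past of y, so y should
    # Granger-cause x. Coefficients are illustrative, not from the post.
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.normal()

# Column order matters: the test asks whether column 2 Granger-causes column 1.
results = grangercausalitytests(np.column_stack([x, y]), maxlag=2)
# Small p-values for the F-tests => reject "y does not Granger-cause x".
```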

September 14, 2021

Stanford championing shaky Foundation Models, a dead-end street for AI

Filed under: Uncategorized — rrtucci @ 4:27 pm

[image: erosion]

Check out this excellent article that was just published. I highly recommend it:

Has AI found a new foundation? by Gary Marcus and Ernest Davis (The Gradient, 11 Sept 2021)

A month ago, Stanford University published a massive 212-page report, authored by 149 of its scientists, coining the term “Foundation Models” and announcing the opening of a new institute dedicated to them. The above article explains why these so-called Foundation Models (FMs) are a very poor foundation for AI, and why dedicating so many resources to them is extremely foolish and counter-productive for the AI field.

The article makes many great points that I agree with wholeheartedly. It particularly shines with scary/hilarious stories of catastrophic FM failures. It points out that FMs are (1) very limited in what they can do; (2) dangerously erratic and untrustworthy, even when doing their forte; and (3) very ill-defined (their definition seems to be: “something that looks like Google’s BERT”). (1), (2) and (3) are not what you want in a foundation. It reminds me of a famous quote from the classic movie Animal House: “Fat, drunk and stupid is no way to go through life, son.” I’d give that advice to BERT. The article has a desiderata list of what a good AI foundation should have. FMs fail most of the items on that list.

Let me add a few of my own criticisms of FMs.

The Stanford monstrosity paper takes the stand that FMs are “risky” but fixable. I disagree. They are not fixable; their flaws are too deeply entrenched. As an advocate of Bayesian Networks and Causal Inference (CI), I see FMs as a dead-end street. FMs like BERT prove that our current machines are really good at curve fitting, better than humans, but we already knew that. FMs are causal-model-free, so they are incapable of doing CI. That is why I call them a dead-end street. I believe that CI (i.e., distinguishing between correlation and causation, answering “why?” and “what if?” questions) is a necessary part of any human-like AI.

The Stanford paper acknowledges that FMs are risky, but it fails to point out one of the biggest risks of FMs. By pouring so many resources into FMs, Stanford is promoting mono-AI, i.e., monoculture and groupthink in AI. Stanford is sending a message that FMs are the only, or the main, game in town. Do we want there to be only one game in town, and for that game to be a dead-end street?

FMs are controlled by rich, monopolistic companies such as Google, and by Big Science such as Stanford’s new FM institute. Do we want a few rich companies to be the gatekeepers (and main financial beneficiaries) of the only game in town?

FMs seem awfully expensive to use. Do we want the only game in town to be one that only rich corporations and universities can afford to play?


September 10, 2021

I just sold the Brooklyn Bridge to my parents for $20M

Filed under: Uncategorized — rrtucci @ 1:29 am

[image: brooklyn-bridge-drone]

September 6, 2021

Bayesian Networks and GIT

Filed under: Uncategorized — rrtucci @ 10:15 pm

Did you realize that GIT is a Bayesian Network (bnet) generator? In a GIT-bnet, each node carries the NEW CHANGES to your document: a merge of all the changes carried by the incoming arrows. As in all bnets, the arrows connecting the nodes of a GIT-bnet describe a partial time ordering of the nodes. In a previous blog post, I discussed how all bnets reflect to some extent the passage of time. It is possible to define bnets which contain some nodes that stand for subjective qualities without a well-defined time associated with them (people in the social sciences define bnets with such subjective nodes all the time). GIT-bnets, however, have no such nodes; each node in a GIT-bnet has a time stamp.

Note also that GIT-bnets are deterministic bnets: the probability tables (aka TPMs, Transition Probability Matrices) associated with each node of a GIT-bnet are deterministic, but that is perfectly acceptable. Any node of a bnet can be deterministic; a deterministic TPM is just a special type of probability distribution. Artificial Neural Nets are also deterministic bnets.
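Here is a minimal sketch of this view, using the networkx package; the commit names and the “union of change-ids” merge rule are toy assumptions for illustration, not real GIT internals.

```python
# A minimal sketch of a "GIT-bnet": a DAG whose nodes are commits and whose
# node "probability tables" are deterministic merge functions of the parents.
# Commit names and the merge rule are hypothetical, chosen for illustration.
import networkx as nx

g = nx.DiGraph()
# Each edge parent -> child is an arrow of the bnet (partial time ordering).
g.add_edges_from([
    ("root", "feature"),   # branch off root
    ("root", "hotfix"),
    ("feature", "merge"),  # the merge commit has two parents
    ("hotfix", "merge"),
])

# Deterministic "TPM": each node's state is a set of change-ids equal to the
# union of its parents' states plus its own new change.
state = {}
for node in nx.topological_sort(g):
    parents = list(g.predecessors(node))
    merged = set().union(*(state[p] for p in parents))
    state[node] = merged | {f"change@{node}"}

print(sorted(state["merge"]))
# ['change@feature', 'change@hotfix', 'change@merge', 'change@root']
```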

If you’ve come this far reading my ruminations, here is your reward: a video game designed to teach you how to use GIT.

https://ohmygit.org/

[image: oh-my-git]

September 5, 2021

Godzilla-KingKong-Doge

Filed under: Uncategorized — rrtucci @ 8:57 pm

[image: godzilla-kk-doge-nn-ci]

This meme was generated with an online meme generator.
