# Quantum Bayesian Networks

## July 28, 2009

### The Human Brain, the Mother of All Bayesian Networks

Filed under: Uncategorized — rrtucci @ 2:02 am

• number of neurons in the human brain: 10^11
• number of connections (synapses) entering each neuron: ~7,000
• typical neuronal firing rate: 100/sec

Each neuron acts like a node of a gigantic Bayesian network. The synapses are like the arrowheads of the arrows entering each node of the B net. Maybe only small subsets of nodes are working together at any given time.
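As a quick sanity check, the figures above imply some impressively large totals. Here is a rough back-of-envelope sketch (orders of magnitude only; the refs below show how widely the estimates vary):

```python
# Back-of-envelope totals from the figures quoted above (rough orders
# of magnitude only, not precise neuroscience).
neurons = 10 ** 11           # ~number of neurons in a human brain
synapses_per_neuron = 7_000  # average incoming connections per neuron
firing_rate = 100            # firings per second per neuron (typical)

total_synapses = neurons * synapses_per_neuron  # ~7 x 10^14 "arrows"
firings_per_sec = neurons * firing_rate         # ~10^13 node updates/sec
```

Viewed as a Bayesian network, that is roughly 7 x 10^14 arrows being traversed at a combined rate of about 10^13 node updates per second.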

Refs:

• From Wikipedia for neuron:
“The human brain has a huge number of synapses. Each of the 10^11 (one hundred billion) neurons has on average 7,000 synaptic connections to other neurons. It has been estimated that the brain of a three-year-old child has about 10^15 synapses (1 quadrillion). This number declines with age, stabilizing by adulthood. Estimates vary for an adult, ranging from 10^14 to 5 x 10^14 synapses (100 to 500 trillion).[13]”
• From here:
“THE NERVE IMPULSE (primarily an electrical event): Each neuron is like a tiny biological battery ready to be discharged. It takes about one-thousandth of a second for a neuron to fire an impulse and return to its resting level. Thus, a maximum of 1,000 nerve impulses per second is possible. However, firing rates of 1 per second to 300-400 per second are more typical.”

## July 27, 2009

### Neural Nets, Bayesian Nets' Poorer Cousin

Filed under: Uncategorized — rrtucci @ 7:50 am

Frequently, people who don't know about Bayesian networks ask me why I don't use (artificial) neural networks instead for mechanical decision making. They seem to think that Bayesian nets, whatever those are, can't possibly be as good as neural nets. After all, neural nets mimic the human brain, a device which mother Nature has been improving for millions of years. I'd like to explain here to those people that neural nets are to Bayesian nets what a Chevy is to a car. That is to say, neural nets are a specific model of Bayesian net. Neural nets are Bayesian nets in which each node of the network (a directed acyclic graph) is assigned a very special type of probability matrix. For general Bayesian nets, the node probability matrices are completely general. If a node has input arrows labeled by the variables $x_1, x_2, \ldots, x_n$ and output arrows labeled by the variable $y$, then a neural net is constrained to have $P(y|\vec{x}) = \delta(f(\vec{x})-y)$, where $y\in R$, $\vec{x}=(x_1,x_2,\ldots, x_n)\in R^n$, $\delta()$ is Dirac's delta function, and $f:R^n\rightarrow R$ is some convenient function. For instance, $f(\vec{x}) = \frac{1}{1+\exp[-(\vec{w}\cdot \vec{x}-\mu)]}$ for some $\mu\in R$ and some weight vector $\vec{w}$ of real numbers.
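To make the distinction concrete, here is a small Python sketch of my own (not taken from any particular B net library) contrasting a deterministic sigmoid node of the kind just described with a general node whose conditional probability table can encode any distribution:

```python
import math
import random

def sigmoid_node(x, w, mu):
    """Deterministic 'neural net' node: P(y|x) = delta(f(x) - y),
    so the output is simply y = f(x) = 1/(1 + exp(-(w.x - mu)))."""
    s = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-(s - mu)))

def general_node(x, cpt, rng):
    """General Bayesian net node over binary variables: sample y from
    an arbitrary conditional probability table giving P(y=1 | x)."""
    p_y1 = cpt[tuple(x)]
    return 1 if rng.random() < p_y1 else 0

# Deterministic node: the same input always yields the same output.
y = sigmoid_node([1, 0, 1], w=[0.5, -0.2, 0.8], mu=0.3)

# General node: the output is stochastic, governed by the CPT.
cpt = {(0, 0): 0.1, (0, 1): 0.7, (1, 0): 0.4, (1, 1): 0.95}
rng = random.Random(0)
samples = [general_node([1, 1], cpt, rng) for _ in range(1000)]
# Roughly 95% of samples should be 1, matching P(y=1 | 1,1) = 0.95.
```

The sigmoid node is the special case: its "probability matrix" puts all its mass on a single output value per input, whereas the CPT node can represent any conditional distribution whatsoever.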

Using B nets with these special "deterministic" node probability matrices has some advantages and disadvantages. (Advantages: lower calculational cost. Disadvantages: deterministic systems, like rule-based systems, are "too literal"; they are easier to confuse, less robust, and less forgiving of contradictions and missing data.) Read some criticisms of neural nets here.

Often, Biology comes up with brilliant solutions to problems. But Biology has a constrained palette of materials and environment to work with. So man can sometimes best Biology: for example, man can build flying machines called airplanes and rockets and helicopters, which are superior to birds in many ways.

One should also keep in mind that artificial neural nets are only a model of Nature, not Nature itself. How good a model is it? Perhaps real neurons are not completely deterministic. I don't know if this hypothesis has been tested and ruled in or out. Let me speculate. Maybe idiot savants have B networks with node probability matrices that are much sharper than normal, and for this reason they can calculate in an instant what day of the week Aug 26, 1915 fell on, but they get very uncomfortable and confused every time you move the furniture around. Maybe some types of insanity arise in individuals whose B net has node probability matrices that are broader, noisier than normal. Maybe when we sleep, we are creating new nodes, new arrows, and re-adjusting the probability matrices of our existing nodes.

By the way, Fuzzy Logic is often mentioned by the popular press as a reasonable alternative to Bayesian networks. The only good thing about Fuzzy Logic is its name. Fuzzy Logic is a hodgepodge, a poorly motivated technique that isn't based on probability theory, and is thus difficult (read: impossible) to scale consistently to larger problems. Read some criticisms of Fuzzy Logic here.

To be fair, Bayesian nets are often criticized too, because doing B net calculations on a conventional classical computer often requires a huge number of elementary operations (additions). I believe that by using quantum computation, we can reduce the time complexity of many B net calculations to the square root of their classical complexity. This has already been proven to be the case for simulated annealing.
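To give a feel for what a square-root speedup would mean, here is a toy Python calculation. The operation counts are hypothetical illustrations only: brute-force enumeration is the worst case for exact B net inference, and this sketch is not a real quantum algorithm, just arithmetic on the Grover-style quadratic-speedup claim.

```python
import math

def brute_force_ops(n):
    """Summing a joint distribution over n binary variables to obtain
    one marginal touches every one of the 2^n joint states."""
    return 2 ** n

def sqrt_speedup_ops(n):
    """A quadratic (Grover-like) quantum speedup would reduce that
    count to roughly its square root, i.e. about 2^(n/2)."""
    return math.isqrt(2 ** n)

# For a (hypothetical) fully connected net of 40 binary nodes:
classical = brute_force_ops(40)    # ~10^12 elementary additions
quantum = sqrt_speedup_ops(40)     # ~10^6, a millionfold reduction
```

The point is just that for exponentially hard instances, taking the square root of the cost converts the exponent from n to n/2, which can turn an intractable calculation into a feasible one.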

## July 18, 2009

### Dr. Dobb’s Journal’s Readership Talks Quantum

Filed under: Uncategorized — rrtucci @ 6:21 pm

Dr. Dobb’s Journal (now a monthly section of InformationWeek) is a journal devoted mainly to computer programming (hurray!). Its illustrious past spans the whole personal computer era. In fact, it started in 1975 with some articles about Tiny BASIC.

Dr. Dobb’s Code Talk, the blog of Dr. Dobb’s Journal, has recently featured an interesting series of blog posts, started by Jack Woehr, on Quantum Computer Programming. I hope we are reaching a tipping point in QC programming.
