CB net= Classical Bayesian Network

DAG= directed acyclic graph

In supervised learning, you are given the graph (a.k.a. the structure) of a CB net and some data, and you estimate the node probability matrices from the data. In unsupervised learning, you are given only data, and you are expected to come up with both the structure and the node probability matrices of the CB net from that data. Nowadays there are computer programs that do both supervised and unsupervised learning of a CB net on classical computers. I believe a quantum computer can do unsupervised learning of a CB net at least quadratically faster than a classical computer (thanks to Grover's algorithm). In fact, I have a patent for doing unsupervised learning of a CB net on a gate-model QC. (The QuAIL group at NASA has also proposed doing this with a D-Wave annealer QC.)

And yet, many of the examples of CB nets that show up in the literature (see, for example, the wonderful work of Andrew Gelman) were obtained "by hand": their structure was derived without the help of a computer, arrived at partly by logic and partly by hunch. The quality and value of these CB net models depend on how well they fit the data.

So can I provide some guidance on how to find the structure of a CB net by hand? I don’t know how the experts do it, but I’ll tell you how I think about it.

There is one situation that I like to call the Snow White CB net (I call it Snow White because it’s the fairest CB net of them all). It concerns finding a CB net that connects a collection of sets.

Snow White DAG

Suppose you have a finite collection of sets A, B, C, …, which are not necessarily mutually disjoint.

1. Merge all sets that are equal into a single set.
2. Draw an undirected line connecting each pair of sets that overlap (and no line between pairs that do not overlap).
3. If A is a proper subset of B, replace the undirected line joining them by an arrow pointing from A to B, written A → B.
4. If A and B overlap, but neither is a proper subset of the other, replace the undirected line between A and B by a new node for the intersection A∩B, with two arrows A∩B → A and A∩B → B.
5. Go back to step 1. Exit the loop when the last two repetitions yield the same DAG.

At the end, you will have a DAG in which every arrow indicates a subset relationship. Also, by the end, all the root nodes (the ones with no incoming arrows, only outgoing ones) will be mutually disjoint sets. This is all very trivial and I’m sure a lot of people have come up with the Snow White DAG on their own. I just mention it in case you haven’t yet.
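The loop above can be sketched in a few lines of Python. This is just an illustrative sketch under my own assumptions, not a polished library: sets are represented as `frozenset`s (so equal sets merge automatically), an edge `(A, B)` means the arrow A → B (A is a proper subset of B), and the function name `snow_white_dag` is hypothetical.

```python
from itertools import combinations

def snow_white_dag(sets):
    """Build a DAG whose arrows all mean 'is a proper subset of'.

    Sketch of the Snow White loop: overlapping, non-nested pairs spawn
    a new intersection node, and we repeat until the DAG stabilizes.
    """
    # Step 1: using frozensets merges equal sets into a single node.
    nodes = {frozenset(s) for s in sets}
    prev_edges = None
    while True:
        edges = set()
        new_nodes = set()
        for a, b in combinations(nodes, 2):
            if not (a & b):
                continue            # disjoint: no line at all
            if a < b:
                edges.add((a, b))   # arrow from subset to superset
            elif b < a:
                edges.add((b, a))
            else:
                # Overlap, but neither is a proper subset of the other:
                # introduce A∩B with arrows into both A and B.
                inter = a & b
                new_nodes.add(inter)
                edges.add((inter, a))
                edges.add((inter, b))
        nodes |= new_nodes
        # Exit when two successive repetitions yield the same DAG.
        if edges == prev_edges:
            return nodes, edges
        prev_edges = edges
```

For example, `snow_white_dag([{1, 2}, {2, 3}])` introduces the intersection node `{2}` and returns the two arrows {2} → {1, 2} and {2} → {2, 3}; the single root {2} has no incoming arrows, as promised. Termination follows because the node set can only grow inside the finite lattice of intersections of the original sets.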

PS. In the above convention, a typical operating system with folders is represented by a tree, with the arrows pointing away from the multiple innermost folders towards the single outermost folder. The outermost folder is often called the root directory in operating system parlance, but here I am calling the innermost folders the root nodes.
