Quantum Bayesian Networks

August 20, 2012

How I Swam Across the River Shannon

Filed under: Uncategorized — rrtucci @ 9:18 pm

Check out my new paper

“Shannon Information Theory Without Shedding Tears Over Delta & Epsilon Proofs or Typical Sequences”, by Robert R. Tucci, arXiv:1208.2737 (abstract here)

The River Shannon is Ireland’s longest river, spanning 224 miles. It starts near the center of Ireland, flowing first south, then west, finally emptying into the Atlantic Ocean. The city of Limerick is situated near the end of the river, on its estuary (where the fresh water and sea water meet). One can swim across the Shannon at many places if one so desires.

My own desire was to swim across the mighty River of Shannon information theory. I had tried to do so many times before. But every time I had dipped my toes into the frigid waters of a standard textbook like the one by Cover and Thomas, I had nearly drowned. And then, I watched the London 2012 Olympics on the telly. And my life was changed forever.  After observing in action that most brilliant of all British Olympic athletes, Sir Rowan Atkinson, I realized that I too had the makings of an Olympian. I immediately saw myself decisively beating Michael Phelps and Lord Byron on a race across the River Shannon, in the doggie paddle swimming style (a style also known as the “breast stroke” in XXX rated blogs). Doing this with the theme song for “Chariots of Fire” playing in the background.

But first I had to learn how to swim more than 3 yards without drowning.

From the movie “Johnny English Reborn”. Johnny kicking thug

I decided that Cover and Thomas was too mathematically rigorous, too fancy pantsy for an athletically challenged body like mine to master. I would have to find a less athletic, more zen-like method to defeat my opponents. I was undaunted. Years of careful study of British scientific works of the highest caliber (e.g., Harry Potter, Q’s inventions for James Bond, Wallace and Gromit’s inventions, 10,000 pages of B. Russell’s Principia Mathematica, the 20 cylinder Rolls Royce car engine) had taught me that no scientific problem is unsolvable by the clever British mind. I wondered: what fiendishly high tech MI7 device would Johnny English use if he were caught in my predicament? And then it came to me. Floaties, I needed training floaties.

I soon conceived of the technical equivalent of floaties for information theory: the steepest descent method. This zen-like method is much easier on the body of an athlete than the method of typical sequences usually favored by Computer Science jocks.
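To give a flavor of what the saddle-point idea buys you (this is my illustrative sketch, not an excerpt from the paper): to leading exponential order, the number of sequences in a type class is 2^{nH(P)}, where H(P) is the entropy of the empirical distribution — no delta-epsilon bookkeeping required. The exact count is a multinomial coefficient, which we can compare against the leading-order estimate:

```python
import math

def log2_type_class_size(counts):
    """Exact log2 of the multinomial coefficient n! / (k1! k2! ...),
    i.e. the number of sequences with the given empirical counts."""
    n = sum(counts)
    return (math.lgamma(n + 1)
            - sum(math.lgamma(k + 1) for k in counts)) / math.log(2)

def saddle_point_estimate(counts):
    """Leading-order (Stirling / saddle-point) estimate: n * H(P) bits,
    where H is the entropy of the empirical distribution P = counts / n."""
    n = sum(counts)
    return -sum(k * math.log2(k / n) for k in counts if k > 0)

# Hypothetical example: n = 1000 symbols over a 3-letter alphabet.
counts = [500, 300, 200]
exact = log2_type_class_size(counts)     # ~1475 bits
approx = saddle_point_estimate(counts)   # ~1485 bits
```

The gap between the two is only the polynomial (sub-exponential) prefactor that the full saddle-point expansion recovers; at n = 1000 it amounts to a handful of bits out of nearly 1500.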

Success came quickly after that. At this moment I stand in the rarefied air of the highest of 3 podia, ready to receive my gold medal… 

Wake up Bob. Wake up!!! It’s 6 AM and you’ve got to take the girls to school before you go to work. 

As the River Shannon ends with a Limerick, I end my story with one too:

A spider that spun webs quite odd
thought Reverend Bayes was quite mod
and Claude Shannon too
and quantum bits too
So what did she do?
She wove with her goo
q nets for the Rev and for Claude


  1. See Comments Section for comments from Prof. Neri Merhav
  2. YouTube Video “Mr. Bean at the Opening Ceremony of The Olympic Games of 2012 in London”

1 Comment »

  1. After I posted my “without shedding tears” paper on ArXiv, I received 2 emails from Prof. Neri Merhav, pointing out some previous work on classical info theory that uses the steepest descent method. I will soon incorporate some of those references into a new version of my paper. Here are some instructive excerpts from his emails, quoted with his permission. So there, reader! Who can blame you if you dismiss the rantings of a Johnny English wannabe like me, but are you ready to dismiss the pronouncements of a Technion professor?

    “You might be interested to know that the idea of using the saddle-point method as an alternative to the traditional, combinatorial method of types in information theory, is not quite entirely new (although it is true that it has not been used much so far). Here are two references I am not sure you are aware of, considering the bibliographical list in your paper:

    M. Mezard and A. Montanari, Information Physics and Computation (Oxford University Press, 2009): In Section 4.7 (pp. 87-89), they compute the probability of a type class using saddle point integration. 

    N. Merhav, “Statistical Physics and Information Theory,” Foundations & Trends in Communications and Information Theory, vol. 6, nos. 1-2, pp. 1-212. In Subsection 4.3, Example 2 (p. 108), it is demonstrated how to assess the size of a type class using the saddle-point method. It is also emphasized that this technique extends easily to the continuous alphabet case (see Example 3 on page 110).

    Neri Merhav”


    “BTW, the saddle-point method can also be used very efficiently in large deviations (and moderate deviations) theory. It saves the need to derive separately an upper (Chernoff) bound and a lower bound using the tilted probability measure. Plus, it gives you the pre-exponent as a bonus. For example, take a look at:

    beginning from the second half of page 6 and ending after the first half of page 7. Of course, I could have easily derived also the pre-exponent, but didn’t bother to do that, since it was not important for my purpose.”

    Comment by rrtucci — August 20, 2012 @ 9:20 pm
