Recently, someone on Scott Aaronson’s blog asked him the question

Are you a Bayesian? If not, could you describe your non-Bayesian belief system in short?

to which he replied

I’d ascribe maybe a 40% posterior probability to Bayesianism being true. (Up from my prior of 20%, after reading The Signal and the Noise by Nate Silver.)

With 60% probability, I think quantifying our uncertainty using probabilities is great whenever possible, but is unambiguously meaningful only when an event is sampled according to a probabilistic process that we know something about theoretically (e.g., in quantum mechanics), or in the limited domain of “bettable events” (i.e., events that belong to a large-enough ensemble of similar events that one could form a decent betting market around them). In other cases—including many of the ones people care about the most—I think we’re really in a state of Knightian uncertainty, where at most we can meaningfully give upper and lower bounds on the probability of something happening. And in those situations, we might prefer “paranoid,” worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning. Indeed, this might both explain and justify the phenomenon of risk-aversion in economics, as well as well-known “paradoxes” of decision theory such as the Ellsberg paradox.

Again, though, that’s only with 60% probability, and is susceptible to revision as new information comes in.
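As an aside, Scott’s jump from a 20% prior to a 40% posterior corresponds, via Bayes’ rule in odds form, to evidence with a likelihood ratio of about 2.67 in favor of Bayesianism. A minimal sketch (the likelihood-ratio framing is my own illustration, not anything Scott actually computed):

```python
# Illustrative sketch: what likelihood ratio turns a 20% prior
# into a 40% posterior under Bayes' rule?

def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1.0 - p)

def posterior(prior, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    post_odds = odds(prior) * likelihood_ratio
    return post_odds / (1.0 + post_odds)

prior, post = 0.20, 0.40
lr = odds(post) / odds(prior)          # likelihood ratio implied by the update
print(round(lr, 2))                    # 2.67
print(round(posterior(prior, lr), 2))  # 0.4 -- recovers the posterior
```

So reading Nate Silver apparently counted as evidence roughly 2.7 times more likely under “Bayesianism is true” than under its negation.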

Let me repeat, taking some choice phrases out of context to better bolster my argument:

“In other cases—including many of the ones people care about the most—”

“And in those situations, we might prefer “paranoid,” worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning.”

In other words, Scott is a virulent, rabid anti-bayesian.

According to Wikipedia, “In economics, Knightian uncertainty is risk that is immeasurable, not possible to calculate.” Now, why would an alleged scientist put so much stock in an example from economics, the dismal science? Gag me with a spoon! Especially somebody who works almost exclusively in quantum mechanics, which, by Scott’s own admission, is a probabilistic theory very amenable to Bayesian analysis. Oh, I see: he has concluded, rightly or wrongly, that Bayesian thinking doesn’t fit nicely within his narrow field of vision, a field of vision severely limited by the blinders of some complexity-theory party line enunciated long ago by some Prince Valiant guy in his Pac-Man Model.

Number of times Scott mentions “D-Wave”, “Bayesian Networks”, or “Error Correction” in his book (which might as well be titled “Quantum Computing Since Democritus, but skipping D-Wave, Bayesian Networks, and Error Correction”): zero, zero, and 3 times, respectively. Does the guy have some gigantic blind spots or what? Cataracts and tunnel vision at 32. Very sad.

For all its numerous faults, at least D-Wave is doing some Bayesian modeling for AI under the direction of Hartmut Neven. Okay, Hartmut is not the world’s greatest authority on quantum mechanics: he referred to the Heisenberg uncertainty principle as the Hindenburg uncertainty principle in this video. But at least he is a true Bayesian, as are most practical people in engineering and science today, with the possible exception of some complexity theorists with anti-Bayesian crackpot ideas. (Scott should try convincing Israeli scientists to implement Iron Dome’s software without using a Kalman filter, and see how long it takes before they ship him to a loony bin.)
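For the record, the kind of Bayesian machinery a tracker like Iron Dome’s relies on can be sketched in a few lines. This is a toy one-dimensional Kalman filter, purely illustrative and nothing like the real system (which tracks multidimensional state such as position and velocity):

```python
# Toy 1-D Kalman filter: a recursive Bayesian update of a Gaussian
# belief (mean, variance) about a scalar state from noisy measurements.

def kalman_step(mean, var, measurement, meas_var, process_var=0.0):
    # Predict: the state may drift, so prior uncertainty grows.
    var += process_var
    # Update: blend prediction and measurement, weighted by precision.
    k = var / (var + meas_var)           # Kalman gain
    mean = mean + k * (measurement - mean)
    var = (1.0 - k) * var
    return mean, var

mean, var = 0.0, 1000.0                  # vague prior belief
for z in [4.9, 5.2, 5.0, 5.1]:           # noisy readings of a value near 5
    mean, var = kalman_step(mean, var, z, meas_var=0.5)
print(round(mean, 1))                    # converges close to 5.0
```

Each step is exactly Bayes’ rule for Gaussians: the posterior mean is a precision-weighted average of the prediction and the new measurement, and the variance shrinks with every observation.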

Google, by its own admission, is a lover of Bayesian thinking, which includes Bayesian Networks, truth, justice, and the American way. D-Wave/Neven are Bayesian freedom fighters, just like Google. And Aaronson is speaking on behalf of all quantum complexity theorists when he utters this anti-Bayesian hate language. So is it surprising that Google should prefer D-Wave/Neven to quantum complexity theorists for its quantum computing institute? Should Google hire any quantum complexity theorists at all for its QC institute? I think not! Let them eat cake and apply for a job at MIT under Aaronson (a temporary job with slave wages instead of a handsomely paid permanent job with free gourmet cafeteria food). I would say: no Google jobs for complexity theorists unless they apologize for their ugly past anti-Bayesian behavior.

Blacklists are considered a bad thing. Like Senator Joseph McCarthy’s blacklist of communist sympathizers. Or, in The Mikado, the Lord High Executioner’s little list of people “who would not be missed” if, “as some day it may happen,” they were executed. But would a blacklist of anti-Bayesians (not a list of poor, deluded, confused frequentists, but one of outright anti-Bayesian racists like Scott) be such a bad thing? And would it be such a bad thing for Google to compile such a list? I mean, strictly speaking, doing so would not be doing evil, would it?

“The Hindenburg uncertainty principle.” Gotta love it! Will she blow or won’t she?! Hah, Schroedinger, try to fit a Zeppelin in your box!

Comment by siteadmin — June 3, 2013 @ 6:49 pm

Poor Scott is really getting it from all sides these days; now even Peter Shor has argued on his blog in favor of D-Wave. I’d say the Bayesian probability of Scott losing this debate is pretty high now. I like Scott, and he has lots to offer, but he’s on the wrong side of history here, and doesn’t seem to understand how business decisions are made, nor how to offer constructive criticism. Bill Kaminsky, on the other hand, does the latter brilliantly; if I had a QC business I’d hire him in a heartbeat.

Comment by siteadmin — June 3, 2013 @ 7:03 pm

Sorry for this stupid siteadmin handle; no idea why Chrome always reverts back to it on your site. Flushing the cache should have taken care of this =:-o

Comment by H.D. — June 3, 2013 @ 10:51 pm

It’s okay, Henning, I know it’s you. You’re like one of the 3 people who read this blog.

Comment by rrtucci — June 4, 2013 @ 1:42 am

Perhaps he doesn’t like the brittleness of Bayesian inference …

http://arxiv.org/abs/1304.7046

Comment by wb — June 4, 2013 @ 2:47 am

WB, sounds like a really interesting paper. Thanks for pointing it out to me. It will take me a long time to digest it before I can pass judgement on it. I wonder what the counterpart of these pathological cases is in quantum mechanics.

I think Scott’s alternative to Bayesian thinking is some bizarre thing called “Knightian uncertainty”.

http://arxiv.org/abs/1306.0159

Comment by rrtucci — June 4, 2013 @ 5:26 am

Bob, it would be great if you could blog about this paper once you’ve digested it. All I get from it is that good math will be overlooked when published in Norwegian; maybe Selberg should have tried Swahili instead. Then again, if Bayesian learning really models the way humans go about it, then I feel this paper could explain a lot, and maybe precipitate a breakthrough in artificial stupidity research.

Comment by H.D. — June 4, 2013 @ 5:15 pm

[…] After an exhausting rekindling of the D-Wave melee on his blog, Scott Aaronson's latest paper, "The Ghost in the Quantum Turing Machine", is a welcome change of pace. Yet, given the subject matter I was mentally preparing for a similar experience as with Penrose, especially in light of the instant rejection that this paper received from some experts on Bayesian inference, such as Robert Tucci. […]

Pingback by Will you or will you not? | Wavewatching — June 23, 2013 @ 2:30 am