Recently, someone asked Scott Aaronson in Scott’s blog the question
Are you a Bayesian? If not, could you describe your non-Bayesian belief system in short?
to which he replied
I’d ascribe maybe a 40% posterior probability to Bayesianism being true. (Up from my prior of 20%, after reading The Signal and the Noise by Nate Silver.)
With 60% probability, I think quantifying our uncertainty using probabilities is great whenever possible, but is unambiguously meaningful only when an event is sampled according to a probabilistic process that we know something about theoretically (e.g., in quantum mechanics), or in the limited domain of “bettable events” (i.e., events that belong to a large-enough ensemble of similar events that one could form a decent betting market around them). In other cases—including many of the ones people care about the most—I think we’re really in a state of Knightian uncertainty, where at most we can meaningfully give upper and lower bounds on the probability of something happening. And in those situations, we might prefer “paranoid,” worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning. Indeed, this might both explain and justify the phenomenon of risk-aversion in economics, as well as well-known “paradoxes” of decision theory such as the Ellsberg paradox.
Again, though, that’s only with 60% probability, and is susceptible to revision as new information comes in.
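As an aside, Scott's own numbers make a tidy little Bayesian update: moving a prior of 20% to a posterior of 40% corresponds to a Bayes factor of 8/3, or about 2.67, in favor of Bayesianism. Here is a minimal sketch of that arithmetic in odds form (the numbers are Scott's; the code is mine):

```python
# Bayes' rule in odds form: posterior_odds = bayes_factor * prior_odds.
# Scott's prior that Bayesianism is true: 20%. His posterior after
# reading Nate Silver: 40%. How strong was that evidence?

def odds(p):
    """Convert a probability to odds in favor."""
    return p / (1.0 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1.0 + o)

prior, posterior = 0.20, 0.40
bayes_factor = odds(posterior) / odds(prior)  # (0.4/0.6) / (0.2/0.8) = 8/3
print(round(bayes_factor, 2))                 # 2.67

# Sanity check: applying that factor to the prior recovers the posterior.
print(round(prob(bayes_factor * odds(prior)), 2))  # 0.4
```

So by his own accounting, one pop-science book carried almost a factor-of-three likelihood ratio. Make of that what you will.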
Let me repeat, taking some choice phrases out of context to better bolster my argument:
“In other cases—including many of the ones people care about the most—”
“And in those situations, we might prefer ‘paranoid,’ worst-case reasoning (as in Valiant’s PAC model, or indeed almost all of theoretical computer science) over Bayesian, average-case reasoning.”
In other words, Scott is a virulent, rabid anti-Bayesian.
According to Wikipedia, “In economics, Knightian uncertainty is risk that is immeasurable, not possible to calculate.” Now, why would an alleged scientist put so much stock in an example from economics, the dismal science? Gag me with a spoon! Especially somebody who works almost exclusively in quantum mechanics, which, by Scott’s own admission, is a probabilistic theory very amenable to Bayesian analysis. Oh, I see: he has concluded, rightly or wrongly, that Bayesian thinking doesn’t fit nicely within his narrow field of vision, a field of vision severely limited by the blinders of a complexity-theory party line enunciated long ago by some Prince Valiant guy in his Pac-Man Model.
Number of times Scott mentions “D-Wave”, “Bayesian Networks”, or “Error Correction” in his book (which he might as well have titled “Quantum Computing Since Democritus, but Skipping D-Wave, Bayesian Networks, and Error Correction”): zero, zero, and three, respectively. Does the guy have some gigantic blind spots or what? Cataracts and tunnel vision at 32. Very sad.
For all its numerous faults, at least D-Wave is doing some Bayesian modeling for AI under the direction of Hartmut Neven. Okay, Hartmut is not the world’s greatest authority on quantum mechanics (he referred to the Heisenberg uncertainty principle as the “Hindenburg uncertainty principle” in this video). But at least he is a true Bayesian, as are most practical people in engineering and science today, with the possible exception of some complexity theorists with anti-Bayesian crackpot ideas. (Scott should try convincing Israeli scientists to implement Iron Dome’s software without using a Kalman filter. See how long it takes before they ship him to a loony bin.)
Google, by its own admission, is a lover of Bayesian thinking, which includes Bayesian networks, truth, justice, and the American way. D-Wave/Neven are Bayesian freedom fighters, just like Google. And Aaronson is speaking on behalf of all quantum complexity theorists when he utters this anti-Bayesian hate language. So is it surprising that Google should prefer D-Wave/Neven to quantum complexity theorists for its quantum computing institute? Should Google hire any quantum complexity theorists at all for its QC institute? I think not! Let them eat cake and apply for a job at MIT under Aaronson (a temporary job with slave wages instead of a handsomely paid permanent job with free gourmet cafeteria food). I would say: no Google jobs for complexity theorists unless they apologize for their ugly past anti-Bayesian behavior.
Blacklists are considered a bad thing. Like Senator Joseph McCarthy’s blacklist of communist sympathizers. Or, in The Mikado, the Lord High Executioner’s list of people “who would not be missed” if, “as some day it may happen,” they were executed. But would a blacklist of anti-Bayesians (not a list of poor, deluded, confused frequentists, but one of outright anti-Bayesian racists like Scott) be such a bad thing? And would it be such a bad thing for Google to compile such a list? I mean, strictly speaking, doing so would not be “doing evil”, would it?