I was Googling last night to see if I’m crazy. Well, maybe I am, but specifically about my perception of the frequentist tendencies of complexity theorists. And I found this funny (in an understated way) blog post by Larry Wasserman on his blog “Normal Deviate”. Check it out:
I am reading Scott Aaronson’s book “Quantum Computing Since Democritus”
Much of the material on complexity classes is tough going, but you can skim over some of the details and still enjoy the book. (That’s what I am doing.) There are at least 495 different complexity classes: see the complexity zoo. I don’t know how anyone can keep track of this.
So my claim is that computational learning theory is just the application of frequentist confidence intervals to classification.
There is nothing bad about that. The people who first developed learning theory were probably not aware of existing statistical theory so they re-developed it themselves and they did it right.
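To make the connection Wasserman is drawing concrete: a PAC-style generalization guarantee is, at bottom, a frequentist confidence interval on a classifier’s true error rate. Here is a minimal sketch of that idea using the Hoeffding inequality, with hypothetical held-out test data (the 0.12 error rate and sample size are made up for illustration):

```python
import math
import random

random.seed(0)

# Hypothetical held-out test set: 1 = classifier wrong, 0 = correct.
# We pretend the classifier's true error rate is 0.12.
n = 1000
errors = [1 if random.random() < 0.12 else 0 for _ in range(n)]
p_hat = sum(errors) / n  # empirical error rate

# Hoeffding's inequality: with probability >= 1 - delta, the true
# error lies within eps of p_hat, where
#   eps = sqrt(log(2 / delta) / (2 * n)).
# This two-sided interval is a frequentist confidence interval --
# the same object learning theory presents as a generalization bound.
delta = 0.05
eps = math.sqrt(math.log(2 / delta) / (2 * n))
lo, hi = max(0.0, p_hat - eps), min(1.0, p_hat + eps)
print(f"empirical error: {p_hat:.3f}")
print(f"95% confidence interval for true error: [{lo:.3f}, {hi:.3f}]")
```

The point of the sketch is just that the “generalization bound” and the “confidence interval” are the same calculation wearing different hats, which is exactly Wasserman’s claim.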