January 27, 2013

Are Bayesian statistics wrong?

UPDATE (25/04/2013): I wrote a short review of Silver's book.

In the New Yorker, Gary Marcus and Ernest Davis comment on Nate Silver's The Signal and the Noise and conclude that "[T]he Bayesian approach is much less helpful when there is no consensus about what the prior probabilities should be."

A Bayesian would probably reply that refusing to think about prior probabilities does not make them any less important. Priors are always taken into account one way or another: either explicitly, through Bayes' theorem, or qualitatively, as in Sagan's dictum "Extraordinary claims [with a low prior probability] require extraordinary evidence [with correspondingly high significance]".
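
To make Sagan's dictum quantitative: suppose a claim is given a prior probability of 1 in 10,000, and an experiment produces evidence that is 100 times more likely if the claim is true than if it is false (both numbers are invented here, purely for illustration). Bayes' theorem still leaves the claim below 1%. A quick sketch in Python:

    # Illustrative only: the prior and likelihood ratio are made-up numbers.
    def posterior(prior, likelihood_ratio):
        """Posterior probability of a hypothesis, given its prior probability
        and the likelihood ratio (Bayes factor) carried by the evidence."""
        prior_odds = prior / (1.0 - prior)
        posterior_odds = prior_odds * likelihood_ratio
        return posterior_odds / (1.0 + posterior_odds)

    # An "extraordinary claim": prior probability 1 in 10,000.
    # Evidence 100 times more likely under the claim than under its negation.
    print(posterior(prior=1e-4, likelihood_ratio=100))  # ~0.0099, still under 1%

Only evidence with a far larger likelihood ratio (the "extraordinary evidence") would push the posterior anywhere near certainty.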

Setting a fixed, arbitrary confidence threshold (95%, 99%, etc.) and accepting whatever hypothesis crosses it would be catastrophic. For instance, the hypothesis "Standard quantum mechanics is wrong" could easily be "confirmed" by any sloppy experiment: quantum theory makes extremely precise predictions, so even a small systematic error is enough to put the data in stark disagreement with it at whatever significance level one likes. Fortunately, scientists do not immediately accept the hypothesis, exactly because its prior probability is so low.

A real-life example is OPERA's "superluminal neutrinos". To their credit, the researchers kept scrutinizing the experiment until they found the loose cable connection, instead of jumping to the conclusion that "relativity is wrong", even though the statistical significance of the result was at an unimpeachable six-sigma level. They somehow knew the theory of relativity was stronger than that.
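
To put rough numbers on this (the figures below are invented for illustration, not anything OPERA estimated), compare three hypotheses: relativity is wrong, there is an undetected systematic error, or the data are a pure statistical fluke. The six-sigma figure only bounds the last of these; it says nothing about loose cables.

    # Illustrative only: every prior and likelihood below is an invented number.
    priors = {
        "relativity is wrong":         1e-6,   # assumed tiny: an extraordinarily well-tested theory
        "undetected systematic error": 1e-2,   # assumed modest: loose cables and the like do happen
        "statistical fluke":           1.0 - 1e-6 - 1e-2,
    }
    # Probability of seeing superluminal-looking data under each hypothesis.
    likelihoods = {
        "relativity is wrong":         1.0,    # the data are expected if the effect is real
        "undetected systematic error": 0.1,    # assumed: such an error could plausibly look like this
        "statistical fluke":           2e-9,   # roughly what "six sigma" means
    }
    unnormalised = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalised.values())
    for h, w in unnormalised.items():
        print(f"{h}: {w / total:.2e}")

With these made-up numbers the systematic-error hypothesis dominates the posterior by orders of magnitude, which is essentially the reasoning the physicists followed informally.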

1 comment:

  1. http://doingbayesiandataanalysis.blogspot.com/2013/01/bayesian-disease-diagnosis-with.html
