Book of the Week: Uncertainty

The following review appeared in the Journal of American Physicians and Surgeons:


This book has the potential to turn the world of evidence-based medicine upside down. It boldly asserts that with regard to everything having to do with evidence, we’re doing it all wrong: probability, statistics, causality, modeling, deciding, communicating—everything. The flavor is probably best conveyed by the title of one of my favorite sections: “Die, p-Value, Die, Die, Die.”


Nobody ever remembers the definition of a p-value, William Briggs points out. “Everybody translates it to the probability, or its complement, of the hypothesis at hand.” He shows that the arguments commonly used to justify p-values are fallacies. It is far past time for the “ineradicable Cult of Point-Oh-Five” to go, he states. He does not see confidence intervals as the alternative, noting that “nobody ever gets these curious creations correct.”
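Briggs's point is easy to demonstrate. In the stdlib-only sketch below (my illustration, not the book's), every dataset is generated so that the null hypothesis is true by construction; a two-sided z-test still hands back p < 0.05 about 5% of the time, which is why a small p-value cannot be read as the probability that the hypothesis is false:

```python
import math
import random

random.seed(42)

def z_test_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: mean == mu0, with known sigma."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    # Two-sided tail probability under the standard normal.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Simulate 10,000 experiments in which H0 is TRUE by construction:
# every observation really is drawn from a normal with mean 0.
trials = 10_000
wee = sum(
    z_test_p([random.gauss(0, 1) for _ in range(30)]) < 0.05
    for _ in range(trials)
)

# Roughly 5% of experiments yield a "wee p-value" even though the
# null is certainly true, so p < 0.05 says nothing by itself about
# the probability of the hypothesis at hand.
print(f"fraction of p < 0.05 under a true null: {wee / trials:.3f}")
```

The p-value here is, as its definition says, uniform under the null; reading it as "the probability, or its complement, of the hypothesis" is exactly the translation error Briggs describes.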


Briggs is neither a frequentist nor a Bayesian. Rather, he recommends a third way: use the model to predict, and judge it by those predictions. “The true and only test of model goodness is how well that model predicts data, never before seen or used in any way. That means traditional tricks like cross validation, boot strapping, hind- or back-casting and the like all ‘cheat’ and re-use what is already known as if it were unknown; they repackage the old as new.”
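What "data never before seen or used in any way" means in practice can be sketched in a few lines. In this invented example (the data-generating process and the linear model are mine, chosen only for illustration), the model is fit on old data and then scored on data generated only after the fit is fixed:

```python
import random

random.seed(0)

# Hypothetical world: y depends linearly on x, plus noise.
def make_data(n):
    xs = [random.uniform(0, 10) for _ in range(n)]
    ys = [2.0 * x + 1.0 + random.gauss(0, 1) for x in xs]
    return xs, ys

def fit_ols(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Fit on the old data only...
train_x, train_y = make_data(200)
a, b = fit_ols(train_x, train_y)

# ...then score on fresh data that played no role whatsoever in the
# fit: no cross validation, no bootstrap, no re-use of the old data.
test_x, test_y = make_data(100)
mse = sum((y - (a + b * x)) ** 2 for x, y in zip(test_x, test_y)) / len(test_x)
print(f"out-of-sample mean squared error: {mse:.2f}")
```

The out-of-sample error here lands near the irreducible noise variance; a model that only looked good when re-scored against its own training data would fail this test.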


Some of the book’s key insights are: Probability is always conditional. Chance never causes anything. Randomness is not a thing. Random, to us and to science, means unknown cause.


One fallacy that Briggs chooses for special mention, because it is so common and so harmful, is the epidemiologist fallacy. He prefers his neologism to the better-known “ecological fallacy” because without this fallacy, “most epidemiologists, especially those employed by the government, would be out of a job.” It is also richer than the ecological fallacy because it occurs whenever an epidemiologist says “X causes Y” but never measures X. Causality is inferred from “wee p-values.” One especially egregious example is the assertion that small particulates in the air (PM2.5) cause excess mortality.
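The ecological part of the fallacy, reasoning about individuals from group averages, can be made concrete with a toy example (entirely invented, not from the book). Here the relationship between an "exposure" x and an "outcome" y is negative for every individual in every region, yet the regional averages, the only numbers many studies ever see, point the opposite way:

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Three hypothetical regions. Within every region, y FALLS as x rises;
# but regions with slightly higher average x have much higher baseline y.
people_x, people_y = [], []
region_mx, region_my = [], []
for g in range(3):
    center = 2.0 * g
    xs = [center + random.uniform(-5, 5) for _ in range(200)]
    ys = [10.0 * g - 2.0 * (x - center) + random.gauss(0, 0.5) for x in xs]
    people_x += xs
    people_y += ys
    region_mx.append(sum(xs) / len(xs))
    region_my.append(sum(ys) / len(ys))

# The region averages suggest "more x, more y"; the individuals whom
# those averages summarize show exactly the reverse.
print(f"region-mean correlation:      {corr(region_mx, region_my):+.2f}")
print(f"individual-level correlation: {corr(people_x, people_y):+.2f}")
```

An analyst who measured only the regional aggregates, never x for any individual, and then declared a wee p-value on the aggregate association would have gotten the sign of the individual-level relationship exactly backwards.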

Quantifying the unquantifiable, which is the basis of so much sociological research, creates a “devastation to sound argument…[that] cannot be quantified.”

I could not agree more. As I have repeatedly observed, the only theories that are worthwhile are those that serve as the basis for successful predictive models. Or, as the ancients put it, let reason be silent when experience gainsays its conclusions. All the backtesting and p-values and statistical games are irrelevant if the predictive models fail.