I and a few like-minded folks have written many times of the over-certainty which is all but guaranteed by classical statistical methods. By “classical” I mean the ubiquitous frequentist p-value-centric “hypothesis testing” framework. But I also mean frequentist and Bayesian methods focused on parameter estimation.
Both testing and estimation take far too much for granted. Every analysis begins by assuming more than is warranted, a predicament born of the impulsive rush to quantify the unquantifiable, on the feeling that only quantification is scientific. And every analysis ends with a result in which too much credence is granted and too much faith is placed.
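To make the complaint concrete, here is a minimal sketch of the kind of routine analysis I mean: a one-sample t-test and its confidence interval run on skewed data. The lognormal distribution, the sample size of 30, and the null value of 1 are arbitrary choices for illustration, not anybody’s real study.

```python
# A minimal sketch of routine over-certainty: the lognormal data,
# sample size, and null value are arbitrary illustrative choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=30)  # skewed, decidedly non-normal data

# The t-test's premises: normality, independence, and that the
# population mean is the one quantity worth knowing.
t_stat, p_value = stats.ttest_1samp(x, popmean=1.0)

# A textbook 95% confidence interval for the mean, built on the same premises.
se = x.std(ddof=1) / np.sqrt(len(x))
half_width = stats.t.ppf(0.975, df=len(x) - 1) * se
ci = (x.mean() - half_width, x.mean() + half_width)

print(f"p-value: {p_value:.3f}")              # a crisp-looking number
print(f"95% CI: ({ci[0]:.2f}, {ci[1]:.2f})")  # a crisp-looking interval
# Neither output expresses any uncertainty about the premises themselves;
# all the confidence on display is conditional on assumptions the
# procedure never checks.
```

The point of the sketch is only this: the machinery reports certainty about a parameter while staying silent about the model that made the parameter meaningful.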
Some say “soft” scientists—educationists, sociologists, psychologists, and so on—are envious of the prestige of mathematicians and physicists, the two professions (in that order) which can rightly boast of confidence in their results. The certainty these “quants” enjoy, as I have argued before, comes from their having picked easy subjects.
Saying why a proposition is true because certain others are, once you have identified the new proposition, is a matter of mental elbow grease. And explaining why a certain particle moves in a field where all the variables are precisely known and controlled takes almost no brain power. Not compared, at least, to saying what a person—even worse, what people—will do six months from now, and why they will do it.
I don’t think it is envy but habit that drives the “soft” scientist (or any typical statistics user) to his over-confidence. Everybody around him does the same thing he is doing, and from that he develops his confidence: “It can’t be wrong if so many people are winning so many grants and publishing so many papers.” It is not easy to change a custom, especially a beloved one.