Happyism: The Creepy New Economics of Pleasure. Economist Deirdre McCloskey, in the New Republic, digs into the mathematical underpinnings of the scientific study of happiness. Executive summary: she doesn't like what she finds.
Statistical hypothesis testing with a p-value of less than 0.05 is often used as a gold standard in science, and is required by peer reviewers and journals when stating results. Some statisticians argue that this reflects a cult of significance testing built on a frequentist statistical framework that is counterintuitive and misunderstood by many scientists. Biostatisticians have argued that the (over)use of p-values comes from "the mistaken idea that a single number can capture both the long-run outcomes of an experiment and the evidential meaning of a single result" and identify several other problems with significance testing. XKCD demonstrates how misunderstandings of the nature of the p-value, failure to adjust for multiple comparisons, and the file drawer problem result in likely spurious conclusions being published in the scientific literature and then distorted further in the popular press. You can simulate a similar situation yourself. John Ioannidis uses problems with significance testing and other statistical concerns to argue, controversially, that "most published research findings are false." Will the use of Bayes factors replace classical hypothesis testing and p-values? Will something else?
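You really can simulate the XKCD jelly-bean situation in a few lines. The sketch below (plain Python, no libraries; the choice of 20 comparisons and the 0.05 cutoff are just the conventional values from the strip) relies on one standard fact: when the null hypothesis is true, a p-value is uniformly distributed on [0, 1], so each test crosses the 0.05 threshold by chance alone about 5% of the time. Run 20 tests per "study" and most all-null studies still find a "discovery":

```python
import random

random.seed(0)
ALPHA = 0.05
N_TESTS = 20        # e.g. 20 jelly-bean colours, as in the XKCD strip
N_STUDIES = 10_000  # simulated replications, all with a true null

# Under a true null, a p-value is uniform on [0, 1], so each test
# comes up "significant" (p < ALPHA) purely by chance with probability ALPHA.
def study_finds_something():
    return any(random.random() < ALPHA for _ in range(N_TESTS))

hits = sum(study_finds_something() for _ in range(N_STUDIES))
print(f"Fraction of all-null studies reporting a 'discovery': {hits / N_STUDIES:.3f}")
print(f"Theoretical value: 1 - (1 - ALPHA)**N_TESTS = {1 - (1 - ALPHA)**N_TESTS:.3f}")
```

The theoretical rate, 1 - 0.95^20, is about 0.64: roughly two thirds of studies testing twenty true nulls will find at least one nominally significant result. Add the file drawer problem (only the "green jelly beans cause acne" study gets written up) and the published record skews toward flukes.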
Significantly what? ... Or how our most common statistical methods really weren't meant to be used that way, and why that study result is likely spurious. Since mefites like to argue about stats, here's some background for us all (and I'm not talking correlation vs. causation)!