
Submitted by Sander Greenland on

This post is only halfway to where it should be: it never mentions reasons for "nonsignificance" such as lack of power or precision, under which "nonsignificant" results are unsurprising simply because the study carried too little statistical information (e.g., was too small) to detect what was expected.
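To illustrate the power point above, here is a minimal sketch (all numbers are hypothetical, chosen only for illustration): a normal-approximation power calculation for a two-sided comparison of two means, showing that a study with 30 subjects per arm has little chance of detecting a true effect of 0.3 standard deviations, so a "nonsignificant" result would be entirely unsurprising.

```python
from statistics import NormalDist

def two_sample_power(delta, sigma, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided two-sample mean comparison."""
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)           # critical value, e.g. 1.96 for alpha=0.05
    se = sigma * (2 / n_per_group) ** 0.5       # standard error of the mean difference
    ncp = delta / se                            # standardized (noncentral) shift
    # Probability the test statistic falls outside +/- z_crit under the alternative
    return z.cdf(ncp - z_crit) + z.cdf(-ncp - z_crit)

# Hypothetical scenario: true effect = 0.3 SD, 30 subjects per arm
print(f"{two_sample_power(0.3, 1.0, 30):.2f}")   # → 0.21, i.e. ~79% chance of "nonsignificance"
```

With ten times the sample size the same effect is detected reliably, which is the sense in which "nonsignificance" here reflects the study design rather than the absence of an effect.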
At least that problem is touched on by "authors taking special care to highlight what they can rule out" - provided that "rule out" means something more than falling outside a 95% confidence interval. Otherwise, the authors do not realize how weak P < 0.05 or 95% confidence is even under ideal experimental conditions. For a brief explanation and a further citation, see p. 642 of Greenland S (2017), "The need for cognitive science in methodology," American Journal of Epidemiology, 186, 639-645, open access at https://doi.org/10.1093/aje/kwx259
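A short sketch of why "outside the 95% interval" is a weak standard for ruling effects out (again with hypothetical numbers): in the same small-study setting, an observed difference near zero still yields an interval so wide that substantively large effects in either direction remain compatible with the data.

```python
from statistics import NormalDist

def mean_diff_ci(diff, sigma, n_per_group, level=0.95):
    """Normal-approximation confidence interval for a difference of two means."""
    z = NormalDist().inv_cdf(0.5 + level / 2)   # e.g. 1.96 for a 95% interval
    se = sigma * (2 / n_per_group) ** 0.5       # standard error of the difference
    return diff - z * se, diff + z * se

# Hypothetical data: observed difference of 0.10 SD with 30 subjects per arm
lo, hi = mean_diff_ci(0.10, 1.0, 30)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")          # → 95% CI: (-0.41, 0.61)
```

A "nonsignificant" result like this rules out almost nothing of practical interest: effects half a standard deviation in size lie inside the interval, and even the interval's endpoints are only weakly excluded.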