Reply to: How to Publish Statistically Insignificant Results in Economics
This post is only halfway to where it should be: it never mentions reasons for nonsignificance such as lack of power or precision, in which case the "nonsignificant" results are unsurprising simply because the study carried too little statistical information (e.g., was too small) to detect the effect that was expected.
That problem is at least touched on by "authors taking special care to highlight what they can rule out", provided that "rule out" means something more than falling outside a 95% confidence interval. Otherwise, the authors do not realize how weak P < 0.05 or 95% confidence is even under ideal experimental conditions. For a brief explanation and a further citation, see p. 642 of Greenland S (2017). The need for cognitive science in methodology. American Journal of Epidemiology, 186, 639–645; open access at https://doi.org/10.1093/aje/kwx259
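The point about low power can be made concrete with a quick calculation. The sketch below is a hypothetical illustration (not from the original posts): it uses the standard normal approximation for a two-sample test of a standardized effect size d with n subjects per arm, at a two-sided alpha of 0.05, to show how easily a modest sample yields a "nonsignificant" result even when the expected effect is real.

```python
# Hedged sketch: approximate power of a two-sample z-test for a
# standardized effect size d with n subjects per arm (normal
# approximation, alpha = 0.05 two-sided). The effect sizes and
# sample sizes below are illustrative assumptions.
from math import erf, sqrt

def normal_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1 + erf(x / sqrt(2)))

def power(d, n, alpha_z=1.959963984540054):
    # alpha_z is the two-sided 0.05 critical value, Phi^{-1}(0.975)
    ncp = d * sqrt(n / 2)  # noncentrality of the z statistic
    # P(|Z| > critical value) under the alternative hypothesis
    return (1 - normal_cdf(alpha_z - ncp)) + normal_cdf(-alpha_z - ncp)

# A "medium" effect (d = 0.5) with only 20 per arm versus 100 per arm:
print(round(power(0.5, 20), 2))   # roughly 0.35: nonsignificance is likely
print(round(power(0.5, 100), 2))  # roughly 0.94
```

With 20 per arm the study misses a genuine medium-sized effect about two times in three, so a nonsignificant result tells the reader very little by itself.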
Reply to: Pitfalls of Patient Satisfaction Surveys and How to Avoid Them
What studies have looked at other formulations of the questions and responses besides an agree/disagree scale? For example: "How clean was the clinic?" (not at all clean to very or extremely clean); "How likely are you to recommend this clinic?"; and so on. Or including one or more open-ended questions, such as "What was the best/worst thing about your visit to the clinic?" (which could include a menu of potential responses).
Reply to: What Do Development Bloggers Discuss?
Some of the best tips ever shared here. They turned out to be really helpful; thanks a lot for sharing.
Reply to: IE analytics: introducing ietoolkit
Thanks for sharing this interesting tool! I will use it very soon and share my feedback!