Weekly links June 27: badly managed Indian schools, evaluating peace-building, the perils of misunderstanding significance, new power calculations, and more…
In the LSE Centrepiece, Renata Lemos and Daniela Scur have a short piece summarizing new results from measuring management in retail, health, education and manufacturing in India: “In retail, the top 10% of Indian stores are better managed than 40% of US stores and 57% of UK stores. But in education, only 8% of US schools and 1% of UK schools are less well managed than the best 10% of Indian schools.”
In the NY Times Economix blog, Casey Mulligan discusses “the perils of significant misunderstandings in evaluating Medicaid” – a piece about how the lack of a statistically significant effect can be misinterpreted – “If the Oregon study prevents even one state from expanding its Medicaid program, Affordable Care Act proponents could assert that, as Professors McCloskey and Ziliak predicted, emphasis on statistical significance has proven to be deadly.”
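The point is easy to see with a confidence interval. The numbers below are purely illustrative (not from the Oregon study): an estimate can be “statistically insignificant” while its confidence interval still contains effects large enough to matter.

```python
import math

# Hypothetical numbers for illustration only (not from the Oregon study):
# an estimated effect of 0.5 with a standard error of 0.4.
estimate = 0.5
std_error = 0.4

z = 1.96  # standard normal critical value for a 95% confidence interval
ci_low = estimate - z * std_error
ci_high = estimate + z * std_error

print(f"95% CI: ({ci_low:.2f}, {ci_high:.2f})")
# The interval includes zero, so the effect is "not significant" -
# but it also includes effects more than twice the point estimate,
# so the data do not rule out a large, policy-relevant effect.
```

Failing to reject zero is not the same as demonstrating the effect is zero; it may just mean the study was underpowered.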
Marie Gaarder and Jeannie Annan have a new working paper about how to do impact evaluations of conflict prevention and peacebuilding programs.
Stata 13 looks like it does a lot more with power calculations, including things we have previously had to use Optimal Design for, such as calculating minimum detectable effects and graphing how power varies with sample size. At first glance it still doesn’t seem well suited to designing clustered randomization studies. A six-minute video gives an introduction, and a 270-page manual provides plenty of reading. Once we get our copy of Stata 13, we will try to say more about this on the blog.
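For readers without Stata, the basic calculations these tools perform are simple enough to sketch by hand. Below is a minimal Python illustration (not Stata code, and not the Optimal Design algorithm) of power and minimum detectable effects for a two-arm individually randomized trial with a two-sided 5% test; the sample sizes and effect size are made up for illustration.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def power_two_sample(effect, sd, n_per_arm):
    """Approximate power of a two-sided two-sample z-test at the 5% level,
    with n_per_arm observations in each of two equal arms."""
    se = sd * math.sqrt(2.0 / n_per_arm)  # standard error of the difference
    z_crit = 1.96                          # two-sided 5% critical value
    return norm_cdf(effect / se - z_crit)

def mde_two_sample(sd, n_per_arm):
    """Minimum detectable effect at 80% power, two-sided 5% test."""
    z_crit, z_power = 1.96, 0.8416         # standard normal quantiles
    return (z_crit + z_power) * sd * math.sqrt(2.0 / n_per_arm)

# How power varies with sample size for a 0.2 standard-deviation effect:
for n in (50, 100, 200, 400):
    print(n, round(power_two_sample(0.2, 1.0, n), 3))
```

This reproduces the familiar rule of thumb that detecting a 0.2 SD effect with 80% power takes roughly 400 observations per arm; clustered designs need the further design-effect adjustments that tools like Optimal Design (and, it appears, Stata 13) handle.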
The Campbell Collaboration has online videos on how to do a systematic review (h/t @3ienews).
A couple of more-academic links this week:
Is economics a meritocracy after all? A new working paper, “An empirical guide to hiring assistant professors in economics”, looks at how productive individuals are in their first 6 years post-PhD, finding that class rank matters a lot – “one would be better off hiring a 95th percentile graduate of a typical non-top 30 department than the 70th percentile graduate of Harvard, Chicago, U. Penn, Stanford or Yale, or an 80th percentile graduate of Berkeley, Michigan, NYU, UCLA or Columbia”. (h/t Marginal Revolution).
Matthew Spiegel has an interesting paper critiquing the publishing process in finance: “Take any article accepted for publication at any journal. Now submit it to another journal. What are the odds it will be accepted as is? Zero. There is even a pretty good chance it will be rejected… no published article is good enough to publish. After all, every article can be improved upon. Perfection cannot be the standard. Nor is it necessary. Post publication, influential articles are thoroughly vetted by the profession in a way that no amount of reviewing can hope to duplicate. The criterion for publication should be that once an article crosses some threshold, it is good enough to publish. At that point everyone involved should be willing to do so without further revision. The problem seemingly lies in our inability to say an article is ‘good enough.’… How frequently does a particular robustness check find anything? Based on the reports and articles I have seen, almost never. But one thing all these additional demands have accomplished is the production of far longer articles.” He then says his goal is to try to accept at least one paper as is each year.
For more links, follow me on Twitter at @dmckenzie001