David McKenzie's blog

Weekly links January 7, 2016: experiments on civil servants, fragile states, jeans and child labor, funding, and more…


Weekly links December 18: rejecting rejections, measuring success in economics, underreporting in polisci, and more…

  • In the BMJ Christmas edition, a nice form letter for how you can reject journal rejections: “As you are probably aware we receive many rejections each year and are simply not able to accept them all. In fact, with increasing pressure on citation rates and fiercely competitive funding structures we typically accept fewer than 30% of the rejections we receive… We do wish you and your editorial team every success with your rejections in the future and hope they find safe harbour elsewhere. To this end, may we suggest you send one to [insert name of rival research group] for consideration. They accept rejections from some very influential journals.”
  • From the political science replication blog: researchers looked at NSF proposals under the TESS program and compared the pre-analysis plans and questionnaires to what was actually published, finding that 80% of papers fail to report all experimental conditions and outcomes.

Weekly links December 11: clustering, working with governments, the most terrible disregard for evidence, and more…


Towards policy irrelevance? Thoughts on the experimental arms race and Chris Blattman’s predictions


Chris Blattman posted an excellent (and surprisingly viral) post yesterday with the title “why I worry experimental social science is headed in the wrong direction”. I wanted to share my thoughts on his predictions.
He writes:
Take experiments. Every year the technical bar gets raised. Some days my field feels like an arms race to make each experiment more thorough and technically impressive, with more and more attention to formal theories, structural models, pre-analysis plans, and (most recently) multiple hypothesis testing. The list goes on. In part we push because we want to do better work. Plus, how else to get published in the best places and earn the respect of your peers?
It seems to me that all of this is pushing social scientists to produce better quality experiments and more accurate answers. But it’s also raising the size and cost and time of any one experiment.

Weekly links November 20: sensitive topics, nightlights, should you co-author? And more…


Weekly links November 6: 5 years of nudging, peer effects, not enough news in New Zealand again, and more…

  • The Behavioral Insights Team (aka Nudge Unit) turns 5 – a Psych Report interview discusses its achievements and where it plans to go next: “two things I would point to that, personally, I am most proud of. The first is that I think we can say we have changed the way in which policy is made in Whitehall. People think about drawing on ideas from the behavioral sciences in a way that five years ago almost nobody did. Secondly, people now think about using randomized controlled trials as one of the policy tools that can be used to find out whether or not something works. Again, that was just not considered to be part of a policymaker’s toolbox five years ago. So rather than pointing to the successes of the interventions, I think I’m most proud of the fact that we’ve started to change the mindsets of policymakers in the UK government.”

Finally a matching grant evaluation that worked…at least until a war intervened

Several years ago I was among a group of researchers at the World Bank who all tried to conduct randomized experiments on matching grant projects (where the government funds part of the cost of a firm innovating or upgrading its technology and the firm pays the rest). Strikingly, we tried to implement an RCT on seven such projects and failed each time, mostly because of an insufficient number of applicants.
