· Must-read new series: CEGA at Berkeley has a series of blog posts on the registration of pre-analysis plans in the social sciences, with posts by Ted Miguel, Don Green, Macartan Humphreys, and others. In case you missed it, here is my post on a pre-analysis plan checklist.
· The LSE Impact of Social Sciences blog has an interesting piece on how newsworthiness can trump methodology at high-impact journals like Nature, Science, and PNAS. I found this bit particularly interesting, on a small-sample study that found a significant effect (“Action video games make dyslexic children read better”): “I hear you saying, this study did find a significant effect of intervention, despite being underpowered. So isn’t that all the more convincing? Sadly, the answer is no. As Christley (2010) has demonstrated, positive findings in underpowered studies are particularly likely to be false positives when they are surprising – i.e., when we have no good reason to suppose that there will be a true effect of intervention.”
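To see the arithmetic behind that claim, here is a minimal sketch of the standard positive-predictive-value calculation (my own illustration of the general logic, not code or numbers from the article or from Christley 2010; the prior and power values are assumptions):

```python
def ppv(prior, power, alpha=0.05):
    """Share of statistically significant results that are true positives."""
    true_positives = power * prior          # true effects that reach significance
    false_positives = alpha * (1 - prior)   # null effects that reach significance anyway
    return true_positives / (true_positives + false_positives)

# A surprising hypothesis (low prior) tested in an underpowered study:
print(ppv(prior=0.1, power=0.2))  # ~0.31: most significant "hits" are false positives
# The same prior tested with a well-powered design:
print(ppv(prior=0.1, power=0.8))  # ~0.64: power helps, but a low prior still hurts
```

Under these assumptions, a significant result from an underpowered test of a surprising hypothesis is more likely to be false than true – which is exactly the point being made.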
· A nice review of work on the psychology of incentives in behavioral economics by Emir Kamenica, forthcoming in the Annual Review of Economics. The opening paragraph offers some nice motivation: “Contrary to what you would expect based on a standard introductory text in microeconomics, if you pay a person more for doing a task, she might be less willing to work on it, she might be less productive given her efforts, and she may enjoy the task less. If you start charging a fee for something, more people might start doing it. If you want your employees to save more for retirement, you may want to give them fewer investment options. If you want them to engage more in a task, you might want to offer them an additional alternative to that task.” (h/t @ideas42).
· On the 3ie blog, Hugh Waddington discusses how to better incorporate information from studies without credible counterfactuals into systematic reviews – noting that such studies are often helpful for understanding implementation issues, steps along the causal chain, and participation, even if they can’t tell us much about the overall impact of a given intervention.
· On the Oxfam blog this week, a proposal to use a new measure of inequality – the share of gross national income (GNI) going to the richest 10% of the population, divided by the share going to the poorest 40% – in place of the Gini. Apparently “more technical economists” hate it.
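This top-10%-to-bottom-40% ratio is the measure often called the Palma ratio. For concreteness, a minimal sketch of how one could compute it (the income data are synthetic and the function is my own illustration, not code from the Oxfam post):

```python
import numpy as np

def palma_ratio(incomes):
    """Top-10% share of total income divided by bottom-40% share."""
    x = np.sort(np.asarray(incomes))
    n = len(x)
    total = x.sum()
    bottom_40_share = x[: int(0.4 * n)].sum() / total
    top_10_share = x[int(0.9 * n):].sum() / total
    return top_10_share / bottom_40_share

rng = np.random.default_rng(0)
incomes = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)
print(f"Palma-style ratio: {palma_ratio(incomes):.2f}")  # roughly 3.7 for this synthetic distribution
```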
· Moving away from opportunity? An interesting piece in the Baltimore Sun looks at research on what happened to poor families moved to wealthier neighborhoods through the Moving to Opportunity federal housing experiment, launched in 1994: by 2002, nearly three-fourths of the families given vouchers to live in low-poverty neighborhoods were back in neighborhoods as poor as those of the other program participants (the control group).
· Nudging the unemployed into work: The Independent reports: “During trials in Essex more than 2,000 job seekers were divided into two groups, with one being given traditional support from Job Centre staff and the other working through a new programme designed to increase incentives to find work. Under the new scheme Job Centre staff get claimants to identify and write down what they are going to do to find work in the next two weeks, as well as how and when they are going to do it. At the end of the trial those taking part in the new programme were 15-20 per cent more likely to be in work within 13 weeks of signing on.”
For more, follow me on Twitter (@dmckenzie001).