Over the past few weeks, we’ve both spent a fair amount of time at conferences. Given that many conferences ask researchers to summarize their work in 15 to 20 minutes, we thought we’d reflect on some ideas for how to do this, and – more importantly – how to do it well.
- Marc Bellemare on the subject of my dissertation work – using repeated cross-sections
- From Next Billion, a summary of research showing how saving leads people to generate more income by working harder
- In the Guardian, how the World Bank is nudging health and hygiene in several projects…and a defense against the criticism that this distracts from more structural issues: “Why not make all programmes as effective as possible, even if it doesn’t turn a very poor country into a Scandinavian country overnight?”
- Also from the Guardian, 10 sources of data for international development research
- randtreat – a new Stata command to do random assignment that can deal with uneven numbers of observations (more details here) – this builds on an old blog post I did on the issue, and it’s great to see some of these practical issues being made easier for everyone (a minimal usage sketch appears after this list).
- synth_runner – the IDB’s Development that Works blog has a post about a new Stata command to help automate use of the synthetic control method (see the sketch after this list).
- On the LSE Business Review blog, work by Nguyen and Van Reenen using an RDD to show that tax credits increased R&D spending and innovation among SMEs in the UK (a stylized example of this kind of design follows the list).
- On Microeconomic Insights, a really nice summary by Munshi and Rosenzweig of their AER paper on why internal migration rates are so low in India (h/t Marginal Revolution).
- The IMF’s Finance & Development profiles David Card: “he is tired of seeing his research oversimplified and used as lobby fodder, despite all the caveats attached to his work”.
- A vivid description of life and work in Sodom and Gomorrah (aka Accra’s largest slum area) on the Big RoundTable (via @alexevansuk). Includes the story of how literally building a bridge was a great capital investment.
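For the two Stata commands above, here are minimal usage sketches. First, randtreat: the snippet below assumes the command is installed from SSC and that the data contain a stratification variable (the variable names are hypothetical, so check `help randtreat` for the definitive syntax):

```stata
* Minimal sketch, assuming randtreat is installed from SSC;
* the stratification variable (region) is hypothetical
ssc install randtreat
set seed 20160219
* Assign one of three arms within strata, handling the "misfit"
* observations that don't divide evenly across arms
randtreat, generate(treatment) multiple(3) strata(region) misfits(strata)
tabulate treatment region, missing
```

Second, synth_runner: its syntax follows that of the original synth command. The sketch below adapts the standard Proposition 99 smoking example that ships with synth (verify the option names against the help file before relying on them):

```stata
* Minimal sketch adapted from the standard synth smoking example;
* synth_runner itself is installed separately (see the post for details)
ssc install synth, all
sysuse smoking, clear
tsset state year
* California (unit 3) is treated starting in 1989
synth_runner cigsale beer(1984(1)1988) lnincome retprice age15to24 ///
    cigsale(1988) cigsale(1980) cigsale(1975), ///
    trunit(3) trperiod(1989) gen_vars
```

And for readers curious about the mechanics of a regression discontinuity design like the one in the Nguyen and Van Reenen paper, here is a stylized sketch using the rdrobust package (the variable names and cutoff are invented for illustration, not taken from the paper):

```stata
* Hypothetical RDD sketch, not the paper's actual specification:
* firms with assets below a cutoff qualify for the more generous credit
ssc install rdrobust
rdrobust rd_spending assets, c(86)   // 86 is a made-up cutoff
rdplot rd_spending assets, c(86)     // visual check of the jump
```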
The Research Papers in Economics (RePEc) database has over 46,000 researchers registered. Each month they send out rankings based on downloads, citations, and other metrics. Their ranking of economists based on publications in the last 10 years is topped by some of the best-known names in economics (the top 5 are Acemoglu, Shleifer, Heckman, Barro and Rogoff). But looking through their top 100 (as of January 2016), I found that 8 of the top 100 researchers are based in developing countries (taking World Bank client countries as “developing countries” for this purpose). Since I was only familiar with the work of one of these eight individuals, I thought it might be of interest to note some of this work going on outside of the usual top schools. I also contacted the authors to ask what idea or work they were most proud of, or would most like to draw policy attention to.
Josh Ritter is one of my favorite musicians. So, imagine my joy when I saw that he was doing an essay in the middle of PBS Newshour this past Thursday – what is normally a depressing hour these days, full of bad news from Flint, South Sudan, Republican primaries and debates, and much more. The essay started with footage of him (seemingly at the 9:30 Club in DC) singing Homecoming: great.
- impostor syndrome
- What is a large effect size? In the Huffington Post, Robert Slavin reviews educational research and finds that average effect sizes differ depending on whether the sample is small or large and whether the design is non-experimental (matching) or randomized. The average effect size for a randomized evaluation on a large sample is 0.11 S.D., compared to 0.32 S.D. for a matching-based evaluation on a small sample. He suggests effect sizes therefore need to be “graded on a curve”, with what constitutes big depending on the method of evaluation and the size of the sample. (Although also recall our posts on the problems of using S.D. to compare effect sizes in the first place.) A back-of-the-envelope calculation of an effect size in S.D. units is sketched below.
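As a refresher on the units, an effect size here is just the treatment effect scaled by the standard deviation of the outcome, typically measured in the control group. A minimal sketch (the variable names are hypothetical):

```stata
* Hypothetical sketch: effect size in S.D. units is the treatment
* coefficient divided by the control group's outcome standard deviation
regress test_score treatment
summarize test_score if treatment == 0
display "Effect size (S.D. units): " _b[treatment] / r(sd)
```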
I’ve been asked several times what I think of Alwyn Young’s recent working paper “Channelling Fisher: Randomization Tests and the Statistical Insignificance of Seemingly Significant Experimental Results”. After reading the paper several times and reflecting on it, I thought I would share some thoughts, with a particular emphasis on what I think it means for people analyzing experimental data going forward.
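For readers who want to see what randomization inference looks like in practice, here is a minimal sketch using Stata's built-in permute command. The variable names are hypothetical, and this illustrates only the generic Fisher-style permutation test, not the more refined procedures Young implements:

```stata
* Minimal sketch of randomization inference: permute reshuffles the
* treatment variable, re-estimates the regression each time, and builds
* the permutation distribution of the coefficient under the sharp null.
* Variable names (y, treatment) are hypothetical.
set seed 12345
permute treatment _b[treatment], reps(1000): regress y treatment
```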
- “Our working title was all measures suck, and they all suck in their own way” – Angela Duckworth, as quoted in a NY Times article on efforts in the US to start evaluating schools on socio-emotional skills like grit.
- The New York Times magazine has an interesting piece on what Google has learned about why some work groups thrive and others don’t. It points to the importance of psychological safety – a “shared belief held by members of a team that the team is safe for interpersonal risk-taking”. I thought this was useful both for thinking about collaborative research teams and for the discussion of the challenges and options for measuring group attributes.
- On the Africa Can blog, Dave Evans recaps the Cash vs Training Smackdown
- Chris Blattman on a new paper which questions the Science paper finding that most studies in psychology don’t replicate: the basic issues are that in many cases the replications changed the intervention or the sample pool, and that many were underpowered; the authors appear to concede in their response that these issues are present, but push back on how important they are. Wired has a good discussion of the work, noting “Two groups of very smart people are looking at the exact same data and coming to wildly different conclusions.”
So you’ve designed an awesome impact evaluation, you’ve carried out a rich baseline survey, you’ve presented the baseline results to the government of Brigadoon, and now you…wait two years until the follow-up survey? What else can you do with this baseline data? You can do a lot! You can write a report, you can write a brief, you can publish papers, you can test targeting strategies (a sketch of this appears below), and you can even [drumroll] affect policy.
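On the targeting point, for example, baseline data let you ask how well a cheap proxy-means test would identify the households you actually want to reach. A hypothetical sketch, with all file and variable names invented for illustration:

```stata
* Hypothetical sketch: use baseline data to compare proxy-means-test
* targeting against measured consumption poverty.
* All file and variable names are invented.
use baseline_survey, clear
* Predict consumption from assets that are easy to verify
regress log_consumption has_radio roof_quality hh_size
predict pmt_score
* Flag the bottom 30 percent under each criterion and compare
xtile pmt_decile  = pmt_score,       nq(10)
xtile cons_decile = log_consumption, nq(10)
generate pmt_poor  = pmt_decile  <= 3
generate cons_poor = cons_decile <= 3
tabulate pmt_poor cons_poor, cell
```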