10 years is a decade. 1,500 blog posts is a lot. We thought we’d reflect a little on how this has all gone since we had the idea of starting the blog in 2011.
The first thing we want to highlight is the quality of our readership. In the early days, we not infrequently received comments, including from guest bloggers, along the lines of “you don’t get a lot of comments, but the ones you get are very high quality” (in the first year we received 115 comments on 179 posts; last year it was 90 comments on 123 posts). This was true from the very beginning: just check out the set of commenters on Berk’s first blog post, “Is there an ‘unmet need’ for birth control?”, which reads like a ‘Who’s Who’ of experts on development economics and family planning. And each comment is like a blog post in and of itself.
We’d like to think that our conception of the blog had something to do with it. We made a few conscious decisions:
· We would blog frequently (that’s why we started as a team of four rather than as an individual or two). Along with Markus Goldstein, who has been a constant collaborator from day 1, we benefited from Jed Friedman and Dave Evans spending time as regular bloggers, and Florence Kondylis and Kathleen Beegle have since joined.
· We would avoid cutting and pasting the abstract of a paper, with a one-sentence intro, and counting that as a blog post (other than occasionally in David’s now very popular Friday links). We would write deeper summaries instead.
· We would engage the readers who left comments, meaning we would monitor comments and try to respond to them in a timely fashion, as well as respectfully.
We think that the pent-up demand for a blog like ours, which tackled issues in a rapidly growing development economics field, combined with some effort early on, contributed to building a small but committed niche readership, which, even today, has only about 6,000 subscribers. We could not keep up the early pace of four posts a week (one per blogger per week), and reached the current equilibrium of two posts per week (plus the weekly links), a schedule we try to, but cannot always, honor.
Nowadays, there is a lot more discussion on Twitter, which has its pros, but the big downside is that those comments are ephemeral and not easily retained for others to come back to. In contrast, comments that enhance the post and are left in the comments field become a resource for others coming across the post years later. For a more recent example, see the helpful Q&As over multiple years on this 2017 post on doing randomization inference in Stata. So, please do continue to leave comments.
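For readers who land here before reading that post, here is a minimal sketch of the kind of randomization inference it discusses, using Simon Heß’s user-written ritest command. The variable names (y, treat, strata_id) are placeholders, not from any particular study:

```stata
* Install the user-written command once:
* ssc install ritest

* Randomization inference: re-randomize the treatment indicator 1,000
* times (within strata) and compare the actual coefficient on treat to
* the resulting permutation distribution.
ritest treat _b[treat], reps(1000) seed(546) strata(strata_id): ///
    regress y treat i.strata_id, robust
```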
Another thing that we apparently adopted early on, which surprised both of us when we were looking back through posts from 2011 and 2012 last week, is the “Blog Your Job Market Paper” series. Looking at that year’s guest posts (here and here), one sees a lot of familiar names who went on to become part of the development community. You can see our reflections on that first year’s experience here. This continues to be one of the highlights of our year, and we love the chance to get a first look at so much new research.
The issue of freedom of speech comes up often in questions to us – both from people asking us about the blog and from researchers we are trying to hire. Rightly so, the latter group is particularly worried about whether they can freely disseminate the findings from their research, however politically or otherwise sensitive those findings might be. Over the years, we have disagreed with our directors and tried to address bad takes on impact evaluations, whoever the source: a take on the Millennium Villages Project; on job creation from the IFC; on whether IMF research is biased; on the bite-sized “factoids” about gender and development that are seriously wrong; on IEG evaluations of ICT; and, more recently, on cash transfers and GiveDirectly. The response to the last example by Haushofer and Shapiro, as well as the comments that follow it, is a good example of debating the nitty-gritty of research findings in public – vigorously but respectfully. Bill Easterly offered a warning early on about what one can infer about censorship from seeing only what we post, and not what would have been blocked had we chosen to blog about it. This is a good point: we can only say that we believe steadfastly in following the data and the evidence that can spur debate in the development community about important topics. Sometimes our posts are on sensitive issues and make people uncomfortable. But, just like our Policy Research Working Paper Series, we believe that the role of research, and hence of a blog that discusses research, is to ask such difficult questions and allow people to debate them, and we try to act accordingly.
We have been drawn to some topics from the start, consistent with the stated goal of the blog: surveys and survey methods; the ethics of randomized experiments; and a multitude of issues surrounding (quasi-)experimental study design – most of which we have collected into our curated links on methodology and survey design. One of the early themes we noticed was how many of these issues were covered neither by the standard textbook discussions (e.g., we covered early on how to do power calculations for non-experimental methods like PSM, and then later for RDD) nor by what gets written up in final research papers. On some of those methodological issues, it has been great to see progress: for example, when Winston Lin guest blogged about regression adjustments in randomized experiments (Part 1 and Part 2), or when Berk wrote this post about David’s work on ANCOVA vs. difference-in-differences estimation, researchers definitely did not accept ANCOVA as the default way to analyze data, including from RCTs. It is nice to see that ANCOVA (and Lin’s recommended way of analyzing RCT data) is much more common now in development papers (and, of course, there has been huge interest in doing diff-in-diff better outside of RCTs in the past 2-3 years).
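For readers new to that debate, here is a minimal Stata sketch of the two estimators being compared, with hypothetical variable names (y0 for the baseline outcome, y1 for the follow-up outcome, treat for random assignment):

```stata
* ANCOVA: regress the follow-up outcome on treatment, controlling for
* the baseline outcome; typically more powerful when the outcome's
* autocorrelation is low.
regress y1 treat y0, robust

* Difference-in-differences: regress the change in the outcome on
* treatment, which amounts to constraining the coefficient on the
* baseline outcome above to equal 1.
generate dy = y1 - y0
regress dy treat, robust
```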
We have also had our blind spots – not a lot of posts on the environment, climate change, infrastructure, trade, and so on, though this is slowly changing. In our 2011 survey of which topics are under-researched, we did not even list environment/climate change as one of the options! Some of our posts suggested ideas that never went anywhere (like this early post on trying to get more collaboration between those implementing large surveys and grad students eager to add questions, or a rather failed attempt to get more people to share their impact evaluation failures). Some other early methods posts have not aged well because methods have improved – for example, this 2011 post on power calculations with incomplete take-up is now superseded by this 2019 post that incorporates the role of treatment heterogeneity. Better still is when our readers have improved on our early posts. For example, in this early post on doing stratified randomization with uneven numbers in some strata, we discussed the issue and gave some clunky solutions, and a reader (Alvaro Carril) then developed a Stata command, randtreat, that we now use regularly (see the sketch below). Please continue to correct us as needed and help us keep building a community of knowledge sharing.
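As a pointer, here is a minimal sketch of how one might use randtreat for that stratified assignment problem. The strata variables (region, urban) are hypothetical, and Carril’s help file documents the full set of options for handling “misfit” observations:

```stata
* Install the user-written command once:
* ssc install randtreat

* Assign treatment and control within strata; leftover "misfit"
* observations that don't divide evenly within a stratum are pooled
* and randomized globally across strata.
randtreat, generate(treatment) strata(region urban) misfits(global) setseed(20211)
tabulate treatment
```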
Some early posts were more spot-on: David’s post on the double standard in external validity remains a classic (in Berk’s view). Markus’ post on the usefulness of early multi-day workshops with stakeholders to build an impact evaluation remains a model for how many interesting projects get designed and gain buy-in. And Berk’s post on misinformation and its consequences discussed ‘fake news’ before it was a thing, and seems to have unwittingly predicted how incomplete and/or wrong takes would get propagated on Twitter and Facebook, even though he quaintly worried about this issue for blogs.
Berk’s post on working papers not working, which remains his most popular and whose first paragraph he considers his best writing, reflects perhaps not extreme cases of results changing, but the really interesting content that comes through during revisions of a paper. It would be great to have this flagged more, given all the work we put into revisions – perhaps an “if you’ve read my working paper, here’s what’s new in the final version” add-on? Things have also improved on this front, with many journals having faster turnaround times and options for short papers. That post incidentally also mentioned the possibility of trial registration in economics, something that predates the AEA RCT registry and the vigorous debate on the value of pre-analysis plans, which were uncommon 10 years ago.
When we started the blog, one of the things we were asked was whether there would be enough content to keep it going. Some weeks we are definitely more inspired than others, but it has been great to have the continued questions, support, spirited debate, and encouragement of our readers, and to have a chance to share some of what we are learning more broadly. As Markus memorably put it in his fitting early post on the ups and downs of the roller-coaster of impact evaluation, “here we go for another loop…back to work!”
Readers: are there particular posts that you find yourselves repeatedly going back to? Others that you wish we would update, or can’t believe we ever wrote? Let us know (maybe even in the comments!).