Published on Development Impact

What I learned from looking at what was published recently in top journals in sociology and political science


Thanks to everyone who provided nice comments as we celebrated a decade of the blog. One comment, made by Saad Gulzar, was “Thank you for running it! One suggestion for the next ten years: more engagement with development research in political science!” It is a great suggestion, but we naturally tend to write on things we know something about, or issues we face in our day-to-day work, and so while we could learn a lot from these other fields, I’m not sure we are the best people to be critically discussing many of those papers. That said, when has lack of knowledge prevented me from opining on something? So as a first attempt, I browsed through what has been published in the last year in a top journal in each of sociology and political science, and thought I’d share what I learned and what looked interesting.


The American Journal of Sociology seems to publish relatively few articles (around 5 per issue) and heaps of book reviews. I didn’t see anything particularly interesting for development among the articles in the last four volumes, but there were lots of interesting book reviews (each is about 2-3 pages, so easy to browse through). Among the book reviews, Mary Godwyn provides a review of Sarah Babb’s “Regulating Human Research: IRBs from Peer Review to Compliance Bureaucracy”. For anyone who wonders why university IRBs are often so bureaucratic and not really helpful to researchers, this sounds like a useful explanation: “Sarah Babb describes the organizational transformation of institutional review boards (IRBs), which were once composed of faculty researchers concerned with interpreting ambiguous and confusing ethical dilemmas and are now composed of bureaucrats concerned with interpreting ambiguous and confusing government regulations.” The reason given is in part the lack of capable state regulatory agencies in the U.S. compared to France and the U.K., leaving a lot of confounding and at times contradictory regulations. “One unintended and potentially dangerous consequence is that bureaucratic labor stresses form over function… IRBs, like the financial sector, are subject to federal government audits that rely on prodigious documentation… By stressing elements such as the constitution of committees, adherence to local policies, and consideration of mandated criteria, the treatment of human subjects fades into the background. Audits, then, create an environment of compliance in form not substance.” There is also discussion of how financial incentives can matter: “Babb explores the consequences of applying the procrustean bed of IRB rules to all research involving human subjects, biomedical and social science alike. Further, since biomedical research is more likely to be funded, for-profit IRBs are incentivized to approve these experiments. Unfunded social science and humanities research, which is less likely to have the same potential for serious injury to human subjects, might be more scrutinized by IRBs simply because there is not the same financial incentive for approval.” The conclusion seems to be that the system we have is suboptimal, but better than nothing, and Godwyn critiques the book for not offering more of a pathway to improved IRBs.

Other book reviews that caught my eye as interesting for development were: one on the role of beauty pageants in allowing “emerging nations to showcase their progressive values and readiness to become part of a global market economy, even though tensions surrounding national unity linger and remain unresolved”; one on collective remittances and Mexico’s 3x1 program, which makes the book sound interesting, although the review itself doesn’t convey much new insight; and this review of a book on making global MBAs, which had some lovely lines in it: “MBA students are swamped with information and scheduled to within an inch of their lives, and the point is for them to learn how to process information, to quickly separate critical facts from irrelevant detail even when subject to severe time pressure, and then to make a decision and move on. One trick is to rely on shared heuristics, conveniently embodied in tidy aphorisms, mnemonics, and alliterative acronyms (e.g., BRICS and the four Ps of marketing). Welcome to the life of a busy and important CEO” and “Business pedagogy is famously reliant on case studies. These stylized documents are based on proper research of actual events (typically done by a business professor managing an army of research assistants), but functionally they are close to Aesop’s fables: each case contains a small number of important lessons that can be teased out, with carefully orchestrated spontaneity, in an MBA classroom discussion. The professor manages the conversation, knowing where it is supposed to go, and when the class gets there the moral of the tale is recognized and summarized and the class is congratulated for having arrived at wisdom (p. 188). As MBA programs became more globally oriented, so did the case studies.”

None of the articles or book reviews in my quick look were focused on causal inference methods or emphasized causality in their title/abstract. My sense is that few of these articles will be natural things to discuss on our blog, but as the above shows, there is still a lot that can inform the design of, or thinking about, research on development.

Political Science

I chose the American Political Science Review to browse through. A lot of the papers here look similar to what we might see in econ journals. Here is what caught my eye in a first quick look:

·       Clifford et al. on using repeated measures to increase precision in estimating treatment effects – a topic dear to my heart. Their setting is survey experiments, where they note that there is often a reluctance to collect baseline measures because the follow-up is so soon afterwards, and there is a fear that “measuring outcomes pretreatment may alter the inferences drawn from an experiment.” They test whether this concern is warranted by conducting six experiments (with students or MTurk populations) that randomly assign participants to different designs (post-only, pre-post, and a quasi design that asks ex ante about a similar, but not the same, outcome). For example, one study replicates a study on foreign aid, where you ask whether foreign aid spending should be increased or decreased, with a treatment group getting told foreign aid makes up less than 1% of the federal budget – one set of subjects would only get asked their attitude to foreign aid once (post), while another set would get asked it first, then given treatment, then re-asked. They find that the pre-post design has clear precision gains over the post-only design and little impact on the magnitude of treatment effects, suggesting that fears about measuring outcomes pre-treatment may not be warranted in many settings. This seems the sort of paper you will love to cite because it gives justification for doing something that makes your study more powerful.

·       Masterson and Yasenov on whether halting refugee resettlement reduces crime in the U.S. – this looks at Trump’s refugee ban of January 2017 and uses difference-in-differences on county-year data to compare crime in areas that had received a lot of refugees prior to the ban with those that had not. It is notable as a publication of a null result (they find small and statistically insignificant estimates). The paper is also interesting for seeing how DiD is discussed in another field. The parallel trends assumption is stated, but we are then just referred to an appendix for “formal tests”. Standard errors are clustered by state, with no discussion as to why this is the right level of clustering. Log of the number of crimes + 1 is used as one outcome. The appendix does have a lot of robustness and placebo checks, and would not look out of place in an econ paper, but it is notable that the strong emphasis economics now puts on visually illustrating difference-in-differences is not part of the main paper here (but is in the appendix).

·       If you think it is hard enough getting down to 6,000 words for an AER: Insights paper, the APSR also has “Letters”, which are a maximum of 4,000 words. These seem to be useful for succinctly reporting findings from experiments. For example, Williamson et al. report findings from three survey experiments showing that reminding respondents of the immigrant histories of their own families can increase support for more open immigration. The paper uses mediation analysis to argue that the mechanism seems to work through increasing empathy for migrants – this use of mediation analysis is still less common in econ papers, in part because of more suspicion of the assumptions underlying these methods. The short letter format also seems like a nice length for summarizing the results of a meta-analysis, as in this Blair et al. meta-analysis of 46 natural experiments that use difference-in-differences designs to analyze the impact of commodity price changes on armed civil conflict.

·       In another letter, Haim et al. use an RCT in the Philippines to show how building trust between government service providers and elected village leaders increased the probability that village leaders provide time-sensitive pandemic risk information critical to the regional Covid-19 Task Force by 20%. This is a good example of researchers quickly adapting an ongoing field experiment to the pandemic, and an illustration of how much faster publication can be than in econ. The work is in a conflict-affected area of the Philippines. Another thing that stood out was the discussion of the ethics of this work: “Recognizing the significant potential risks with conducting human subject research in conflict zones, as well as risks stemming from collaboration with state forces that have a history of abusive practices, we carefully evaluated the ethical risks and benefits of engaging in this study during a year-long due-diligence period that included two pilots. In the Research Ethics section of the appendix (section A.6), we outline the American Political Science Association’s 12 ethical principles for human subjects research, identify the potential risks in each category, and describe in detail how we addressed each one. Ethical considerations were a primary factor that motivated our sampling strategy, the nature of the intervention, and outcome measurement, for example.” The appendix discussion of ethics is 23 pages long and incredibly detailed, including discussion of how they thought about different trade-offs and design issues, how they changed design elements in response to feedback, etc.
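
As a rough illustration of the precision point in the Clifford et al. bullet above, here is a minimal simulation sketch (this is not their code or data; the sample size, pre-post correlation, and effect size are all made-up numbers) comparing a post-only difference in means with a regression that adjusts for a baseline measure of the same outcome:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, tau, sims = 500, 0.7, 0.2, 2000   # per-arm n, pre-post correlation, true effect (all hypothetical)

est_post, est_ancova = [], []
for _ in range(sims):
    treat = np.repeat([0.0, 1.0], n)
    pre = rng.normal(size=2 * n)                                    # baseline measure of the outcome
    post = rho * pre + np.sqrt(1 - rho**2) * rng.normal(size=2 * n) + tau * treat
    # Post-only design: simple difference in means at follow-up
    est_post.append(post[treat == 1].mean() - post[treat == 0].mean())
    # Pre-post design: regression of the outcome on treatment, adjusting for the baseline
    X = np.column_stack([np.ones(2 * n), treat, pre])
    beta, *_ = np.linalg.lstsq(X, post, rcond=None)
    est_ancova.append(beta[1])

sd_post, sd_ancova = np.std(est_post), np.std(est_ancova)
# With pre-post correlation rho, the adjusted SE shrinks by roughly sqrt(1 - rho^2)
```

With a pre-post correlation of 0.7, the baseline-adjusted estimator’s standard error comes out roughly 30% smaller than the post-only one, which is the sense in which collecting the baseline buys power.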
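
For readers less familiar with the mechanics behind the Masterson and Yasenov bullet above, here is a toy two-way fixed-effects difference-in-differences sketch on simulated county-year data (all numbers are invented and the true effect is set to zero to mimic their null result; this is not their specification, and it ignores the state-level clustering issue mentioned there):

```python
import numpy as np

rng = np.random.default_rng(1)
counties, years = 200, 6
treated = rng.random(counties) < 0.5        # hypothetical high-refugee counties
post = np.arange(years) >= 3                # hypothetical ban takes effect in year 3

# Simulated crime counts with county and year effects and a true effect of zero
county_fe = rng.normal(3, 0.5, counties)
year_fe = rng.normal(0, 0.1, years)
y = np.exp(county_fe[:, None] + year_fe[None, :] + rng.normal(0, 0.2, (counties, years)))
log_y = np.log(y + 1)                       # the log(crimes + 1) outcome used in the paper

# Two-way fixed-effects DiD via double demeaning (balanced panel)
d = (treated[:, None] & post[None, :]).astype(float)
def demean(m):
    return m - m.mean(0, keepdims=True) - m.mean(1, keepdims=True) + m.mean()
did = (demean(log_y) * demean(d)).sum() / (demean(d) ** 2).sum()
# did should land close to zero here, since the simulation builds in no effect
```

The double-demeaning step is just a compact way of sweeping out the county and year fixed effects before comparing treated and untreated counties pre- versus post-ban.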
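
And since mediation analysis comes up in the Williamson et al. letter above, here is a bare-bones product-of-coefficients sketch (a hypothetical data-generating process, not their data) that also makes visible the assumption doing the work:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
treat = rng.integers(0, 2, n).astype(float)
# Hypothetical DGP: treatment raises empathy (0.5); empathy raises support (0.4);
# treatment also has a small direct effect (0.1)
empathy = 0.5 * treat + rng.normal(size=n)
support = 0.4 * empathy + 0.1 * treat + rng.normal(size=n)

def ols(y, *cols):
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

a = ols(empathy, treat)[1]            # treatment -> mediator
b = ols(support, treat, empathy)[2]   # mediator -> outcome, holding treatment fixed
direct = ols(support, treat, empathy)[1]
indirect = a * b                      # the "mediated" (indirect) effect
# Key untestable assumption: no unobserved confounder of the mediator-outcome link;
# this is the source of the suspicion about these methods noted above
```

Here the indirect effect recovers roughly 0.5 × 0.4 = 0.2 by construction; in real data, an omitted variable driving both empathy and support would bias this decomposition, which is exactly why economists tend to be wary of it.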

So yes, Saad is right, there is definitely a lot of interesting development research going on in political science, and work that would be natural to discuss more on the blog. I’m still not sure whether I have as much value-added in discussing these papers as those that I know more about, and it is definitely more work, but we’ll think about how to try to cover this more than once a decade at least!


David McKenzie

Lead Economist, Development Research Group, World Bank
