External validity is a recurring concern in impact evaluation: How applicable is what I learn in Benin or in Pakistan to some other country? There are a host of important technical issues around external validity, but at some level, policy makers and technocrats in Country A examine the evidence from Country B and think about how likely it is to apply in Country A. But how likely are they to consider the evidence from Country B in the first place?
Development economists sometimes try to signal the external validity of their work by how they frame the evidence they present. For example, “Strengthening State Capabilities: The Role of Financial Incentives in the Call to Public Service” (anywhere!) sounds more generally applicable than “The Political Economy of Deforestation in the Tropics” (tropical countries only!), which in turn sounds more general than “Education and Human Capital Externalities: Evidence from Colonial Benin” (Benin … 150 years ago!).
To characterize the norms in this area, I drew a sample of more than 450 papers, published between 2010 and 2015, from six journals that publish applied economic development research, to see how common it is for authors to frame their evidence as general versus country-specific. Specifically, I examined empirical development papers that use evidence from one or two countries, with at least one of them being a low- or middle-income country (as of 2010). To get a range of publications, I looked at three general interest journals – the Quarterly Journal of Economics (ranked #1 among economics journals by simple impact factor), the American Economic Review (#10), and American Economic Journal – Applied Economics (#31) – and three development field journals – the Journal of Development Economics (#36), Economic Development and Cultural Change (#132), and World Development (#136). For the general interest journals, I used the universe of applied development articles (mid-2010 to mid-2015); for the field journals, I drew a sample from the same period.
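For readers curious what this kind of coding involves, a minimal sketch of the title check might look like the following. This is a hypothetical illustration under my own assumptions: the country list and example titles are placeholders, not the actual dataset behind the numbers in this post.

```python
# Hypothetical sketch of the title check (illustrative only): the country
# list and example titles below are placeholders, not the actual sample
# of 450+ papers coded for this post.

COUNTRIES = ["Benin", "Pakistan", "Indonesia", "Tanzania", "Kenya",
             "Bolivia", "China", "India", "Colombia"]

titles = [
    "Education and Human Capital Externalities: Evidence from Colonial Benin",
    "Strengthening State Capabilities: The Role of Financial Incentives "
    "in the Call to Public Service",
]

def names_country(title, countries=COUNTRIES):
    """Return True if the title names any country on the list."""
    lowered = title.lower()
    return any(country.lower() in lowered for country in countries)

flags = [names_country(t) for t in titles]
print(f"Share of titles naming a country: {sum(flags) / len(flags):.0%}")
```

A purely string-based check like this would miss adjectival forms (e.g., “Tanzanian”) and regional labels, so it is only a starting point for the kind of coding described above.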
Fact 1: The broad norm is to include the country of evidence in the title. More than two-thirds of articles (69%) do this. I believe in a degree of external validity (i.e., we can learn across contexts), but I also put weight on local evidence. Signaling the source of the evidence in the title of the paper is one way to make it easier for people to find local evidence.
But it may not be the best way to reach the broadest academic readership, per Fact 2.
Fact 2: Papers are much more likely to identify the country in field journals than in general interest journals. In fact, the higher ranked the journal, the less likely it is that the country is mentioned in the title.
Of course, this association does not indicate that including the country in the title has a causal impact on journal placement. Rather, it may well be that articles of more general interest are BOTH less likely to mention the country in the title AND more likely to get published in the top-ranked journals.
Fact 3: There is no simple relationship between a country’s income level and whether it appears in the title. Lower-middle-income countries are slightly more likely to have the country in the title than low-income or upper-middle-income countries, but as we’ll see next, this may well just be the China-India effect, as both fell into the middle group.
Fact 4: If the evidence is from the most populous countries (China and India), then authors do identify the country.
This is consistent with work by Das et al. showing particularly high research production and – potentially – interest in these countries: “The first-tier journals together published 39 papers on India, 65 papers on China, and 34 papers on all of Sub-Saharan Africa.”
Fact 5: Only a few papers fail to identify the source of the data in either the title or the abstract. The vast majority of papers that don’t have the country in the title identify the source of the data in the abstract.
The authors of those few papers are implicitly making a strong argument that the source of the data is irrelevant. For example, when authors present a model of technological learning and test it with a field experiment, but don’t reference the country, it is implicit that the results aren’t specific to Indonesia. Likewise, when a paper examines the relative roles of motivation, training, and knowledge in health care provision but omits the country of study from the title and abstract, this suggests that it doesn’t matter that this took place in Tanzania.
Of course, in both cases it probably does matter. Seaweed farmers in Indonesia may learn differently than sorghum farmers in Kenya, and health workers in Bolivia may have different weights on motivation versus knowledge. Not including the country of study even in the abstract seems to unnecessarily tax those who believe that context matters.
Conclusion: Obviously, the title is just one way that authors signal the general interest of their evidence. They also do so through argument and data in the abstract and throughout the paper. Although articles in top journals are less likely to have the country name in the title, note that even in those journals, more than half of applied development articles do so.
Authors can reference the source of the evidence and still publish well.
Bonus: Do economists think Africa is a country? For the most part, no. Out of 127 articles in the sample with applied work in a country of Sub-Saharan Africa, only 3 use evidence from a single country to stand in for Africa as a whole.