with guest bloggers: Naureen Karachiwalla (IFPRI), Travis J. Lybbert (UC Davis), Hope Michelson (Illinois), Joaquin Sanabria (IFDC), James Stevenson (CGIAR SPIA), and Emilia Tjernstrom (Sydney)
In our first Devil in the Details post, we sounded a cautionary note on fertilizer quality measurement, which can be hampered by pesky but crucial calibration problems that can yield misleading test results. Measuring quality matters because it helps to establish whether farmers’ perceptions of fertilizer quality are accurate. This, in turn, is critical to understanding whether quality concerns drive the low input use associated with low agricultural productivity. Here, we shift our focus to seeds – another agricultural input that has attracted the gaze of economists. Farmers are widely concerned about seed quality as well, but measuring it (whether by farmers or by researchers) is fraught with complexities of analysis and interpretation. At the end of this post, we provide some guidance for economists using lab-based measures.
Seeds in 3D
Fertilizer testing is complicated, but at least the underlying concept of quality is clear and often unidimensional (e.g., nitrogen content in the case of urea). Assessing quality is much thornier for seeds, since seed quality encompasses several dimensions. The most important, according to the International Seed Testing Association (ISTA), are analytical purity, germination, and varietal purity. While specific standards and tolerances vary by crop and country, well-established ISTA testing protocols apply.
Analytical purity refers to the percentage of a given sample that is seed of the correct species (as opposed to weed seeds or other non-seed debris). The germination rate indicates the proportion of seeds that germinate based on standard test protocols. It is varietal purity where things get really interesting – and complicated – and where a bit of science fluency can avoid some interpretation pitfalls.
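To make the first two dimensions concrete, here is a minimal sketch in Python (ours, not an ISTA tool) of how the headline numbers are computed. The function names, replicate counts, and component weights are all hypothetical, though ISTA germination tests are typically run on replicates of 100 seeds.

```python
# Hypothetical illustration of the two simpler quality dimensions.
# Germination is scored over replicates of seeds; analytical purity is
# computed from component weights of a working sample.

def germination_rate(germinated_per_replicate, seeds_per_replicate=100):
    """Percent of seeds producing seedlings, pooled across replicates."""
    total = sum(germinated_per_replicate)
    n = seeds_per_replicate * len(germinated_per_replicate)
    return 100.0 * total / n

def analytical_purity(pure_seed_g, other_seed_g, inert_matter_g):
    """Percent by weight of the working sample that is seed of the correct species."""
    total = pure_seed_g + other_seed_g + inert_matter_g
    return 100.0 * pure_seed_g / total

# Made-up example: four replicates of 100 seeds; a 24.8 g working sample.
print(germination_rate([92, 88, 90, 91]))   # -> 90.25
print(analytical_purity(24.3, 0.1, 0.4))    # -> ~97.98
```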
Varietal purity indicates whether the seed is in fact the variety it is supposed to be. That matters because the whole rationale for a farmer to buy certified seed of a particular variety (and likely pay more for it) is for that seed to have the traits they want. And yet, while analytical purity draws heavily on visual inspection, “it is not usually possible to accurately determine variety on the basis of visual examination of seeds” (Elias et al., 2012). In most countries, a government entity is charged with inspecting seed producers’ fields as a way to assess how seed is being produced, but this does not constitute a direct test of the varietal purity of the seeds produced.
The only reliable way to assess the varietal purity of a sample is DNA fingerprinting. An expanding set of private-sector laboratories offers this kind of specialized testing in rich countries. While such testing has yet to be mainstreamed into inspection elsewhere, researchers – including development economists – are tapping DNA fingerprint tests in the hope of shedding new light on seed quality in Africa.
Varietal purity – a devil to measure
To illustrate, we return to Uganda, the site of the fertilizer-test debate in the previous post. While its fertilizer quality problems may be more illusory than real, Uganda has a seemingly better-established problem with seed quality. Indeed, Uganda’s 2018 National Seed Policy stated that an estimated 30-40% of seed traded in the market is “counterfeit.” But how would we really know, given that this would require varietal purity tests, which are rarely done (well) in practice?
Since Barriga and Fiala, in their recent study of seed quality in Uganda, do not have access to samples of breeders’ seeds of the varieties in question, they cannot quantify the varietal purity of their samples. Instead, they report how genetically similar a bulked sample of seed is to itself, finding low levels of variation on average for this distance measure. But within-sample homogeneity is only an indicator of varietal purity if the seed is also what it is purported to be – and this should be tested rather than assumed. Without the genetic reference material required for that test, a genetically homogeneous sample of a given hybrid maize variety could be something else entirely, including an open-pollinated maize variety or (worse) maize grain masquerading as seed.
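To see why within-sample similarity alone cannot establish varietal purity, consider this toy Python sketch. The genotype vectors and mismatch-based distance below are hypothetical simplifications of what fingerprinting labs actually compute, but the logic carries over: a bulked sample can be nearly uniform and still be far from the breeder reference for the variety on the label.

```python
# Toy illustration: SNP genotypes coded as 0/1/2 allele counts at five markers.
# Real fingerprinting panels use hundreds to thousands of markers.

def distance(g1, g2):
    """Simple allele-mismatch distance between two genotype vectors, in [0, 1]."""
    return sum(abs(a - b) for a, b in zip(g1, g2)) / (2 * len(g1))

def mean_pairwise_distance(sample):
    """Within-sample homogeneity: average distance over all pairs of seeds."""
    pairs = [(i, j) for i in range(len(sample)) for j in range(i + 1, len(sample))]
    return sum(distance(sample[i], sample[j]) for i, j in pairs) / len(pairs)

# Hypothetical bulked sample: genetically nearly uniform...
bulked_sample = [[0, 2, 1, 0, 2], [0, 2, 1, 0, 2], [0, 2, 1, 1, 2]]
# ...but the breeder reference for the labeled hybrid looks quite different.
breeder_reference = [2, 0, 1, 2, 0]

print(mean_pairwise_distance(bulked_sample))  # ~0.07: homogeneous sample
print(sum(distance(g, breeder_reference) for g in bulked_sample) / len(bulked_sample))
# ~0.77: homogeneous, yet not the variety on the label
```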
From experience, we have seen how even a world-class genotyping lab can be hamstrung without a genetic point of reference, which for hybrids requires access to seed produced directly from crosses of the relevant inbred lines under breeder supervision. Without reliable reference material, DNA fingerprinting fails to deliver meaningful insights on varietal purity. Getting this reference library right requires painstaking, time-consuming work.
While applications of genotyping in economics are indeed exciting, it is a far more demanding tool than it first appears, particularly for hybrid maize. It is no coincidence that the first uses of DNA fingerprinting in development economics focused on cassava, a clonally propagated crop for which leaf samples from breeder collections are reliable reference material. Depending on the crop, then, measuring varietal purity – often a goal in our work – can be a substantial undertaking in its own right.
Reflecting back on the fertilizer post, we are not yet able to ascertain to what extent farmers should be skeptical about the true traits of the seeds on the market. In this respect, seed quality measurement is a few steps behind fertilizer.
Reflections on the ‘Devil in the Details’
Interpreting results from new sources of data often requires specialized expertise – skills that typically lie beyond a development economist’s standard toolset. It’s tempting to treat new methods as plug-and-play extensions of familiar research designs, but the reality behind many new measurement technologies is likely more complex and nuanced than it first appears. We have a few broad suggestions on how to avoid the pitfalls when applying these new methods.
First, researchers should thoroughly describe the protocols they use, with comparisons to standard practice in the “home discipline” of the measurement. For example, since most reputable labs test a given sample twice as part of their protocols, researchers should document and report the analytical error (the within-sample variance in measurements); a minimal sketch of such a summary follows below.

Second, referees and editors should expect to see detailed descriptions of measurement methodologies that are new to development economics (at least in an annex, and appropriately referenced). If a referee lacks the expertise to evaluate these methods, they should make this clear in their report and suggest that a qualified and critical evaluation be conducted. While economics editors cannot know expert reviewers in every possible “home discipline,” it seems reasonable to expect authors to seek out experts in that field and nominate a few professionals to evaluate protocols and interpretations.
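As an illustration of the first suggestion, here is a minimal Python sketch (with made-up sample names and numbers) of the kind of analytical-error summary we have in mind when a lab runs duplicate tests on each sample.

```python
import statistics

# Hypothetical duplicate lab measurements (e.g., germination %) per sample.
duplicates = {
    "sample_01": (88.0, 91.0),
    "sample_02": (95.0, 94.0),
    "sample_03": (72.0, 79.0),
}

# Pooled within-sample variance across duplicates: one simple summary
# of analytical error that a paper (or its annex) could report.
within_var = statistics.mean(
    statistics.variance(pair) for pair in duplicates.values()
)
print(f"Pooled within-sample variance: {within_var:.2f}")
print(f"Analytical SD: {within_var ** 0.5:.2f} percentage points")
```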
We have come to appreciate how much can be lost in translation between data collection, laboratory analysis, and interpretation by social scientists like ourselves. Without sufficient expertise (and humility), misunderstandings can easily arise. For these new measurement tools to lead to useful new insights, we need close collaboration with experts from other disciplines and a sustained effort to understand new techniques – as well as their sometimes-devilish details.