This is part 2 of my conversation with Rachael Pierotti, who leads the qualitative work at the Africa Gender Innovation Lab. Part one can be found here.
MG: Tell me about methods. What is the one that you find the most useful in the work you do with economists?
RP: Which qualitative data collection method is most appropriate depends on the research question. Different methods produce different kinds of data, and for that reason, it is often a good idea to use more than one method in a study. Focus group discussions are good for investigating social expectations, areas of normative consensus or contestation, or the social dynamics of groups. They are NOT good for studying individual attitudes or behaviors. I like to think of focus groups as a performance space—if you want to observe what people say in front of other people, or what issues they are willing to debate publicly, or how social dynamics play out in a group, then focus groups can be a good idea. The goal of a good focus group facilitator is to get participants talking amongst themselves about the topic of interest. Asking entrepreneurs about their financial management practices in a focus group is a genuinely terrible idea.
To examine individual attitudes and behaviors, one-on-one in-depth interviews make more sense. Individual interviews are usually my starting place, and then I build from there, either by doing repeated interviews, or adding other complementary methods. There are a range of participatory methods that can be useful if you want to give the research participants the opportunity to express what is important to them. For example, in a recent study we wanted to learn about the daily lives of the young women working in one of Ethiopia’s industrial parks, so we gave them cameras and asked them to take pictures of their life outside the factory. We collected the photos and identified some that could be used as fodder for conversation in a subsequent interview where they were given an opportunity to explain what was in the photo and why it mattered to them.
Key informant interviews are meant for capturing information from officials—not information about themselves, but information about a topic they know well. In the Uganda land titling study that I mentioned in the last post, we conducted interviews with members of the Local Councils who have land administration and dispute resolution authority. We chose to do key informant interviews with them to assess whether they would represent women's land rights in the same way as the rural residents whom we had interviewed.
MG: So with these methods, how do you know if you have enough observations?
RP: Am I allowed to admit that I don’t have a good answer to this question? The textbook answer is that you continue collecting and analyzing data until you’ve reached “saturation,” meaning that you aren’t learning anything new from additional data. That is fine in theory, but it doesn’t work when you have to write a terms of reference for a research firm. Instead, I spend some time thinking about how much variation I expect to hear from my research participants, make a best guess of a reasonable sample size (more expected heterogeneity = bigger sample), and insert a clause in my terms of reference indicating that the data collection methods may need to be adjusted along the way. I will say that it is possible to have too much qualitative data. Data analysis requires attention to each source of data as an individual case, and that is very time consuming.
MG: Sorry, I interrupted – tell me more about methods!
RP: A final data collection method that I want to mention is observation, which is commonly used in ethnographic research. Observation is exactly what it sounds like—watching a process or an interaction—and it can be critical to understanding how an intervention actually happens in practice and how people respond in the moment. Humans are not great at providing full and factual accounts of their own or other people’s behaviors. An agricultural extension agent who has recently been told that s/he needs to reach more women may report success if a greater number of women than usual attend a training event, and the agent either may not notice or not care to report that the women were not paying any attention because they were uninterested in the topic. Someone observing the training event would be better placed to take note. That detail can be important for interpreting the downstream estimates of impact of a training for extension agents. In general, observation can be a really important tool for investigating the mechanisms linking the intervention with the outcomes measured in an impact evaluation. Unfortunately, observation is also very difficult to implement through research assistants. It only works when the research assistants have been sufficiently trained by the lead researchers to know what information is relevant. That is not easy to do.
MG: So I noticed you mentioned ethnographic research. Can you say more about that?
RP: Essentially, ethnographic research means studying people or interactions holistically and without the direct intervention of the researcher, to the greatest extent possible. In contrast, for data collection methods like interviews or focus group discussions, the researcher convenes and directs the conversation. Angotti and Kaler (2013) studied how Malawians talk about HIV testing and found that the degree to which responses conformed to officially sanctioned rhetoric about the value of testing depended on the degree to which participants were reminded that they were part of a research project, or their “research awareness.” When selecting research methods, this potential form of desirability bias is something that I consider.
MG: When talking about observation, you alluded to a practical challenge of doing qualitative research. How do you go about data collection?
RP: Because I am not an academic researcher, I do not have the luxury of spending years doing data collection myself (not to mention learning all of the relevant languages). That is likely to be true of almost all qualitative research led by people outside of academia. That means I work with local research assistants, usually with a wealth of contextual knowledge, who use the methods I've mentioned to do the asking and observing for me. For truly in-depth qualitative research, the questions asked in interviews are rarely completely standardized. The quality of the data depends in large part on the ability of the research assistants to identify which aspects of participants' responses are worth additional probing. Moreover, it is often not what a study participant says, but what they do not say, or what they take for granted, that can be most interesting. This makes the role of qualitative research assistants vastly different from that of survey enumerators. Quantitative enumerators should ask everyone the same questions the same way. Qualitative research assistants should do exactly the opposite; they should follow up on the specifics of each case. Qualitative research assistants must have extremely good listening skills and strong analytical capacity.
Under these circumstances, the best qualitative research will involve an iterative process of data collection and analysis. After a round of data collection, it is important to pause and analyze what you have learned and, thereby, identify the subsequent questions that need to be answered. Engaging the research assistants in this process is critical to increasing their ability to correctly anticipate which questions you would want asked in each interview or what behaviors you would want recorded in an observation. This type of iterative process is also important because the goal of qualitative research is to understand a phenomenon from the perspective of the study participants, not from your perspective as a researcher. That likely means that you will not know the most useful questions to ask at first. The questions on your data collection instruments should get increasingly refined as you proceed. This happened in the study of land titling in Uganda. In the first set of interviews, we kept hearing that men in unstable marriages do not put their wife’s name on the land title, so we added more questions about marital quality to subsequent interviews.
MG: Do you ever involve the quantitative researchers you are working with in this iteration?
RP: Yes, sometimes, although probably not enough. We had a productive qual/quant exchange that informed qualitative research that accompanied an impact evaluation of a public works program in Central African Republic. The first phase of qualitative data collection was meant to broadly capture the life experiences of program participants, since we had so little information about the environment. By the time we were designing the second phase, the estimates from the impact evaluation were showing heterogeneity in treatment effects by baseline poverty level. So, in the subsequent round of qualitative data collection, we purposively sampled more and less poor participants and tailored our questions to try to understand the pattern observed in the quantitative results.
MG: Thanks. Tell me more about what this iteration is good for.
RP: As I just mentioned, the iterative process allows flexibility in sampling, which can also be important for following up on themes that emerge from early qualitative interviews and observations. In a study of agricultural labor for smallholder farmers in Nigeria, for example, in the first few rounds of data collection we noticed that intrahousehold negotiations about labor allocation seemed different for women who had off-farm income generating activities. In the next few rounds of data collection, we specifically targeted women with off-farm activities to test our emerging hypotheses. I’ve found that these kinds of iterative designs, plus triangulating findings by using multiple data collection methods, are the best ways of ensuring data quality and new insights from the qualitative research.
There are several important implications of this style of qualitative research, where questions, sampling, and even methods are adjusted throughout data collection. Because not all study participants answer the same questions, and the sampling is purposive instead of random, it is problematic to quantify the qualitative data. That is often frustrating to quantitative researchers, who want to know what proportion of the sample reported X. This iterative style of research is also time consuming. The best research assistants will take a full day to transcribe just one interview. Also, even preliminary (not systematic) analysis of qualitative data is a slow process. I find it difficult to attentively read more than four interview transcripts in a day. Altogether this means many months of data collection. Last, but certainly not least, qualitative field researchers should be more highly paid (and ideally more highly qualified) than quantitative enumerators.