Using fieldwork to ask better questions

By Maira Reimao
Evaluate the following statements according to whether they are “not at all true”, “hardly true”, “moderately true” or “exactly true”:
  • I can always manage to solve difficult problems if I try hard enough.
  • I am confident that I can deal efficiently with unexpected events.
  • I can solve most problems if I invest the necessary effort.
  • I can usually handle whatever comes my way.

If, after reading the statements above, you were a little confused and found your eyes going back and forth between them, trying to figure out how they differ, you are not alone. When we tested these and similar survey questions on women in rural Guatemala, we found that they not only confused our respondents but may also have deflated them.

The statements above are part of a self-efficacy scale that has been applied in over 20 countries, and we originally included them in a questionnaire exploring the effect of male migration from rural Guatemala on women’s agency, household welfare, and agricultural productivity. We used their Spanish version, which has been applied and validated in various countries, including Costa Rica, Mexico, Peru, and Spain.  (Note: In most instances the scale was applied to university students or in urban areas, where education levels are generally much higher than those found among rural women in Guatemala.)

But before starting the data collection, we made site visits, conducting focus groups and observing pre-tests of the questionnaire on rural women in Guatemala. Through this effort, it quickly became obvious to us that this scale would not work for our target group. Though they were able to answer questions regarding their partner’s migration, household income, and assets without difficulty, these same women stumbled during this part of the questionnaire, offering answers that were very tentative at best – and often seemed like no more than guesses.

We share this story here not to criticize the scale itself, which is well established and validated, but to highlight the importance of a careful pre-test. During our field visit, we saw respondents really struggle with the self-efficacy statements but also observed that the enumerators were ultimately able to elicit responses from each of them. Had we not seen the respondents’ difficulty with this scale, we would have received a dataset that included answers and scores for each of the statements and would have analyzed the data and its correlations to our hearts’ content. But the pre-test revealed that the scale was too confusing, and that women were rating the statements while unsure of their meaning. We chose to scrap those questions entirely; what good are answers when the questions aren’t fully understood?

Our field visit also led us to expand the response options to questions like “who decides on the household budget?” or “… on the amount your household will spend on food?” While we started with the standard array of options, including the respondent, the respondent’s partner, and the respondent along with her partner, our focus groups revealed another common answer: “the respondent, immediately informing her partner”. This arrangement is distinct from the others; women who described this level of decision-making explained that, while they made the decisions, they were not completely in charge of them, as they still had to inform their partners. This finding allowed us to improve the response options in our questionnaire and collect richer information. As we were particularly interested in the decision-making process within households with migrants, this change enabled us to learn more about the topic and the level of agency that women with migrant partners have relative to other women.

Of course, pre-testing a questionnaire and conducting focus groups are not radical or new ideas. But pre-testing is frequently left to the local enumeration company, and tends to focus on issues with sequencing, logic, technical terms and definitions, or identifying questions that are completely incomprehensible. It is not clear that the problems we faced in applying a standardized scale would have been picked up through this process alone, particularly since the respondents ultimately provided answers. And focus groups are often relegated to the end of the research project, as a tool for interpreting quantitative findings, ignoring their value in informing the questionnaire design itself. Tellingly, the enumeration company we used was surprised by our plans and revealed that, in their many years of experience, they had never had researchers visit before survey data collection.

In development economics research, we may say that it is important to understand the context and the subjects – but, in practice, this is often limited to post-collection data interpretation. Field visits carried out early on can improve the survey instrument, ensuring that questions and answers are appropriate and delve deep into the topics of interest. Our pre-tests in rural Guatemala allowed us to add detail to the response options regarding household decision-making, while also saving us from working with – and maybe even publicizing the results from – a set of data full of numbers but no real meaning. Importantly, it also spared hundreds of respondents from having to rate statements that were utterly confusing to them, an experience that in itself could have been disempowering – the opposite of what we set out to measure.

This work was conducted with support from the Umbrella Facility for Gender Equality and the final report can be found here.
