Over the past couple of decades, data collection efforts on women’s exposure to intimate partner violence (IPV) have improved immensely due to the inclusion of the domestic violence module in the Demographic and Health Surveys (DHS). Data collected through this module are crucial for informing anti-violence programs, policies, and advocacy efforts. In low- and middle-income countries (LMICs) in particular (see Figure 1), the DHS is the primary data source for monitoring the United Nations’ Sustainable Development Goals on the elimination of violence against women (UNSD, 2023). These data are also frequently used by researchers working on IPV in LMICs (e.g., Bhalotra et al., 2021; Corradini and Buccione, 2023; Díaz and Saldarriaga, 2023; Erten and Keskin, 2024; Guarnieri and Rainer, 2021; Pesando, 2022; Sviatschi and Trako, 2024).
Figure 1: Demographic and Health Survey (DHS) IPV Data Collection
Source: The DHS Program STATcompiler
Given the important and practical policy implications that emerge from the analysis of household survey data on IPV, a critical question arises: can we take survey-based IPV data as given, or is there reason to be concerned about measurement error?
In my Job Market Paper, I identify a new source of reporting bias in survey-based estimates of IPV prevalence. Specifically, I show that even if only 10% of surveys conducted by nationally representative survey efforts, such as the DHS, occur in the evening, over 458,640 women are excluded from annual estimates of physical and/or sexual IPV prevalence across Sub-Saharan Africa and over 3.25 million women are excluded worldwide. In other words, these women have been exposed to IPV but are not counted as such.
Identifying time-of-day effects on women’s IPV reporting
Leveraging DHS data collected from women across 17 countries in Sub-Saharan Africa, I study whether time of day influences women’s propensity to self-report IPV on household surveys. Specifically, I exploit a key feature of DHS protocol that introduces plausible exogeneity into the within-day timing of household surveys: enumerators are not allowed to know any members of the households they are assigned to interview (ICF, 2017). Accordingly, I can assume that the start time of the household survey is as good as random among women available for an interview during an enumerator’s first visit to the household.[1] Indeed, I show empirically that, among women available for an interview during an enumerator’s first visit, household survey time does not predict respondent and household characteristics such as age, partner’s age, employment status, education, household-head status, and the number of children under 5 in the household. Additionally, I rule out differential selection into eligibility for the domestic violence module by household survey time.
If survey timing is as good as random, then the time of day at which a woman began her survey should be uncorrelated with whether she ultimately discloses IPV. However, using a linear regression, I show that respondents who received evening surveys (4 pm to 8 pm) were 2.8 percentage points less likely than respondents who received morning surveys (6 am to 12 pm) to self-report IPV.
Why does IPV reporting decrease over the day?
I present empirical evidence that husbands’ alcohol consumption drives heterogeneity in the perceived marginal costs of IPV disclosure depending on the time of day. Conceptually, the anticipated threat of IPV or the risk of retaliation for reporting IPV increases over the day with husbands’ alcohol consumption and may decrease women’s likelihood of disclosing IPV. Consistent with this hypothesis, in Figure 2, I show that there is no time-of-day effect for women with non-drinking husbands. Yet women with drinking husbands are 6.1 percentage points less likely to self-report IPV on evening household surveys than on morning household surveys.
I conduct several tests to examine and rule out other pathways that may explain the within-day decline in IPV reporting, including within-day variation in couple exposure, respondent fatigue, enumerator fatigue, interview interruptions, and the perceived opportunity cost of completing a multi-topic survey. Furthermore, I find that my results do not vary by other dimensions of heterogeneity including the frequency of husbands’ alcohol consumption, respondent’s age, parental violence, attitudes towards IPV, presence of children under 5 in the household, and differences in wives’ and husbands’ employment statuses.
Figure 2: Time-of-day effects on disclosure of intimate partner violence
Notes: This figure shows the afternoon (12 pm to 4 pm) and evening (4 pm to 8 pm) survey time effect, in percentage points (pp), on the disclosure of physical and/or sexual IPV. Effect sizes were obtained from an ordinary least squares regression in which morning surveys (6 am to 12 pm) are the comparison group. 95% confidence intervals are shown.
Why do my findings matter?
IPV measurement
Time-of-day effects create a sizable source of underestimation of widely disseminated indicators on IPV prevalence. The 6.1 percentage point estimate reported earlier implies that, in comparison to morning surveys, evening surveys underestimate IPV prevalence among women with drinking husbands by 15%.
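The 15% figure can be back-calculated from the 6.1 percentage point gap. A minimal sketch, assuming (as the phrasing suggests) that the relative bias is the absolute gap divided by the morning (reference) prevalence among women with drinking husbands:

```python
# Back-calculating the implied reference prevalence from the stated numbers.
gap_pp = 6.1          # evening vs. morning disclosure gap, percentage points
relative_bias = 0.15  # stated relative underestimation (15%)

# Implied morning prevalence among women with drinking husbands:
# gap / prevalence = relative bias  =>  prevalence = gap / relative bias.
implied_prevalence_pp = gap_pp / relative_bias
print(round(implied_prevalence_pp, 1))  # → 40.7
```

In other words, the 15% relative bias is consistent with a morning-survey IPV prevalence of roughly 41% in this subgroup.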
This reporting bias falls within broad ranges of measurement errors reported in the survey design effects literature. For example:
· Cullen (2023) finds that women in Nigeria were 35% less likely to report physical IPV on a face-to-face survey than on a list experiment.
· Jeong et al. (2023) find that an additional hour of survey time increased the likelihood that survey respondents in Malawi and Liberia skipped questions by 10% to 64%.
· Abay et al. (2022) find that delaying a food consumption module by 15 minutes in a phone survey decreased reported food diversity scores among rural Ethiopian women by 15% to 17%.
IPV and causality
Given that time-of-day effects are correlated with husbands’ alcohol consumption, they are a source of non-classical measurement error in survey-based IPV data, which biases regression estimates in subsequent empirical analyses.
Consider, for example, an experimental study that examines the effects of a women’s employment intervention on IPV exposure. If the treatment increases women’s daytime employment outside the household, treated women become less likely than control-group women to be available for endline household surveys in the morning or afternoon, and are instead more likely to be interviewed in the evening. Hence, IPV underreporting is systematically more likely among women in the treatment group than among women in the control group, leading researchers to overestimate the intervention’s role in reducing IPV. In the presence of time-of-day effects, researchers may thus fail to detect unintended consequences of women’s empowerment interventions due to male backlash that would warrant critical changes in program design.
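The mechanism above can be illustrated with simple arithmetic. All numbers below are hypothetical, chosen only to make the logic concrete (they are not estimates from the paper):

```python
# Hypothetical scenario: treatment has NO true effect on IPV, but it shifts
# interviews toward the evening, when disclosure is suppressed.
true_prev = 0.30       # true IPV prevalence in BOTH arms
suppress = 0.20        # share of true cases not disclosed in evening interviews
evening_treat = 0.60   # evening-interview share, treatment group
evening_ctrl = 0.10    # evening-interview share, control group

reported_treat = true_prev * (1 - evening_treat * suppress)  # 0.264
reported_ctrl = true_prev * (1 - evening_ctrl * suppress)    # 0.294

# A naive comparison suggests treatment reduced IPV by 3 pp,
# even though the true effect is zero.
print(round((reported_treat - reported_ctrl) * 100, 1))  # → -3.0
```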
Implications for survey implementation and analysis
Fortunately, non-classical measurement error driven by time of day may be avoided during survey implementation or addressed post hoc during subsequent analyses of the collected data.
For large-scale survey efforts, such as the DHS, in which enumerators must interview large samples of households over short periods of time, it may not be feasible to conduct all interviews during the morning. Yet enumerators could attempt to visit households before 4 pm, the time at which my findings suggest that sizable heterogeneity in IPV disclosure begins to emerge between women whose husbands consume alcohol and women whose husbands do not.
The implications of my results are not only applicable to large-scale household survey efforts but are also relevant for empirical researchers. For example, principal investigators can ensure that respondents from treatment and control groups are interviewed at similar times of day to preemptively avoid biased treatment estimates. Furthermore, enumerators can collect detailed data on the number of visits to the household, the time of day of each household visit, and the time that each module was administered to the respondent. With these data in hand, researchers conducting empirical analyses on survey-based IPV data can control for survey time in their regression models to account for heterogeneous selection into IPV reporting due to time of day.
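As a sketch of this last point, the simulation below (all parameters hypothetical, not taken from the paper) generates endline data in which treatment has no true effect on IPV but shifts interviews toward the evening. A naive treatment-control comparison then shows a spurious reduction in reported IPV, while adding an evening-interview dummy as a control removes most of the bias:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical data-generating process: no true treatment effect on IPV,
# but treated women are interviewed in the evening far more often, and
# evening interviews suppress disclosure among 20% of true cases.
treated = rng.random(n) < 0.5
evening = rng.random(n) < np.where(treated, 0.60, 0.10)
true_ipv = rng.random(n) < 0.30
reported = (true_ipv & ~(evening & (rng.random(n) < 0.20))).astype(float)

def ols(X, y):
    """OLS coefficients via least squares."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

ones = np.ones(n)
# Naive model: reported IPV on treatment only (spurious ~ -3 pp "effect").
naive = ols(np.column_stack([ones, treated]), reported)[1]
# Controlled model: adding the evening dummy absorbs the reporting gap,
# so the treatment coefficient returns to roughly zero.
controlled = ols(np.column_stack([ones, treated, evening]), reported)[1]
print(f"naive: {naive:.3f}, with evening control: {controlled:.3f}")
```

The design choice here mirrors the recommendation in the text: survey time enters the model as a covariate, which is only possible if interview timing was recorded in the first place.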
Katherine Theiss is a PhD student at Fordham University.
[1] The DHS also includes households and respondents who were surveyed during an enumerator’s second or third visit. However, I exclude these interviews from my analysis, since their household survey start times may be correlated with endogenous factors.