Measuring harassment in the workplace



Collecting data on harassment in the workplace is challenging, especially in contexts where there is fear of retaliation and stigma attached to reports of harassment (especially of a sexual nature). In their new paper, Laura Boudreau, Sylvain Chassang, Ada González-Torres, and Rachel Heath tackle this challenge in the context of harassment of workers (mostly female) by managers in large apparel manufacturers in Bangladesh. The broader topic of collecting data on sensitive subjects is a familiar one to Development Impact readers; see, for example, Berk’s blogs, including this one (related to the paper discussed here), this one, and this one.

Boudreau et al. study how three aspects of survey design affect reporting of threatening behavior, physical harassment, and sexual harassment by a worker’s line supervisor. They implement three variations on a business-as-usual survey approach: hard garbling (versus direct elicitation), rapport building (between the enumerator and respondent), and not collecting a respondent’s work information (identifying their team and manager).

What is hard garbling (HG)? Under the HG design, some “no” responses are randomly flipped to “yes”. Garbling gives a respondent who has been harassed plausible deniability and, by doing so, is theorized to encourage more truthful reporting of an event that someone might want to hide for fear of retaliation. Even if someone’s data say “yes”, it is not possible to know whether they actually said yes or said no and had their response flipped. With a fixed flip rate, a ‘transformed’ rate of yes responses can be calculated that is arguably closer to the true rate. (Their theoretical framework assumes no false positives in actual reporting -- which seems reasonable to me for this context -- but HG is in fact robust to strategic misreporting in a general equilibrium framework.)
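For intuition, here is a minimal sketch (my own illustration with made-up numbers, not the authors' code) of how garbled data can be un-garbled given a known flip rate:

```python
import random

def hard_garble(responses, flip_rate, rng):
    """Randomly flip each 'no' (0) response to 'yes' (1) with probability flip_rate."""
    return [1 if r == 0 and rng.random() < flip_rate else r for r in responses]

def degarble_rate(observed_yes_rate, flip_rate):
    """Invert observed = true + (1 - true) * flip_rate to recover the true rate."""
    return (observed_yes_rate - flip_rate) / (1 - flip_rate)

# Simulate 100,000 respondents with a hypothetical true harassment rate of 4%
rng = random.Random(42)
true_rate, flip_rate = 0.04, 0.10  # illustrative values, not from the paper
truth = [1 if rng.random() < true_rate else 0 for _ in range(100_000)]
garbled = hard_garble(truth, flip_rate, rng)
observed = sum(garbled) / len(garbled)
print(round(degarble_rate(observed, flip_rate), 3))  # should be close to 0.04
```

The point of the sketch: any individual “yes” in the stored data could be either a genuine report or a flipped “no”, so no single record is incriminating, yet the aggregate rate is still recoverable.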

HG works only if respondents understand it. To that end, the supplementary material describes what is explained to respondents (i.e., factory workers) and the short comprehension quiz they are given. I admit that I was skeptical about how well respondents would understand the HG design (average schooling is less than 7 years), but as you will see below, the new design does result in large increases in reported incidence of harassment.

Why build rapport? The rapport-building (RB) design comes in a short and a long version, in which the enumerator makes prespecified (but hopefully natural) small talk about family and hobbies. It is intended to increase the respondent’s trust that the enumerator will keep their information confidential. I would add that it might also ease concerns about stigma.

When collecting data on sensitive topics, there are other options one might consider. The authors discuss randomized response (“soft garbling,” as they call it) and explain why their hard garbling approach is preferred. This is, in part, because HG addresses compliance concerns that arise in soft garbling, where the respondent rolls a die that determines whether a no is flipped to a yes. On the other hand, by removing the compliance problem, HG raises the concern that respondents might understand the design less well, or might not trust that the garbling is actually happening when they are not the ones rolling the die. Another option is the list experiment approach (collecting a veiled response through a listing exercise -- see the links above to Berk’s blogs). The authors also do not discuss alternatives to enumerator-led phone calls, such as IVR (or earlier self-administered survey methods, such as ACASI, which I know of from the context of collecting data on sexual behavior). Perhaps such discussion is missing because of assumptions about what makes their data sensitive (more the concern of retaliation if a manager finds out, rather than stigma or embarrassment/discomfort with regard to an enumerator).
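To see why compliance matters under soft garbling, here is a hypothetical sketch (parameter values invented for illustration, not taken from the paper): if some respondents who draw the “flip” outcome ignore the die and truthfully answer no, the standard correction subtracts too much and can even push the estimate negative.

```python
def rr_estimate(observed_yes, flip_prob):
    """Standard randomized-response correction, which assumes full compliance."""
    return (observed_yes - flip_prob) / (1 - flip_prob)

def observed_rate(true_rate, flip_prob, noncompliance):
    """Observed yes rate when a share of 'flip' respondents ignore the die and say no."""
    return true_rate + (1 - true_rate) * flip_prob * (1 - noncompliance)

true_rate, q = 0.04, 0.10  # illustrative values
for c in (0.0, 0.25, 0.5):  # hypothetical noncompliance shares
    est = rr_estimate(observed_rate(true_rate, q, c), q)
    print(f"noncompliance {c:.0%}: estimated rate {est:.3f}")
```

Under full compliance the correction recovers the true 4% rate; at 50% noncompliance the estimate is negative. Hard garbling sidesteps this because the flip is applied mechanically to the data rather than depending on the respondent's behavior.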

In their study, they want to measure not only the share of victimized workers but also the share of problem managers (those who harass). A small share of managers who harass can be addressed by firing, whereas pervasive harassment across managers requires a different firm response. They also want to know the extent to which managers with at least one victim have at least two. Hence they do need team identification information.

In the business-as-usual survey (a phone call, direct questioning, no extra rapport effort, and collection of workplace identification information), almost 10% of workers report threatening behavior, under 2% report physical harassment, and 2% report sexual harassment by their supervisor. Design matters: reported rates of harassment go up (a lot) with HG -- by around 50% for threats and by even more (over 100%) for sexual and physical harassment. They also go up for threats and physical harassment when the survey does not collect team-level identification information. Rapport building moderately increases reported rates, but the effect is not statistically significant.

They also split the sample by sex. While they cannot statistically reject that the design effects are the same for men and women, the results suggest differences for some outcomes and designs. The increases from baseline rates are usually larger for women, with the exception of sexual harassment, where the HG design yields larger increases for men (to about 10% from a baseline rate of under 2%). They also find evidence that the RB design might have backfired for men, resulting in lower reported incidence.

The paper includes various breakdowns of the incidence of harassment, including how widespread it is across teams. I omit a discussion of these results here but encourage you to take a look at the paper for much more. Overall, this paper is a nice contribution to the survey design literature, with a thoughtful design in a compelling context. The results make a strong case that the harassment workers face in these manufacturers is seriously underreported. Let’s hope that better data will result in better policies to address it.


Kathleen Beegle

Research Manager and Lead Economist, Human Development, Development Economics
