An interesting new paper by Abhijit Banerjee, Raghabendra Chattopadhyay, Esther Duflo, Daniel Keniston, and Nina Singh shows how top-down reforms can sometimes be a good move, drawing on a range of reforms tried out by the police in Rajasthan, India.
First, some background. As the authors (mind you, they include a police officer) put it: “the police could be regarded as the archetypal sclerotic institution.” Indeed, in India, the Police Act of 1861 is still in effect, and multiple attempts at reform have fallen flat. But the Rajasthan police can’t be that sclerotic: they worked with J-PAL to randomize a set of reforms.
These reforms included: 1) weekly rotation of duty assignments, including a day off each week (it turns out that the rules do not provide for a single day off for cops in India – theoretically they are on duty 24/7), 2) a freeze on staff transfers (about 1/3 of the police officers turned over during 18 months in the control group), 3) training in hard skills (e.g. collecting scientific evidence) and soft skills (e.g. communication, stress management), and 4) placing community observers in police stations.
Note that among these, (2) and (3) are pretty much top-down: they can be implemented with orders from above. (1) requires the cooperation of whoever is in charge of the local station, and (4) requires community volunteers and some kind of coordination among them. (It is also worth noting that these reforms were developed on the basis of fairly extensive consultation, and a pilot.)
So how does one measure crime? My gut instinct is to use administrative data, but this is naïve – it turns out police have a tendency to downgrade crimes or get people to leave without filing a complaint, especially when this is being monitored (check out this interesting This American Life podcast for a story about this in New York). Add to this a belief that the cops won’t do anything when you show up, and the underreporting gets worse. So Banerjee et al. put in place a number of interesting data collection instruments to get a better measure.
First, you have a household survey with a couple of parts: a) a crime screener questionnaire to identify any household member who has been a crime victim, b) a crime victim survey, and c) an opinion survey on contact with, and perceptions of, the police. Second, a survey of police officers in both control and treatment areas. And third, they randomly selected a bunch of case files to send to retired police officers for grading. Unfortunately, the police officer survey and the case review weren’t usable for the evaluation, since transfers continued at a significant rate and raised the possibility of attrition bias.
The fourth data collection tool was particularly interesting – they sent mock crime victims into stations to report a crime. These folks were meant to look like average folks (well, the distribution of folks), and they attempted to register crimes, with an emphasis on the petty crimes that were likely to be underreported. Now, since falsely reporting a crime in India is itself a crime, the surveyors had to reveal their identity if it looked like they were actually going to succeed – and this might make the police think twice about how they treat people. More on this in a minute.
For the intervention, they used 162 police stations, or 20% of the stations in the state (yes, Rajasthan has a lot of police stations – and a lot of people). Stratifying on geography, crime levels, and rural/urban status, they randomly assigned the stations to a number of arms: all interventions, community observer + transfer freeze, rotation/day off + transfer freeze, transfer freeze only, and control. They also have a “pure” control – a group that basically did not know about the study until the endline survey.
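To make the design concrete, here is a minimal sketch of stratified random assignment to experimental arms. The station IDs, strata, and arm labels are hypothetical stand-ins, not the paper's actual data or code – the point is just the mechanics: shuffle within each stratum, then cycle through the arms so every stratum contributes roughly equally to each arm.

```python
import random

# Arm names paraphrase the study's design; they are illustrative labels only.
ARMS = ["all_interventions", "observer_plus_freeze", "rotation_plus_freeze",
        "freeze_only", "control"]

def assign_arms(stations, seed=0):
    """stations: list of dicts with 'id' and 'stratum'
    (a stratum here stands in for geography x crime level x rural/urban).
    Returns {station_id: arm}, randomized within each stratum."""
    rng = random.Random(seed)
    by_stratum = {}
    for s in stations:
        by_stratum.setdefault(s["stratum"], []).append(s["id"])
    assignment = {}
    for stratum, ids in by_stratum.items():
        rng.shuffle(ids)                       # random order within the stratum
        for i, sid in enumerate(ids):
            assignment[sid] = ARMS[i % len(ARMS)]  # cycle arms for balance
    return assignment

# Hypothetical example: 30 stations spread over 6 strata.
stations = [{"id": f"st{i}", "stratum": (i % 3, i % 2)} for i in range(30)]
arms = assign_arms(stations)
```

With 30 stations, 6 strata, and 5 arms, each arm ends up with exactly 6 stations – the balance that stratification is meant to buy.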
This pure control is neat – given the rather extensive data collection, it lets them get at potential Hawthorne effects. And lo, these turn out to be present. After initial field reports came back that the police officers might be responding to this effort to measure performance through mock victims, Banerjee and co. expanded this measurement tool (now an intervention in its own right) to more stations outside of the study. They find that each visit by these mock victims increased the probability of the police filing an incident report in subsequent visits by 6-11%, and that police politeness increased.
In order to examine the effects of the original interventions, they break outcomes into four main areas of perceptions about the police: responsiveness, fear of the police, corruption, and the adequacy of police resources. The most robust positive impact was in reducing fear of the police. Here, freezing transfers – alone or in combination – had an impact, with the exception of stations that had both the transfer freeze and the weekly day off/rotation. For this group, the rotation may have reduced the ability of individual officers to establish rapport within a community.
In addition, the training program intervention had a significant impact on the probability that victims are satisfied with the police investigation – increasing satisfaction by 16-21 percentage points off a base of 39%. In another interesting design feature, they varied the fraction of officers trained at a given station from 25% to 100%. It turns out that you probably have to train all of them: effects for 25% and 50% are close to zero and not significant, 16% and not significant with 75% of the officers trained, and 29% and significant with 100%. So no apparent training spillovers here.
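Percentage points and percent get conflated easily, so a quick worked bit of arithmetic on the headline training effect (the 16-21 point range and 39% base are from the paper; the rest is just calculation):

```python
# A percentage-point gain adds to the base rate directly.
base = 0.39              # control-group satisfaction rate (39%)
low, high = 0.16, 0.21   # reported training effect, in percentage points

print(f"satisfaction with training: {base + low:.0%} to {base + high:.0%}")
print(f"relative increase: {low / base:.0%} to {high / base:.0%}")
```

That is, satisfaction rises to roughly 55-60%, a relative increase of about 41-54% over the base – a fairly large effect for a training program.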
Why didn’t the other interventions – the rotation/day off and community monitoring – work? Well, it seems like implementation was a problem. Banerjee and co. had surveyors check for the presence of a community observer when they visited the stations – they found one only about 10% of the time (and not necessarily at the times they were supposed to be there, which were publicly posted). This could be a coordination problem: in order to avoid capture (figurative, not literal) of these volunteers by the police, the pool was large and folks were supposed to rotate out fairly quickly. As for the rotation/day off, this was also not well implemented (maybe it had something to do with exempting the station master from this rule). Initially this intervention shortened the time since the last day off by around 5 days (the average in the control stations was 30 days since the last day off!), but by the end of the study, this difference had disappeared.
So, what does this tell us? Well, it seems that if you want better acceptance of the police and better community relations, training and reducing police transfers will help. And in terms of broader institutional reform, it’s an interesting lesson in the relative efficacy of top-down command approaches versus more decentralized, local approaches. This is the police, after all.