Published on Development Impact

What happens when you train senior civil servants in econometrics?

The answer to this question is the subject of an interesting new paper by Sultan Mehmood, Shaheen Naseer, and Daniel Chen.  Off we go to Pakistan, where Mehmood and co. are working with “the Academy”, which trains deputy ministers.

So in this case, the disruption due to COVID makes things easier. In a remote-learning environment, Mehmood and co. randomize students at the Academy into receiving either Mastering ’Metrics (an accessible introduction to causality and methods for identifying a counterfactual) or Mindsight (a self-help book).  After getting their book, the civil servants are asked to complete assignments in which they a) summarize the book and b) indicate how they would use its lessons in their jobs.

These assignments were then graded, and the best students got awards that included desk trophies and vouchers to redeem at a department store. These students then presented their essays to the other students, followed by video lectures from the authors of the two books (separated by treatment group), with a discussion afterwards.

There are lots of incentives at work here – the grades matter for future assignments and, of course, there are the prizes.  Six months later, Mehmood and co. come back to look at downstream behaviors. They ask policymakers to choose between an ICT initiative and a deworming program (which actually was a real policy choice for one locality at the time). Mehmood and co. elicit demand for different kinds of evidence and then reveal results from a long-term study of the impacts of deworming (Hamory et al. 2021).  They then measure how much the policymakers update their beliefs.

Whew, it’s quite a set-up. And it yields some pretty interesting insights.

So, what does reading an econometrics book do, relative to a self-help book? It significantly changes attitudes towards the importance of quantitative analysis in policymaking – bumping participants up 0.9 points on a 5-point scale (with a control mean of 2.7).  It also makes them around 17 percentage points more likely to (hypothetically) choose an RCT to evaluate a policy before rolling it out. Interestingly (and perhaps testimony to the quality of the book), it has no significant impact on attitudes towards qualitative evidence (the point estimate is actually positive, but small).

The exposure to the lectures and discussion amplifies this. Attitudes towards quantitative analysis are now up 1.5 points and the choice of an RCT to evaluate policies is up by 22 percentage points.

Through a textual analysis of the writing assignments, Mehmood and co. show us that folks aren’t merely nodding along every time they hear “RCT”: phrases like “correlation is not causation” show up far more often in the econometrics-book treatment arm.
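
Mechanically, that kind of check is straightforward: compute the share of essays in each arm that contain a marker phrase and compare across arms (the paper does something richer, of course). Here is a minimal sketch, with hypothetical placeholder essays rather than data from the study:

```python
# A minimal, illustrative sketch (not the authors' code): compare the share
# of essays in each treatment arm that mention a marker phrase. The essays
# below are hypothetical placeholders.

def phrase_rate(essays, phrase):
    """Share of essays mentioning the phrase at least once."""
    return sum(phrase in essay.lower() for essay in essays) / len(essays)

metrics_arm = [
    "Correlation is not causation, so we need a counterfactual.",
    "A randomized rollout would let us identify the causal effect.",
]
mindsight_arm = [
    "The book taught me to manage my attention and reactions.",
]

marker = "correlation is not causation"
print(f"Econometrics arm: {phrase_rate(metrics_arm, marker):.0%}")
print(f"Self-help arm:    {phrase_rate(mindsight_arm, marker):.0%}")
```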

Now there has been some discussion that training folks in economics might make them less prosocial. Mehmood and co. check for this in two ways. First, the policymakers take field trips during the training, and they can choose between presentations from senior civil servants or trips to an orphanage and volunteering to teach in an impoverished school. Folks in the econometrics arm are no less likely to choose the orphanage or the school. Second, Mehmood and co. do another textual analysis of the assignments to look for prosocial language (in another paper they show this language is correlated with things like donating blood!).  Nothing here. An interesting side result, but then these policymakers weren’t reading a book written by University of Chicago economists.

On to the policy decision six months later. First off, before getting the signal, Mehmood and co. elicit willingness-to-pay for causal evidence. The folks who went through the econometrics arm were willing to pay about $9.50 out of their own pockets and around $5,000 out of public funds for this evidence. They are significantly less likely to use public funds for correlational data. And there is no change in the amount they would pay for advice from a senior bureaucrat.  After getting the signal, the amounts they would pay for causal evidence go up.

Mehmood and co. go to some lengths to show us these results are robust. One of the neat things they do is set up the policy choice at the end of the experiment with some priming that pushes the other way: “before the policy choice, all deputy ministers were told that ICT policy is important for the 21st century economy.” This helps offset the possibility that participants were simply trying to please the experimenters.

All in all, this is a fascinating study. Mehmood and co. make great use of administrative data (the Academy gave them pretty comprehensive access). They capture a range of different behaviors to help convince us the change is real. And they show us the difference a curriculum can make in shaping how policymakers think about making policy decisions.  Last but not least, since these are folks who are definitely not at the beginning of their careers, the study demonstrates the importance of continuing education.  Time for me to go study, starting with this.


Authors

Markus Goldstein

Lead Economist, Africa Gender Innovation Lab and Chief Economist’s Office
