This is the fourth in this year’s series of job market posts.
Since the quality of a country's public sector workforce is essential to its effectiveness in delivering services, governments should try to hire qualified individuals into the public sector (Finan et al. 2017). However, there may be an important obstacle to the quality of this recruitment process, especially in developing countries: politicians may engage in patronage, the use of public sector jobs to reward their political supporters.
Virtually all modern bureaucracies are characterized by a civil service system, where the introduction of meritocratic hiring criteria was meant to shield public sector jobs from patronage practices. However, politicians typically retain some discretion in the selection of public workers, for instance through the use of temporary contracts (Grindle 2012). As a consequence, patronage could still play an important role in public sector hiring. Despite the potentially significant impact of this phenomenon on the quality of the public workforce, no study has systematically documented its presence in a modern bureaucracy, limiting our understanding of its consequences for the public sector.
In my job market paper "Patronage in the Allocation of Public Sector Jobs" (joint work with Emanuele Colonnelli and Mounu Prem), I study patronage in the context of Brazilian local governments during the 1997-2014 period.
Scenario 3 (SCORE DATA AVAILABLE, AT LEAST PRELIMINARY OUTCOME DATA AVAILABLE; OR SIMULATED DATA USED): Having outcome data available at the planning stage of an impact evaluation seems less usual to me, but it is possible in some settings (e.g. you have the score data and administrative data on a few outcomes, and are then deciding whether to collect survey data on other outcomes). More generally, you will be in this stage once you have collected all your data. The methods discussed here can also be used with simulated data in cases where you don't have data.
There is a new Stata package, rdpower, written by Matias Cattaneo and co-authors, which can be really helpful in this scenario (thanks also to him for answering several questions I had about its use). It calculates power and sample sizes, assuming you will then use the rdrobust command to analyze the data. There are two related commands here (see the sketch after this list):
- rdpower: this calculates the power, given your data and sample size, for a range of different effect sizes.
- rdsampsi: this calculates the sample size you need to achieve a given power, given your data and that you will be analyzing it with rdrobust.
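As a minimal sketch of how these commands are used, assuming an outcome variable y, a running variable score, and a cutoff at 0 (all placeholder names; note that in some versions of the package the power command is called rdpow rather than rdpower, and the option names below follow the package documentation as I recall it, so check the help files for the exact syntax of your installed version):

```
* Install the packages if needed (rdpower is also distributed via the
* rdpackages site if it is not on SSC).
ssc install rdrobust
ssc install rdpower

* Power against a hypothesized effect of 0.2 (in the units of y),
* using the actual score and outcome data, with bandwidths and
* variances taken from a local polynomial fit as in rdrobust:
rdpow y score, c(0) tau(0.2)

* Minimum sample size needed for 80% power against the same effect:
rdsampsi y score, c(0) tau(0.2) beta(0.8)
```

With simulated data, you would first generate y from an assumed data-generating process around the cutoff and then run the same two commands on the simulated dataset.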
Part 1 covered the case where you have no data. Today’s post considers another common setting where you might need to do RD power calculations.
Scenario 2 (SCORE DATA AVAILABLE, NO OUTCOME DATA AVAILABLE): the context here is that assignment to treatment has already occurred via a scoring threshold rule, and you are deciding whether to try to collect follow-up data. For example, referees may have given scores for grant applications, with proposals scoring above a certain level getting funded, and you are now deciding whether to collect outcomes several years later to see whether the grants had impacts; or kids may have sat a test to get into a gifted and talented program, and you now want to decide whether to collect data on how these kids have done in the labor market.
Here you have the score data, so you don't need to make assumptions about the correlation between treatment assignment and the score, but can use the actual correlation in your data. However, since the optimal bandwidth will differ for each outcome examined, and you don't have the outcome data, you don't know what the optimal bandwidth will be.
In this context you can use the design effect discussed in my first blog post, but with the actual correlation. You can then check whether you would have sufficient power if you surveyed the full sample, and make an adjustment for choosing an optimal bandwidth within this sample by applying an additional multiple of the design effect, as discussed previously. Or you can simulate outcomes and use the simulated outcomes along with the actual score data (see next post). A sketch of the first approach is below.
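As a rough sketch of that first approach, assuming the design effect takes the multiplicative form 1/(1-ρ^2) discussed in the first post, where ρ is the correlation between treatment assignment and the score (treat and score are placeholder variable names, and the 0.2 standard deviation effect size is just an illustration):

```
* Compute the RD design effect from the actual treatment-score
* correlation in your data (treat and score are placeholders).
correlate treat score
local deff = 1/(1 - r(rho)^2)

* RCT sample size for 80% power against a 0.2 SD effect...
power twomeans 0 0.2, sd(1) power(0.8)

* ...inflated by the design effect to get the approximate full-sample
* RD requirement (a further multiple would then be needed to allow
* for estimation within an optimal bandwidth, as discussed in Part 1):
display "Approximate RD sample needed: " ceil(r(N)*`deff')
```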
I haven’t done a lot of RD evaluations before, but I have recently been involved in two studies that use regression discontinuity designs. One issue that then comes up is how to do power calculations for these studies. I thought I’d share some of what I have learned, and if anyone has more experience or additional helpful content, please let me know in the comments. I thank, without implication, Matias Cattaneo for sharing a lot of helpful advice.
One headline piece of information I’ve learned is that RD designs have far less power than RCTs for a given sample size, and I was surprised by just how much larger a sample you need for an RD.
How to do power calculations will vary depending on the set-up and data availability. I’ll do three posts on this to cover different scenarios:
Scenario 1 (NO DATA AVAILABLE): the context here is a prospective RD study. For example, a project is considering scoring business plans, with those above a cutoff getting a grant; or a project will target based on a poverty index, with those below some cutoff receiving the program; or a school test is being used, with those who pass the test then able to proceed to some next stage.
The key features here are that, since the study is being planned in advance, you do not have data on either the score (running variable) or the outcome of interest. The objective of the power calculation is then to see what size sample you would need in the project and survey, and whether it is worth going ahead with the study. Typically your goal here is to get some sense of the order of magnitude – do I need 500 units or 5,000? A back-of-the-envelope sketch is below.
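One back-of-the-envelope approach: compute the sample an RCT would need for your target effect, then inflate it by an assumed RD design effect. For example, if the score were uniformly distributed with the cutoff at the median, the correlation between treatment and the score is about 0.87, giving a design effect of roughly 1/(1-0.87^2) ≈ 4 (a standard benchmark, e.g. Schochet 2008; the exact multiple depends on the score distribution and where the cutoff falls). The 0.2 SD effect below is just an illustration:

```
* Scenario 1 sketch (no data yet): RCT sample size for 80% power
* against a 0.2 SD effect, inflated by an assumed RD design effect
* of 4 (uniform score, cutoff at the median).
power twomeans 0 0.2, sd(1) power(0.8)
display "Approximate RD sample needed: " ceil(r(N)*4)
```

This is exactly the order-of-magnitude exercise described above: if the RCT needs roughly 800 units in total, the RD version needs something in the region of 3,000 to 4,000.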
This is the sixth post in our series of blogs by graduate students on the job market this year.
Interventions during early childhood have begun to gain importance in the social policy agenda of developing countries. These interventions have mostly been designed to address health, nutritional, and cognitive deficiencies, and have been shown to positively impact children’s development and nutritional outcomes, as well as socio-emotional abilities (Schady, 2006). Less evidence exists on their impact on discipline practices and spousal violence, factors that also affect children’s development. Experiencing or witnessing violence as a child appears to have important effects later in life (UNICEF, 2014). Children who experience violence are more likely to drop out of school, to engage in adult criminal behavior, and to become maltreating parents, among other outcomes. Studies have pointed out that physical violence against children is common throughout the world, and that violence at home is the most common form of violence against children (Pinheiro, 2006). In the particular case of Colombia, the three most common ways parents discipline children are verbal reprimand (76%), hitting with objects (44%), and slaps (28%), according to the Demographic and Health Survey (DHS, 2005). Despite these figures, parenting practices remain a topic that has not received much attention from researchers in developing countries.
Several countries around the world (notably Australia and Canada) have migration points systems: score above some points threshold and you can come in, score below and you can’t. This has intrigued me as a possible setting for a regression-discontinuity design to measure the impacts of migrating. However, there are several problems – the points given tend to be lumpy (e.g.
In the past year we have seen students in countries around the world protesting over the cost of higher education and the lack of financial aid: Chilean students have been protesting for 7 months to change the overall educational financing system; Californians have occupied the UC Berkeley campus to protest fee hikes; and last year thousands of English students took part in protests against increases in tuition fees. Why is this happening all over the world?
I am writing to follow up on Berk’s post about using regression discontinuity design to evaluate the impacts of conditional cash transfer (CCT) programs. Some colleagues and I at the International Food Policy Research Institute recently completed two papers using a unique regression discontinuity design (RDD) to evaluate the impacts of El Salvador’s Comunidades Solidarias Rurales (CSR) program.