My summary a few weeks back of recent attempts to quantify the Hawthorne effect led to some useful exchanges with colleagues and commenters, who pointed me to further work I hadn’t yet read. It turns out that, historically, the term “Hawthorne effect” has been used quite inconsistently. It has referred not only to (a) behavioral responses to a subject’s knowledge of being observed – the definition we tend to use in impact evaluation – but also to (b) behavioral responses to simple participation in a study, or even (c) a subject’s wish to alter behavior in order to please the experimenter. Of course all these definitions are loosely related, but it is important to be conceptually clear in our use of the term, since several distinct inferential challenges to impact evaluation arise from the messy nature of behavioral responses to research; the Hawthorne effect is only one of these possible challenges. Let me lay out a classification of the different behavioral responses that, if and when they occur, may threaten the validity of an evaluation (with a strong emphasis on may).
- From the Stata blog: how to put the Stata user manuals on your iPad.
- Chris Blattman discusses the controversy surrounding a field experiment being done by political scientists in the Montana election – much of the controversy seems very odd to a development economist – especially the concern that political scientists might actually be doing research that could affect politics. Dan Drezner notes the irony: “political scientists appear to be damned if they do and damned if they don’t conduct experiments. In the absence of experimental methods, the standard criticism of political science is that it’s not really a science because of [INSERT YOUR PREJUDICE OF CHOICE AGAINST THE SOCIAL SCIENCES HERE]. The presence of experimental methods, however, threatens to send critics into a new and altogether more manic forms of “POLITICAL SCIENTISTS ARE PLAYING GOD!!” panic.”
I agree with the general point raised by Berk in his previous post on this blog (read it here). We need to discuss when and how to conduct scientific replication of existing research in the social sciences. I also agree with him that, at least in economics, pure replication analysis – which in my view is the only genuine replication analysis – is of secondary interest; I hope to return to this issue in a future contribution to this blog. Instead, I believe that we should emphasize replication of relevant and internally valid studies in both similar and different environments. There is now excessive confidence in the knowledge gathered from a single study in a particular environment, perhaps as a result of a misreading of the virtues of experimentation in the social sciences. As Donald T. Campbell once wrote (1969):
We are pleased to launch for the fourth year a call for PhD students on the job market to blog their job market paper on the Development Impact blog. We welcome blog posts on anything related to empirical development work, impact evaluation, or measurement. For examples, you can see posts from 2013 and 2012. We will follow the same process as previous years, which is as follows:
We will start accepting submissions immediately, with the goal of publishing them in November and early December, when people are deciding whom to interview. Below are the rules that you must follow, along with some guidance and tips:
- job market series 2014
A while back I blogged about work using active choice and enhanced active choice to get people to get flu shots and prescription refills. The basic idea is that relatively small modifications to the way a choice is presented can have large impacts on the take-up of a program. This seemed useful in the context of many of our training programs – attendance rates averaged 65 percent in a review of business training programs I did with Chris Woodruff. Therefore, for an ongoing evaluation of the GET AHEAD business training program in Kenya, we decided to test out this approach.
- Leonard Wantchekon on the “curse of the good soil” and insufficient investment in rural infrastructure.
- From the Harvard Business Review: experiment with organizational change before going all in.
- Owen Ozier on deworming and child cognition in the long-run – particularly relevant after Berk’s post this week on the replication of the original Miguel and Kremer paper.
- Interesting piece on the challenges of attempted school reforms in India and Guinea-Bissau in the LSE Centrepiece: “With just four months until the schools were to open, our 48 candidate teachers arrived with demands that would … mean their wage rising to over four times those of the average teacher and more than the pay received by public sector doctors, as well as cabinet ministers….For the next six months, we watched as the 48 candidate teachers marched across Guinea-Bissau’s political map to try to extort a cash award from us….