- Video of Pam Jakiela’s very nice SEEDEC keynote. I always appreciate it when a keynote speaker doesn’t just present their latest paper, but makes an effort to give an overview of, and new thoughts on, an area they have been working on for a while. Her talk notes that development economists don’t exert enough effort measuring preferences, given their importance for theory – she makes the good point that with already long questionnaires, the temptation is to add a single question to a baseline, measure something like risk preferences not that well, and then conclude that there isn’t much heterogeneity in risk preferences. She has suggestions for how to do better going forward. A couple of quibbles:
- Preferences are only one of many items going into economic decision-making, and people can equally claim that we don’t spend enough time measuring the presence and extent of liquidity constraints, measuring the level of imperfect information, etc. With limited time available on baseline surveys, different questions and contexts will mean different emphasis should be placed on how seriously we need to take preference measurement.
- I’m not convinced by her metric for whether preference measures are being underused – which is that abstracts in the JDE refer to preferences/lab-in-the-field experiments less often than they refer to RCTs or to IV/DiD etc. First, if the paper is not entirely a lab-in-the-field experiment, and if preferences aren’t the main outcome, then the paper is likely to note how it measured risk and time preferences somewhere in its methods/data section, not in the abstract. Second, many of these other methods can be used without having to collect survey data, and can be used when firms or governments are the decision-making unit, so their use should appear in more papers. So it would be interesting to look at the subset of papers that use original survey data on individuals, and see what proportion of those papers measure preferences (whether or not this is noted in the abstract).
Anyway, the point of a good talk is to get you thinking – so take the talk seriously, if not literally.
- On the BITSS blog, an interesting discussion of publishing based on “pre-results review” – in particular, they have decided to handle the possibility of researchers submitting to a top-5 journal differently from the JDE: authors would have to formally retract their acceptance before trying another journal, thereby forfeiting the in-principle acceptance.
- More bad news for scaling up promising researcher ideas: a few years ago, researchers attracted a lot of attention when they found that needy students with great grades didn’t often apply to elite colleges, and that they could be encouraged to do so by a very cheap intervention that provided guidance on how to apply, information on net costs after aid, and fee waivers – at a cost of just $6 per student. Now Matt Barnum at Chalkbeat reports that a scaled-up version of the intervention found really no impact. In a brief statement in the article, Hoxby is quoted as saying that she thinks there were enough differences from the way the researchers had done it that it didn’t qualify as a replication, but not much more detail is offered.
- Looking for an article to discuss collider bias with your class? Science magazine reports on findings of little association between GRE scores and performance in grad school amongst the group of people that grad schools admit. See this old post by Sanjay Srivastava for an explanation of why this is problematic inference.
- If you aren’t sure what collider bias means, one explanation can be found in Bruce Wydick’s review – he highlights some of the areas where a development economist might find the book useful. I found parts of the book very interesting, but also found the rhetorical style very off-putting, and would have preferred some much more concrete examples of how to implement the ideas in real-world applications rather than fun toy puzzles. Bruce appears to have got more out of it than I did, and gives an example where he applies some of the ideas to his own work.
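The GRE example above can be illustrated with a small simulation (a purely hypothetical sketch – the variable names, weights, and admission rule are made up for illustration, not taken from the actual GRE studies). Admission is the collider: conditioning on it induces a negative relationship between two otherwise independent inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical setup: test aptitude (GRE) and "grit" are independent
# in the full applicant pool.
gre = rng.normal(size=n)
grit = rng.normal(size=n)

# Grad-school performance depends on both (weights are illustrative).
performance = 0.3 * gre + 0.7 * grit + rng.normal(scale=0.5, size=n)

# Admission is the collider: programs admit applicants whose
# combined signal is high.
admitted = (gre + grit) > 1.5

corr_all = np.corrcoef(gre, performance)[0, 1]
corr_admitted = np.corrcoef(gre[admitted], performance[admitted])[0, 1]
print(f"corr(GRE, performance), all applicants: {corr_all:.2f}")
print(f"corr(GRE, performance), admitted only:  {corr_admitted:.2f}")
```

Among the admitted, high GRE scorers tend to have lower grit (and vice versa) purely because of the selection rule, so the GRE–performance correlation shrinks or even flips sign within the admitted group, even though the GRE genuinely predicts performance in the full pool.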
- Does it matter whether you ask respondents to answer on a scale from 0 to 6, versus from 1 to 7? Jonathan Evans from Pew Research writes about their findings from asking questions about political leaning on these two scales, and finds mostly not, except “when a scale is easily divided in half — for example, when the maximum value is 6 rather than 7 — it’s more likely for respondents to select the midpoint.”
- RSS Feed: Reminder the RSS Feed for Development Impact changed when the blog platform changed. If you wish to follow us with Feedly or another RSS reader, please use https://blogs.worldbank.org/feed/impactevaluations/rss.xml