- Quora did a Q&A with Josh Angrist. In one question he offers thoughts on whether economics journals excessively reward causal identification over answering big questions, and whether Freakonomics has been bad for the profession; in a second question he discusses how he deals with brutal referee reports and the importance of not giving up: “The universality of this pain is apparent in economics publication data, which show that, even among graduates of elite Ph.D. programs, roughly 40% have no pubs six years out. Not even a thesis chapter... I updated this analysis recently to show that, in addition to high barriers to entry, our grad students’ research careers are short. Ten years post-Ph.D., roughly 90% of our Ph.D. grads are done publishing. What explains this high washout rate? Most Ph.D. grads are presumably submitting their work to journals and seeing it rejected. Demoralized, many presumably give up. Those who succeed necessarily learn to navigate the process of rejection and revision”
- Looking for inspiration for a discussion on research ethics and on a particular type of RCT? There was robust discussion on Twitter this week following the publication of an experiment on what drives political participation in protests in Hong Kong. Discussion can be found here and here, for example, with a project that passed four institutions’ IRBs getting debated at length. Arindrajit Dube asks a more general question about what people would think of RCTs incentivizing participation in various types of protests, with many people offering their viewpoints.
- There is a growing tension between the rapid rise in the use of administrative data and the increasing focus on research reproducibility. For example, the most recent AER editor’s report notes that 40 percent of all empirical papers published in the 2018 AER received exemptions from the data posting policy due to the confidential nature of the data. A potential solution is to have an impartial third party certify replicability. A policy forum article in Science this week describes how this is being done in France. There is a central government database of administrative datasets that takes researchers six months and much work to access for a specific project. A new non-profit, cascad, has been granted long-term access to all 280 datasets in this central database, and authors can request a certificate of reproducibility: a reviewer from the non-profit will run the code on the same data and certify that the code works and the results reproduce. The whole process is meant to take only two weeks. Cascad appears willing to certify reproducibility for other datasets as well.
- What can you do to make sure your research/development program doesn’t have ‘grimpact’ (a nice term for negative impact)? On the LSE Impact of Social Sciences blog, Valeria Izzi and Becky Murray offer thoughts on how to mitigate these risks as development research increasingly moves to doing development instead of just studying development.
- The Big Five personality traits are not measured consistently in developing-country surveys. That is one message from a paper just out in Science Advances by Rachid Laajaj and co-authors (including the World Bank’s Omar Arias and Renos Vakis). They look at 29 face-to-face surveys from 23 low- and middle-income countries and find that commonly used personality questions often fail to measure the intended personality traits and have low validity. One reason appears to be that answers change depending on who is asking the questions. Rhitu Chatterjee of NPR reports on the study and quotes me saying something so banal about it that she can follow it with “Duh!”.
- Conference Deadline Reminders: today is the deadline for submissions to NEUDC 2019. Meanwhile the call for papers is now open for CSAE 2020 (deadline October 18).