Guest post by Abhijeet Singh
Last week on this blog, David wondered whether we should give up on using SDs for comparing effect sizes across impact evaluations. I wish that question were asked more often in the field of impact evaluations in education, where such comparisons are most common. In this post, I explore some of the reasons why such comparisons might be flawed and what we might do to move towards less fragile metrics.
This is the fourth in our series of job market posts this year.
Research from numerous corners of psychology suggests that self-assessments of skill and character are often flawed in substantive and systematic ways. For instance, it is often argued that people tend to hold rather favorable views of their abilities, both in absolute and relative terms. Despite a recent and growing literature on the extent to which poor information can negatively affect educational choices (e.g. Hastings and Weinstein, 2008; Jensen, 2010; Dinkelman and Martinez, 2014), there is little systematic evidence establishing how inaccurate self-assessments distort schooling decisions.