Guest post by Aleksandar Bogdanoski (Senior Program Associate, Berkeley Initiative for Transparency in the Social Sciences (BITSS)) and Katie Hoeberling (Program Manager, BITSS)
What will be the effect of raising the minimum wage by $2 an hour? How will cash transfers impact local prices in rural Kenya? Will a basic income support economic recovery from the coronavirus pandemic? When faced with such questions, policymakers and practitioners rely on research and expert perspectives to make decisions. Making good policy decisions often comes down to being able to reasonably predict the effects of policy alternatives, and choosing those that maximize societal benefits. Too often, however, hindsight bias leads us to believe we knew how a policy, intervention, or evaluation would play out even when our priors did not actually match the outcomes.
Collecting and recording predictions systematically can help us understand how results relate to our prior beliefs, as well as how forecast accuracy can be improved. Recorded predictions can also reveal how surprising results really are, potentially protecting against publication bias or the mistaken discounting of results as “uninteresting” after the fact. Finally, tracking prediction accuracy over time makes it possible to identify super-forecasters—individuals who make consistently accurate predictions—who can help prioritize research questions, as well as design strategies and policy options in the absence of rigorous evidence. All of these benefits become even more important in times of emergency, such as the COVID-19 pandemic, which has disrupted the collection of field data for many research projects.
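To make the last point concrete, identifying consistently accurate forecasters can be as simple as ranking them by average prediction error once realized results are available. Here is a minimal sketch in Python using made-up numbers; the column names (forecaster_id, predicted_effect, realized_effect) are illustrative assumptions, not the SSPP's actual data format.

```python
# Illustrative only: rank forecasters by how close their predicted effect
# sizes were to the realized estimates across the studies they forecast.
# Columns and values are hypothetical, not SSPP export fields.
import pandas as pd

forecasts = pd.DataFrame({
    "forecaster_id":    ["A", "A", "B", "B", "C", "C"],
    "predicted_effect": [0.10, 0.25, 0.40, 0.05, 0.12, 0.30],
    "realized_effect":  [0.12, 0.20, 0.12, 0.20, 0.12, 0.20],
})

# Absolute error of each forecast, then mean error per forecaster.
forecasts["abs_error"] = (forecasts["predicted_effect"]
                          - forecasts["realized_effect"]).abs()
accuracy = (forecasts.groupby("forecaster_id")["abs_error"]
            .agg(["mean", "count"])
            .sort_values("mean"))
print(accuracy)  # forecasters with consistently low error rise to the top
```

Forecasters who sit at the top of a ranking like this across many studies are the kind of consistently accurate predictors described above.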
Recognizing the potential of a more systematic approach to forecasting, the Berkeley Initiative for Transparency in the Social Sciences (BITSS) has been working with Stefano DellaVigna and Eva Vivalt to build the first platform of its kind that will allow social scientists to systematically collect predictions about the results of research. Today we are excited to announce the official launch of the Social Science Prediction Platform!
The Social Science Prediction Platform makes it easy to collect forecasts
The Social Science Prediction Platform, or SSPP, allows researchers to standardize how they source and record predictions, similar to how study registries have systematized hypothesis and design pre-registration. Perhaps most importantly, the SSPP streamlines the development and distribution of forecast collection surveys. With the SSPP, researchers can:
● Use our survey template or develop their own to collect predictions of key quantities such as effect sizes, standard errors, and confidence intervals. And because Qualtrics is integrated into the SSPP, users can upload .qsf survey files directly to the platform.
● Distribute the survey directly from the platform to a sample of their choosing, including topic or method experts such as senior academics and professionals, disciplinary experts such as pre-screened Ph.D. students, or members of the general public who sign up on the platform. They can also send auto-generated anonymous links directly to individuals via email. Importantly, the SSPP enables more efficient coordination of survey distribution, reducing the risk of overburdening a small group of popular forecasters and mitigating “forecaster fatigue” (a common problem for journal reviewers) and low response rates.
● Access survey results on a pre-specified date and easily download them as .csv files (one way of working with such a file is sketched below).
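As a small illustration of what a researcher might do with a downloaded results file, the sketch below summarizes the distribution of collected forecasts and compares it to a study's eventual estimate. The file name, the predicted_effect column, and the realized value are assumptions made for the example, not documented SSPP fields.

```python
# Illustrative only: summarize downloaded forecasts for one study and
# compare them to the realized estimate. File name and column are assumed.
import pandas as pd

preds = pd.read_csv("sspp_predictions.csv")   # one row per forecast (hypothetical file)
realized = 0.18                               # the study's eventual estimate (hypothetical)

# Distribution of predicted effect sizes across forecasters.
summary = preds["predicted_effect"].describe(percentiles=[0.1, 0.5, 0.9])
share_below = (preds["predicted_effect"] < realized).mean()

print(summary)
print(f"Share of forecasts below the realized effect: {share_below:.0%}")
```

A summary like this makes it easy to report, for instance, what share of forecasters expected a smaller effect than the one eventually observed—one way of showing how surprising a result really is.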
To learn more about key issues to consider when designing surveys for collecting forecasts, as well as how to use the platform, see our Forecasting Survey Guide. The SSPP team is also offering consulting support for those who need it—contact Nick Otis at support@socialscienceprediction.org for help.
Two types of accounts allow users to access different functionalities based on their interests and experience. Basic accounts allow users to view available studies and make predictions, while Researcher accounts are restricted to researchers at academic or other institutions and allow users to upload and distribute surveys.
To encourage predictions, we will offer the first 200 graduate students $25 for their first 10 predictions, re-evaluating as more projects and predictions are added to the SSPP.
Though widely applicable across research designs and disciplines, the SSPP may be particularly useful for complex, flagship projects that are unlikely to be replicated, or for projects that have been offered “in-principle acceptance” as part of a journal's registered reports track.
These projects need your predictions!
As part of the SSPP’s soft launch, three projects are available for prediction:
● Can Information and Alternatives to Irregular Migration Reduce “Backway” Migration from the Gambia? Accepting predictions until August 31, 2020. Tijan L. Bah, Catia Batista, Flore Gubert, and David McKenzie test the effectiveness of three interventions in deterring risky and irregular migration, involving some combination of information, financial support, and vocational training. See this project’s pre-analysis plan here.
● How do policy-makers and development practitioners weigh impact evaluation results? Accepting predictions until July 24, 2020. Aidan Coville and Eva Vivalt leverage a discrete choice experiment with low- and middle-income country government officials and professionals at the World Bank and Inter-American Development Bank to examine how different kinds of evidence are weighed.
● How persistent are the effects of two depression treatment interventions in India? Accepting predictions until January 11, 2021. Gautam Rao, Frank Schilbach, and Heather Schofield survey participants from two past RCTs conducted by psychiatrists, measuring outcomes 3-5 years later, including mental health, economic well-being measures such as consumption, and behavioral outcomes such as over- or under-confidence, belief updating, social preferences, and risk and time preferences.
Since the SSPP is the first of its kind, this launch will be a learning process. We are discussing how best to address the following open questions and welcome feedback from the social science community!
● What role should the SSPP play in coordinating or facilitating incentives? Cash payments and gift cards tend to be the most commonly used survey incentives, but other systems, like prediction accuracy leaderboards, might work just as well, if not better, for academics and other scientists whose expertise is an important form of capital.
● Relatedly, what role will reciprocity play in contributing predictions, and how should we encourage it? We are considering how many predictions users should be encouraged to provide for each survey they upload. As forecasting becomes the norm, we hope to foster a shared sense of responsibility for both eliciting and contributing predictions.
● Should the SSPP assign Digital Object Identifiers (DOIs) to projects or predictions? The platform assigns unique identifiers to projects, but we are discussing what value DOIs may add.
● How should the SSPP share results back with forecasters, policymakers, or the public? Are some reporting aspects, such as design, timing, or level of detail, more helpful than others in facilitating the updating of priors? Should the SSPP also play a role in understanding belief updating?
Send us your suggestions, questions, and comments and help us make the platform as useful as possible. Follow us on Twitter @socscipredict and @UCBITSS, or join the discussion using #socsciprediction. And sign up on the platform to get started!