Published on Development Impact

Pre-results Review at the Journal of Development Economics: Lessons learned so far

This is a guest post by Andrew Foster, Dean Karlan, Edward Miguel and Aleksandar Bogdanoski

About the JDE Pre-results Review pilot

In March 2018, the Journal of Development Economics (JDE) began piloting a Pre-results Review track (also referred to as “registered reports” in other disciplines) in collaboration with the Berkeley Initiative for Transparency in the Social Sciences (BITSS). Through this track, the JDE reviewed and accepted detailed proposals for prospective empirical projects before results were available, offering a commitment to publish the resulting papers regardless of their findings.

The motivation behind this exercise was simple: both science and policy should reward projects that ask important questions and employ sound methodology, irrespective of what the results happen to be. By shifting the bulk of the review process (including an editorial decision to accept a paper) to before any results are known, pre-results review has the potential to advance scholarly rigor and transparency by empowering authors to conduct research and report their results transparently, free from fear that certain results could render their papers “unpublishable.” Compared to other sub-disciplines within economics, development economics has long been an area of methodological innovation and rigor, from the widespread adoption of randomized controlled trials (RCTs) starting over two decades ago, to the promotion of study registration and pre-analysis plans in the past decade. Pre-results review is a natural addition to this arsenal of open science tools.

Since this was (to our knowledge) the first effort to implement pre-results review in an economics journal, we decided to introduce it as a pilot in the hope of learning about its compatibility with existing scholarly conventions in the discipline and its feasibility at the journal. A little over one year in, we reflect on the experience so far and offer our perspectives on the future of pre-results review at the JDE and in the discipline as a whole. Given the largely positive experiences so far, the JDE has decided to make pre-results review a permanent track for article submission at the journal. 

JDE Pre-results Review by the numbers

A little background on the terminology used in Pre-results Review could be useful. Specifically, pre-results review is based on a two-stage process: in Stage 1, authors submit a detailed analytical plan (formally referred to as a “Proposal”) for a prospective empirical study for which data have yet to be collected or analyzed. In Stage 2, the authors submit the full paper including their results and interpretation.

The reaction to the JDE pre-results review track so far has been overwhelmingly positive. Since the launch of the pilot, we have received 46 submissions in this track. By the time this post is published, editorial decisions will have been made for 65% of the 46 Stage 1 submissions, including 5 acceptances and 25 rejections, a 16.7% acceptance rate among adjudicated submissions (see Table 1 below for more details). In comparison, the JDE receives over 100 submissions per month on average through its regular article submissions track, of which 6-8% are accepted for publication. (Given inevitable time lags in field data collection and write-ups, we are still awaiting our first Stage 2 full-paper submission.)

Since submissions to this track involved projects that were either in their early stages or about to begin data collection, it was also important for peer review to be as expeditious as possible. Stage 1 peer review has taken 70 days on average from submission to final editorial decision for the 30 submissions that have been adjudicated. This is slightly shorter than the standard peer review track for full-length articles at the JDE, which took 77 days on average in 2018, and we expect to shorten the process further as the editorial team and our reviewers become more familiar with the pre-results review process. The review process for the five submissions that were accepted took 136 days on average, which also accounts for the time taken by the authors to revise and resubmit their proposals based on referee and editor feedback.

Based on similar experiences with pre-results review at other journals, we anticipate that Stage 2 review will be faster, especially since, by that point, a significant portion of the work will have already been thoroughly reviewed in Stage 1.

 

Table 1: Metrics for submissions in the pre-results review track at the JDE, April 2018 - July 2019.

Number of Stage 1 submissions
  Total Stage 1 submissions: 46
    • Accepted: 5
    • Rejected: 25
    • Under review: 16
  Adjudication rate: 65.2%

Average Stage 1 peer review duration (in days)
  All adjudicated: 69.7
  Accepted: 135.8
  Rejected: 56.5
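The headline figures in Table 1 follow directly from the raw counts; a quick sketch (with the counts and group averages hard-coded from the table) shows how the rates and the overall average duration are derived:

```python
# Counts and group averages taken from Table 1.
accepted, rejected, under_review = 5, 25, 16
total = accepted + rejected + under_review      # 46 Stage 1 submissions
adjudicated = accepted + rejected               # 30 final decisions so far

adjudication_rate = 100 * adjudicated / total   # share of submissions with a final decision
acceptance_rate = 100 * accepted / adjudicated  # share of decisions that were acceptances

# The all-adjudicated average duration is the decision-weighted mean of the two groups.
avg_review_days = (accepted * 135.8 + rejected * 56.5) / adjudicated

print(round(adjudication_rate, 1))  # 65.2
print(round(acceptance_rate, 1))    # 16.7
print(round(avg_review_days, 1))    # 69.7
```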

 

What we’ve learned so far

The JDE Pre-results Review pilot has allowed us to learn a lot about the ins and outs of this novel process, and we were able to make adjustments along the way. Below are seven takeaways that may be of interest to editors of other economics journals and the research community as a whole.

1. Pre-results Review has been challenging -- and that’s exactly why we are doing this. 

Referees (and editors) have found it hard on several occasions to judge submissions, often articulating reasons that are exactly why we think pre-results review is a good idea: the comfort and ease of seeing a full set of tables makes it “easier” to adjudicate on a paper. It’s a crutch, almost. Yet when the final research findings are known, it is impossible to fully safeguard against hindsight bias. The disciplining device of not knowing the final results is definitely challenging, but it’s a challenge we still believe is worth overcoming. 

2. Authors should submit their work for Pre-results Review well before conducting data collection of key outcome measures.

Work should be submitted relatively early for two reasons: (a) to maximize the possibility that feedback can be used in shaping final data collection, and (b) to ensure enough time for a revise-and-resubmit round before the final data come in. This means submitting at a minimum three months, but ideally six months to a year, before collection of the key outcome data. All participants (authors, referees, and editors) must be blind to the key outcome variables at each stage of revision of the Stage 1 submission. (Note: it is typically quite fruitful to have baseline data in hand when submitting a proposal for pre-results review: baseline data provide useful information on the nature of measures, including their means and variances, which can help researchers understand the context and carry out meaningful statistical power calculations.)

3. Many authors have found Pre-results Review helpful in improving the ultimate quality of their research papers. 

Since preparing a Stage 1 submission can require a significant up-front investment from authors, it’s important that the pre-results review process is seen as valuable, transparent, and fair, regardless of the outcome. To learn more about the authors' experiences, BITSS staff interviewed 12 authors whose work had received a final editorial decision. When reflecting on referee feedback, almost all respondents agreed that going through the process helped improve the overall quality of their research projects. “Submitting the paper for pre-results review forced my co-author and I to think more carefully about the design of the analysis, down to the details, and thanks to feedback from the referees we will be able to address potential issues in the write-up of the paper,” said Dr. Julia Vaillant, a researcher at the World Bank. In a similar vein, Prof. Eric Edmonds of Dartmouth College commented on the pre-results review guidance resources: “The Stage 1 submission template that the JDE put together helped me improve drastically what I thought was already a really good pre-analysis plan.” Finally, describing the differences between referee feedback in pre-results review and conventional peer review, Dr. David McKenzie of the World Bank pointed out that “referee reports are shorter and are written in a more constructive tone.”

4. Reasons for rejection in Pre-results Review are similar to those in the regular review process.

Though pre-results review makes it possible to address some research design flaws before data are collected, certain issues have led several papers to be rejected (these are often reasons to reject a paper in the regular review process, as well). Based on our experience so far, we recommend that authors carefully consider the following to minimize the chances of rejection:

a. Power calculations should be justified and informed by insights from the earlier literature, and proposals should be powered to detect effects that are theoretically meaningful. Pilot and baseline data can inform the targeted effect size.

b. Though Stage 1 submissions don’t include results, authors should still make a strong case for the importance of the research question in terms of its potential contribution to the literature -- regardless of the results.

c. Stage 1 submissions should be treated as a record of what will be reported in the full paper at Stage 2: all analyses must be outlined in enough detail to constrain researcher degrees of freedom and make it possible to distinguish between pre-specified confirmatory findings and additional exploratory analyses (more on this in the section below).
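On point (a), the kind of minimum detectable effect (MDE) calculation referees look for can be sketched with the standard two-sample normal approximation. This is only an illustration: the function name and the inputs (a standardized outcome with SD 1.0 and 500 units per arm) are hypothetical, not drawn from any JDE submission or prescribed by the journal.

```python
from statistics import NormalDist

def mde_two_sample(sd, n_per_arm, alpha=0.05, power=0.80):
    """Minimum detectable effect for a two-arm trial with equal allocation,
    using the standard normal approximation (no covariate adjustment)."""
    z = NormalDist().inv_cdf
    critical = z(1 - alpha / 2) + z(power)           # z_{1-alpha/2} + z_{power}
    return critical * (2 * sd ** 2 / n_per_arm) ** 0.5

# Hypothetical baseline inputs: standardized outcome (SD = 1.0), 500 units per arm.
print(round(mde_two_sample(sd=1.0, n_per_arm=500), 3))  # 0.177 standard deviations
```

Baseline data enter through the SD estimate; for clustered designs the variance term would also need a design-effect adjustment.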

5. Pre-results Review has yet to motivate exploration in “riskier” topics and research designs, but may eventually do so.

We expected that the existence of a pre-results review submission track would eventually incentivize researchers to take on projects that they perceive as risky in terms of methods, topics, and contexts, since in-principle acceptance offers the promise of publication regardless of the nature of the findings. Of course, it is difficult to objectively assess the “riskiness” of a research project. But our initial impression is that the Stage 1 submissions to the JDE so far do not appear to be substantially more (or less) risky than papers submitted through the normal journal review track. Pre-results review does appear to be particularly well suited for RCTs, especially when plans for data collection and analysis can be laid out in advance in great detail, as they typically are these days in pre-analysis plans. Pre-results review for observational studies -- which still comprise around 80% of the publications in top economics journals -- has yet to be tested at the JDE, although we welcome such submissions, and there is some precedent for approaches analogous to pre-results review for prospective non-experimental research (see, for example, Neumark (2002) and the 2016 Election Research Preacceptance Competition).

6. JDE’s Pre-results Review process reflects conventions specific to economics in some respects.

When we launched this pilot, just over 90 journals were accepting articles for pre-results review (this number has since grown to 203, and counting...). As the majority of these are psychology and other behavioral science journals, “registered reports” have taken on many features that reflect scholarly conventions in those disciplines. Likewise, pre-results review at the JDE has three distinguishing features characteristic of economics, and in particular of the subfield of development economics.

First, Stage 2 of review at the JDE will likely allow for some flexibility in interpreting deviations from research designs accepted at Stage 1. This flexibility may be at odds with emerging best practices in other disciplines. For example, the COS Generic Author Guidelines for Registered Reports (used by most psychology journals) state that “any deviation from the stated experimental procedures, regardless of how minor it may seem to the authors, could lead to rejection of the manuscript at Stage 2.” Our desire for flexibility is due to the nature of development economics field experiments, which are often far more susceptible to challenges that are beyond researchers’ control -- such as unpredictable political and policy changes, difficulties reaching subjects in remote areas, migration of subjects between survey rounds, etc. -- than is typically the case for U.S.-based lab experiments, for example. The flipside of this willingness to be flexible is that the JDE will require authors to exercise the utmost transparency in reporting any circumstances that could prevent the study from being implemented as proposed, including providing detailed explanations about how the final article is still able to successfully address the original questions despite these challenges. For journal editors, this will be a learning process but one in which we aspire to make sure that excessive rigidity does not stand in the way of the accumulation of knowledge. We recognize that, in development economics, much can be learned about a research question through the process of conducting fieldwork. This prompts us to reiterate that we welcome serendipitous findings, while also asking for transparency: researchers should distinguish confirmatory, pre-specified tests from exploratory analyses.

Second, because field experiments study human interactions in a real-world context, precautions need to be taken to prevent biasing participants’ behavior. We’ve been able to address this for one study accepted based on pre-results review by masking certain details of the research design (that, if widely circulated, could potentially affect participant behavior), and we will continue to work with authors to address similar issues moving forward. This means that we may not always be able to fully publicize accepted Stage 1 proposals until fieldwork is completed, even though this may be at odds with what is recommended in other disciplines that have adopted pre-results review. Nonetheless, the accepted Stage 1 proposals will be posted alongside each full-length paper when it is eventually published in the JDE.

Finally, since development economics experiments can literally take years before data collection is complete, it may also be a while before the JDE is able to publish the first paper in the pre-results review track. The nature of the timing of our studies in turn may make acceptance based on pre-results review even more attractive for development economists than it might be for scholars in other fields and disciplines. For instance, in experimental economics and in psychology, researchers tend to publish more frequently and operate on far shorter timelines. Being offered pre-results acceptance upfront may make the long process of fieldwork (and all the challenges involved) more rewarding, especially for young scholars who are on the job market or being considered for promotion at their home institution.

7. It took a village to make the pre-results review process run smoothly.

The JDE Pre-results Review pilot’s success to date has been possible thanks to the support of a number of different groups. BITSS has been instrumental in helping us develop a suite of needed editorial resources; we believe this represents the biggest up-front investment for journals launching this format. These resources include Author Guidelines, Frequently Asked Questions (FAQs) for authors and reviewers, and a Stage 1 Proposal Template. Based on other journals’ experiences, we also knew that authors would need continuous support throughout this (new) process, so BITSS provided much-needed help conducting outreach and answering questions from interested authors. We are also indebted to our reviewers, who have been supportive and ready to provide constructive feedback to authors, even when dealing with a pre-results review process that was outside their comfort zone and can be more demanding than conventional peer review. Finally, we were very fortunate that our journal manager was supportive and patient as we explored ideas and made amendments throughout the pre-results review pilot. Editors looking to implement pre-results review at their own journals would certainly benefit from gaining support and buy-in from the wider research and publication community upfront.

Moving forward

We’ve learned a lot from the JDE Pre-results Review pilot experience one year in, and consider it a success. Several open questions remain, especially regarding how Stage 2 of the review process will be carried out (as a reminder, this is when papers that were granted in-principle acceptance will be re-reviewed, now with results in hand). Will authors continue to explore their data and refine their analyses based on what they find, or will having pre-results acceptance (unfortunately) disincentivize additional investment in creative exploratory analyses, as suggested by experience in another discipline? Will development economists eventually move into riskier research terrain, now that a pre-results review track is available at a journal like the JDE? We have also adopted a flexible policy with respect to submission of Stage 1 acceptances to other journals, at the authors’ discretion. Will authors ultimately publish their research in the JDE, or will a substantial share of the research eventually appear in other journals instead? If the latter, will this diminish the incentive of journals to adopt pre-results review? The answers to these and other related questions will influence how pre-results review is implemented moving forward, both at the JDE and (we hope) at other economics journals. For this reason, we will continue to welcome submissions, questions, and feedback as pre-results review becomes a permanent track at the JDE.

We also believe that further innovation and collective action are critical to fully realizing the potential of pre-results review and, more broadly, to continuing to bolster the transparency and rigor of economics research. There are many welcome developments on the horizon. For one, we welcome the launch of a pre-results review special issue at Experimental Economics, which is, we believe, now the second economics journal to adopt this submission format. And we encourage other economics journals, across subfields, to take up pre-results review. Journals that seek to do so can learn from our experiences and make use of the extensive resources from our pilot, which should give them a head start.

Also don’t hesitate to send us your questions if you want to learn more about the JDE experience, or if you have any suggestions for making this review track work better for authors, reviewers, and research consumers alike!

