Working Papers are NOT Working.

In research, as in life, first impressions matter a lot. Most sensible people don’t go on a first date disheveled, wearing sweatpants and their favorite raggedy hoodie from their alma mater, but rather wait to break those out well into a relationship. Working papers are the research equivalent of sweatshirts with pizza stains on them, but we wear them on our first date with our audience.

It is common practice in economics to publish working papers. There are formal working paper series such as NBER, BREAD, IZA, the World Bank Policy Research Working Paper Series, etc. With the proliferation of the internet, however, people don’t even need these formal series. You can simply post your brand new paper on your website and voilà, you have a working paper: put that into your CV! Journals are giving up double-blind refereeing (AEJ is the latest) because it is too easy to use search engines to find the working paper version (see the recent comments on Blattman’s blog, which make it look far from clear that giving up on double-blind peer review is a good idea). But do the benefits of making these findings public before peer review outweigh the costs? I recently became very unsure…

In economics, publication lags, even for journals that are fast, can be long: it is not uncommon to see articles that state: Submitted December 2007; accepted August 2010. I’ll grant you, a fair bit of that period may be due to the fact that the authors sat on a resubmission, but it is common to wait 4-5 months for a first decision, and then similar times for subsequent decisions on revise and resubmits. But, research findings are public goods and working papers are a way to get this information out to parties who can benefit from the new information while the paper is under review.

But, that assumes that the findings are ready for public consumption at this preliminary stage. By preliminary, I mean papers that have not yet been seriously reviewed by anyone familiar with the methods and the specific topic. Findings, and particularly interpretations, change between the working paper phase and the published version of a paper: if they didn’t, then we would not need peer-reviewed journals. Sometimes, they change dramatically. (BTW, the promise that the blogosphere would serve as the great source where we get many good comments on our working papers simply has not come true. Useful comments require time and careful reading, which is not how stuff online is consumed.)

Now, back to the point about first impressions: When a new working paper comes out, especially one that might be awaited (like the first randomized experiment on microfinance), people rush to read it (or, rather, skim it). It gets downloaded many times, gets blogged about, etc. Then, a year later a new version comes out (maybe it is even the published version). Many iterations of papers simply improve on the original premise, provide more robustness checks, etc. But, interpretations often change; results get qualified; important heterogeneity of impacts is reported. And sometimes, main findings do change. What happens then?

People are busy. Most of them had only read the abstract (and maybe the concluding section) of the first draft working paper to begin with. Worse, they had just relied on their favorite blogger to summarize it for them. But, guess what? Their favorite blogger has moved on and won’t be re-blogging on the new version of the working paper. Many won’t even know that there is a more recent version. The newer version, other than for a few dedicated followers of the topic or the author, will not be read by many. They will cling to their beliefs based on the first draft: first impressions matter. By the time your paper is published, it is a pretty good paper – your little masterpiece. The publication will cause an uptick in downloads, but still, for many, all they’ll remember is the sweatshirt, and not the sweat that went into the masterpiece.

Of course, we can update working papers. But, unless we can alert everyone that there is a new version of a paper (AND make them read it and understand the changes since the first draft), this is of little use. Even when I am specifically looking for more recent versions of a paper, I am usually unable to find the most recent one with a simple Google search (Try it here for the Miracle of Microfinance. Now, go to Duflo’s web page for her papers and look for the same paper: what do you see?). Also, some papers remain working papers for a long time: this one by Duflo, Dupas, and Kremer, which came out a couple of weeks ago, first appeared in 2006 and was updated in 2010. The authors likely did not intend to publish the findings until now (they were collecting biomarker data on STIs until recently, but kept the public informed on short-term and medium-term impacts of the interventions on schooling and fertility). The findings, naturally, seem to have evolved.

There is another problem: people who are invested in a particular finding will find it easier to take away a message that confirms their prior beliefs from a working paper. They will happily accept the preliminary findings of the working paper and go on to cite it for a long time (believe me, well past the updated versions of the working paper and even the eventual journal publication). People who don’t buy the findings will also find it easy to dismiss them: the results are not peer-reviewed. At least, the peer-review process brings a degree of credibility to the whole process and makes it harder for people to summarily dismiss findings they don’t want to believe.

I have some firsthand experience with this, as my co-authors and I have a working paper, the findings of which changed significantly over time. In March 2010, we put out a working paper on the role of conditionalities in cash transfer programs, which we also simultaneously submitted to a journal. The paper was reporting one-year effects of an intervention using self-reported data on school participation. The reviews, which were fast (as good as it gets at about a month), suggested that we should not only report longer-term data but also use alternative measures of schooling – less subject to reporting bias. We followed this advice and updated our working paper, which now presented two-year impacts using enrollment and attendance data collected from schools, in addition to independent achievement tests, in December 2010 and resubmitted it to the same journal, again simultaneously. After one more revise and resubmit, the paper is now forthcoming, and the final version (more or less) can be found here.

What’s the problem? Our findings in the March 2010 version suggested that CCTs that had regular school attendance as a requirement to receive cash transfers did NOT improve school enrollment over and above cash transfers with no strings attached. Our findings in the December 2010 version DID. The difference was NOT that we had longer-term data: if we use self-reported enrollment to examine one-year or two-year impacts, the results are the same (see Table III, panel A in the paper linked above). Rather, the difference was caused by the kind of data that we were using: we supplemented self-reports with administrative data, enrollment data collected from schools, monthly attendance ledgers, and independent achievement tests in math and languages. These additional data all lined up to refute the findings based on self-reported school participation. It turns out that asking school-age people whether they are attending school is not the best way of assessing impacts of schooling interventions (a paper I have with Sarah Baird on this is forthcoming in a special issue of the JDE on measurement, and I blogged earlier about similar evidence here).

However, the earlier (and erroneous) finding that conditions did not improve schooling outcomes was news enough that it stuck. Many people, including good researchers, colleagues at the Bank, bloggers, and policymakers, think that UCTs are as effective as CCTs in reducing dropout rates – at least in Malawi. And, this is with good reason: it was US who screwed up, NOT them! Earlier this year, I had a magazine writer contact me to ask whether there was a new version of the paper because her editor uncovered the updated findings while she was fact-checking the story before clearing it for publication. As recently as yesterday, comments on Duncan Green’s blog suggested that his readers, relying on his earlier blogs and other blogs, are not aware of the more recent findings. Even my research director was misinformed about our findings until he had to cite them in one of his papers and popped into my office.

Many working papers will escape this fate, which is definitely not the norm. But, no one can tell me that working papers don’t improve and change over time as the authors are pushed by reviewers who are doing their best to be skeptical and provide constructive criticism. It turns out, though, that those efforts are mainly for the academic crowd or for the few diligent policymakers who are discerning users of evidence. We don’t approve drugs based on a news release of the success of a trial. We need peer review to confirm the findings (and further studies before approval). Why is it OK to prescribe economic policy based on a working paper? Are we sure that the people who are doing the prescribing have all the information they need? Is it because bad economic policy kills people slower than a bad drug?

So, what if we chose to not have working papers? There is no doubt that the speed with which journals publish submitted papers would have to change. Some journals pay reviewers: this could become more prevalent to encourage speedy but thorough reviews. And, these days, journal articles, with all the requested online appendices, the data, do-files, etc., are much more attractive than working papers, and I don’t think they are more academic. If you can write well and make your findings accessible for policymakers, you do equally well via a journal article as via a working paper.

If we didn't have working papers, we could also go back to double-blind reviews again. No, it won’t be perfect, but double-blind was there for a reason. I see serious equity concerns with single-blind reviews (Those of you out there who receive a paper to review: if you are not sure who the authors are by the time you read the abstract, please resist the urge to Google the title). This should be our default position until we study the effects of single- vs. double-blind reviews in economics a bit more.

The biomedical field does not have working papers and turnaround, on average, is much quicker. Colleagues from this field never understand how we have unpublished papers for so long, even though they have been aware of the results sometimes for years. People have recently been calling for economics to borrow trial registration, CONSORT guidelines, etc. from the biomedical field (I have my doubts that these would adequately address the issues). Let’s borrow faster publications instead, without sacrificing the quality of peer review if we can.

Update (7/5/2011): Today, Dave Johns has the perfect follow-up to this post, in an article called "Social contagions debunked":


Submitted by Anonymous on
The quality of the products is admittedly variable for working papers, which is why they are referred to as "working papers". Nevertheless, they remain a very important means to relay information as it evolves and becomes available. Updates can be made through web publishing rather simply. Disclaimers lay out the caveats to working papers, so it is up to the user to decide on the utility of the information and its appropriateness for their end use. As things stand, the formal publication process in the Bank is far too onerous. This is restricting important information that could otherwise make meaningful contributions to our work and help our clients. It is up to the publisher and authors to ensure the quality of the work. Perhaps it would be better to put in place some reasonable standards for working papers, rather than turning off the information flow.

Submitted by Helen Abadzi on
The ongoing nature of research and literature reviews means that many papers are in effect perpetual drafts. The situation where a specific study revised its findings seems a bit rare. Overall, I have seen much more benefit in reading the drafts than waiting for the formal papers, years after submission. A bigger issue about potentially unstable results is the limited external validity of economic findings. Before asking whether CCTs should have certain results, it is useful to study research from relevant constructs in psychological research. Those usually help create a chain of causality that can explain, predict, and guide implementation. Rather than experiencing flip-flop impressions, the chain of causality can guide explanations.

Submitted by Helen Markelova on
Overall, I agree with your assessment of the value of working papers, especially when it comes to using and citing them in your own research. However, as someone who was closely associated with a working paper series at my previous job at a policy research institute, I have to propose a few thoughts in defense of working papers. 1) Not all working papers are made equal---the working paper series that I worked on had a mandatory review process by 2 reviewers, which in most cases was pretty rigorous (some papers were rejected or sent back for major revisions). The program leader did another final review and often sent the paper back to the authors for more revisions. (Note: this is an interdisciplinary working paper series). 2) As you mention, journals do take a long time to publish an article, but in many cases, there needs to be a "deliverable" for a donor---instead of spending time writing a useless donor report that perhaps one person at a donor organization will read, why not draft a working paper (and then pluck out what you need for a donor report), publish it (with a clear disclaimer that it is a working paper, i.e. preliminary findings), and then proceed with the journal publication? In my experience, donors were always happy to see the results of a study published as a paper, since it meant additional outreach and policy impact opportunities. 3) As you mention in your post, it lets others know about the work you are doing, which can promote collaborations, useful input, create new outreach opportunities, etc. With the development literature, many of the readers are developing country professionals, academics, students, even policymakers, who may not have access to academic journals. The working paper series that I mention had the same download rates as one of the leading development journals (mentioned to us by one of the journal's editors). 
4) Finally, if someone is working on collaborative research projects with developing country collaborators, being a co-author on a working paper is a big deal to them (in terms of their careers and professional development) since they may never publish in a journal. Some of the working papers submitted by our collaborators were not of the same quality as the ones submitted by established researchers (even though all went through a 2-person review process), but isn't capacity building part of development work? I have witnessed papers submitted by our developing country collaborators go from an unreadable stream of thoughts to a decent and well-presented research paper, via the review and revisions process. These are just some of my observations coming from being connected with a particular working paper series at a place interested in developing country collaborations and policy impacts, so it may not apply to all cases of working papers.

Submitted by Berk Ozler on
Hi Helen, Thanks for these thoughtful comments. I am in full agreement with your comments. A few further thoughts: 1. Your WP series was, in a practical sense, a journal. It was peer-reviewed, it actually rejected papers, and was read widely. This is true for some other WP series as well. The World Bank's working paper series has some less rigorous requirements before publication (clearance by managers, etc.) and is also widely read. The WP version of a paper will, in most cases, be much more read than the eventual journal article. 2. Your series, as many others, is much faster than journals. But, what is preventing academic journals from being faster? If we did not have WPs, they would have to be. Even with the WPs, the last 5+ years have seen a big decline in turnaround times in econ journals, and many now boast (a) high desk rejection rates, and (b) fast decision times (approx. one month). 3. What to produce for donors is a big one. While trying to instill some patience among donors (especially by slowing down project cycles among large donors) would be worthwhile, you're right that you might as well try to write a decent paper rather than a report. My question was whether you want to then give it the legitimacy of a publication by calling it a working paper. Most people will read your paper only once (if that). Which version do you want that to be? Thanks again for taking the time to post some comments on our blog... Berk.

Submitted by Anonymous on
Look at the Morduch-Roodman paper on microfinance. The headline caught fire, and Roodman used his finding (based on wrong coding) in a Congressional hearing to say that they could not find any impact of microfinance in Bangladesh. The damage was already done. And eventually these so-called scientific results were used by critics and politicians in India and Bangladesh to go after the microfinance industry.

Submitted by Berk Ozler on
Good example - thanks. Had they gone through some peer review first, and I believe more than one person has made this point, that coding error might have been caught before going public, which in turn would have changed the whole tenor of the debate. This also is related to Helen Abadzi's point that it is not all that rare for these things to happen. We likely just don't hear about many of them (or they don't matter)...

Submitted by Tim Ogden on
Berk I don't know how I've become an associate defender of the Morduch Roodman paper on World Bank blogs, but anyway... I continue to contend that this critique is based on a near willful misreading of the Morduch and Roodman work. Their contention was never based on the opposite sign they found, but on the insufficiency of the Pitt Khandker work in demonstrating causality. Their citation of the opposite sign finding was to show that the results were highly dependent on assumptions and statistical manipulations. They NEVER claimed the opposite sign was a valid finding. However, they did contend, and still do, now with more robustness than ever, that the Pitt Khandker paper does not establish causality. Tim

Submitted by Berk Ozler on
Hi Tim, The last thing I want to do here on the DI blog is to get into that debate. I could barely stand reading some of the very long exchanges -- like watching a train wreck... You may well be right and I know some will disagree with you (I don't know enough detail to chime in, nor would I want to). But, my point remains that many working papers, most likely including that of Morduch and Roodman, have room for improvement and would benefit from peer review before hitting the airwaves. Thanks for the comments. Berk.

Berk, before Pitt pointed out the key errors in our replication, the paper had gone through *two* peer reviews at two different journals, neither of which turned up serious problems. The first of these reviews, at JPE, was done by Pitt himself. Pitt's second attempt earlier this year finally found two real discrepancies that mattered---and was *not* a peer review. This is hardly an example of how peer review before circulation of a working paper would have helped. On the contrary, the errors were only found because we put the working paper out there and because we were fully transparent about our methods, posting data and code. Meanwhile, they only occurred because of a relative lack of openness on the part of Pitt and Khandker. They did not share their code in the way required by current JPE policy. All in all, this example argues for more transparency in research, not less. --David

Submitted by Berk Ozler on
Hi David, Thanks for the explanation. As I said before, I don't know enough about the details of this case and I would prefer to not get into it. However, I don't think any of the 8,000 or so people who read my original post would see it as a call for less transparency. I simply questioned the role of working papers in economics -- pros and cons. Sincerely, Berk.

Berk, the post seems to propose that we contrast the status quo with a world without (or with fewer) working papers. The hypothetical alternative would be a world in which the ongoing work of individual researchers is less visible to others. This might reduce the risk of misimpressions, but it would increase the risk that opportunities for improvement would be foreclosed by eliminating the opportunity for uninvited reviewers and commentators to chime in. So I agree, there are pros and cons. But surely making research less visible to the public can reasonably be described as making it less transparent.

Submitted by Berk Ozler on
The tradeoff we are discussing is an empirical question, and I am not sure we know the answer as to which way we should tilt. If the goal is transparency (as you define it above) in and of itself, it would be optimal to blog about new results on a daily basis for an ongoing study. This, however, would not be optimal if we're trying to maximize the absorption of robust research findings by the public: daily updates that may bounce around (due to coding errors, flawed reasoning, robustness checks, etc.) would leave many simply confused. There is a reason why many people want short policy notes, executive summaries, etc. That's why blogs are becoming more and more popular. I believe that transparency has to do with openness about the research methods, which include the data and the code (in empirical work), the method of analysis, the assumptions, etc. It does not, at least not to me, have to do with the speed with which people put stuff out there or at what stage of their research. It is not clear that working papers improve either the quality of information that is disseminated to the public or the quality of the work itself. Most working papers don't even fit the criteria you lay out above: they don't come with the data and code. Whereas many journal articles these days do: you even cited the JPE policy above. I see no fundamental reason why publications have to be more opaque than working papers -- as long as we can replicate them (as you and Jonathan did with the PK paper), we're fine. More and more journals require the authors to make the data and code public at the submission stage, so we're presumably moving in the right direction. If we can now only speed things up and make journals open to everyone... Sincerely, Berk.

Submitted by Berk Ozler on
An interesting related link (and a paper) about researchers from top econ departments sidestepping the general interest journals (partly because they still get widely cited through other means, such as WPs):

Submitted by Tim Ogden on
Would it not be worthwhile, a la "The Email Charter", to produce a "Working Paper Charter" that sets some community standards and expectations to help overcome some of the limitations of the working paper form? For instance, I can imagine a cover page that lists versions and changes in findings--a simplified version of what Wikipedia does to track edits. That would be particularly helpful for papers like Duflo, Dupas, Kremer, which, if I recall correctly, originally suggested that the HIV curriculum increased rates of STI infection.

A co-author posted a draft version of our paper as a working paper, and after being accepted at a respected journal, the offer has been rescinded because they found an online version (I didn't even know it was there!). Here is more on the saga:

Submitted by Berk Ozler on
Hi Kim, So sorry to hear that. This is the case with the biomedical journals that I am familiar with: if you choose to put out a working paper, you have pretty much foregone your right to a journal publication. As far as I understand it, their definition of original work and copyright includes anything online -- it wouldn't even have to be in a working paper series. Good luck with your paper (and impending tenure) and please give my best to Susan. Berk.

Submitted by Berk Ozler on
I like that. Some people (but not all) do list the versions on the cover page, but listing the changes in findings is a good idea. I wonder how easy it would be to implement. Sometimes, new findings are added, there is a new emphasis, a model, heterogeneity, scaling down of certain discussions, etc. Might be hard to summarize all that neatly... As a side note, however, we'd like to thank everyone who is taking the time to read and comment on our blog. This type of exchange of thoughtful ideas and brainstorming is exactly what we hoped for when we started this blog a few months ago. Berk.

Submitted by Alice on
Hello Berk, I'm not sure the issue is primarily to do with 'working papers' per se so much as improvement over time. I had previously read your earlier working paper and was unaware of your subsequent improvements upon it. (I made reference to Duncan's blogs not because this was my only route of access but to remind him of his mention of the paper). Suppose your earlier working paper had been peer-reviewed and published. (While your subsequent research is certainly an improvement, I don't think it's impossible that the working paper could have been published, given that so much accepted research uses indicators that are self-reported, e.g. on poverty: consumption, income, expenditure, etc.). I would have then made the same remark: recalling a previous paper I had read, which unbeknown to me (given my lack of omniscience) had been improved upon. So I'm not convinced the problem is with the existence of working papers, but rather with our lack of omniscience. Given the advantages of working papers (especially feedback), I'm inclined to think the solution is not to move away from them, but rather to try to ensure people know of subsequent improvements - just as you did in your reply to my comments on Duncan's blog. Alice

Submitted by Berk Ozler on
Hi Alice, Thanks very much for the comments. You're absolutely right and I mostly agree with you. Two small additions, however. First, while peer review does not make papers perfect, on average, it improves them. And, given that many people treat evidence from working papers as almost as good as publications, the quality of follow-up policy advice suffers more. Second, I am not convinced that the feedback we may get for working papers is better than feedback from 2-4 anonymous reviewers whose job is to give exactly that kind of feedback. This is an empirical question, but at least for me, the supposed advantage of working papers made public online has not panned out in terms of providing me with ideas for substantive improvements. My feeling is that people bring out working papers for dissemination rather than substantive feedback. You might get good feedback from close colleagues, others you sent the paper to and specifically requested feedback from, or from presenting the work at academic seminars. Glenn Ellison (from MIT) has done some nice work on this topic and has a paper called "Is Peer Review in Decline?" I highly recommend it.

Submitted by Anonymous on
You just have to accept the fact that this is a digital age, and the speed of the journal review process is simply not catching up. Think about the financial crisis: the worst recession since the 1930s happened so quickly, in a couple of months. Without working papers there would be no reasonable exchange of ideas and data on the crisis. If someone submits a paper at the beginning of the crisis, it may get published after the recession ends. The problem is not with working papers, but with the journal system.

Submitted by Vivek Prakash on
What is the goal of Working Papers? If it's dissemination, I think they've been successful. (And in fact, more successful than journal articles simply because it's a faster, more flexible process.) If it's substantive feedback, then I think the current mechanisms to distribute the papers are poorly designed. Let's look at the working paper series you cited above: 1. BREAD: general download link, no feedback or opportunity to provide comments in a community manner (i.e. a semi-moderated comment field or a wide-open comment field like blogs have). 2. NBER: log in or pay for working paper before download (university email addresses suffice to get the download link). I think it would be hard to make this system more unfriendly. This mechanism is not designed to encourage feedback, but the email address/login might be enough to track readers/universities and request comments, even if that is clumsy. 3. IZA: Calls its papers "Discussion papers" but, a decade after blogs arrived on the internet, still provides no forum for discussion. 4. World Bank Policy Research Working Paper Series: pretty much the same as above - papers can be downloaded in text or PDF, no forum for comments. Like (some of?) the others above, you can open papers and email specific authors directly. My conclusions: - Working Papers are currently designed to be distributed (disseminated). - Ad-hoc efforts to garner feedback (individual email) don't seem to be effective. - Design a feedback mechanism to get feedback. Suggested feedback mechanism: mini-peer reviews by graduate students at universities across the world. The reviews can be blind or not, with the goals that 1) the author(s) get useful feedback, 2) the 'reviewers' get useful experience and build their analytical skills, and 3) the chat/feedback circle becomes virtuous instead of whiny (you know that graduate students critique these papers anyway, but you're not capturing their views at all). 
With the authors' consent, the mini-reviews could even be posted with the paper for wider learning. (ps. eliminating blind-reviews for journal articles is, in a word, stupid.)

Submitted by Dan K on
Totally agree with the spirit of this comment. As I was reading, I was thinking "someone needs to create Quora for working papers", not "this is hopeless". Breaking the problem down as you did here is useful. Solving distribution was the first challenge that has been achieved (with moderate success). But until someone actually tries to change the feedback mechanism, then academic economists will continue to live with processes two or three decades out of date as compared to the private sector and half a cycle behind their physics colleagues...

Submitted by Ted on
This is really helpful! But what are some arguments actually for the working paper system? I am pretty convinced that it is generally a bad idea.

Submitted by Tom Hickey on
The major objection to working papers is that they are not peer reviewed. The problem with that argument is a major flaw that has affected the economics profession in particular and most other professions, too. In order to gain standing in one's profession, it is necessary to publish in the "right" venues. These venues are tightly controlled by people who have already "succeeded" in the profession and act as the gatekeepers. This guarantees a selection process that favors the views of the gatekeepers. This is death to innovation and creativity, and it creates an "in" group (orthodoxy) and "out" groups (heterodoxies). Where have we heard these terms "orthodox" and "heterodox" before? Oh, right: in theology and religion. Prior to online release of working papers, access to publication was limited to the "in" groups that dominate a field and decide what is normal in the universe of discourse. That smacks of dogmatism and ideological bias. Working papers have the added advantage of being free online downloads for those not having institutional access. "Professionals" may think that this is a small point, or even undesirable, but it is drawing many more people into the debate and serves as an educational aid that increases use of materials. "Let a hundred flowers bloom."

Submitted by Berk Ozler on
I think that the discussion generated here lays out nicely the pros and cons of WPs in economics as they currently stand. It's hard not to agree with parts of each comment. This is a tough issue and I am glad we're discussing it here. We usually like to respond to each comment individually, but as there are so many contributions in this case, just a few lines here. In response to Tom's comment above, I should say that the freely available nature of WPs is one of the strongest arguments for their existence: WPs are read much more in developing countries, partly because they are widely available. I also agree with anonymous above that speed of dissemination is a major advantage -- as of now. However, it's harder to agree that journals are killing innovation when there are so many journals of all kinds out there, including online journals. Thanks again! Berk.

Submitted by Ozecon on
One of the reasons, which you ignored, why economics has drifted into WP territory is that for those of us outside the cosy mainstream of top econ departments in North America and Europe - who all know each other's research and approve of it - there is never going to be research published (anywhere that counts) that challenges the consensus view (or consensus argument) of these elites. I have at least strong casual evidence that this dominance is not so strong in other disciplines, including other social sciences. That said, proper double-blind reviewing would go some way to remedy this problem and should at least be seriously trialled.

Submitted by Gerald on
"It turns out that asking school-age people whether they are attending school is not the best way of assessing impacts of schooling interventions (a paper I have with Sarah Baird on this is forthcoming in a special issue of the JDE on measurement, and I blogged earlier about similar evidence here)." Seems to me part of the answer lies in proper research design at the front end. DUH! Much of academic research has become so esoteric that it matters not whether it is peer reviewed, a WP, or just an internet rant. If the question to be answered is weak and of little practical value, then all of it is a waste of time and, generally, of public or private donor money. Find REAL questions that need answering and even the most preliminary of findings will have value. Also, as the work progresses, people will stay tuned because the results matter and are not just another piece of job justification.

Submitted by Berk Ozler on
Hi Gerald, Thanks for the comment. Of course, research design, which includes asking an important and pertinent question in the first place, is very important. I urge you to read the papers that are linked in the original post, which have long sections on research design and data collection that explain in detail why the many different decisions on how to design the intervention and data collection were made. The introduction section of the "Cash or Condition" paper outlines the policy question faced by governments and donors regularly, who spend billions of dollars on similar projects. Sincerely, Berk.

Submitted by Asif Dowla on
Take the case of the Burnside and Dollar paper, which suggested that aid would enhance growth in poor countries under an enabling macroeconomic environment. The Bank used this to argue that poor countries need to improve their macroeconomic environment. But when Easterly et al. added additional data points and filled in missing data, the results were not robust. The moral of the story: don't make judgments or policy prescriptions based on a few papers, even published ones.

I've blogged a reply to this interesting post at --David

Submitted by Berk Ozler on
Thanks very much for the thoughtful post. I commented on it here: Berk.

This is a hoot, wondering why there are no white papers to work with from the likes of the World Bank?! I guess it's working in the Third World that has made the practice of having white papers unnecessary? Let's face it: Africa and the rest of the Third World are being used as guinea pigs for these tests, and without white papers, no one can apportion blame when everything goes awry.

Submitted by Maurice Schiff on
A late comment (but better that than never). Tim Ogden mentioned a Working Papers 'Charter'. I agree that the process can be improved, though what Tim has in mind may differ from my suggestion here. I don't know whether this is the rule in all DECRG units, but at least in DECRG's International Trade Unit, submissions to the WP series are sent to two anonymous reviewers, one inside the Bank and one outside, and the comments are typically of high quality and often require a lot of work.

My point is: rather than the corner solutions of no WPs or quick publication of non-reviewed papers in the WP series, how about a middle road, namely not waiting (years?) for the paper to come out in a refereed journal, but publishing WPs only after serious review (possibly as in the Int'l Trade Unit)? Improving the quality of the WPs would also provide a public good from which all Bank WPs would benefit, namely enhancing the reputation of the Bank's WP series (over and above that of the various WP and DP outlets where papers are published 'as is'). Producing such a public good might justify paying outside reviewers, which would hopefully speed up the review process.

The details of such a "charter" would have to be worked out, of course. Two suggestions: i) we could think of including the reviewers' reports/comments in the WPs (three potential benefits: it might a) help the readers, b) contribute to the WPs' reputation (assuming the comments are taken seriously in the revisions), and c) enhance the quality of the reviews, as some in the field would probably be able to figure out who wrote them); and ii) we could distinguish between WPs published by DECRG (possibly together with the staff of the Bank's Chief Economist and of the various regions' Offices of the Chief Economist) on the one hand, and WPs published by others in the Bank on the other (the dividing line between the two WP series would also have to be determined).