
Trials – A journal I did not know existed

By Berk Ozler

Reporting of findings from studies in economics is changing, and likely for the better. It’s hard not to credit at least some of this improvement to the proliferation of RCTs in the field. As issues of publication bias, internal and external validity, ex-ante registration of protocols and primary data analysis plans, open data, etc. are being debated, the way we report research findings is changing. How research findings are reported also has beneficial knock-on effects on how research is conducted in the first place, improving our confidence in research findings and our ability to interpret them.

However, the reporting of details on background information, intervention design, and data remains woefully inadequate in economics. With a co-author, I have been conducting a systematic review of cash transfer programs, and I can’t even begin to describe the problems one has trying to mine basic information from these documents. Some don’t even define their key variable properly, some are missing baseline balance or attrition tables, some don’t discuss the implementing agency, some don’t include standard deviations in descriptive statistics tables (if they included such a table to begin with), etc. My own work is no exception.

The solution to this problem in the medical field is reporting guidelines, such as CONSORT. I have respect for these attempts, I do. If all of the 80-something papers had abided by similar guidelines, I’d be able to extract information from them much faster and much more reliably. But I am not sure they’d be better papers. With the constraints on article length, while I am telling you whether I used Stata 11.0 or 11.1, I am leaving out some other crucial information. Sometimes context is key, sometimes some secondary analysis is needed, and sometimes I need to conduct channels analysis and a bunch of robustness tests. There is no easy way to standardize these things, especially for the researcher types, who despise templates, logframes, checklists, and the like. In the end, no reporting requirements are an adequate replacement for a good editor and a few expert and careful reviewers.

So, how do we balance having some minimal reporting guidelines with allowing authors some flexibility in reporting important study details that would be missed if one were to strictly adhere to a checklist of reporting requirements? I don’t know exactly (I have some ideas), but fortunately we are not alone. Trials, a journal of BioMed Central, has been trying to tackle some of these issues and has recently issued a five-year progress report. They claim some progress in the publication of protocols and raw data from trials. They also discuss the need to make progress on publishing expanded articles (think of a short Lancet article, expanded to include much more that could not be included in the original article due to various reporting requirements), threaded publications (linking protocols with articles, expanded articles, updated results/protocols, etc.), and publication bias (whereby negative or null results are much less likely to be reported). I like many of these ideas and have suffered from their non-existence in the past.

If there is one thing in the progress report and this article on the untold role and impact of context on RCTs that makes me worried, it is the call for more structured reporting. Perhaps economics journals can agree on some basic guidelines for reporting standards, but we, as authors, peer reviewers, editors, readers, and bloggers, need to be more vigilant about reporting all the pertinent details and insist on their being reported when they are not, before papers are published. Checklists are not going to solve our problems…