Author manuscript; available in PMC: 2016 Oct 27.
Published in final edited form as: Lancet. 2015 Dec 18;386(10012):2447–2449. doi: 10.1016/S0140-6736(15)01177-0

Time for a prepublication culture in clinical research?

Michael S Lauer1,*, Harlan M Krumholz2, Eric J Topol3
PMCID: PMC5082701  NIHMSID: NIHMS822432  PMID: 26738703

In 1969, Franz Ingelfinger wrote in The New England Journal of Medicine about the journal’s “definition of a ‘sole contribution’”.1 The journal’s masthead had stated a clear condition for any manuscript’s consideration: “Articles are accepted for consideration with the understanding that they are contributed for publication solely in this journal.”1 In other words, a given paper could be published exclusively in The New England Journal of Medicine, and nowhere else. This policy, known as the “Ingelfinger rule”, has had a major role in determining the ethos of publication in clinical research, and was revisited at least three times.2–4 The editors offered two justifications: assuring the novelty and newsworthiness of published papers, and ensuring that published papers met high quality standards by virtue of going through the process of peer review.

The internet came on the scene in the 1990s, and with it the opportunity for individual scientists and organisations to post manuscripts, data, and key findings in a widely available venue. But, as described by The New England Journal’s editors in 1995, “direct electronic publishing of scientific studies threatens to undermine time-tested traditions that help to ensure the quality of the medical literature”.4 Therefore, the editors made clear that, in their view, posting a paper, data, or key findings on the internet represented presubmission publication and would disqualify it from consideration.

And so, the Ingelfinger rule lived on. During the first decade of the 21st century, editors at various journals invoked the rule (or similar policies) to reject previously accepted papers. As recently as 2012—43 years after its initiation—findings from a survey5 showed that a substantial proportion of clinical scientists considered the Ingelfinger rule as active, precluding any consideration of posting full manuscripts or a significant body of their results on the internet or elsewhere before formal publication by a peer reviewed journal.

Meanwhile, beginning in the 1980s, other scientific disciplines—such as astronomy, mathematics, and computer science—were taking a wholly different approach.6 Authors would post preprints of their manuscripts on publicly available websites such as arXiv, where they could undergo one or more rounds of voluntary peer review. After responding (or not) to the reviewers’ comments, authors would then submit their papers to journals for official publication. Several other preprint servers are available, including figshare, PeerJ (which also functions as a peer-reviewed journal), F1000Research, GitHub, and ResearchGate. To date, more than 1 million e-prints are available on arXiv.

Desjardins-Proulx and colleagues6 cite a number of advantages of preprints. The first, and perhaps most obvious, advantage is time; under the traditional approach, scientific work does not become available to the public until after the process of submission, peer review, revision, and formal publication. Even without taking into account rejections along the way, this process can take many months, even years, a delay that can be seriously problematic for all investigators and especially for early-career researchers. The problem is only becoming worse as changing practices lead to longer times from completion of work to eventual publication.7 Moreover, there is a cost to society when important research findings are delayed on their way to publication. This circumstance could be particularly relevant for clinical trials, for which methods have been prespecified and registered, and results should be relatively standardised; it is therefore highly unlikely that main findings will change with additional data cleaning or follow-up. Second, venues for preprints establish provenance—who made discoveries, and when. Third, and perhaps counterintuitively, preprints allow for more robust peer review, because review is not limited to the 2–4 reviewers chosen by one journal editor; instead, any number of interested readers can offer comments and suggestions. For example, one of us (MSL) recently offered a voluntary preprint peer review, which even included analysis of posted data; the results of the external preprint data analysis were included in the final published version of the paper.8

For many disciplines, the posting of preprints for presubmission peer review has been the norm. Biology, however, has trailed behind,6 and clinical research remains further behind still. Some have argued that biology’s tardiness is related to two factors: a misconception that preprints make it easier to steal ideas (when in fact preprints make idea theft easier to track), and the Ingelfinger rule. Desjardins-Proulx and colleagues6 argue that “a preprint is simply a document that allows ideas to spread and be discussed, it is not yet formally validated by the peer-review system”. Accordingly, most major publishers in biology now allow, and many even encourage, preprints, including the Nature group, Public Library of Science, BMC, Elsevier, and Proceedings of the National Academy of Sciences. Moreover, sites such as bioRxiv are accepting preprints of interest to biologists. However, to our knowledge, none of the major clinical medical journals have explicitly allowed preprints. Nonetheless, clinical journals, including The Lancet, do allow prior posting of results through conference abstracts. It may seem counterintuitive that conference abstracts are allowed whereas fuller dissemination of clinical findings ahead of journal publication is not.

Two of us (EJT and HMK) have noted recently9 that the US National Institutes of Health issued a press release and held a press conference about the results of the Systolic Blood Pressure Intervention Trial (SPRINT). The NIH had stopped the trial prematurely, after a recommendation from the data and safety monitoring board to do so given the markedly positive results. However, the NIH posted neither the main results nor the board’s report; instead, the public had to wait for publication of the final paper,10 which was based on cleaned data and reflected the views of the trial’s lead investigators.

Is it dangerous to release reports of clinical trial results prematurely? Potentially, reports that have not undergone formal peer review could be misleading.1,4 Without peer review, and specifically peer review that is organised by a journal, clinicians and patients might be uncertain about the results and how to incorporate them into their practice and thinking. But, arguably, the same could be said even of the current system; without the benefit of systematic literature reviews and the work of guideline panels, clinicians, patients, and the public will not know how best to interpret any particular study’s findings. The published journal paper itself is unlikely to be the final word on the matter; continuing post-publication dialogue is part and parcel of knowledge acquisition. Conversely, we might argue that in an era of open science and transparency, withholding findings from the public until a journal does its own peer review and editing could represent a kind of paternalism, which is troubling when many are calling for a democratisation of medicine and for widespread adoption of open science.11 It is important to distinguish this discussion from the important and active debate about sharing individual-level patient data.12 The current issue is specifically about the speed of reporting summary results.

The time has come for stakeholders in clinical research—funders, investigators, journals, policy makers, and most importantly patients and clinicians—to engage in an international conversation about our culture of scientific communication. Other scientific fields have entered the internet era, stimulating timely, global conversations about rapidly evolving scientific discoveries. Clinical research has lagged far behind—it is time for us to catch up.

For arXiv see http://arxiv.org/

For Elsevier’s preprint policy see https://www.elsevier.com/about/company-information/policies/sharing

For bioRxiv see http://biorxiv.org/

Acknowledgments

HMK discloses that he is Editor of Circulation: Cardiovascular Quality and Outcomes and is Editor of Journal Watch Cardiology; he receives support from the American Heart Association and the Massachusetts Medical Society, non-profits that own medical journals. He is the recipient of research agreements from Medtronic and from Johnson & Johnson (Janssen), through Yale University, to develop methods of clinical trial data sharing and chairs a cardiac scientific advisory board for UnitedHealth. MSL and EJT declare no competing interests. The views expressed herein are those of the authors and do not reflect the official positions of the National Institutes of Health or the US Federal Government.

Contributor Information

Michael S Lauer, Office of Extramural Research, National Institutes of Health, Bethesda, MD 20892, USA.

Harlan M Krumholz, Department of Medicine, Yale University, New Haven, CT, USA.

Eric J Topol, Scripps Translational Science Institute, La Jolla, CA, USA.

References

1. The New England Journal of Medicine. Definition of sole contribution. N Engl J Med. 1969;281:676–77. doi:10.1056/NEJM196909182811208.
2. Relman AS. The Ingelfinger Rule. N Engl J Med. 1981;305:824–26. doi:10.1056/NEJM198110013051408.
3. Angell M, Kassirer JP. The Ingelfinger Rule revisited. N Engl J Med. 1991;325:1371–73. doi:10.1056/NEJM199111073251910.
4. Kassirer JP, Angell M. The internet and the journal. N Engl J Med. 1995;332:1709–10. doi:10.1056/NEJM199506223322509.
5. Peters HP. Gap between science and media revisited: scientists as public communicators. Proc Natl Acad Sci USA. 2013;110:14102–09. doi:10.1073/pnas.1212745110.
6. Desjardins-Proulx P, White EP, Adamson JJ, Ram K, Poisot T, Gravel D. The case for open preprints in biology. PLoS Biol. 2013;11:e1001563. doi:10.1371/journal.pbio.1001563.
7. Vale RD. Accelerating scientific publication in biology. Proc Natl Acad Sci USA. 2015;112:13439–46. doi:10.1073/pnas.1511912112.
8. Cook I, Grange S, Eyre-Walker A. Research groups: how big should they be? PeerJ. 2015;3:e989. doi:10.7717/peerj.989.
9. Topol EJ, Krumholz HM. Don’t delay news of medical breakthroughs. The New York Times. 2015 Sep 17. http://www.nytimes.com/2015/09/18/opinion/dont-delay-news-of-medical-breakthroughs.html?_r=0 (accessed Dec 3, 2015).
10. The SPRINT Research Group. A randomized trial of intensive versus standard blood-pressure control. N Engl J Med. 2015;373:2103–16. doi:10.1056/NEJMoa1511939.
11. Topol E. The patient will see you now: the future of medicine is in your hands. New York, NY: Basic Books; 2015.
12. Ross JS, Krumholz HM. Ushering in a new era of open science through data sharing: the wall must come down. JAMA. 2013;309:1355–56. doi:10.1001/jama.2013.1299.
