Increasing our return on investment in science: Start with better behavior

Journal of Sport and Health Science, 2023, Issue 3

David Moher

Centre for Journalology, Clinical Epidemiology Program, Ottawa Hospital Research Institute, Ottawa, ON K1H 8L6, Canada; School of Epidemiology and Public Health, University of Ottawa, Ottawa, ON K1H 8L6, Canada

In August of 2022, I gave the opening keynote address at the 5th North American Congress on Biomechanics in Ottawa, Canada. The topic of my talk was whether the research ecosystem was getting a reasonable return on its investment in biomedical research. I provided several examples of why I posed this question.

During a 6-week period in 2015, a research team from the University of Oxford examined randomized controlled trials (RCTs) recently published in 5 well-known journals.1 Their specific objective was to compare the primary outcome reported in each completed RCT publication with the primary outcome specified in the RCT's protocol. Of the 67 RCTs examined, the researchers observed that 58 of the published trials had a primary outcome that differed from the one reported in their respective protocols. The researchers also observed that 365 "new" outcomes were reported without declaration. During the coronavirus disease 2019 (COVID-19) pandemic, ivermectin was purported to be a magic elixir for the treatment and prevention of the deadly virus. Although ivermectin is widely used in the management of parasitic diseases, the flawed research surrounding it, including RCTs, indicated that the research ecosystem had lapsed into "pandemic research exceptionalism" multiple times.2,3 Ivermectin was not the magic elixir some had touted it to be. Although articles published in predatory journals account for a very small percentage of total global research output, universities from high- and middle-income countries, whose research is often funded by highly respected agencies (e.g., the National Institutes of Health), are the most prevalent sources of authors publishing in these outlets.4 Such research has infiltrated trusted resources, such as PubMed Central and Scopus, and has been incorporated into health policy guidance from many influential organizations (e.g., the World Health Organization). Finally, there are examples of researchers' complete corpus of output being questioned.5,6

There are many other examples of questionable individual and collective research behaviors. Rather than be a "Debbie Downer", I think it is more useful to discuss some potential solutions. My hope is that these actions might resonate with academic leaders, funders, editors, publishers, and researchers interested in modifying their current behaviors. Implementing changes to the research ecosystem can fall on a spectrum from suggested to recommended to mandated, or some hybrid of these. My preference is for mandatory implementation. Any of these actions can be incorporated into new student or faculty onboarding processes. Existing students and faculty are likely used to mandatory activities, such as annual conflict of interest declarations and Workplace Hazardous Materials Information System training or updates.

I am starting to implement a training program for my incoming graduate and postdoctoral students (I do not typically supervise undergraduate students, although they, too, should be exposed to such opportunities). First, each student must take a course on peer review. The global community spends about 100 million hours completing peer review annually. Despite this enormous investment, there is little evidence that peer review is effective at improving the quality of reporting of published research. This might be because few credible peer review training options exist, even though early career researchers are calling for such training. Most available training lasts about an hour,7 which is unlikely to provide the depth needed to make a true impact on peer review.

Second, students are required to write an essay about reproducibility. The essay can be included in their thesis. Psychology is perhaps the only discipline that has thoroughly examined reproducibility. For example, 1 important study identified 100 classic experiments published in 3 psychology journals and attempted to replicate them.8 The replicated studies did not produce results similar to those reported in the original publications; the average effect sizes in the replications were about one-half of those originally reported. Exposing students to the problems of reproducibility also provides them with important knowledge about transparency, openness, and open science more broadly.

Third, students also need to take a course on scholarly communication. For example, students need to know that editors do not routinely provide peer reviewers with knowledge of reporting guidelines to facilitate peer review, even though there is some evidence that reporting guidelines are effective tools for improving the quality of published research.9 The content of the course (e.g., responsibilities of authors, the emergence of preprints) is based on a graduate course taught at the School of Epidemiology and Public Health at my university. Readers might be asking themselves: do these courses and activities result in better researchers? That is an empirical question, and evaluations should certainly be conducted and reported. What I do hear, anecdotally, is that students who have taken these courses feel better prepared, better informed, and more confident about the research ecosystem.

The researcher assessment process used globally10 likely contributes to some of the unfortunate behavior described earlier. There is considerable pressure on researchers to publish papers and secure funding. Universities and other research organizations are addicted to making assertions about a researcher based on the number of papers they have published within a given time period, especially papers published in high impact factor journals. The journal impact factor says nothing about the quality of an individual article; it is an abused metric about citations. Similarly, researcher assessment is focused on the number of grants and their fiscal value. Conducting a randomized trial is going to be far more costly than, say, a project about bibliometric gender bias. Does financial "worth" really tell us anything meaningful? Universities also add to this behavior by focusing on university rankings, although some high-profile institutions are removing themselves from the rankings' madness.11 It is not a myth that university deans are imploring faculty to improve their university's rankings. Rankings should instead reflect universities' commitment to improving the research ecosystem for current and future researchers, patients, and the public. Universities are part of society, not separate from it.

Open science is quickly becoming ubiquitous in Europe and the United Kingdom. As defined by the United Nations Educational, Scientific and Cultural Organization, open science is a movement away from "me" science toward something more useful for society broadly.12 At the front end of this movement are open access publishing and data sharing. In August 2022, the White House's Office of Science and Technology Policy announced that, as of January 2026, publications resulting from U.S. federally funded research will have to be immediately accessible to the public.13 COVID-19 publications might foreshadow what 2026 will look like more generally. During the pandemic, many paywalled and hybrid publishers committed to making COVID-19 research openly available to the public immediately. The second part of the Office of Science and Technology Policy announcement is the requirement that the data underpinning the results of the research be made available at the time of publication. Shouldn't researcher assessment include an examination of the data sharing associated with a researcher's publications?

Even if readers think these proposed solutions are worthy of consideration, there will be the question of "who is going to pay for it". Some solutions are likely easy to implement. Asking researchers to report which of their publications include sharing of the underpinning data is one example. The Human Genome Project offers a model: it invested 1% of its budget in the Ethical, Legal and Social Implications Research Program, with enormous payoff. A similar investment might pay broad-ranging dividends to the research ecosystem. Universities and other research organizations could commit 1% of their total revenues toward solutions. Blind neglect is not helpful and will do little to improve the ecosystem. While university ranking systems are not likely to disappear quickly, they could temper their obsessiveness about publications and funding. Rankings could focus more on the broader ecosystem, such as what internal mechanisms institutions use to boost their return on investment (e.g., whether there are university-wide courses on research integrity and other topics). Publishers, particularly the oligopoly, could use 1% of their profits to help fund the development of free online courses on manuscript peer review. Better still, they could collaborate with universities to develop, implement, evaluate, and fund such an undertaking. Funders could do likewise.

There is other individual and collective action that can be taken to help improve the research ecosystem. The Declaration on Research Assessment (DORA) is a strong push for universities to drop journal impact factors from researcher assessment. As of this writing, more than 2500 organizations and almost 20,000 individuals from 159 countries have signed DORA. Although only launched in 2022, the Coalition for Advancing Research Assessment, a commitment to reimagining the research assessment process, has been signed by more than 350 organizations from 40 countries.

Why might universities be interested in moving in this direction? Doing so would show current and prospective students and faculty the explicit value a university places on improving the entire research ecosystem. Similarly, patients and the public, integral members of the ecosystem, would see a commitment to trying to improve the system and make research more trustworthy.

Competing interests

The author declares that he has no competing interests.