- 85% of biomedical research is wasted
- 50% of it because it never gets published
- of the remaining, another 50% because the publication is incomplete or not applicable
- of the remaining, another 50% because studies include fixable design flaws or lack a systematic review of the literature
- Several initiatives of a technological and financial nature are underway to fix these issues.
- Some funders achieve 98% publication rates.
- Technological tools assist in doing the tedious systematic reviews
Paul Glasziou is a Professor of Evidence-Based Medicine at Bond University. His key interests include identifying and removing the barriers to using high quality research in everyday clinical practice. Besides authoring over 200 peer-reviewed journal articles he is the author of 7 books. Find more about him and the titles of his books here.
Paul Glasziou and his colleague Iain Chalmers published a widely referenced study in 2009 in which they estimate that 85% of biomedical research is wasted.
85% waste in biomedical research sounds like a lot – how has this number been calculated?
Over 50% of all health research never gets published at all. Of the research that is published, another 50% is not usable in practice because key details are missing. And of the 25% that remains, half contains design flaws. Hence, over 87.5% of health research is avoidably wasted according to Glasziou (which they rounded down to 85%).
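The arithmetic behind that figure can be sketched in a few lines. The 50% stage figures come from the estimate above; treating the three stages as sequential, independent filters is an assumption of this sketch:

```python
# Sketch of the waste arithmetic described above (stage figures from the
# Glasziou & Chalmers estimate; sequential stages are an assumption).
total = 1.0

unpublished = total * 0.50           # 50% never gets published
published = total - unpublished      # 0.50 remains

unusable = published * 0.50          # 50% of published lacks key details
usable = published - unusable        # 0.25 remains

flawed = usable * 0.50               # 50% of the rest has design flaws
sound = usable - flawed              # 0.125 remains

wasted = unpublished + unusable + flawed
print(f"wasted: {wasted:.1%}")       # → wasted: 87.5%
print(f"sound: {sound:.1%}")         # → sound: 12.5%
```

Rounding 87.5% down gives the headline figure of 85%.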
Unpublished research is wasted because nobody can build on it or draw conclusions from it. Incomplete research doesn't allow readers to interpret, use, or replicate it. Finally, design flaws could most of the time be avoided by consulting a statistician to ensure proper blinding and randomization, and by performing a systematic review of existing research and building on it.
Why is that an important problem to talk about?
The research is often funded by governments and hence by taxpayers, or indirectly by patients paying for medications or medical treatments. In the end all of us pay for the wasted resources.
What can we do about it?
For every problem shown in this podcast we also want to reason about possible ways to improve the situation. The three biggest problems we discussed are
- No submission of results
- Incomplete research
- Avoidable design flaws
1) No Submission Problem
The number one reason researchers give for not submitting their results is that the results are negative. It would be wrong to blame researchers alone for this, because journals tend to be more likely to accept positive results, i.e. those showing that something works or that an assumption holds. This is also called positive publication bias.
Other, more surprising reasons include a change of focus by the researchers, or the death of a researcher.
Key stakeholders in improving submission and publication rates are the funders, i.e. the governmental or philanthropic organizations that disburse funding in the form of grants.
They have the power to increase publication rates by
- a) incentivizing it financially (e.g. holding back some of the existing or future funding until the results are published)
- b) providing the infrastructure to register trials and making the process of publishing data & protocols from experiments easier for researchers
There is evidence that these steps work:
The NIHR HTA Programme, a major UK funder of clinical trials, reaches a 98% publication rate for the studies it funds. It achieves these amazing results partly by holding back 10% of the funding until the research is published, and partly by monitoring and supporting the researchers.
On the bright side, inspired by Paul Glasziou's and Iain Chalmers' research, funding groups have started talking to each other about how to coordinate and standardize funding and publication processes in order to increase publication rates.
A list of groups working on these problems can be found at the Reward Alliance.
2) Incomplete Research Problem
Journals often don’t require specific details of the protocols used to achieve certain results. In biology this may be the lab setup, the temperature, or how cells are dyed. For non-drug interventions, Paul cites the example of a randomized controlled trial (RCT) in which a video about whiplash injuries was shown to the active group. Because the video wasn’t published along with the study, doctors couldn’t immediately use its results; instead they would have to email the study authors in the hope of getting access to it. This likely leads to far fewer doctors trying the intervention with their patients.
To avoid this, an appropriate infrastructure is needed where the complete data, the detailed results, and the protocols used can be published openly. To keep this data accessible to everybody, Paul thinks it might even be better not to let journals handle this infrastructure, since they might be incentivized to put it behind a paywall.
Another way to assure the quality of research results is to introduce and enforce publication standards that clearly define what needs to be included.
3) Avoidable Design Flaws
There are two broad categories of design flaws:
- Many design flaws can be avoided if a statistician is consulted prior to the execution of a study, to ensure the setup does not contain any obvious flaws, e.g. in the randomization process or blinding.
- A systematic review of past research is necessary to make sure the question of the current study has not already been answered, and to avoid repeating the flaws of past research. However, a thorough systematic review takes between six months and two years, and in practice hardly anybody has the time to do this properly. There is a lot of ongoing effort to automate the systematic review process, including the International Collaboration for the Automation of Systematic Reviews (ICASR).
Currently there exist many good solutions which solve parts of the process. A universally applicable tool that pulls all of that together is still missing. If you, dear reader, are a programmer and interested in improving the health research, then this is an interesting challenge for you!
Here are some of the cool tools available today for systematic literature reviews:
RobotReviewer: summarizes clinical trials and predicts their risk of bias (just drop in the PDF).
Systematic Review Accelerator helps with the searching and deduplication phases of the review and also provides a list of other recommended tools.
Several tools to manage the literature review process
These management tools are also starting to incorporate some of the automation features.
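The deduplication step these tools automate can be illustrated with a minimal sketch: merging search results exported from several databases, matching records by DOI when available and by a normalized title otherwise. The record fields and example data here are hypothetical, not taken from any particular tool:

```python
import re

def normalize(title: str) -> str:
    # Lowercase and strip punctuation/whitespace so near-identical
    # titles exported from different databases compare equal.
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    # Keep the first record seen for each DOI (or normalized title
    # when no DOI is present).
    seen = {}
    for rec in records:
        key = rec.get("doi") or normalize(rec["title"])
        if key not in seen:
            seen[key] = rec
    return list(seen.values())

# Hypothetical example: the same trial exported from two databases.
records = [
    {"title": "Exercise for whiplash injury: an RCT", "doi": "10.1000/x1"},
    {"title": "Exercise for Whiplash Injury - An RCT", "doi": "10.1000/x1"},
    {"title": "Statins and cognition: a review", "doi": None},
]
print(len(deduplicate(records)))  # → 2
```

Real tools use fuzzier matching (author, year, journal, edit distance), but the basic idea of keying records on normalized metadata is the same.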
It turns out that writing the first draft of your systematic review paper is hard. Especially if English is not your first language.
A major producer of systematic reviews, the Cochrane Collaboration, uses the Review Manager (RevMan) software, which assists reviewers in conducting systematic reviews.
RevMan HAL is an extension to RevMan which automates the writing of the results section of a systematic review once the data has been extracted to produce the “forest plots”.
It will write the abstract, the results, and the first part of the discussion section for the analytical results, in multiple languages.
The tool is already saving reviewers an extraordinary amount of time and is especially helpful for researchers whose first language is not English.
Background, strengths and limitations of the software can be found here
The project is open source; the source code is available on GitHub.
Huge amounts of biomedical research are likely being wasted in ways that could be avoided. The main causes of waste are lacking infrastructure, time-consuming processes, and missing incentives to publish studies together with their protocols, regardless of a study's outcome.
Funders are being called to review how they can provide better infrastructure as well as rethink their payout schemes to incentivize higher publication rates.
Software engineers can help by working on tools that make it easier to conduct systematic reviews and develop infrastructure to publish protocols and associated data.
Scientists can help by getting more training in statistics, doing more careful systematic reviews (potentially by familiarizing themselves with some of the tools we have mentioned in this article) and committing to publish results with protocols regardless of how positive they are.
Thanks to Philip Junker for helping to write this summary.