The continued practice of testing drugs on animals for human medical research may have exposed a wider drug-testing bias, at least according to a new analysis arguing that there are too many reports of “significant” results in animal trials for them all to be accurate.
The analysis, published this month in the journal PLOS Biology, saw researchers from Stanford University examine 4,445 animal studies covering 160 candidate treatments for neurological disorders such as Parkinson’s and Alzheimer’s.
The research team found that 1,719 of those studies had a “positive” result, or an outcome that would allow researchers to advance the drug they are testing to its next clinical trial stages.
This, the analysts concluded, was odd.
To be absolutely clear, the meta-analysis was not assessing the effectiveness of animal research; there is already plenty of literature discussing why the practice continues, as well as its inherent drawbacks, flaws and serious ethical problems.
However, it is precisely the fact that a significant proportion of animal research does not translate into effective medicines for humans that enabled the researchers in this case to calculate that, statistically, only about 919 of the studies being analyzed would be expected to have a “positive” result. The gap between that figure and the 1,719 positive results actually reported far exceeds what chance alone can explain.
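The comparison described above can be sketched as a simple statistical check. What follows is a minimal, illustrative version only, not the paper's actual method: the Ioannidis-style excess-significance test works from study-level power estimates, whereas this sketch assumes a single average "positive" rate implied by the article's numbers (919 expected positives out of 4,445 studies, roughly 0.21) and uses a normal approximation to ask whether 1,719 observed positives is plausibly high.

```python
from math import sqrt, erf

def excess_significance(observed, n_studies, mean_power):
    """Compare observed vs. expected counts of 'positive' studies.

    Assumes every study has the same probability (mean_power) of a
    positive result; returns the expected count, a z-score for the
    excess, and a one-sided p-value from a normal approximation.
    """
    expected = n_studies * mean_power
    sd = sqrt(n_studies * mean_power * (1 - mean_power))
    z = (observed - expected) / sd
    # One-sided p-value: P(Z >= z) under the standard normal.
    p_value = 0.5 * (1 - erf(z / sqrt(2)))
    return expected, z, p_value

# Counts reported in the article: 1,719 positives observed among
# 4,445 studies, where only about 919 were expected.
expected, z, p = excess_significance(1719, 4445, 919 / 4445)
```

With these inputs the z-score lands near 30, so the one-sided p-value is vanishingly small, which is the sense in which the discrepancy "far exceeds what chance alone can explain."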
From the analysis summary:
Studies have shown that the results of animal biomedical experiments fail to translate into human clinical trials; this could be attributed either to real differences in the underlying biology between humans and animals, to shortcomings in the experimental design, or to bias in the reporting of results from the animal studies. We use a statistical technique to evaluate whether the number of published animal studies with “positive” (statistically significant) results is too large to be true.
In summary, we judge that there are too many animal studies with “positive” results in the neurological disorder literature, and we discuss the reasons and potential remedies for this phenomenon.
Digging deeper into the data, the analysts found that only eight of the 160 treatments should ever have advanced to human trials.
Furthermore, many of the benefits that were said to have been evidenced from drug applications during animal testing failed to materialize during human trials. For instance, the data showed that dozens of drugs related to treating strokes seemed promising during animal trials, only for most to prove ineffectual in human testing.
Despite this, researchers are still being encouraged to perform widespread animal testing, and to use animal-testing results as a threshold for moving on to human trials, even though it is widely understood that animal testing is a poor gauge of a drug’s suitability for humans.
So what is going on here? Is this fraud? No, say the researchers, who appear leery of arriving at that conclusion.
Instead, the analysis seems to favor an institutionalized bias that treats animal testing, despite its widely understood limitations, as simply part of the “system” of hoops that must be jumped through to advance drugs through their trial stages. As for the over-reporting of good outcomes and the disappearance of bad ones, it may just be a question of money: positive results look better and are more likely to attract funding for future projects.
However, the researchers do seem to be suggesting there may be some intentional practice that exceeds the realm of simple number fudging:
“There’s [sic] just too many significant results out there,” epidemiologist John Ioannidis, lead author of the study, is quoted as saying. “Something is obviously wrong. Apparently what is happening is there are just too many studies being done that are being selectively reported, either having negative results suppressed or having the analysis presented in a way that the results would look positive even though they are really negative.”
This is not new ground for Ioannidis, who in 2005 authored a paper entitled “Why Most Published Research Findings Are False” in which he outlined the many possible ways scientific results can be biased.
Ioannidis’ results land in the wake of a number of similar analyses, including a 2008 study of antidepressant trials in which 94% were reported to have positive results, while a subsequent analysis by the Food and Drug Administration found that only a little over half actually showed positive outcomes.
A number of other meta-analyses have also shown inflated positive reporting.
While, again, not commenting specifically on the ethical issues surrounding animal testing itself, this meta-analysis will add further sharpness to accusations that animal testing is an outmoded, imprecise and cruel way of testing medicines intended for humans, and, more fundamentally, that it betrays serious ethical flaws in current drug and treatment testing procedures as a whole.