How to Spot an Untrustworthy Study

Each day, a raft of new studies pops up in the media, many promising cures for fatal diseases that never materialize or warning of global threats that often aren’t explained or contextualized.

With a new study about mobile phones and brain tumors making the rounds despite serious criticism from other scientists, this raises the question: how do we distinguish a trustworthy study from all the fake news?

Below is a list of questions you can ask about a study to assess its credibility. This is not an exhaustive list, but a helpful guide to factors that can contribute to a study’s trustworthiness.

Is the study peer reviewed by a credible journal?

We know that the peer review process is not perfect, but the idea of submitting a study to be evaluated before it goes to print is a solid one that has enabled several scientific journals to gain a high level of credibility.

If studies appear in journals like the British Medical Journal, Nature, or other well-respected journals, it does at least suggest that the people who reviewed the research were confident enough in its findings to allow it to be published and put their journal’s name on it.

Consider the journal’s reputation by doing a quick internet search.

What are others saying about the study?

Another thing you can do is look at what scientists not connected with the study are saying about it.

Even better, look at whether a study has been replicated by other scientists. If their findings match the original, or largely corroborate them, that’s a good sign of a well-put-together study that yields meaningful results.

Did it involve animal testing?

Studies, particularly those in the medical field, often rely on testing a drug, treatment protocol or practice on animals, for example, mice. It is important to stress that clinical advances have been made through animal research. Nevertheless, a study that has only reached animal trials cannot provide us with evidence of what might happen in humans.

For example, animal studies into the cancer fighting compound endostatin were incredibly encouraging, but in human tests its ability to fight tumors was not nearly as impressive.

This shows that animal studies may only be indicative of the potential for benefit or harm to human health. A benefit is not guaranteed, and many animal testing results do not translate to human trials.

What is the sample of people like?

If the research has reached human trials, or if it is a sociological study looking at groups of people, look at which groups are actually being used in the trial. Ask:

  • What is their race?
  • What is their age?
  • What is their sex?
  • What is their sexuality?
  • What is their income?
  • What is their health?

These factors are all important, because they tell us how broadly a study might apply to the general population.

For example, what is true for a white middle-aged man with a stable income may not be true for a poor black woman who is working a minimum-wage job. Was the sample large and mixed enough to account for this? Does the pattern it highlights hold true for all demographics or just some?

It’s worth paying attention to how the sample was recruited for the study, too. Were they selected randomly, or were they chosen for specific traits from a specific area?

Lastly, pay attention to the sample size. As a general rule, the bigger the sample size, the better. However, it’s important to understand that different groups naturally have different available sample sizes. Research involving trans people, for example, often has relatively small sample sizes because trans people are a relatively small minority.
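
Why does a bigger sample help? One way to see it is that the uncertainty around a result shrinks roughly with the square root of the sample size. The sketch below is illustrative only (it assumes a simple random sample and the standard approximation for a proportion, not figures from any real study):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion.

    Uses the normal approximation; p=0.5 is the worst case,
    which gives the widest possible interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample only halves the margin of error,
# which is why very large samples matter for small effects.
for n in (100, 1000, 10000):
    print(f"n={n:>6}: margin of error about ±{margin_of_error(n):.1%}")
```

Roughly speaking, a survey of 100 people carries an uncertainty of about ±10 percentage points, while 10,000 people brings it down to about ±1 point.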

How rigorous was the study?

If studies are making big claims, particularly when it comes to treatments for things like cancer, heart disease and other such serious ailments, we might look to see how rigorous the study was.

A randomized controlled trial (RCT) is often considered the gold standard for a study. In an RCT, researchers randomly assign people in the sample to groups in the trial, such as a group receiving the treatment that is being investigated.

This also involves allocating some to a placebo group, another critical step to assess whether the drug has a meaningful effect.
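
The random-assignment step can be sketched in a few lines. This is a minimal illustration (the participant names are invented): because each person has an equal chance of landing in either arm, pre-existing differences tend to balance out across the groups.

```python
import random

def randomize(participants, seed=None):
    """Shuffle participants and split them evenly into two trial arms."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)          # random order removes selection bias
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "placebo": shuffled[half:]}

# Hypothetical ten-person sample, split into treatment and placebo arms.
arms = randomize([f"participant_{i}" for i in range(10)], seed=42)
print("treatment arm:", arms["treatment"])
print("placebo arm:  ", arms["placebo"])
```

In a real trial the allocation would also be concealed from participants and, ideally, from the researchers assessing outcomes (blinding), but the core idea is this simple shuffle-and-split.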

If these measures were taken, and if these findings are replicated by scientists not connected with the original study, you can feel more confident that this study was trustworthy.

Lastly, look at how the scientists controlled for different things that might have affected the study. Health, income level, and other factors can all change the outcome. Studies cannot control for everything, so it is about context: Was a reasonable attempt made to tackle major issues that could have affected the outcome of a study?

Are we talking correlation or causation?

Very few studies that look at what might cause health complaints actually prove causation.

Rather, we are usually looking at adding to a body of evidence that suggests a pattern, or correlation. For example, that when a person does “x” activity, their likelihood of developing “y” goes up or down.

This doesn’t prove that “x” caused “y,” but it does suggest that scientists might be looking in the right place. Researchers can then use further tests to home in on the cause and, through an accumulation of data, become more certain.
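
A quick simulation shows why correlation alone can mislead. In this illustrative sketch (synthetic numbers, not data from any study), a hidden confounder drives both “x” and “y,” so the two correlate strongly even though neither causes the other:

```python
import random

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

rng = random.Random(0)
z = [rng.gauss(0, 1) for _ in range(5000)]    # hidden confounder
x = [zi + rng.gauss(0, 0.5) for zi in z]      # "x" activity, driven by z
y = [zi + rng.gauss(0, 0.5) for zi in z]      # outcome "y", also driven by z

# x never influences y, yet the correlation is strong.
print(f"correlation(x, y) = {pearson(x, y):.2f}")
```

This is why researchers control for confounders and run follow-up experiments: a strong correlation is a lead worth chasing, not a verdict.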

Be wary of groups that try to dismiss studies for only showing a correlation, though. A quick Google search will tell you if a study is a one-off or if the pattern keeps cropping up.

Are the conclusions actually supported by the data?

Sometimes people can read things into a study that are not actually there or draw conclusions that the data doesn’t support.

To assess this, look at whether the conclusion fits the framework of the study. If the study is limited, is the conclusion reflective of that? Or does it make broader claims, and if so, why?

Who funded the study?

People are generally skeptical of industry-funded studies, and sometimes with good reason.

We know that some industries — particularly the food industry — have purposefully reported only partial results in order to get the press coverage they want, or have deliberately funded studies with the hopes of misleading the public.

While this is concerning, industry funding is not necessarily indicative of a biased study, and it is often essential to getting research funded at all. A good test for whether an industry-funded study is trustworthy is looking to other studies on the topic and using them as context.

Does this study say something that is completely the opposite of what the other research is telling us? If that is the case, what new evidence does this study bring to support that finding? If it doesn’t bring new evidence, we have to ask why its results are so different from the consensus.

A General Guideline: Be Cautious of Big Claims

These are just a few things to look out for in research studies. A more general rule that can guide us as we navigate press reports is to be conservative: science is a slow and meticulous process.

If press reports are making big claims, it often (though not always) means they are concentrating only on the sensational elements of a study and not giving the full context.

Photo credit: Trust "Tru" Katsande on Unsplash.


Dr. Bob Abell, 9 months ago

I should add, that in general, this article is spot on. In particular, understanding that correlation is a first step to further research on causation is critical. Anyone interested in that topic should know of Hill's criteria as a starting point for such further research.

Dr. Bob Abell, 9 months ago

The "peer review" process has become more problematic with the merging of scientific media interests. For example, the Dutch company Elsevier now controls a very large number of scientific journals, but also has many publications that are supported by industrial advertisers. This creates a direct conflict of interest.

Elsevier became embroiled in a massive controversy when it first published, as peer reviewed, a study by Gilles-Éric Séralini on GMOs and glyphosate, then subsequently withdrew that study while, at about the same time, bringing on a person from Monsanto as biotechnology editor. This drew over 1,500 scientists to protest that the withdrawal was inappropriate, amid suspicion that potential loss of advertising revenue from the agribusiness industry was more likely the cause for retraction than any actual problem with the research.

At the same time, Elsevier has also come under fire from many educational institutions for overpricing critical scientific journals that depend in the first place on scientists within those institutions for their content. One of the results has been the rise of a number of perfectly credible online publishers of scientific papers. With this facility available, I would argue that most scientists in a given field can quickly ascertain if the research has been well done. Suppressing studies because they don't meet a particular bias is highly dangerous. One need only look at the 75 years it took from the first public