Science - how to spot the tricks!
It’s good to understand science! Marketing/advertising campaigns, newspapers, websites, even governments bandy around the terms “following science” or “science shows…”. I firmly believe that having more people who know what to look for and the questions to ask when it comes to reading the results of scientific studies and their write ups in the media would make the world a better place!
In this blog I pose a few questions that are good to ask, and how to go about seeing if the evidence can answer them.
Is it actually based on science?
It should be easy to find out whether there is actually any science at all behind the claims. Sometimes there is a * at the bottom of an article referencing a scientific paper. Sometimes there is a link to a paper in the article or advert. Sometimes the company that makes the product shares information about it on their website. If you can’t find it anywhere, that’s not usually a good sign - if the evidence is good, why wouldn’t they share it? If you ask and they can’t share it, or don’t give a clear answer, that really isn’t great.
A common tactic these days seems to be avoiding any science or research at all and just promoting a product with personal success stories. Remember these stories are written to sell things: they may not reflect the full truth or any potential downsides, and you don’t know what other factors may have influenced the positive results.
Is the science any good?
Now here is the key question! I’ll give you an example to start. A couple of years ago Strava launched a service where a partner of theirs would hoover up all your Strava training data and predict your race times - it sounded interesting! I did a bit of digging, and following the links from their website I quickly got to the scientific paper. The algorithm behind the service was based on the results of a single scientific trial, carried out fairly well from what I could see. The problem, however, was that the trial only used around 25-30 runners - a very small “sample size” (I talk more about this later in this blog post), which means chance can introduce a lot of error into the results. On top of that, all the runners in this small group were men between the ages of 20 and 35. It doesn’t take a rocket scientist to work out that this service isn’t going to be very accurate, especially for people who aren’t men between 20 and 35 years old…!
Some ways to check if the science is carried out well…
Generally speaking, the more people involved in a trial the more precise its results: the effects of chance and unexpected results are smoothed out over large numbers of participants. There is no magic number of participants that makes a result “statistically significant” - that depends on the size of the effect being measured too - but be wary of trials with only a few dozen people, and treat trials with hundreds or thousands of participants with more confidence. Of course big trials aren’t always possible, for example when trialling a treatment for a rare illness. You also want to look at who was included in the trial: if it focused on just one age group or ethnicity, or excluded females for example, then the results can’t be widely applied to the population. This is a common “trick” for sports products - trial the product on a subsection of the population you think will respond well, report those results to make it look better, and try to hide that fact in the press releases. Things are gradually improving, but it’s still common in medicine and sports science to have male-only trial participants because women’s hormonal cycles can affect trial results; this means you can’t apply the trial outcomes to half the population! Read the details of the people used in a trial, if you can, to see whether the results really apply to you.
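The point about small samples and chance can be seen in a quick simulation. This is just an illustrative sketch (the trial, the “true” effect of 5 units, and the person-to-person spread of 10 are all made up, not from any real study): it repeatedly runs imaginary trials of 25 people and of 1000 people and measures how much the observed average bounces around purely by luck.

```python
import random
import statistics

random.seed(42)  # fixed seed so the demo is repeatable

# Made-up numbers: the "true" average improvement is 5 units,
# with a lot of person-to-person variation (standard deviation 10).
TRUE_MEAN, SPREAD = 5, 10

def trial_estimate(n):
    """Run one simulated trial with n participants and return the
    average improvement observed in that trial."""
    return statistics.mean(random.gauss(TRUE_MEAN, SPREAD) for _ in range(n))

def chance_variation(n, repeats=2000):
    """Repeat the trial many times and measure how much its observed
    average varies from run to run purely by chance."""
    return statistics.stdev(trial_estimate(n) for _ in range(repeats))

small = chance_variation(25)    # a 25-person trial, like the Strava example
large = chance_variation(1000)  # a 1000-person trial
print(f"chance variation, n=25:   +/- {small:.2f}")
print(f"chance variation, n=1000: +/- {large:.2f}")
```

The 25-person trial’s result swings around several times more than the 1000-person trial’s, even though nothing changed except how many people took part - which is exactly why a 25-30 runner study makes a shaky foundation for a prediction service.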
In a double-blind trial, neither the participants nor the researchers conducting the trial know who is receiving which treatment until after the tests are completed. This stops bias creeping in from the researchers, and it means the placebo effect applies equally to both groups rather than skewing the comparison. An example might be a trial comparing a painkiller against a “control” - a sugar pill with no medicinal effect. If the participants knew they were taking the sugar pill and not the painkiller, they’d likely report very different results!
The placebo effect
The placebo effect is real and can have a huge impact on how we feel and react. Studies have shown that “sham surgery” on the knee - where they anaesthetise a person and give them a surgical scar, but do nothing more - has a positive impact on knee pain. The patients think they have had a treatment that is likely to work, and because of that their perception of the pain reduces. The placebo effect can be used in sports too: one trial found that participants who drank a drink they thought contained energy (but in fact didn’t) still performed better, with almost as big an improvement as those who had the real sports drink! Because the placebo effect is so strong, it’s important to design trials that can rule out its effect, see the next section.
Randomised Controlled Trials
Randomising is where the participants in the trial are assigned to the different groups purely by chance, ideally in a way that those running the trial can’t influence or predict. This removes the risk of deliberate (or even unintentional!) influence on the results by the people conducting the trial, and it balances out differences between the groups.
Controlled trials mean that the people given the thing being tested are compared to a group given the placebo; because both groups experience the placebo effect equally, this rules out its impact on the comparison.
Randomised Controlled Trials are the gold standard for conducting trials, and something worth looking out for. Without randomisation and a control group, it’s easy for researchers to cherry-pick the best participants to skew the results, and the placebo effect can distort the outcomes.
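As a rough sketch of the randomising step (the volunteer names and the `randomise` helper are my own invention, not from any real trial protocol - real trials use more careful allocation schemes), random assignment can be as simple as shuffling the participant list:

```python
import random

def randomise(participants, seed=None):
    """Randomly split participants into a treatment group and a
    placebo (control) group, taking no account of who they are."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)  # chance, not the researchers, decides the groups
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

volunteers = [f"volunteer_{i}" for i in range(10)]
treatment, control = randomise(volunteers, seed=1)
print("treatment group:", treatment)
print("placebo group:  ", control)
```

Because the split is driven by chance alone, nobody running the trial can steer the healthiest or most promising participants into the treatment group.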
Who’s paying for it?
Another example for you: back in the 1970s, large tobacco companies paid scientists and researchers huge sums of money to produce “scientific studies” showing that tobacco products didn’t harm health. Would you have trusted those results if you knew who was funding them? The same still happens; for example, sports nutrition companies funding trials that they hope will show their products in a good light. It’s always worth checking who is funding a trial and paying the researchers’ salaries, to judge whether they are unbiased.
Another trick is for companies and organisations to quietly drop trials that aren’t going well. If a company isn’t getting the results it wanted from a trial, it simply stops it, rather than letting the trial prove (for example) that its product actually doesn’t work! Scientific organisations are trying to get all trials registered on a publicly accessible database before they start, to make this sleight of hand easier to spot.
Systematic reviews
These are the most useful kind of scientific study: researchers gather all the papers that have been published on one topic, review the data all together, and see what conclusions they can draw overall. This increases the sample size, as all the trial participants are pooled, and it goes some way to reducing the impact of one or two poorly planned or badly run trials. If possible, look for systematic reviews, as they can give much more clarity on what works and what doesn’t in a particular area.
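The pooling idea can be sketched with a toy calculation (the trial numbers are invented, and real systematic reviews use more sophisticated weighting, such as inverse-variance weighting, than this simple sample-size-weighted average):

```python
# Each tuple is an imaginary trial: (number of participants, average effect seen)
trials = [(30, 8.0), (50, 3.5), (400, 4.2), (250, 4.0)]

def pooled_effect(trials):
    """Sample-size-weighted average across trials: big trials count
    for more, so one small outlier barely moves the overall figure."""
    total_n = sum(n for n, _ in trials)
    return sum(n * effect for n, effect in trials) / total_n

print(f"pooled effect across {len(trials)} trials: {pooled_effect(trials):.2f}")
```

Notice how the 30-person trial reporting a dramatic effect of 8.0 gets diluted by the larger trials, leaving a pooled figure close to what the big studies found - which is the point of combining trials rather than trusting any single one.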
The published trial
The journal or publisher that publishes a trial is also a factor in its credibility. If a study is published in UFOs Anonymous (a journal I've made up) you might treat it with less confidence than if it appeared in the British Medical Journal! Peer review is also an important step before a trial gets published: other experts in the same field read the trial - how it was designed and run, and its results - to see if they can spot any errors or mistakes.
It's worth saying that most trials show results in only a percentage of participants; very rarely does what's being tested work well for everyone who took part. There is no one-size-fits-all, so even if the headline says something works, always check the trial results - they will likely say it works in a certain percentage of people, and not everyone will be in that group!
For those interested in finding out more about research, trials and science I’d recommend “Bad Science” by Ben Goldacre as a great book to read!
Here is his website: https://www.badscience.net