Before I start, a point or two about methodology.
Ever seen an article that tells you, for example, that drinking red wine in moderation can have a positive effect, then a few weeks later seen another that says exactly the opposite? Both conclusions may be defensible given the statistical approach each study used, and both may be hopelessly misleading, for the very same reasons. So how do you tell the difference?
First of all, you have to know where the results come from - what kind of study was conducted.
Ideally you want a Cohort Study, where a large number of people are tracked over time. Unfortunately this kind of study is hugely expensive and, while good for common conditions like influenza, it is somewhat impractical for conditions that are rarer and take a long time to develop.
Cohort study - Wikipedia, the free encyclopedia
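If it helps, here is a minimal sketch in Python (with entirely made-up numbers) of what a Cohort Study gives you that other designs don't: because everyone is followed from the start, you can measure how often the condition actually develops in each group and compute a relative risk directly.

```python
# A minimal sketch with made-up numbers (not real data).
# In a Cohort Study you start with exposed and unexposed people who are
# all free of the condition, follow them over time, and compare how many
# in each group go on to develop it (the incidence).

exposed   = {"developed": 30, "total": 1000}   # e.g. regular wine drinkers
unexposed = {"developed": 20, "total": 1000}   # e.g. non-drinkers

risk_exposed   = exposed["developed"] / exposed["total"]      # 0.03
risk_unexposed = unexposed["developed"] / unexposed["total"]  # 0.02

# Relative risk: how much more likely the exposed group is to develop
# the condition over the follow-up period.
print("relative risk:", round(risk_exposed / risk_unexposed, 2))  # 1.5
```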
Alternatively you can use a Case Control study, a type of study that compares a group of people who have a condition with a 'control group' who do not. Case Control studies are cheaper and take less time to conduct. Unfortunately, they are prone to bias because the groups are selected on the basis of the condition itself (for example, if you want to find out whether red wine causes heart disease you might select a group of wine drinkers who have heart disease but leave out wine drinkers who don't. See how that might influence the findings?)
Interestingly, it was a Case Control study that first suggested a link between smoking and cancer. The results were questioned until Cohort Studies demonstrated that the link was real, at which point the criticism of the original Case Control findings evaporated, because Cohorts are considered far more reliable. Those Cohort Studies are the reason I will take issue with anyone who says that smoking does not cause cancer.
Case-control study - Wikipedia, the free encyclopedia
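To make the selection problem concrete, here is a small Python sketch with purely hypothetical numbers. A Case Control study can only estimate an odds ratio, and that estimate is only trustworthy if cases and controls are recruited without regard to the exposure; skew the recruitment even a little and an association appears out of nothing.

```python
# Hypothetical counts only, chosen to illustrate the selection problem.

def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    """Odds of exposure among cases divided by odds of exposure among controls:
    (exposed_cases / unexposed_cases) / (exposed_controls / unexposed_controls)."""
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

# A fair sample in which exposure has nothing to do with the condition:
# 40% of cases and 40% of controls drink wine, so the odds ratio is 1.0.
print(odds_ratio(40, 60, 40, 60))   # 1.0 -> no association

# Now suppose recruitment happens to pick up wine-drinking cases more
# readily (say, cases are recruited at a clinic wine drinkers tend to use).
# Doubling the exposed cases while nothing real has changed:
print(odds_ratio(80, 60, 40, 60))   # 2.0 -> a spurious association appears
```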
You can also use Meta Analysis. These are reviews of existing studies: a researcher finds all the appropriate studies, selects the data specific to the issue being studied, combines it, and calculates what it all suggests. In this approach you don't have to do any fieldwork at all, so it's quick and comparatively cheap. Unfortunately, because the underlying studies use different methodologies, it is difficult to combine the data sets without introducing confounders (issues that skew the results), and the approach is also subject to bias, because leaving certain studies out can skew the findings in an almost infinite number of ways. It's also worth checking whether the analysis was conducted by someone with an agenda; I would be extremely skeptical of any meta analysis conducted by a corporation with a vested interest in seeing the results come out one way or the other.
Meta-analysis - Wikipedia, the free encyclopedia
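For a feel for the mechanics, one common way to combine studies is inverse-variance (fixed-effect) weighting. The sketch below uses invented effect sizes and deliberately ignores between-study differences, which is exactly where the confounders mentioned above creep in.

```python
import math

# Invented effect estimates (log odds ratios) and standard errors from
# three hypothetical studies -- purely illustrative numbers.
studies = [
    (0.20, 0.10),   # (effect, standard error)
    (0.05, 0.15),
    (-0.10, 0.20),
]

# Fixed-effect (inverse-variance) pooling: each study is weighted by
# 1 / SE^2, so more precise studies count for more.
weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * effect for (effect, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# Roughly 0.116 +/- 0.151 for these numbers.
print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI half-width)")
```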
Sorry if this seems like an introduction to statistics, but THIS is the sort of stuff you need to understand before you say things like "it's good science" and expect your views to be considered well informed.
Ooh - one other thing about Meta Analysis. Two studies can each point in one direction, but when they are combined they can point in exactly the opposite direction. It's called Simpson's Paradox. Weird, huh? Starting to get the sense that this is difficult stuff? Well, that's because it is.
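If you want to see the paradox with your own eyes, here is a tiny Python example with invented numbers: within each patient group, treatment A does better, but pool the groups and B looks better, purely because of which patients each treatment happened to be given to.

```python
# Invented numbers, chosen only to make the paradox visible.
#                       treatment: (successes, patients)
groups = {
    "mild cases":   {"A": (90, 100),   "B": (800, 1000)},
    "severe cases": {"A": (300, 1000), "B": (20, 100)},
}

# Within each group, treatment A has the better success rate...
for name, arms in groups.items():
    rates = {arm: round(s / n, 2) for arm, (s, n) in arms.items()}
    print(name, rates)
# mild cases   {'A': 0.9, 'B': 0.8}
# severe cases {'A': 0.3, 'B': 0.2}

# ...but pooled together, B looks better, because A was mostly given to
# the hard (severe) cases and B to the easy (mild) ones.
for arm in ("A", "B"):
    successes = sum(groups[g][arm][0] for g in groups)
    patients = sum(groups[g][arm][1] for g in groups)
    print(arm, round(successes / patients, 2))
# A 0.35
# B 0.75
```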