Scientific American throws down the gauntlet!!!


I have previously written (Fraud, Bias, Funding) about problems in the scientific research community.  I have taken some criticism because I work in the “private” sector instead of the “public” sector.  Anyone who really understands research would not make such a foolish critique.  It is the private sector that has driven technology to where it is today.  The idea of falsifying a product that you sell to customers is laughable.  The price is no repeat business, and in most cases that is the end of the business, certainly in a highly competitive market.

Now Scientific American has essentially come out and said the same thing I have previously written.  An Epidemic of False Claims is their new article on the topic.  I will post the entire article below, but first I want to highlight and discuss a few passages.

“False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine.”

This is comparable to what I have found.  The common theme is money.  The more money there is to be made from a particular result, the more likely the results are to involve shenanigans.  This is where the really big problem in scientific research shows up.  It can be summed up nicely in three words.

Conflict of Interest!

Whether the conflict is financial, political or personal, such conflicts abound in the scientific community.  The peer-review system does nothing to diminish the problem.  As has been shown in climate research, the peers who do the reviewing often have the exact same conflicts of interest as the authors of the paper they are reviewing.

This is where Scientific American really lays it down.  They recognize that the problem is real and that it is not going away by itself.

“First, we must routinely demand robust and extensive external validation.”

“Many fields pay little attention to the need for replication or do it sparingly and haphazardly.”

Many people have become skeptics by going through the process of trying to replicate the results of climate scientists.  Steve McIntyre is one of those who tried to replicate the results of Dr. Mann’s hockey stick, and it simply could not be done using any valid method.

In my little “I told you so” moment, I would like to point out that Scientific American is saying the same thing about the same problems.  Their solution isn’t what mine would be (more funding for more studies…), but they are at least acknowledging that the problem is real and needs to be solved before all trust in the scientific community is gone.
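Ioannidis’s recommendation below, that reports should account for the number of analyses that were run, is easy to see with a quick simulation.  The sketch below is my own illustration, not anything from the article, and the sample sizes and thresholds are arbitrary assumptions: it runs many analyses on pure noise and counts how often a “significant” result shows up, with and without a simple Bonferroni-style correction for the number of analyses.

```python
# A quick simulation (illustration only, not from the article): run many
# analyses on pure noise and count how often a p < 0.05 "discovery" appears,
# with and without a Bonferroni correction for the number of analyses.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_analyses = 100      # independent analyses per "study", all on pure noise
n_studies = 1000      # number of simulated studies
alpha = 0.05

false_hits = 0            # studies with at least one uncorrected "finding"
false_hits_corrected = 0  # same, after Bonferroni correction

for _ in range(n_studies):
    # Each analysis compares two groups of 30 drawn from the same distribution,
    # so any "significant" result is by construction a false positive.
    p_values = [
        stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
        for _ in range(n_analyses)
    ]
    if min(p_values) < alpha:
        false_hits += 1
    if min(p_values) < alpha / n_analyses:  # Bonferroni: divide by number of analyses
        false_hits_corrected += 1

print(f"Studies with a spurious finding (uncorrected): {false_hits / n_studies:.0%}")
print(f"Studies with a spurious finding (Bonferroni):  {false_hits_corrected / n_studies:.0%}")
```

With 100 analyses at the usual 0.05 threshold, nearly every simulated study turns up at least one spurious “finding”; dividing the threshold by the number of analyses brings that rate back down to roughly 5 percent, which is the point the article makes about downplaying false positives.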

Full Article Below:

————————–

An Epidemic of False Claims

Competition and conflicts of interest distort too many medical findings

By John P. A. Ioannidis

False positives and exaggerated results in peer-reviewed scientific studies have reached epidemic proportions in recent years. The problem is rampant in economics, the social sciences and even the natural sciences, but it is particularly egregious in biomedicine. Many studies that claim some drug or treatment is beneficial have turned out not to be true. We need only look to conflicting findings about beta-carotene, vitamin E, hormone treatments, Vioxx and Avandia. Even when effects are genuine, their true magnitude is often smaller than originally claimed.

The problem begins with the public’s rising expectations of science. Being human, scientists are tempted to show that they know more than they do. The number of investigators—and the number of experiments, observations and analyses they produce—has also increased exponentially in many fields, but adequate safeguards against bias are lacking. Research is fragmented, competition is fierce and emphasis is often given to single studies instead of the big picture.

Much research is conducted for reasons other than the pursuit of truth. Conflicts of interest abound, and they influence outcomes. In health care, research is often performed at the behest of companies that have a large financial stake in the results. Even for academics, success often hinges on publishing positive findings. The oligopoly of high-impact journals also has a distorting effect on funding, academic careers and market shares. Industry tailors research agendas to suit its needs, which also shapes academic priorities, journal revenue and even public funding.

The crisis should not shake confidence in the scientific method. The ability to prove something false continues to be a hallmark of science. But scientists need to improve the way they do their research and how they disseminate evidence.

First, we must routinely demand robust and extensive external validation—in the form of additional studies—for any report that claims to have found something new. Many fields pay little attention to the need for replication or do it sparingly and haphazardly. Second, scientific reports should take into account the number of analyses that have been conducted, which would tend to downplay false positives. Of course, that would mean some valid claims might get overlooked. Here is where large international collaborations may be indispensable. Human-genome epidemiology has recently had a good track record because several large-scale consortia rigorously validate genetic risk factors.

The best way to ensure that test results are verified would be for scientists to register their detailed experimental protocols before starting their research and disclose full results and data when the research is done. At the moment, results are often selectively reported, emphasizing the most exciting among them, and outsiders frequently do not have access to what they need to replicate studies. Journals and funding agencies should strongly encourage full public availability of all data and analytical methods for each published paper. It would help, too, if scientists stated up front the limitations of their data or inherent flaws in their study designs. Likewise, scientists and sponsors should be thorough in disclosing all potential conflicts of interest.

Some fields have adopted one or several of these mechanisms. Large international consortia are becoming commonplace in epidemiology; journals such as Annals of Internal Medicine and the Journal of the American Medical Association instruct authors to address study limitations; and many journals ask about conflicts of interest. Applying the measures widely won’t be easy, however.

Many scientists engaged in high-stakes research will refuse to make thorough disclosures. More important, much essential research has already been abandoned to the pharmaceutical and biomedical device industries, which may sometimes design and report studies in ways most favorable to their products. This is an embarrassment. Increased investment in evidence-based clinical and population research, for instance, should be designed not by industry but by scientists free of material conflicts of interest.

Eventually findings that bear on treatment decisions and policies should come with a disclosure of any uncertainty that surrounds them. It is fully acceptable for patients and physicians to follow a treatment based on information that has, say, only a 1 percent chance of being correct. But we must be realistic about the odds.

Posted in Bad Science and Politics and Global Warming by inconvenientskeptic on June 2nd, 2011 at 8:07 am.

This post has one comment

  1. “natural sciences” – hmmm, could that be AGW/Climate Change?

    What % of papers published in the last 10 years fall under the “natural sciences” and relate to AGW/CC? Given the amount of grant $ at stake, I’d guess it’s close to half… any idea?