Science is Never Certain (and Lichens Don’t Cure Cancer Either)

By Madronna Holden

updated 12.11.2013

Lichens don’t cure cancer, or rather, we don’t know whether they do, in spite of the article touting the lichen cure that was accepted for publication by over one hundred scientific journals. The article is a fake created by the journal Science, and it contains some large bloopers: it asserts findings unrelated to its own research and promises to forge ahead with human testing without any safety protocols.

This bogus study was meant to demonstrate the importance of peer review and of prestigious “first tier” journals as against “open access” journals. Yet not all “first tier” journals like Science caught the problems in the fake, whereas the open access journal PLoS ONE did. And according to an analysis published on October 19, 2013, contemporary scientific work in general is riddled with errors.

Indeed, the larger issue in need of discussion here is the intersection of science and culture—and science and profit.

Science itself recently published a research paper that has since been widely discredited, and it is not the only prestigious scientific journal with such problems. Of 73 articles recently published by the New England Journal of Medicine, 50 were co-authored by drug company ghostwriters.

Rather than tightening its peer review process in light of this, the Journal decided to de-emphasize the critical assessment of industry funding. Lest we think industry funding has little impact on research results, we should note that research sponsored by drug companies portrays drugs as positive 3.6 times more often than the same research funded by government or non-profits.

Notably, certain journals have taken a more proactive stand in this respect. The British Medical Journal (as it writes in an editorial just this month) has joined PLoS Medicine, PLoS ONE, PLoS Biology, The Journal of Health Psychology, and the journals published by the American Thoracic Society in refusing to publish research funded in whole or in part by the tobacco industry.

Such a stand acknowledges the ways that industry shapes knowledge to amplify profit, as in the cases of those who hid the health effects of lead on children and of plastics manufacture on workers, of the companies that concealed heart irregularities in those taking Vioxx and Avandia, and of the company that launched a secret research program to vindicate asbestos.

By burying and tinkering with scientific data, companies postponed the loss of profit resulting from making good data public.

The profit motive calls into question the work of the American Council on Science and Health, an advocacy group that is a self-proclaimed attacker of “junk science,” going after environmental and health legislation and defending the likes of fracking, BPA, and pesticides. Secret documents recently made public indicate that the group is directly funded by the industries selling the products it defends.

Gilbert Ross, the group’s director, previously had his medical license pulled while he served time in prison for defrauding the New York State Medicaid program to the tune of 8 million dollars.

And even if we take the profit motive out of the equation, cultural values play a large part in scientific findings. Peer reviewers, for instance, may unwittingly add to social prejudice. Social psychologist Laura Purdy makes a case for hiring seemingly less qualified women in order to give women an equal chance: not only do scientific discoveries take longer to be accepted when made by women, but both men and women in the contemporary US evaluate the very same resumes and articles as “better” when attributed to a man rather than a woman.

Respected scientist Shirley Strum relates how her own groundbreaking research on baboons was at first locked out of regular academic and publishing channels. The “old boy” scholars did not want to give up their position on innate baboon aggressiveness and male dominance in spite of Strum’s research, which was more meticulous than their own. She was the first to actually follow baboons on their daily rounds in the field, as well as to record the social interactions of particular troops and individual baboons.

Geneticist Barbara McClintock was forced to finance her own work when universities and research institutions refused to hire her.  Her breakthroughs eventually won her the Nobel Prize, but in the context of Western science, her method of “listening to the corn” traveled a hard road to acceptance.

Eileen Pollack’s recent New York Times essay outlines the ongoing problems of gender prejudice in evaluating scientific work, as well as in assessing the potential of students going into science.

Such prejudice affects the acceptance of knowledge from non-mainstream cultures as well. I am old enough to remember the dismissal of indigenous ecological knowledge by mainstream peer reviewers. Today the burgeoning of ecological science and the number of indigenous individuals earning advanced degrees have created a social context in which such knowledge can take its rightful place in scientific understanding.

Predisposition shades our scientific observations in purely physical ways as well. Purdy also cites an experiment in which observers recorded the performance of one group of rats in a maze as better than another—even though the groups were in fact entirely equal.  The difference?  The observers were told beforehand that one group was smarter.

Maintaining that “objective” science circumvents social and personal values only makes such values unconscious. Goethe once observed, “all fact is really theory.” A presumed “fact,” that is, exists in the context of a particular worldview, which is itself a theory of the world entailing assumptions, perceptions, and choices. As Thomas Kuhn’s history of Western science details, science has persistently ignored data that does not fit the worldview of its time, only accepting such data after a shift in worldview.

This history provides a solid case for the critical assessment of scientific values. What we are conscious of, we can compensate for. What we don’t recognize, on the other hand, we can’t fix, as in the tragic medical errors that kill at least 98,000 patients annually in the context of a medical school culture that encourages doctors to ignore mistakes, since it teaches that doctors don’t make them.

It is this same culture that causes scientific errors in general to be denied, and problems of data fraud to be attributed to “others.” A compilation of twenty-one surveys of researchers in various scientific disciplines shows that whereas only 2 per cent admitted fudging their data, 28 per cent claimed to know colleagues who did.

Science will never live up to its claims of being self-correcting until scientists are able to admit their mistakes:  as Bruce Alberts, then editor of Science, recently testified before Congress, scientists  “need to develop a value system where simply moving on from one’s mistakes without publicly acknowledging them severely damages, rather than protects, a scientific reputation.”

But the idea that scientists don’t make mistakes is part of the arrogance endemic to the Western worldview, expressed in DNA co-discoverer James Watson’s question, “If we do not play God, who will?” This question has nothing to do with science and everything to do with cultural values that cast humans as dominators of the natural world, a trend in Western thought as longstanding as it is unfortunate. The ancient Greeks termed unwarranted human arrogance hubris, and their literature is full of examples in which hubris fated human downfall.

“Playing God” with the natural world has brought us to our current condition—in which every natural system on earth is in decline. It is neither science nor wisdom to cling to a worldview with such results.

Other values inherited from our current culture contribute to the ineffectiveness of science’s self-correcting mechanisms. Studies replicating previous work are rarely funded, and researchers generally assume that replication is done by those with a “bone to pick”: the characteristic interpretation in a culture based on the value of competition rather than cooperation.

Paul Woodruff’s Reverence offers an alternative to the arrogance that closes scientific minds, and to the competitive stance that stops scientists from admitting and learning from their own mistakes. He details how wise historical traditions have cultivated reverence toward other lives as a means of combating tyranny and authoritarianism. Reverence facilitates the opening to the world essential to good science, expressed by McClintock’s “listening to the corn” and Strum’s getting to know baboons as individuals making their own choices.

The indigenous value of acknowledgement discussed by Oneida elder Joanne Shenandoah, also pays homage to the value of other lives:  “We acknowledge their worth, acknowledge that we are equal with the woodland, the trees, the berries, the two-legged and the four-legged. We share the same air, space, and water.”

I can only imagine how our science might evolve if it held such acknowledgement of the world it hopes to know.

This would certainly prompt us to replace human arrogance with an appropriate dose of humility, and to make self-reflection an essential part of good science. After all, if science is based on observation, shouldn’t we know as much as possible about the observer? Indeed, Nobel Prize winning physicist Werner Heisenberg’s Uncertainty Principle indicates that the very act of measurement alters the physical outcome of an experiment.

Heisenberg’s observations were focused on the arena of quantum dynamics, but philosopher David Hume, termed by the Stanford philosophy site “the most important philosopher ever to write in English,” argues that there is no such thing as scientific certainty, period.

Hume notes that scientific methodology develops its theories from observed experience. Such theories can only be our best guess at the way the world works; that is, they are hypotheses that give us probability rather than certainty. As in a coin toss, we can predict the chance that heads rather than tails will come up. We can enumerate the factors impinging on the outcome. We can even assign a statistic to that outcome.

But no matter how many times we toss the coin, we cannot say for certain that heads will be our next result. The issue of significance in scientific research is intimately intertwined with judging probability.
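The coin-toss point can be made concrete with a short simulation (a sketch; the fairness parameter, trial counts, and seed below are illustrative choices, not from the essay):

```python
import random

def toss_frequency(n_tosses, p_heads=0.5, seed=42):
    """Simulate n_tosses of a coin and return the observed fraction of heads."""
    rng = random.Random(seed)
    heads = sum(1 for _ in range(n_tosses) if rng.random() < p_heads)
    return heads / n_tosses

# The observed frequency settles toward the true probability as trials grow...
for n in (10, 100, 10_000):
    print(n, toss_frequency(n))

# ...but no run of past tosses fixes the next outcome: each toss is still
# heads with probability p_heads, regardless of the history before it.
```

The simulation illustrates Hume’s distinction: long-run frequencies support a probable hypothesis about the coin, but they never yield certainty about the next toss.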

Unfortunately, this is not something with which enough scientists are familiar. In 2005 John Ioannidis, an epidemiologist at Stanford University, showed why, as a matter of statistical analysis, “most published research findings are probably false.”
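The arithmetic behind a claim like this can be sketched briefly. If only a small fraction of the hypotheses scientists test are actually true, then even with a conventional significance level and respectable statistical power, a large share of “positive” results will be false alarms. The numbers below are hypothetical illustrations, not figures from Ioannidis’s paper:

```python
def positive_predictive_value(prior, power=0.8, alpha=0.05):
    """Fraction of statistically 'significant' findings that are actually true,
    given the prior probability that a tested hypothesis is true."""
    true_positives = prior * power          # true hypotheses correctly detected
    false_positives = (1 - prior) * alpha   # false hypotheses that pass anyway
    return true_positives / (true_positives + false_positives)

# If 1 in 10 tested hypotheses is actually true, most positives still hold up:
print(round(positive_predictive_value(prior=0.1), 2))   # 0.64

# But with a 1-in-100 prior, most published positives are false:
print(round(positive_predictive_value(prior=0.01), 2))  # 0.14
```

The point is not the particular numbers but the shape of the argument: significance testing controls the error rate per test, while the truth rate of the published literature also depends on what fraction of tested ideas were plausible to begin with.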

In other ways, as well, science is constitutionally incapable of knowing everything about our world.  But if we base our science on careful observation, along with a critical assessment of our perceptions and values and an understanding of the limits of our knowledge, we can do good science.

However, if we skip such critical self-assessment, we get the type of Monsanto-science that asserts that its genetic engineering is necessary to feed the world. According to a report by the Union of Concerned Scientists, genetically engineered soy seed actually yields less than its traditional counterpart. The Monsanto claim also ignores the key issue of food distribution. Indeed, Monsanto’s activities drive the consolidation of small farms, taking land out of the hands of the hungry.

In general, the cultivation techniques of industrial agriculture that many term “progress,” without evaluating what progress actually is, lead to the deterioration of global farmland through the escalating use of pesticides, chemical fertilizers, and water.

In assessing the scientific claims for such agriculture, we might well consider Mark Twain’s caution:  “It’s not what he doesn’t know that worries me. It’s what he knows for certain that just ain’t so.”

Keeping our minds open to what we don’t “know for certain” helps compensate for our selective perception, illustrated by a video in which a group of students play basketball while a man in a gorilla suit walks through the scene. Viewers told beforehand to count the number of times the basketball is passed miss the presence of the gorilla entirely, as I myself did the first time I saw the film.

Distraction works.

Enough of the TV audience viewing drug ads fix their attention on the people depicted in healthy poses to miss the voice-over rattling off a drug’s sometimes fatal side effects. Thus drug ads remain commercially profitable despite the required listing of side effects.

We see what we expect to see. We also see what we are rewarded for seeing.  If we keep our eyes on the ball of career success, on corporate profits, on the prestige of science—or simply on the habits of our modern lifestyle—we easily miss the side effects of our choices.

Just as we need humility that honors the limits of our knowledge, we need a perspective that takes our whole interdependent world into account.

Assessing our values is the first step in doing good science.

Choosing our values is the next one.

We have considerable historical precedent to help us in making such choices. We can choose values that have accompanied human survival over thousands of years:  values such as humility, care, reverence, and thanksgiving– and get to know our world as a friend rather than a dominator.

See also

Why Science will Never Know Everything.

And for a profile of scientists who have done the right thing, working to share accurate information, see these personal profiles of science “champions”.

This essay is copyright 2013 by Madronna Holden. Feel free to link to or share it. These are important issues to discuss.

6 Responses

  1. This is frightening, in particular the bits about the pharmaceutical-company-funded research and advertising. I’ve thought for a while that “ask your doctor about X” ads shouldn’t be legal, because they influence the patient to insist on X instead of whatever medicine works best for them (or instead of no medicine at all). I’ve known for a while that the tobacco industry funds research to declare tobacco not as dangerous as other research insists, in hopes that more people will buy tobacco, and of course pharmaceutical companies fund research because, in the absence of government-funded research, where else will new medicines come from. But it hadn’t occurred to me that pharmaceutical companies fund research for the same reasons tobacco companies do, and then market the results the same way.

    • I agree with you that this is frightening. Not only because of the neglected side effects of so many drugs (re your point about some medicine being worse than none at all), but the way such ads give us the idea that happiness comes in a second or two from popping a pill.
      I think our only hope is to make the Big Pharm funding of medical research (and pulling the strings behind medical publications) more public.
      Thanks for your comment.

  2. One of the areas I work with my students on is recognizing and acknowledging their sources of error in science experiments. For many, this is the most difficult part to do in their report. Often, I have to offer ideas of where I think errors may have occurred before they will be able to write anything in that section of the lab report. Although this is different from the admission of mistakes mentioned in this article, it speaks to some of the same mentalities. It seems to be very difficult for many humans to admit mistakes. Even when I coach them through this process, it is easier for them to finally see mistakes within the instruments or the data, rather than a human mistake within their control. Mistakes are not failures, and failures are not a death sentence, yet we often see them treated in that way. In the professional community, to admit a mistake publicly could very well result in loss of practice, loss of income, and loss of the life they know. How difficult that must be to accept when faced with the choice of admitting your error or allowing the public to accept it as truth so that your livelihood might continue. And that decision only comes into play when one is able to recognize one’s own errors.

    • Thoughtful discussion concerning mistakes, Micki.
      And of course if we don’t admit our mistakes, we can never learn from them. That is one advantage of traditional stories– they allow us to experience the mistakes of the past in story–and thus without bringing destructive consequences on ourselves for certain actions.
      I have seen a contrary example to the ones concerning hesitancy to admit mistakes here– when I collected oral history among members of pioneer families some decades back, my consultants were often anxious to share their mistakes for the sake of future generations.

  3. I went and read Eileen Pollack’s NYT essay that the blog links to, and I was glad I did. Thank you so much. It was very illuminating. It really ought to be mandatory reading for all women heading off to college in math or the sciences. I have lived long enough to know that there is a dead zone in men’s and women’s communication that men are less likely to cross over to share important thoughts. There is a bit both ways, but I think more men are caught like deer in the headlights, thinking that doing nothing is good enough, especially since doing nothing is so easy to do. The more they actually acknowledge and respect a woman’s ability, the less able they seem to be to communicate with her effectively.

    Of course they would never do that with other men, especially men with promise! I recognized it as early as 5th grade, although I took it personally and had not yet understood its link to gender.

    Now the bit about the MIT administrators ignoring a well-staffed study that concludes there is an issue. I can see them acknowledging it and not knowing what to do to make things fair, but downright ignoring the study? ARG! If that just doesn’t say it all, I am not sure what does.

    • I am glad you liked the NYT essay, Kate.
      It is important to separate out what is about you personally (as in your fifth grade assumptions) from what is about gender perceptions. I hope that someday we will no longer lose the insights of women scholars and scientists because of such prejudices.
      But we are making headway. A few decades back (and under pressure) some of the world’s best orchestras began to conduct auditions behind a screen, so that gender could not be determined. And within a few years, the percentage of women in these orchestras went from one per cent to nearly 50 per cent.
      Now we need a double-blind way of reviewing scholarly and scientific work as well, so that peer reviews do not simply enforce social prejudices rather than raise the standard of published work.
      Now you may want to check out COACH, which supports women in the sciences.
      Thanks for your comment!
