Monday, February 14, 2011

The Decline Effect In Science

The New Yorker recently ran a pair of pieces by Jonah Lehrer on the decline effect in science.*  The decline effect is one of those topics that does not seem to receive adequate coverage because it (a) can be bedeviling to get one's head around, (b) makes a lot of people very uncomfortable because it speaks to a potential problem with the scientific method and with falsification, and (c) seems to open the door to specious hucksterism, such as climate-change skepticism and creationism, because it seems to implicate underlying problems in science as a whole. As a result, one sees very little written about the decline effect in either popular media or scientific publications despite its pressing importance.  Indeed, regarding the last of these three rationales, it is something of a shock that the political right has not seized on the decline effect as a cause célèbre.

In order to discuss the decline effect, we must be clear about what we are talking about - both in terms of how the scientific method operates and what the decline effect indicates.  Lehrer does an excellent job of breaking down the idea in the first of his New Yorker pieces as follows:

Before the effectiveness of a drug can be confirmed, it must be tested and tested again. Different scientists in different labs need to repeat the protocols and publish their results. The test of replicability, as it’s known, is the foundation of modern research. Replicability is how the community enforces itself. It’s a safeguard for the creep of subjectivity. Most of the time, scientists know what results they want, and that can influence the results they get. The premise of replicability is that the scientific community can correct for these flaws.

But now all sorts of well-established, multiply confirmed findings have started to look increasingly uncertain. It’s as if our facts were losing their truth: claims that have been enshrined in textbooks are suddenly unprovable. This phenomenon doesn’t yet have an official name, but it’s occurring across a wide range of fields, from psychology to ecology. In the field of medicine, the phenomenon seems extremely widespread, affecting not only antipsychotics but also therapies ranging from cardiac stents to Vitamin E and antidepressants: John Davis, a professor of psychiatry at the University of Illinois at Chicago, has a forthcoming analysis demonstrating that the efficacy of antidepressants has gone down as much as threefold in recent decades.

Some members of the scientific community have reacted to the Lehrer piece with irritation, while others have stated that the decline effect is simply 'science correcting itself' - a statement that could not more completely miss the point.  While scientists are entitled to feel defensive about their methods - indeed, the rigor required of a double-blind study would be very difficult to improve upon - it still seems that there is some element of 'gaming' of results. The tendency of academic publishers to favor studies that find positive correlations over those that find negative ones does not help matters.

Lehrer seems to point to the decline effect as most likely a product of two things: manipulations and inconsistencies in study sample sizes and the like, on the part of scientists seeking to verify their suspicions, combined with the statistical error introduced by relying on that semi-arbitrary old chestnut:
α = 0.05
chosen largely for the ease it once provided in calculations (most notably via the old standard of the slide rule).  However, errors introduced by computational mistakes or manipulation - producing significantly stronger correlations than may actually exist - have been procedurally corrected for over time, and the emphasis on reproducibility of studies is supposed to insulate against this.  The decline effect seems different in that correlations appear to disappear over time, and to do so gradually.  As Lehrer notes, the falsification process outlined by Karl Popper in The Logic of Scientific Discovery was designed to take place in a single grand experiment rather than through this piecemeal process.
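One way to see how a gradual decline could emerge from selection at α = 0.05, without any fraud at all, is a quick simulation.  The sketch below is purely illustrative (the effect size, sample size, and publication-bias rule are my assumptions, not anything from Lehrer's reporting): if only studies that cross the significance threshold get published initially, the published estimates are inflated by the lucky noise that got them over the line, and later replications published regardless of outcome drift back toward the true value - looking, in aggregate, like a declining effect.

```python
import random
import statistics

random.seed(42)

TRUE_EFFECT = 0.2   # assumed small real effect, in standard-deviation units
N = 30              # assumed participants per study
Z_CRIT = 1.96       # two-sided critical value for alpha = 0.05

def run_study():
    """Simulate one study: return the observed effect and whether it reached significance."""
    se = 1 / N ** 0.5                          # standard error of the mean
    observed = random.gauss(TRUE_EFFECT, se)   # true effect plus sampling noise
    return observed, abs(observed / se) > Z_CRIT

# Phase 1: only significant findings get published (publication bias).
published_first = [eff for eff, sig in (run_study() for _ in range(10_000)) if sig]

# Phase 2: replications are reported regardless of outcome.
replications = [run_study()[0] for _ in range(10_000)]

print(f"true effect:                   {TRUE_EFFECT:.2f}")
print(f"mean published initial effect: {statistics.mean(published_first):.2f}")
print(f"mean replication effect:       {statistics.mean(replications):.2f}")
```

The initially published mean comes out roughly double the true effect, while the replications cluster around the true value - no data were faked, yet the effect appears to 'decline' as replications accumulate.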

The question becomes: with certain types of errors - most specifically, manipulations by researchers, conscious or unconscious, to get the results they want - acting as known errors, to what extent do unknown errors play into the decline effect?  With science being our best device for understanding and interpreting the world around us, to what extent does the decline effect call into question the very premise that we are capable of carrying out objective science at all? Certainly, it must raise questions about the efficacy of almost any study.

Despite this, we also have thousands of studies in which the science and data do seem to be rigorous, results are consistently reproducible with similar statistical spreads, and results appear intuitively right, in that they reflect what we see through less rigorous forms of observation.  Certainly, it would be a fool's errand to use the decline effect as a rationale to dismiss science as a whole as a flawed ontological mechanism.  Importantly, those elements of science, such as climate change or evolution, that are most commonly dismissed for ideological reasons tend to be those that have been the most widely studied and which have shown repeated and non-declining confirmation.

What needs to be understood about the decline effect is what can be learned from it, in order to determine objectively what is going wrong and thus how to correct for it.  An adjustment of the α used as the norm may be an important first step, as would still tighter controls upon methodology to prevent the insertion of researcher bias, but the phenomenon of the decline effect itself needs to be more widely studied.  Thus far, meta-studies of the studies that have 'suffered' from the decline effect appear inconclusive.  This, however, merely speaks to the need for further study.
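The arithmetic behind tightening α is easy to check directly.  The sketch below (sample size and trial count are arbitrary choices of mine, purely for illustration) simulates studies of a null effect - no real correlation at all - and counts how often pure noise clears each threshold: by construction, about one null study in twenty looks 'significant' at α = 0.05, while a stricter α = 0.005 cuts false positives roughly tenfold.

```python
import random

random.seed(0)

N = 30          # assumed participants per study
TRIALS = 20_000 # number of simulated null studies

def z_statistic():
    """One null study: no true effect, only sampling noise."""
    se = 1 / N ** 0.5
    return random.gauss(0.0, se) / se

zs = [abs(z_statistic()) for _ in range(TRIALS)]

# Two-sided critical values: 1.96 for alpha = 0.05, 2.81 for alpha = 0.005.
for alpha, z_crit in [(0.05, 1.96), (0.005, 2.81)]:
    rate = sum(z > z_crit for z in zs) / TRIALS
    print(f"alpha = {alpha}: false-positive rate ~ {rate:.3f}")
```

The trade-off, of course, is that a stricter threshold also demands larger samples to detect real effects - which is precisely why the choice of α needs study rather than decree.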

Perhaps what should be understood in discussing the decline effect is that, while it questions certain assumptions within science, it does not devalue science or the scientific method.  That the decline effect seems to produce patterned errors speaks to an error, or series of errors, that is being systematically introduced rather than one occurring at random.  It speaks to the need for greater oversight in study construction and execution to guard against such error.

As humans, we have a genetic tendency to want to intuit things.  We appreciate the observable and the tactile, and our brains seek out patterns that appeal to us emotionally or aesthetically.  Science does not operate under the same set of limitations; however, as science is conducted by humans, the occasional error will be introduced.  It is the role of the scientific method to limit the number of errors that can occur.  However, as Popper noted, definitive Truth remains unknowable, and we must perpetually falsify (in the Popperian sense) those truths that we do arrive at as a means of bringing us ever closer to that unreachable position of what is.

The decline effect is perhaps best interpreted as a feedback mechanism.  It illuminates certain shortcomings in our current modus operandi and demands that we falsify and refine anew.  We must perpetually destroy the intellectual frameworks that we build in order to better understand.  Science exists as a method of discovery rather than a keeper of absolutes. This process is almost, by definition, postmodern in its rationale.‡  We should not despair of this, but rather respond and learn, as such are the wages of discovery.

______

* The whole of Lehrer's first piece should be read as a primer to this piece for those who have not previously encountered the decline effect.  He does a wonderful job of breaking down the idea and presenting it within context.
Many thanks to my friend Nick 'Arkan' Meyers for these links and for his thoughts on the decline effect, which have helped to shape my thinking on this subject.
‡ Postmodernism is best defined, from a scientific perspective, as the perpetual negotiation and renegotiation of difference.
