Wednesday, September 05, 2007

What Does a Non-Effect Look Like, Meta-Analytically?

I've recently been reading reviews and meta-analyses of some of the (in my judgment) weaker subareas of psychology: the relation between religiosity and crime, the effectiveness of business ethics courses, the relationship between self-reports about visual imagery and performance on visual imagery tasks, and others. Reviews and meta-analyses in these areas tend to be positive, though a substantial proportion of studies show no effect.

That raises the question: What pattern of results should we expect for a psychological non-effect? Suppose religiosity has no deterrent effect whatsoever on criminal behavior. Should we expect all studies on the matter to show no effect? Of course not!

Several factors conspire to suggest that a substantial proportion of studies will nonetheless show positive results.

First, there is the "experimenter effect" famously studied by my colleague Robert Rosenthal: An experimenter who expects an effect of a certain sort is more likely to find it than an experimenter using the same method who does not expect it. For example, Rosenthal found that undergraduate experimenters who were told their rats were "bright" recorded better performance from their rats than experimenters who were told their rats were "dull" (though the rats were drawn from the same population). Experimenter effects can be surprisingly resistant to codification of method -- showing up, for example, even in subjects' reaction times recorded by computer.

Second, there is the well-known "file drawer problem": A study finding a relationship between two variables is substantially more likely to be published than one finding no relationship. This bias probably runs through the entire research process: If a pilot study or an RA's research project yields no results, it will often be dropped; researchers will often not bother writing up negative results; and those who do seek publication may have difficulty getting their papers accepted.
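To make the file-drawer dynamic concrete, here is a minimal simulation sketch (in Python; every number is made up for illustration, and the religiosity-and-crime case is simplified to a two-group comparison): each simulated study tests a true effect of exactly zero, positive findings are always written up, and null results escape the file drawer only some of the time. The published record then over-represents positive findings relative to the studies actually run.

```python
# Hypothetical file-drawer sketch (made-up parameters throughout).
# The true effect is exactly zero, yet the published record still
# over-represents positive findings.
import math
import random
import statistics

random.seed(0)

N_STUDIES = 1000          # studies actually run
N_PER_GROUP = 50          # participants per group in each study
PUBLISH_NULL_PROB = 0.2   # chance a null result ever leaves the file drawer

published_positive = 0
published_null = 0
for _ in range(N_STUDIES):
    # Both groups get the same "criminality" distribution: no real effect.
    religious = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    secular = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(secular) - statistics.mean(religious)
    se = math.sqrt(statistics.variance(religious) / N_PER_GROUP +
                   statistics.variance(secular) / N_PER_GROUP)
    if diff / se > 1.96:                       # crude one-sided z test
        published_positive += 1                # positive results always get written up
    elif random.random() < PUBLISH_NULL_PROB:
        published_null += 1                    # most nulls stay in the drawer

total = published_positive + published_null
print(f"{total} studies published out of {N_STUDIES} run; "
      f"{published_positive / total:.0%} of the published ones report an effect")
```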

Third, the world is full of spurious correlations of all sorts, often for reasons having nothing to do with the hypothesis under study (an unmeasured confound such as age or neighborhood, a quirk of the particular sample).

Here, then, is what I'd expect from a psychological literature in which there really is no effect between two variables:
(1.) Some null results, but maybe not even as many as half the published studies.
(2.) Positive results, but not falling into a clearly interpretable pattern.
(3.) Some researchers consistently finding positive results across a variety of methods and subtopics, while others consistently do not.
(4.) A substantial number of methodologically dubious studies driving the apparent effect.
(5.) (Maybe) a higher rate of null results in the sophomore years of the research (after the first experiments that generated excitement about the area, but before referees start complaining that a number of null-effect studies have already been published).

I see most of these features in the dubious literatures I've mentioned above. Yet such literatures will tend to be reviewed positively because a mathematical meta-analysis will generally yield positive results and because reviewers will find reasons to explain away the null effects (especially by saying that the effect is not seen "in that condition").
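The meta-analytic upshot can be made concrete with the same kind of toy simulation as above (again, every number is made up): run a batch of studies with a true effect of exactly zero, filter them through the file drawer, and then feed the survivors to a standard fixed-effect (inverse-variance-weighted) meta-analysis. The pooled estimate tends to come out small but statistically "significant," even though no individual study was measuring a real effect.

```python
# Hypothetical sketch of a fixed-effect meta-analysis run over a
# file-drawer-filtered literature in which the true effect is exactly zero.
import math
import random
import statistics

random.seed(1)

N_STUDIES, N_PER_GROUP, PUBLISH_NULL_PROB = 1000, 50, 0.2

published = []  # (effect estimate, standard error) for each published study
for _ in range(N_STUDIES):
    a = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    b = [random.gauss(0.0, 1.0) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(a) - statistics.mean(b)
    se = math.sqrt(statistics.variance(a) / N_PER_GROUP +
                   statistics.variance(b) / N_PER_GROUP)
    # Significant positive results are always published; nulls only sometimes.
    if diff / se > 1.96 or random.random() < PUBLISH_NULL_PROB:
        published.append((diff, se))

# Standard inverse-variance (fixed-effect) pooling of the published estimates.
weights = [1.0 / se ** 2 for _, se in published]
pooled = sum(w * d for (d, _), w in zip(published, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))
print(f"{len(published)} published studies; pooled effect = {pooled:.3f}, "
      f"z = {pooled / pooled_se:.1f}")
```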

Reviews and meta-analyses are typically performed by experts in the subfield. You might think this is good -- and it is, in several ways. But it's worth noting that experts in any subfield are usually committed to the value of research in that subfield, and they are commenting on the work of friends and colleagues whom they may not want to offend, for reasons both personal and self-interested.

2 comments:

Genius said...

Now if you can just convince everyone else of this we can flush some of the rubbish out of our literature.

Or then again you might get countered by hundreds of published authors with incorrect conclusions.

Eric Schwitzgebel said...

Right! ;)