Okay, I'm now officially a John Ioannidis fanboy.
You might have heard of Ioannidis. In 2005 he published the notorious and elegant "Why Most Published Research Findings Are False". But what, I wondered, has he been up to recently? I went to his homepage. Just reading the titles and abstracts of his recent publications was a pleasure for a nerdy, stats-loving skeptic like me!
Let me share just one, Schoenfeld and Ioannidis (2013): "Is Everything We Eat Associated with Cancer? A Systematic Cookbook Review". Schoenfeld and Ioannidis chose recipes at random from a recent cookbook and compiled a list of 50 ingredients used in those recipes (almonds, bacon, baking soda, bay leaf, beef, bread, butter, carrot, celery...). Then they searched the medical literature for evidence that the selected ingredients were associated with cancer. For 40 of the 50 ingredients, there was at least one study. Of the published studies, 39% concluded that the ingredient was associated with an increased risk of cancer; 33% concluded a decreased risk; 5% found borderline effects; and only 23% found no evidence of an association. Of the 40 ingredients, 36 had at least one study claiming increased or decreased risk.
Schoenfeld and Ioannidis do not conclude that almost everything we eat either causes or prevents cancer. Rather, they conclude that the methodology and reporting standards in the field are a mess. About half of the ingredients are exonerated in meta-analyses, but Schoenfeld and Ioannidis argue that even meta-analyses might tend to overstate associations given standard practices in the field.
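One mechanism behind such a literature is worth making vivid: if many researchers each test an ingredient with no real effect, a familiar fraction of those studies will come out "significant" anyway. Here is a toy simulation of my own (not from Schoenfeld and Ioannidis's paper; the sample size and cancer rate are made-up illustration numbers): 50 null-effect ingredients, each tested once with a standard two-proportion z-test at the conventional p < .05 threshold.

```python
import math
import random

random.seed(42)

def null_study_p(n=1000, base_rate=0.1):
    """One study of an ingredient with NO real effect: half the n subjects
    'eat' it, everyone has the same cancer risk, and we run a two-sided
    two-proportion z-test on exposed vs. unexposed cancer rates."""
    half = n // 2
    exposed_cases = sum(random.random() < base_rate for _ in range(half))
    unexposed_cases = sum(random.random() < base_rate for _ in range(half))
    p1, p2 = exposed_cases / half, unexposed_cases / half
    p_pool = (exposed_cases + unexposed_cases) / n
    se = math.sqrt(p_pool * (1 - p_pool) * (2 / half))
    if se == 0:
        return 1.0
    z = (p1 - p2) / se
    # two-sided p-value from the normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# 50 ingredients, none of which actually matters, each tested once
p_values = [null_study_p() for _ in range(50)]
false_positives = sum(p < 0.05 for p in p_values)
print(f"'significant' findings out of 50 null ingredients: {false_positives}")
```

In expectation about 5% of these null studies come out "significant" — and that is before adding the selective reporting, flexible analysis choices, and publication incentives that Ioannidis argues inflate the rate much further.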
A similar tendency to report spurious positive findings probably distorts psychology (see, e.g., here). Certainly, that has been my own impression when I have systematically reviewed various subliteratures, such as those attempting to associate self-reports of visual imagery vividness with performance on visual tasks, and those attempting to demonstrate the effectiveness of university-level ethics instruction. (I'm currently updating my knowledge of the latter literature -- new post soon, I hope.)
Okay, I can't resist mentioning Ioannidis's delightful piece on grant funding. First sentence: "The research funding system is broken: scientists don't have time for science any more". (I've written about this here and Helen De Cruz has a nice post here.)
Oh, and here he is bashing the use of statistics in neuroscience. I guess I couldn't mention only one study after all. You know, because fanboys lack statistical discipline.