Should we trust published research?

I saw this quotation on John Naughton’s excellent blog last week; he took it from an article by the editor of The Lancet:

“The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness. As one participant put it, “poor methods get results”. The Academy of Medical Sciences, Medical Research Council, and Biotechnology and Biological Sciences Research Council have now put their reputational weight behind an investigation into these questionable research practices. The apparent endemicity of bad research behaviour is alarming. In their quest for telling a compelling story, scientists too often sculpt data to fit their preferred theory of the world. Or they retrofit hypotheses to fit their data. Journal editors deserve their fair share of criticism too. We aid and abet the worst behaviours. Our acquiescence to the impact factor fuels an unhealthy competition to win a place in a select few journals. Our love of “significance” pollutes the literature with many a statistical fairy-tale. We reject important confirmations. Journals are not the only miscreants. Universities are in a perpetual struggle for money and talent, endpoints that foster reductive metrics, such as high-impact publication. National assessment procedures, such as the Research Excellence Framework, incentivise bad practices.”

Mea culpa.  Given my personal experience of research in real estate, I’m afraid it all sounds horribly familiar.  I can relate to the combination of relief and happiness you get from a statistically significant result.  If the diagnosis is right, a large proportion of the published research in leading real estate journals could be systematically biased and misleading.  Are there too many incentives for researchers to produce unambiguous, clear-cut findings in order to get published?  I tend to think so.  As long ago as 1959, T.D. Sterling, in his paper “Publication Decisions and their Possible Effects on Inferences Drawn from Tests of Significance - or Vice Versa”, found that over 97% of published papers in psychology that used significance tests reported a significant result.  Publication bias means that findings which are not significant, or are inconclusive, are hard to publish, so important results never see the light of day.  It would be interesting to see similar work done on the leading real estate journals.
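
For readers who like to see the mechanism rather than take it on trust, here is a minimal simulation sketch of the point about publication bias. It is not from the editorial or from Sterling’s paper: the number of studies, the sample sizes and the “true” effect size are illustrative assumptions. It simply shows that if journals only print results that clear the p < 0.05 bar, the published effect sizes end up much larger than the truth.

```python
# A hedged sketch of publication bias: simulate many small, underpowered
# studies of a tiny true effect, "publish" only those reaching p < 0.05,
# and compare the published estimates with reality. All parameters below
# are illustrative assumptions, not figures from the post.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

TRUE_EFFECT = 0.1      # small true difference in means (in SD units)
N_PER_ARM = 30         # small sample size per group
N_STUDIES = 10_000     # number of hypothetical studies

all_effects = []
published_effects = []

for _ in range(N_STUDIES):
    control = rng.normal(0.0, 1.0, N_PER_ARM)
    treatment = rng.normal(TRUE_EFFECT, 1.0, N_PER_ARM)
    estimate = treatment.mean() - control.mean()
    _, p_value = stats.ttest_ind(treatment, control)
    all_effects.append(estimate)
    if p_value < 0.05:            # only "significant" studies get published
        published_effects.append(estimate)

print(f"True effect:                  {TRUE_EFFECT:.2f}")
print(f"Mean estimate, all studies:   {np.mean(all_effects):.2f}")
print(f"Mean estimate, published:     {np.mean(published_effects):.2f}")
print(f"Share of studies 'published': {len(published_effects) / N_STUDIES:.1%}")
```

Run it and the mean estimate among the “published” studies comes out several times larger than the true effect, even though every individual study was conducted honestly. That is the statistical fairy-tale the editorial is complaining about.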
