The current issue of TAS (The American Statistician, February 2010) has an article by Stanley, Jarrell, and Doucouliagos arguing that a meta-analysis would often be better if we threw out 90% of the studies and kept only the 10% most precise.
In "Could it be better to discard 90% of the data? A statistical paradox" they note that there is often selection bias in publishing studies. If studies that show a statistically significant effect tend to be published more often (and they are!) then there is a bias in favor of showing that two things are related, even if they are not.
This isn't just an arcane argument; it bears on policy toward medical advances and economics. For example: "Conventional economic theory holds that raising the minimum wage causes a reduction of employment ... but once publication selection is accounted for, [there is] little or no evidence of any employment effect [due to changes in the minimum wage]."
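To see why discarding the imprecise 90% can help, here is a toy simulation of my own (not from the article, and using plain averages rather than the authors' exact estimator): the true effect is zero, but journals preferentially publish significant positive results. Because a small study needs a large estimate to reach significance, the published small studies are the most biased, and the most precise 10% land much closer to the truth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the TRUE effect is zero, across 5000 hypothetical studies.
true_effect = 0.0
n_studies = 5000

sample_size = rng.integers(20, 500, n_studies)
se = 1.0 / np.sqrt(sample_size)            # each study's standard error
estimate = rng.normal(true_effect, se)     # each study's point estimate

# Publication selection: significant positive results (z > 1.96) are
# always published; everything else only 10% of the time.
z = estimate / se
published = (z > 1.96) | (rng.random(n_studies) < 0.10)

pub_est, pub_se = estimate[published], se[published]

# Naive synthesis: average every published estimate.
naive = pub_est.mean()

# Keep only the most precise 10% (smallest standard errors) and
# average those instead.
cutoff = np.quantile(pub_se, 0.10)
top10 = pub_est[pub_se <= cutoff].mean()

print(f"average of all published studies : {naive:+.3f}")
print(f"average of most precise 10%      : {top10:+.3f}")
```

Running this, the all-published average sits well above the true value of zero, while the top-10% average is noticeably closer to it. The intuition: under one-sided significance selection, a published study's bias is roughly proportional to its standard error, so the most precise studies are the least distorted.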
A reader asks: How can we account for publication selection bias without knowledge of the unpublished studies?