Monday, August 1, 2011

Open Source Database?: The decline effect, selective scientific reporting, significance chasing and underpowered tests


This article discusses how many statistically significant results published in the scientific literature rest on chance. They get published because of a publication bias that favors positive over null results. But when the trial is repeated, the estimate very often regresses toward its true mean, and the t-statistic on the variable's effect shrinks toward zero.
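A quick way to see the mechanism: simulate many underpowered studies of a small but real effect and "publish" only the ones that clear a significance threshold. The published estimates are systematically inflated, so honest replications will, on average, come in lower. A minimal sketch (the effect size, sample size, and cutoff below are illustrative assumptions, not figures from the article):

```python
import random
import statistics

random.seed(0)

def run_study(true_effect, n):
    """Draw n noisy observations; return the sample mean and a rough t-stat."""
    xs = [random.gauss(true_effect, 1.0) for _ in range(n)]
    m = statistics.mean(xs)
    se = statistics.stdev(xs) / n ** 0.5
    return m, m / se

TRUE_EFFECT = 0.2   # small real effect (assumed for illustration)
N = 25              # deliberately underpowered sample size

# "Publish" only studies whose t-stat clears ~1.96 (two-sided 5% level).
published = [m for m, t in (run_study(TRUE_EFFECT, N) for _ in range(5000))
             if t > 1.96]

print(f"true effect:             {TRUE_EFFECT}")
print(f"mean published estimate: {statistics.mean(published):.2f}")
```

The published average lands well above the true 0.2, purely because selection on significance keeps the lucky draws and discards the rest; the subsequent "decline" is just the inflation washing out.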

The article centers on a psychologist whose research made him famous but who later could not replicate his own results.

It makes me wonder whether Emily Oster's work, which wrongly attributed the world's missing women to the prevalence of hepatitis B in regions with high male-to-female ratios, suffered from a decline effect as well. Her finding was overturned by later research showing that high male-to-female ratios are the outcome of sex-selective abortion after the first child is born.

Furthermore, the decline effect that many trials eventually suffer follows from the setup: because our samples are not that big and we test for the effects we expect (or want) to see in the data, we are more likely to find them than if we had never gone looking at all. In other words, the chance of finding an effect is not 50/50 at the outset.
