In 2015, the Open Science Collaboration attempted to replicate 100 published psychology studies. Only 36% of the replications produced statistically significant results, and effect sizes in the replications were roughly half the size of the originals. Similar replication failures have been documented in cancer biology (11% replication rate), economics, and pharmacology.
This means that the majority of published findings in some fields may be unreliable, not because researchers are fraudulent, but because the incentive structure of academic science systematically produces unreliable results.
The mechanisms: p-hacking (testing many hypotheses until one produces p < 0.05), HARKing (Hypothesizing After Results are Known — presenting post-hoc findings as if they were predicted), small sample sizes (underpowered studies that detect noise, not signal), publication bias (positive results published, null results filed away), and the garden of forking paths (many defensible analytical choices, each of which could produce different results).
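To see how p-hacking inflates the literature, here is a minimal simulation sketch (Python; all numbers are illustrative assumptions, not figures from the studies above). A researcher studying a non-existent effect who tests ten outcome measures and reports whichever one crosses p < 0.05 will "find" a significant result far more often than the nominal 5% rate suggests.

```python
# Hypothetical simulation of p-hacking: under a true null effect, testing
# many outcomes and reporting whichever reaches p < 0.05 inflates the
# false positive rate from the nominal 5% to roughly 40%.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def hacked_study(n_per_group=20, n_outcomes=10):
    """Return True if any of n_outcomes null comparisons reaches p < 0.05."""
    for _ in range(n_outcomes):
        control = rng.normal(0, 1, n_per_group)    # no real difference
        treatment = rng.normal(0, 1, n_per_group)  # between the groups
        _, p = stats.ttest_ind(control, treatment)
        if p < 0.05:
            return True
    return False

n_sims = 5_000
false_positives = sum(hacked_study() for _ in range(n_sims))
print(f"False positive rate with 10 outcomes tested: "
      f"{false_positives / n_sims:.0%}")  # ~40%, not 5%
```

The inflation follows directly from the arithmetic: with ten independent chances at a 5% threshold, the probability of at least one false positive is 1 − 0.95¹⁰ ≈ 40%.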
The replication crisis isn't unfixable — it's a structural incentive problem with structural solutions.
Pre-registration: declaring hypotheses and methods BEFORE collecting data cuts off p-hacking and HARKing. If you commit to your analysis plan in advance, you can't retrospectively mine the data for significant results and present them as predictions.
Registered reports: journals peer-review and commit to publishing papers based on the methodology, before any results exist. This removes publication bias at the source: null results get published because the journal committed before seeing them.
Open data and open methods: sharing raw data and analysis code allows others to verify, replicate, and build on findings. Transparency is the disinfectant.
Larger samples: adequately powered studies that can reliably detect real effects and distinguish them from noise.
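For a sense of scale, here is a rough power calculation (a sketch using statsmodels; the effect size of d = 0.3 and the 80% power target are illustrative assumptions, not figures from the studies above). A study with 20 participants per group has only about 15% power to detect a modest real effect, while roughly 175 per group are needed to reach the conventional 80%.

```python
# Illustrative power calculation for a two-sample t-test.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power of a typical small study: n = 20 per group, assumed effect size d = 0.3.
small_study_power = analysis.power(effect_size=0.3, nobs1=20, alpha=0.05)
print(f"Power with 20 per group: {small_study_power:.0%}")  # roughly 15%

# Per-group sample size needed for 80% power at the same effect size.
n_needed = analysis.solve_power(effect_size=0.3, power=0.80, alpha=0.05)
print(f"Per-group n for 80% power: {n_needed:.0f}")         # roughly 175
```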
What this means for you: when evaluating a scientific claim, check: was the study pre-registered? Has it been replicated? Is the data publicly available? These markers distinguish robust science from the institutional output of a broken incentive system.
Tip
The easiest way to check if a study was pre-registered: look for a pre-registration link in the methods section, or search the study title on OSF.io (Open Science Framework) or AsPredicted.org. Pre-registered studies are substantially more trustworthy than non-pre-registered ones.
More than half of published psychology findings fail to replicate. The causes are structural: p-hacking, publication bias, small samples, and perverse incentive systems. Solutions exist: pre-registration, registered reports, open data. When evaluating claims, check for pre-registration and replication — these markers distinguish robust science from institutional noise.