“Standing on the shoulders of giants” – this phrase, attributed to Bernard of Chartres and prominently used by Isaac Newton, emphasises the role of past discoveries in ongoing scientific research.
For modern research, where communication occurs predominantly through scientific publications, high reliability of published findings is essential.
Yet surprisingly little is known about how reliable findings in scientific publications actually are.
In 2005, John Ioannidis argued in an influential essay that a large fraction of published findings are false – a claim that has since been supported by systematic replication studies in several research fields. The Reproducibility Project: Psychology (RPP), for instance, found that only 36 of 100 replications reached the same conclusions as the original studies.
Key factors that contribute to low replicability include low statistical power, publication bias, flexible experimental or statistical designs that permit ‘p-hacking’, and low pre-study odds for the tested hypotheses, i.e. the testing of ‘unlikely’ hypotheses.
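Ioannidis's argument can be made concrete with the positive predictive value (PPV) from his essay: if $R$ denotes the pre-study odds that a tested relationship is true, $1-\beta$ the statistical power, and $\alpha$ the significance level, then the probability that a nominally significant finding is in fact true is

\[ \mathrm{PPV} \;=\; \frac{(1-\beta)\,R}{(1-\beta)\,R + \alpha}. \]

The following numbers are purely illustrative: with $\alpha = 0.05$, power $1-\beta = 0.5$, and pre-study odds $R = 0.1$, we obtain $\mathrm{PPV} = 0.05/(0.05 + 0.05) = 0.5$, so half of all significant findings would be false – even before p-hacking or publication bias, which effectively inflate $\alpha$, are taken into account.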
While the notion that science is facing a ‘replication crisis’ has been contested, it is becoming clear that a deeper understanding of the factors that influence the reliability of published research is needed.
Such an understanding informs not only the design of conventions and policies in scientific research, but also the use of research findings by decision makers. Our engagement with DARPA’s SCORE programme is driven by our desire to contribute to this knowledge.