When an experiment fails to produce an interesting effect, researchers often shelve the data and move on to another problem. But withholding null results skews a field's literature, and is a particular worry for clinical medicine and the social sciences.
Researchers at Stanford University in California have now measured the extent of the problem, finding that most null results in a sample of social-science studies were never published. This publication bias may cause others to waste time repeating the work, or conceal failed attempts to replicate published research. Although already recognized as a problem, “it’s previously been hard to prove because unpublished results are hard to find”, says Stanford political scientist Neil Malhotra, who led the study.
His team investigated the fate of 221 sociological studies conducted between 2002 and 2012, which were recorded by Time-sharing Experiments for the Social Sciences (TESS), a US project that helps social scientists to carry out large-scale surveys of people's views.
Only 48% of the completed studies had been published. So the team contacted the remaining authors to find out whether they had written up their results, or submitted them to a journal or conference. They also asked whether the results supported the researchers’ original hypothesis.
Of all the null studies, just 20% had appeared in a journal, and 65% had not even been written up. By contrast, roughly 60% of studies with strong results had been published. Many of the researchers contacted by Malhotra’s team said that they had not written up their null results because they thought that journals would not publish them, or that the findings were neither interesting nor important enough to warrant any further effort.
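The arithmetic behind the resulting skew is simple. As a minimal sketch, assume (hypothetically) that completed studies split evenly between null and strong results; applying the publication rates reported for the TESS sample (20% of null studies versus roughly 60% of strong ones) shows how the published literature ends up dominated by positive findings:

```python
# Hypothetical illustration: an equal split of null and strong-result studies,
# filtered through the publication rates reported for the TESS sample.
n_null, n_strong = 100, 100              # assumed equal numbers of completed studies
p_pub_null, p_pub_strong = 0.20, 0.60    # publication rates from the TESS sample

published_null = n_null * p_pub_null         # 20 null studies reach print
published_strong = n_strong * p_pub_strong   # 60 strong studies reach print

share_null = published_null / (published_null + published_strong)
print(f"Null share of the published literature: {share_null:.0%}")  # prints 25%
```

So even though half of the (hypothetical) completed studies found nothing, only a quarter of the published ones do, which is exactly the distortion a reader of the literature cannot see.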
“When I present this work, people say, ‘These findings are obvious; all you've done is quantify what we knew anecdotally’,” says Malhotra. But social scientists often underestimate the magnitude of the bias, or blame journal editors and peer reviewers for rejecting null studies, he says. His team's findings are published today in Science1.
Poisoned by success
The problem may be bigger than the TESS sample suggests. Each survey design proposed to TESS is peer-reviewed, to ensure that it has sufficient statistical power to test an interesting hypothesis; weaker studies in these fields would probably have an even lower rate of publication. “It’s very likely that this study underestimates the true extent of the problem,” says Daniele Fanelli, an evolutionary biologist who studies publication bias and misconduct, and is currently a visiting professor at the University of Montreal in Canada.
In 2010, Fanelli surveyed publication bias across a range of disciplines, and found that psychology and psychiatry had the greatest tendency to publish positive results2. “But it’s not just a social-science issue — it’s also common in the biomedical sciences,” says Hal Pashler, a psychologist at the University of California, San Diego, in La Jolla. “Both are really poisoned by only hearing about the successes.” (See ‘“Ethical failure” leaves one-quarter of all clinical trials unpublished’.)
Social scientists are already trying to tackle publication bias (see ‘Replication studies: Bad copy’). Malhotra is involved in the Berkeley Initiative for Transparency in the Social Sciences, which advocates a range of strategies to strengthen social-science research. One option is to log all social-science studies in a registry that tracks their outcome — a model that is already used to help ensure that null results from drug trials see the light of day. Meanwhile, Pashler has set up a website, PsychFileDrawer, to capture null results generated by attempts to replicate findings in experimental psychology.
These remedies have not been universally welcomed, however. “There’s been a lot of pushback,” says Malhotra. Some social scientists are worried that sticking to a registered-study plan might prevent them from making serendipitous discoveries from unexpected correlations in the data, for example. But most accept the need for change, adds Pashler: “We’re all waking up to this.”