Vedran Katavic


In lieu of an introduction

How many times does it happen that a prejudice is confirmed scientifically? I would have to guess, but I would say not often; when it is confirmed, however, it is doubly reinforced. For an intellectual exercise let’s, for a moment, pretend that one such prejudice is that there are two types of scientists – 1) the goody two-shoes scientists with their noses in books and beakers, without a worry outside the laboratory, and 2) the evil, mad scientists scheming to create a “monster” and make it “live”. The former are ultra-altruistic do-gooders who do not care about worldly goods or personal gains; the latter are self-centered egomaniacs who would do anything to have their names carved in the fabric of human history. Farfetched? It may seem that way, but when scientists were asked in 1993 why they publish (1) (and publishing is the only way someone’s research gets any recognition), their answers could be split according to the proposed prejudice. Around 50% of the scientists reported that their first reason for publishing was the dissemination of knowledge, and thus the general good. The other half published to further their careers, improve their funding, protect patents, or for nothing less than to boost their egos! Not very altruistic of them, now is it? But how very human of them (2). A fluke result? A one-time chance finding that has little to do with real life, and even less to do with scientists’ reasons to do and publish research? Maybe, but some 6 years later, in a repeated study by ALPSP (the Association of Learned and Professional Society Publishers), the same results surfaced (3).
So it may seem that there is an underlying truth to the proposed prejudice about the altruistic and the selfish scientist. Of course, these data also need to be observed from other viewpoints, because scientists (besides their intrinsic motivations) are also exposed to external pressures to publish their research – to stay “afloat” in the efforts to get funding, maintain employment (“publish or perish”), and the like. Apart from these pressures, scientists are also exposed to expectations – they are expected to know (besides their own scientific fields and the usage of all the latest scientific equipment, word processing, figure preparation, statistics…) the rules and regulations of their institutions, the applicable laws, how to write well and often, how to send manuscripts for review (and how to take the reviewers’ and referees’ criticism, as well as editors’ letters of rejection, in stride), the rules that govern scientific publishing and communication, the (non-native) usage of the language of modern, international science, etc. Some might argue, especially in times of financial crises, that it is also expected of scientists (and more worthy of the public funds) to do research that can be used to directly “cure” acute problems, e.g. various diseases and climate change, and that scientists who do not propose applicable research (but perform curiosity-driven research) should have a harder time getting funding. So scientists, for the privilege of doing science (for whatever intrinsic motivation they might have), also have to endure (and even thrive under) huge (extrinsic) expectations. With so many pressures, few (external) rewards [apart from the intellectual reward and recognition publications bring (4)], and the fact that even scientists are human, it is a wonder more things don’t go wrong in the process.
I am not the first to notice that scientists are human, and that their motivations (to do good and bad) may be as diverse as in any other endeavor or walk of life (5). The reasons for misconduct, the prediction of whether one is going to act with integrity or commit scientific or professional misconduct, as well as the ways of preventing it, are of high importance for institutions, policy makers, and scientists themselves (6,7). Sometimes, “interesting” things about scientists of the past creep up to haunt… science – Gregor Mendel’s “too perfect” results, Sigmund Freud’s patient “histories”, or Isaac Newton’s calculations, among others (8). Such things create the appearance that it is acceptable to “cut corners” and do sloppy “science” (9). One can argue that if a pattern of any behavior within a society (of any scale – from governmental institutions and academia all the way down to kindergarten) is omnipresent, it becomes a “culture” – it may be wrong, but everyone is “doing it”, so for some, dishonesty and cheating may be acceptable (10). From that point on it starts being the norm, something everyone tries to emulate. Interestingly enough, it is easy to correlate such views with a country’s corruption perception index/ranking (11).

Basic RCR terminology – misconduct and QRPs

This leads us, finally, to the basic terminology used in describing responsible conduct of research (research integrity) and its antipode, scientific misconduct or fraud in science (12). As it is easier to describe the meaning of fraud and misconduct than the ideals of integrity, I will start with them. Usually, the term scientific misconduct covers fabrication, falsification, and plagiarism (F, F, P) in proposing, performing, and reporting research. Briefly and basically, fabrication (results are created/invented out of thin air) and falsification (misreporting of performed research; omitting/obfuscating/manipulating data) are crimes against science itself, because they injure the fabric of future research by corrupting its foundations – valid data. Plagiarism (taking the words/ideas of others and representing them as one’s own) is a crime against fellow researchers, and although many consider it less pernicious for science, it is, nevertheless, detrimental to the overall feeling of trust/respect towards one’s colleagues and the scientific community. Not to mention how deeply fabrication, falsification, and plagiarism affect the public trust in science and scientists. For the most part, it is the public that finances the research, and for their sacrifice (i.e. public funding) they deserve fairness, openness, and accountability.
Besides these terrible three (F, F, P), there are also other forms of misconduct in research, which can be called questionable research/publication practices (QRPs) – all serious deviations from the norm, involving a wide variety of behaviors. Such practices are not so much destructive to science itself as to the research process. They add no value to published research, introduce unnecessary uncertainty into an already complicated process, change the perception of who performed the research, and cast doubt on the truthfulness of those involved (conflicts of interest).
Typically, QRPs include, but are not limited to, the following (it is not my goal to be exhaustive):
1) Simultaneous submission of manuscripts to several journals (trying to “reduce” the time until final acceptance/publication) – a form of peer-review abuse – draining the limited resources of journal editors and reviewers/referees, (ab)using their time, and running the risk of having the manuscript accepted in several journals (a QRP in its own right).
2) Attempting to publish already published (or accepted) manuscripts – duplicate (triplicate…) or redundant publications – especially dangerous in biomedicine, where such publications can influence therapeutic or diagnostic criteria by biasing future systematic reviews or meta-analyses.
3) Self-plagiarism – although plagiarism involves “stealing” from others (as described earlier), stealing from oneself is (legally and logically) impossible (unless one has a multiple personality disorder). Hence, it is not a form of theft, but a less than honest practice of recycling one’s own words with an intention to deceive or mislead (13,14) about their originality (usually “intended” for different audiences/journals; it may also include copyright infringement). Sometimes it can be of such an order of magnitude as to qualify as redundant publication. It is usually acceptable for adequately cited complex methodology sections of one’s previously published work.
4) Salami publications – publishing manuscripts by “slicing” data from a single study that could/should have been published as a single, complete manuscript. This is best described by an example – after performing research on the neurologic side effects of vaccination in children, one publishes “slices” in an infectology journal, a pediatric journal, and a neurology journal.
5) Questionable authorship – with all its “pathologies” (no matter if demanded or awarded) – guest, gift, planted, ghost – the only way to get on the byline of a manuscript should be to earn it by fulfilling the intellectual criteria of authorship. Different professional bodies have somewhat differing definitions of the criteria, but in biomedicine the consensus is that a contributor has done enough to be an author when he/she is able to publicly defend the manuscript’s intellectual content – for an exhaustive list of authorship criteria see reference (15).
6) Sloppy research – inadequate keeping/sharing of records/materials/data/notes, biased statistics, sloppy/biased citing of earlier research, mis(over)representation of the importance of one’s research, etc.
7) Conflicts of interest – failure to disclose significant vested interests which compete with/corrupt the research (e.g. withholding information on ownership in a company while publishing positive research on that same company’s products).
8) Mentorship exploitation, unethical treatment of human/animal subjects, abuses of peer review and of financial accountability, CV “boosting”…

Not everyone is “on the same page”

Although most researchers can agree on the terms used to describe misconduct (F, F, P) and questionable research practices (usually within the same scientific field – sometimes there is dissonance between the “hard” and “soft” sciences), it is still disputed how important they are for science in general, whether such things happen in all fields of science (16) or mostly in biomedicine, whether watchfulness over integrity is the basis for future research (17-19), what happens to scientific articles that are a product of scientific misconduct (18-20), how often these breaches happen (21), how often they happen and are not reported (22), how harshly they have to be dealt with (23) [and whether it is harsh enough (24)], who (if anyone) should deal with such issues (25-28), whether science (and academia) is self-regulating, whether integrity and honest writing can be taught (29-31), whether such education has a lasting effect (32-34), how such breaches affect the public trust (35), etc.
How often is fraud perpetrated? That is difficult to say. Some evidence is available in the biomedical field – by checking the number of papers retracted from the Medline database for reasons of misconduct, or by looking at the number of cases the US government (in the form of the Office of Research Integrity) has confirmed/sanctioned. And those are just the instances that have been discovered. Some cases of plagiarism can be ferreted out using on-line databases (36). But without the help of whistleblowers, it is not easy to uncover serious misconduct just by looking at the numbers. How many more cases go unreported/undiscovered? One could start doing large-scale number crunching for all published manuscripts – checking for inconsistencies in the frequency of appearance of numbers in the results, e.g. against Benford’s law of leading digits (37) – an exceedingly costly enterprise with very little actual benefit. Another, oblique way of finding out about possible cases is by sending out/handing out questionnaires/surveys. These would then, ideally, give us a rough estimate of how many scientists are willing to engage in misconduct and QRPs. Recently, a systematic review and meta-analysis of all survey data on fabrication and falsification of research was performed (21), and revealed that approximately 2% of scientists (who answered the surveys) admitted to some sort of scientific misconduct at least once, with a further 30% admitting to some form of QRPs. The number of papers retracted from the public databases gives a rough estimate of fraudulent research at somewhere around 0.02%. It seems obvious that the numbers scientists admitted to are (far) larger than what was actually discovered/confirmed/sanctioned, and could be (far) smaller than what was actually perpetrated.
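As an aside, the “number crunching” idea above can be made concrete. Benford’s law (37) states that in many naturally occurring data sets the leading digit d appears with probability log10(1 + 1/d), and fabricated numbers often deviate from this distribution. The following minimal Python sketch is my illustration only, not a published screening tool; the function names and the example data are hypothetical:

import math
from collections import Counter

# Benford's law: the leading digit d of many naturally occurring numbers
# appears with probability log10(1 + 1/d); see reference (37).
BENFORD = {d: math.log10(1 + 1.0 / d) for d in range(1, 10)}

def leading_digit(x):
    """Return the first significant digit of a non-zero number."""
    s = "{:e}".format(abs(x))  # scientific notation, e.g. '1.324000e+02'
    return int(s[0])

def benford_chi_square(values):
    """Chi-square statistic of observed leading digits vs. Benford's law.
    With 8 degrees of freedom, a statistic above ~15.5 would be
    suspicious at the 5% level (suspicious, not proof of anything)."""
    digits = [leading_digit(v) for v in values if v != 0]
    n = len(digits)
    counts = Counter(digits)
    return sum((counts.get(d, 0) - n * p) ** 2 / (n * p)
               for d, p in BENFORD.items())

# Hypothetical usage on a column of reported measurements:
reported = [132.4, 18.7, 2.91, 467.0, 1.08, 23.5, 310.2, 5.66, 71.3, 14.9]
print("chi-square vs. Benford: {:.2f}".format(benford_chi_square(reported)))

Of course, as noted above, running such screens across all published manuscripts would be exceedingly costly, and a deviant digit distribution is at best a signal for closer scrutiny, never evidence of fraud on its own.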
These numbers probably leave us with a strange feeling of having access (through public databases) to a lot of (not yet discovered/confirmed) fraudulent research. It is already difficult to keep up with all the important publications within one’s field without having to worry about which of the publications are true, and which are going to end up being retracted. But these retractions do happen. Unfortunately, a lot of the retracted publications still influence modern science because, even years later, they are (regularly) cited (18-20).

Some of the big cases of the last decade

In the last decade there have been several (actually, many) high-profile international cases of sanctioned scientific misconduct, with tangible repercussions for the perpetrators involved (and for science in general). Presented here are short backgrounds of 4 such cases (of note: in none of the cases were any of the perpetrators’ co-authors ever convicted of fraud).
Hwang Woo Suk (24)
A South Korean stem cell scientist who claimed (among other things) to have developed a human embryonic stem (hES) cell line. His downfall started when unethical behavior was suspected (“donation” of oocytes by lab members), and duplicated images and questionable DNA fingerprints were identified in his publications. Eventually, his research was declared fraudulent, his publications (in Science) were retracted, and he was fired from the Seoul National University. He was then indicted on charges of fraud, bioethics violations, and embezzlement, which led to a 2-year suspended prison sentence for bioethics violations and embezzlement (but not for fraud).
Jan Hendrik Schön (16)
A German physics star working for Lucent Technologies at the Bell Laboratories who claimed to have made breakthroughs in materials science and nanotechnology by successfully performing experiments others only dreamed of (creating transistors on the molecular scale from plastic-based materials – organic electronics). His “discoveries” initially brought him several prestigious prizes, fame, and an incredibly impressive list of publications in highly cited journals (one publication every 8 days!). After no one could replicate his research and anomalous data were discovered in his publications, his work was scrutinized by a committee set up by Bell Labs. The committee’s findings led to multiple retractions of published papers (8 in Science, 7 in Nature, and 6 in Phys Rev B). Schön left the US and returned to Germany, where he was stripped of his doctoral degree by the University of Konstanz, and was also sanctioned by the German Research Foundation (DFG, Deutsche Forschungsgemeinschaft).
Jon Sudbø (38)
A Norwegian dentist and physician at the University of Oslo who claimed (in publications in the New England Journal of Medicine, the Lancet, and the Journal of Clinical Oncology) that his research suggested that non-steroidal anti-inflammatory drugs (NSAIDs) diminished the risk of oral cancer, based on statistical analyses of patients from a cancer patient registry. It was discovered that he had fabricated most of his research, including at least 15 publications (as well as his doctoral dissertation). In the end he lost his doctoral degree, his publications were retracted, and his licenses to work as a physician and a dentist were revoked.
Eric Poehlman (23)
A US scientist from the University of Vermont who published some 200 articles on metabolic changes in aging and menopause, obesity, and exercise. After Walter DeNino, a former lab technician, exposed him, it was discovered that he had falsified at least 17 grant applications to the NIH, and 10 of his published papers were retracted. In the aftermath he was sentenced to one year and a day in a federal prison for using falsified data in proposals for federal research grants.
These are just some of the most prominent cases that have plagued science in the last decade, but unfortunately they are (by far) not the only ones that have drained public funds and abused the public trust. The best way (a nice mouthful) to describe the bottom line of the effects of the biggest cheaters is to paraphrase a personal communication – people who perform scientific misconduct are, if not the lowest of the low, the worst of the worst, and the cancer of the scientific community, then at least pustulent (a combination of pestilence and pus!) boils on its clunium.

Final thoughts

Publishing one’s research (or being a scientist) has always been a mixture of pleasure and pain, temptation and restraint, altruism and selfishness, recognition and anonymity – a final test of withstanding the critique of one’s learned peers. Those who intentionally do wrong should also keep in mind that published research has to stand the test of time. To paraphrase a saying [most likely from Abraham Lincoln, 16th president of the US (1809-1865), or P.T. Barnum, businessman and showman (1810-1891) (39)]: “You can fool some of the people all of the time, and all of the people some of the time, but you cannot fool all of the people all of the time.” So inevitably, someone, somewhere, sometime in the (closer or farther) future is going to want to/need to redo some part of their research, and it had better be right. Otherwise, by standing on the shoulders of (such) giants [paraphrased from words attributed to Bernard de Chartres by John of Salisbury (40,41) and from Isaac Newton (39)], we would not be able to see very far.
Not to end on a sour, myopic, vertically-challenged, infectious note, it may be of significance to know that scientists are not the only ones who are (occasionally) tempted to cheat. Just like in “prisoner’s dilemma” scenarios (42), it has recently, and quite elegantly, been shown in the relationship between fig trees and fig wasps (their possible pollinators) that it would be costly for their mutualism (a biological interaction between two organisms in which each individual derives a benefit) to allow cheating (43); i.e. the fig trees have had to find ways of “punishing” the wasps that were not pollinating them (the cheaters) and “rewarding” the ones that were. The stronger the punishment, the fewer cheaters there were, and the greater the benefit/mutualism for the whole group – a logic that can be made concrete with the toy model sketched below.
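The sketch below is my illustration only – the payoff values and names are assumptions, not parameters from the cited fig-wasp study (43). It uses simple replicator dynamics: “wasps” either pollinate (and pay a cost) or cheat (and risk a sanction from the host), and each strategy reproduces in proportion to its payoff. Raising the sanction tips the population from all cheaters to all pollinators:

# Toy replicator dynamics for the punishment-of-cheaters idea in (43).
# All payoff values are illustrative assumptions.
BENEFIT = 1.0           # payoff any wasp gets from using the fig
POLLINATION_COST = 0.3  # extra cost paid only by pollinators (cooperators)

def next_generation(freq_cheaters, sanction):
    """One generation: each strategy's share grows in proportion to its
    payoff relative to the population's mean payoff."""
    w_coop = BENEFIT - POLLINATION_COST  # pollinators pay the pollination cost
    w_cheat = BENEFIT - sanction         # cheaters suffer the host's sanction
    mean = freq_cheaters * w_cheat + (1 - freq_cheaters) * w_coop
    return freq_cheaters * w_cheat / mean

for sanction in (0.0, 0.3, 0.6):
    f = 0.5  # start with half the wasps cheating
    for _ in range(200):
        f = next_generation(f, sanction)
    print("sanction {:.1f}: cheater frequency after 200 generations = {:.3f}"
          .format(sanction, f))

With no sanction, cheating is strictly better and takes over; with a sanction larger than the pollination cost, cheating disappears – the same qualitative pattern the fig-wasp data in (43) are reported to show, and the one any external verification of scientists is meant to achieve.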
For agencies that check whether scientists cheat (to reinforce their trust that what they are doing is right), as well as for those opposed to such notions of external regulation (to overturn their views), I can end by noting two things: 1) if the public cares enough to give to science what science needs most – its trust and money (sometimes a lot of it) – then, to use a signature phrase of Ronald Reagan, 40th president of the US (1911-2004), there has to be a body that will “Trust, but verify” (39) that the public funds are used according to the approved proposals; and 2) if “providing benefits to a host is costly” (43), and the host is the public, then there have to be both incentives that promote behavior with integrity and sanctions that punish cheating. Nature has devised ways of controlling and penalizing cheating, so why not learn from nature – it is high time, isn’t it?
And finally – to reuse the words of Kenneth D. Pimple – one of the ways of making sure that you (as a reader/scientist/funder/policy maker) know that a piece of research is not scientific misconduct is to truthfully and affirmatively answer 3 simple questions: Is it true? Is it fair? Is it wise? (44).

Notes

Potential conflict of interest
None declared.

References

1. Coles B. The STM Information System in the UK. BL: Royal Society, 1993.
 2. Katavic V. The “cheating”.com academic society: a personal view. The Write Stuff 2006;15:118-9.
 3. Swan A. ‘What authors want’: the ALPSP research study on the motivations and concerns of contributors to learned journals. Learned Publishing 1999;12:170-2.
 4. Wager E. Recognition, reward and responsibility: why the authorship of scientific papers matters. Maturitas 2009;62:109-12.
 5. Adams D, Pimple KD. Research misconduct and crime lessons from criminal science on preventing misconduct and promoting integrity. Account Res 2005;12:225-40.
 6. Martinson BC, Anderson MS, de Vries R. Scientists behaving badly. Nature 2005;435:737-8.
 7. Martinson BC, Crain AL, Anderson MS, De Vries R. Institutions’ expectations for researchers’ self-funding, federal grant holding, and private industry involvement: manifold drivers of self-interest and researcher behavior. Acad Med 2009;84:1491-9.
 8. Montgomerie B, Birkhead T. A beginner’s guide to scientific misconduct. ISBE Newsletter 2005;17:16-24.
 9. Kreutzberg GW. The rules of good science. EMBO Rep 2004;5:330-2.
10. Jensen LA, Arnett JJ, Feldman SS, Cauffman E. It’s Wrong, But Everybody Does It: Academic Dishonesty among High School and College Students. Contemp Educ Psychol 2002;27:209-28.
11. Magnus JR, Polterovich VM, Danilov DL, Savvateev AV. Tolerance to cheating: an analysis across countries. J Econom Edu 2002;33:125-33.
12. Steneck NH. Fostering integrity in research: definitions, current knowledge, and future directions. Sci Eng Ethics 2006;12:53-74.
13. Bird S. Self-plagiarism, recycling fraud, and the intent to mislead. J Med Toxicol 2008;4:69-70.
14. Scanlon PM. Song from myself: an anatomy of self-plagiarism. Plagiary: Cross-Disciplinary Studies in Plagiarism, Fabrication, and Falsification 2007;2:57-66.
15. ICMJE. Uniform Requirements for Manuscripts Submitted to Biomedical Journals: Ethical Considerations in the Conduct and Reporting of Research: Authorship and Contributorship. Available at: http://www.icmje.org/ethical_1author.html. Accessed March 10, 2010.
16. Reich ES. Plastic fantastic: how the biggest fraud in physics shook the scientific world. 1st ed. New York: Palgrave Macmillan, 2009.
17. Bilic-Zulle L. Scientific integrity - the basis of existence and development of science. Biochem Med 2007;17:143-50.
18. Korpela KM. How long does it take for the scientific literature to purge itself of fraudulent material?: the Breuning case revisited. Curr Med Res Opin 2010;26:843-7.
19. Unger K, Couzin J. Scientific misconduct. Even retracted papers endure. Science 2006;312:40-1.
20. Couzin J, Unger K. Scientific misconduct. Cleaning up the paper trail. Science 2006;312:38-43.
21. Fanelli D. How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS One 2009;4:e5738.
22. Titus SL, Wells JA, Rhoades LJ. Repairing research integrity. Nature 2008;453:980-2.
23. Dalton R. Obesity expert owns up to million-dollar crime. Nature 2005;434:424.
24. Normile D. Scientific misconduct. Hwang convicted but dodges jail; stem cell research has moved on. Science 2009;326:650-1.
25. Katavic V. Five-Year Report of Croatian Medical Journal’s Research Integrity Editor - Policy, Policing, or Policing Policy. Croat Med J 2006;47:220-7.
26. Kondro W. Research misconduct agency would undermine “academic self-governance,” study says. CMAJ 2009;181:887-8.
27. Marusic A, Katavic V, Marusic M. Role of editors and journals in detecting and preventing scientific misconduct: strengths, weaknesses, opportunities, and threats. Med Law 2007;26:545-66.
28. Wager E, Fiack S, Graf C, Robinson A, Rowlands I. Science journal editors’ views on publication ethics: results of an international survey. J Med Ethics 2009;35:348-53.
29. Hren D, Vujaklija A, Ivanisevic R, Knezevic J, Marusic M, Marusic A. Students’ moral reasoning, Machiavellianism and socially desirable responding: implications for teaching ethics and research integrity. Med Educ 2006;40:269-77.
30. Marusic A. Author misconduct: editors as educators of research integrity. Med Educ 2005;39:7-8.
31. Roig M. Ethical writing should be taught. BMJ 2006;333:596-7.
32. Anderson MS, Martinson BC, De Vries R. Normative dissonance in science: results from a national survey of U.S. Scientists. J Empir Res Hum Res Ethics 2007;2:3-14.
33. Plemmons DK, Brody SA, Kalichman MW. Student perceptions of the effectiveness of education in the responsible conduct of research. Sci Eng Ethics 2006;12:571-82.
34. Turrens JF. Teaching research integrity and bioethics to science undergraduates. Cell Biol Educ 2005;4:330-4.
35. Climate of fear. Nature 2010;464:141.
36. Errami M, Sun Z, Long TC, George AC, Garner HR. Deja vu: a database of highly similar citations in the scientific literature. Nucleic Acids Res 2009;37:D921-4.
37. Benford F. The law of anomalous numbers. Proc Amer Phil Soc 1938;78:551-72.
38. Horton R. Retraction--Non-steroidal anti-inflammatory drugs and the risk of oral cancer: a nested case-control study. The Lancet 2006;367:382.
39. Knowles E. Oxford dictionary of quotations. 7th ed. Oxford; New York: Oxford University Press, 2009.
40. John of Salisbury, McGarry DD. The metalogicon, a twelfth-century defense of the verbal and logical arts of the trivium. Berkeley (CA): University of California Press, 1955.
41. Nederman CJ. John of Salisbury. Tempe, Ariz.: Arizona Center for Medieval and Renaissance Studies, 2005.
42. Henrich J, McElreath R, Barr A, Ensminger J, Barrett C, Bolyanatz A, et al. Costly Punishment Across Human Societies. Science 2006;312:1767-70.
43. Jander KC, Herre EA. Host sanctions and pollinator cheating in the fig tree-fig wasp mutualism. Proc Biol Sci 2010; 277:1481-8.
44. Pimple KD. Six domains of research ethics. A heuristic framework for the responsible conduct of research. Sci Eng Ethics 2002;8:191-205.