This article appears in the December 1, 2000 issue of Science  magazine.
Charles Turner still doesn't know whether his experience was like finding a rare bad apple in the barrel. But he is sure that there was something rotten in the survey data going into his federally funded study of sexual behavior. And he knows that it has taken him 2 years to pluck out the spoiled fruit and piece together a clean report for publication.
Turner, a social scientist at City University of New York/Queens College, offered his cautionary story last month at a conference called by a key federal watchdog agency to announce a $1 million grants program to investigate the prevalence of fraud, data fabrication, plagiarism, and other questionable practices in science. The 8-year-old Office of Research Integrity (ORI), a small unit within the Department of Health and Human Services, hopes to support studies aimed at gauging the frequency of misconduct and at assessing efforts to raise ethical standards.
Turner's story was the most dramatic of a series of case studies presented at the ORI conference. In 1997, he explained, the National Institutes of Health funded his proposal to ask 1800 Baltimore residents about their sexual behavior. The project, an epidemiological look at AIDS and other sexually transmitted diseases such as gonorrhea and chlamydia, was managed by the Research Triangle Institute (RTI) of Research Triangle Park, North Carolina. Eleven months into the study, Turner, who has an appointment at RTI, got a call from a data-collection manager who was troubled by the apparent overproductivity of one interviewer. A closer look revealed that the worker was faking results; the address of one interview site, for example, turned out to be an abandoned house. The worker was dismissed, and others came under suspicion.
After "a horrible 6 months" pulling apart the entire study, Turner and his colleagues discovered an "epidemic of falsification" that they linked to a cessation of random quality checks. As the schedule slipped, says Turner, some staffers may have felt pressure to hurry up. Despite a "significant" loss of money and time, the investigators painstakingly plucked out data from tainted sources, sorted the remains, and pieced together a final report that has been submitted for publication.
Turner says the exercise taught him several hard lessons, the most important being to "validate the work yourself." Scientists should start analyzing survey data as soon as they are submitted, he says, with a sharp eye for anomalies. Turner says he doesn't know whether other projects have faced similar problems, because most journal articles don't discuss the issue. And the incident never became public, he says, because no one was ever publicly accused of wrongdoing and the institute chose to avoid the risk of litigation.
How often does misconduct like this occur? There appears to be no consensus on the answer, although science historian Nicholas Steneck of the University of Michigan, Ann Arbor, co-chair of the conference, has drawn up a range of estimates. At the low end is an estimate of 1 fraud per 100,000 scientists per year. That figure is based on 200 official federal cases, accumulated over 20 years from a community of 2 million active researchers, under a narrow definition that counts only fraud, data fabrication, and plagiarism.
At the same time, Steneck notes that 1 in 100 researchers "consistently report" in surveys that they know of an instance of misconduct, and a broader definition yields even higher numbers. There is a "troubling discrepancy," Steneck observed, "between public statements about how 'rare' misconduct in research supposedly is and the more private belief on the part of many researchers that it is fairly common."
A study of students at one campus suggests that the practice of massaging data is common, but that the behavior decreases as students advance toward a career in science. Biologist Elizabeth Davidson and colleagues at Arizona State University in Tempe asked students in seven introductory biology and zoology courses whether they manipulated lab data to obtain desired results. A huge majority (84% to 91%) admitted to manipulating lab data "almost always" or "often." Most said they did this to get a better grade. Other studies, however, show that the willingness to fake data declines sharply as students move on to graduate and professional-level work, leading Davidson to speculate that their behavior improves as the "research becomes important to them personally."
Some institutions have attempted to remedy the problem of scientific misconduct with special education programs. The University of Minnesota, for example, reported on an ambitious ethics training program at the medical school that in 1 year spent $500,000 on 60 workshops and signed up 2200 researchers as participants. But Steneck and others say that it's hard to measure the effectiveness of such training, and that the meager results to date are disheartening.
A study of 172 University of Texas students enrolled in a "responsible conduct of research" course, for example, found "no significant change" in attitudes after training, says Elizabeth Heitman of the University of Texas School of Public Health in Houston. The finding is consistent with what Steneck has seen, including a 1996 study that found that people who had gone through a training course were actually more willing to grant "honorary authorship" to colleagues who had not performed research than were those who had not been trained.
ORI director Chris Pascal says his office has received several favorable comments about the new grants program and that 70 scientists interested in the topic showed up last month for an ORI workshop on how to apply for biomedical research grants. The first round of winners will be announced next year.