It is rare for a single hospital death to become the subject of multiple-day stories in the major national newspapers and play on one evening television news program after another. But the tragic demise of a young volunteer subject at Johns Hopkins Hospital did. Why? In matters medical, a lot depends on trust. In the public mind, Johns Hopkins may be the medical gold standard; the very week of the event saw it atop U.S. News and World Report's "best medical center" list. Dramatic failings by trusted institutions attract attention.

So does disagreement. The dramatic ban on all human subjects research handed down by the Department of Health and Human Services (HHS) Office for Human Research Protections (OHRP) drew a feisty, outraged response from Johns Hopkins management. Here were the government of the United States and a leading academic institution fighting about a rule violation. For the press, it doesn't get much better than that. The frenzy subsided only when Johns Hopkins was let out of jail, with parole conditions, after serving 3 days. The controversy hasn't been good for our profession either: The New York Times editorial page headline was "Death at the Hands of Science."

What can be learned from all this? Several things about this case are special. It was a rare kind of study: a "challenge" trial in which hexamethonium, a drug not currently approved by the Food and Drug Administration (FDA), was used to induce asthma-like symptoms in healthy subjects. Federally funded experiments such as the Hopkins study are overseen by OHRP, an office that lived at NIH until last year, when it was moved downtown to the HHS secretary's office after a well-publicized death during a gene therapy trial at the University of Pennsylvania. (Clinical trials performed to support a new drug or device application are regulated by the FDA; some centers have dual oversight for everything.)

The report of the OHRP asserts that the Hopkins Institutional Review Board (IRB), which is required under law to approve protocols and monitor informed consent for experiments involving human subjects, and/or the principal investigator (i) failed to find earlier literature demonstrating risks associated with hexamethonium, (ii) didn't check its regulatory status, and (iii) didn't use a proper informed consent procedure. It is hard to read the OHRP findings without concluding that the process at Hopkins was out of compliance. But the report contains charges strangely reminiscent of those in well-publicized past actions against other medical centers. That suggests that the problem may be systemic. (It also makes one wonder why we don't learn more from the misfortunes of others!)

The heated Hopkins response put me in mind of the tensions that arose when the FDA inspected IRBs in the late 1970s. People with badges had to be sent into academic institutions that were pursuing fine work with high purpose, saying that they were from the government and had come to help. The investigators probably thought: "Who are these folks to tell us how to do our experiment?" Most of the time, the regulators and the regulated managed to work things out, but it did become clear that a lot was being asked of the IRBs.

We are asking even more now. Conversations with IRB members reveal deep concern about the workload imposed on busy faculty and the increasing requirements for support staff. Universities are doing more and more clinical trials, and dual oversight by the FDA and OHRP adds confusion. And as transaction volume has grown, a series of adverse events has intensified oversight, adding to the pressure on the institutions and their IRBs.

The problem thus looks like a system problem rather than only a Hopkins problem. A retrofit of the oversight process is overdue; HHS needs to decide how the FDA and OHRP can oversee IRBs without bureaucratic collision. Universities, having been told for years to invest more resources in patient protection, need to deliver more administrative leadership and staff support; or faculty members, who dislike growth in administrative budgets, may have to give the process even more of their time. And the public may have to suspend its predilection for placing blame and accept that even a perfect system, alas, will not be risk-free.