Eric Mamajek, a postdoc in the Radio & Geoastronomy division of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Massachusetts, has applied for several research staff and junior faculty jobs this year. In all his applications, he has listed his "h-index" at the top of his curriculum vitae (CV). Named after physicist Jorge Hirsch, who suggested it in 2005, the h-index is intended to capture scientific productivity and impact in a single number. Mamajek's h-index of 19 shows that he has published widely and that people are reading, and citing, his papers at an impressive rate.

The use of such metrics to evaluate institutions and individuals is on the rise in the United States, especially in physics and related fields. Several institutions are applying their own, often homegrown, variations to rating their faculty members during evaluations for tenure, promotion, and other milestones. And many younger scientists with strong publishing records, like Mamajek, have started volunteering their h-index, or similar metrics, to boost their job prospects. It's an appealing idea, but these metrics have some serious disadvantages, especially when they're applied to scientists very early in their careers.

Gaining popularity

To figure out his h-index, Mamajek entered his name in the author query field of the Astrophysics Data System from NASA and the Smithsonian Astrophysical Observatory. The database returned a list of his publications ranked in descending order by number of citations. He then scrolled down the list until a paper's rank exceeded its citation count; the highest rank at which a paper's citations still match or exceed that rank is his h-index.
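That scroll-down procedure is easy to reproduce offline. Below is a minimal sketch, in Python, of the same calculation applied to a list of per-paper citation counts; the citation numbers and the function name are invented here for illustration.

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    ranked = sorted(citations, reverse=True)        # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):  # rank = position in the sorted list
        if cites >= rank:                           # this paper still "keeps up" with its rank
            h = rank
        else:
            break                                   # citations have fallen below rank; stop
    return h

# Example with five invented citation counts: three papers have at least
# 3 citations each, but no four papers have at least 4, so h = 3.
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```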

The profile of such metrics is rising, especially in physics, and their influence is spreading. At least for the top programs hiring this year, sources say, about a tenth of job applicants in physics volunteer this statistic, up from practically zero just a couple of years ago. Some writers of recommendation letters have also taken to mentioning a candidate's h-index, according to one source.

Modern citation metrics have been around since the 1960s, but before the advent of Internet-based citation databases they weren't nearly as useful as they are today. Hirsch's index caught on partly because, by the time it made its debut, it was very easy to calculate. "When my department needs to hire another faculty member, I always look at the publication list and at citations," Hirsch says. "And, of course, I always look at the application and try to read and understand the papers, but that's not always easy." So in 2005, he invented the h-index, an easy-to-employ, objective measure of scientific productivity and impact. Thomson Scientific's Web of Science added the h-index to its long list of citation metrics in October 2006 in response to subscriber demand. Web of Science's "analyze" function allows the h-index to be calculated easily for any individual, group, department, institution, field, or country.

A welcome but limited dose of objectivity

The simplicity of the h-index is appealing, but it has some limitations. It's a clever and concise way of capturing two key pieces of information: the number of publications and their influence, as measured by citations in other peer-reviewed articles. Yet people in different subfields publish and cite at different rates, so it's hard to make comparisons across subfields using the h-index.

And the h-index puts less experienced scientists at a disadvantage. Marc Kastner, dean of the School of Science at the Massachusetts Institute of Technology (MIT) in Cambridge, points out that a high-impact paper accumulates citations over many years, favoring older researchers over younger ones. "The only way the index could be informative," he adds, "is in a plot of h-index as a function of time since Ph.D., restricted to a specific subfield."

The real value of the h-index, Hirsch says, is in the evaluation of experienced scientists: "If the h-index is high at tenure, I would say this is very positive, but if the h-index is exceedingly low, I would say this raises a question mark about the candidate." At the University of Hawaii's Institute for Astronomy (IfA), home of David Sanders, another citation-index buff, a set of metrics is incorporated in the standard review process.

Hirsch says his index is pretty useless if you're hiring a postdoc, say, and that it's most effective for senior hires, where the numbers of publications and citations are fairly large. But in between those two extremes--for researchers who, like Mamajek, are applying for an assistant professorship--the h-index has some utility, Hirsch says, as long as it's properly employed. "If a job applicant has a particularly high h-index, it is very positive," he says. "However, if the h-index is not high, this is not necessarily negative" for early-career scientists. Either way, Hirsch says, it's important to look carefully at all of a candidate's other qualifications and not to focus too much on a simple metric.

Deepto Chakrabarty, an associate professor of physics at MIT who headed a faculty search committee last year, reached a similar conclusion after he calculated the h-index for each of the short-list candidates. The exercise wasn't helpful, he says, until he broadened it out a bit. "I found lots of people who, it was clear to me from reading their application, were really good, and they didn't have particularly high h-indices." So instead of using the h-index by itself, he applied a more varied menu of citation metrics and had more success.

A menu of alternatives

At IfA, Sanders developed a menu of metrics to assess the department as a group; his metrics correct for subfield, length of service, and self-citation. The metrics can be found in IfA's Self-Study Report and include measures such as rank by high-impact paper and citations by subfield. Although they were developed to review groups of scientists, they apply just as well to evaluating faculty-job applications or senior faculty up for merit review. The tradeoff for their greater rigor is added complexity: it's easy to type your h-index at the top of your CV, whereas Sanders's multimetric approach is probably more useful but less concise.

And even Sanders's suite of metrics fails to capture some important points, such as whether a scientist works alone or in a large group, or whether service breaks due to parental leave, say, have left a scientist short of publications and citations. Another problem is that these metrics lack an effective method to deal with multiauthor papers; they give every author full credit for every publication, regardless of the contribution. So, for example, a dull scientist working on someone else's projects in a spectacular lab probably would have a much higher index than a spectacular scientist working in a dull lab.

Perhaps the toughest problem of all to overcome, at least for young scientists, is that, as with any statistic computed from a small sample, small numbers of publications and citations inevitably mean big error bars. This uncertainty may offset any gains in objectivity.

Applied wisely, the h-index and similar metrics provide a welcome dose of objectivity in an inherently subjective process. "The value of citation metrics," says Sanders, "is that they allow us to really compare apples to apples." But scientists come in many varieties, and not all are outstanding in the same way. "Many scientists do good work, make progress, and don't have a high h-index," Chakrabarty says. The difficulties inherent in applying the h-index, or even a suite of more complex impact measurements, especially to junior job candidates, make it unlikely that such quantitative measures will supplant well-established subjective criteria any time soon.

Genevive Bjorn writes from Honolulu, Hawaii.


DOI: 10.1126/science.caredit.a0800035
