Last month in this space, I wrote that physicians enjoy a more favorable labor market—better outcomes and better incomes—than academic scientists because organized medicine controls the supply of new entrants to the profession. A reader I know to be very knowledgeable about scientific workforce issues asked me what I meant. If this well-informed person missed my point, I realized, other readers might not have gotten it, either.

So, herewith, a brief history of two reports: one written more than 100 years ago and the other more than 60 years ago, by Abraham Flexner and Vannevar Bush, respectively. These documents defined the parameters of the worlds that physicians and academic scientists inhabit to this day. To understand the reports’ impact, it helps to keep in mind a point that economist George Borjas has made in this space more than once: No matter how many degrees someone has, “the supply-demand textbook model is correct after all.”

Flexner reshapes medicine

Known to history as the Flexner Report, Medical Education in the United States and Canada was published in 1910 by the Carnegie Foundation for the Advancement of Teaching. Its author, a schoolmaster and writer with a classics degree from Johns Hopkins University, later became the founding director of the Institute for Advanced Study in Princeton, New Jersey. He knew something about science from his brother Simon, a prominent medical researcher who served as the first director of the institute that became Rockefeller University.

The story of Flexner’s famous report begins in 1908, when the Council on Medical Education (CME) of the American Medical Association (AMA) asked the Carnegie Foundation to look into the allegedly deplorable state of most medical education. The foundation commissioned Flexner to write the report. He spent the next year and a half visiting all 155 schools then purporting to train doctors and composing a document that sharply criticized the great majority. His report also proposed a framework for the proper preparation of physicians, one that closely resembled the curriculum of his alma mater’s prominent medical school.

The Flexner Report’s lasting importance is that it fueled a popular and political movement that convinced state legislatures across the country to undertake thorough reforms of medical education and licensure along the lines of Flexner’s model. This culminated a process, underway since the mid-19th century, that changed medical practice from a disorganized and generally ill-paid business into what historian Paul Starr, in his magisterial 1982 book The Social Transformation of American Medicine, calls a “sovereign profession”: authoritative, autonomous, prestigious, highly lucrative, and in control of the entry gates.

In the years following the Flexner reforms, a large majority of the nation’s medical schools closed, including five of the seven established to train African Americans and all but one of those established to train women, sharply limiting opportunities for those two groups until the passage of civil rights laws in the 1960s made discrimination by race and sex unlawful. The remaining schools followed the now-standard program of 2 years of academic studies and 2 years of hospital-based clinical training.

Until the Flexner reforms, medical education in the United States had been a hodgepodge of programs lacking any common standards or curriculum. Some doctors trained by apprenticing with practitioners. Some got diplomas from proprietary schools owned by doctors who taught as they saw fit. Some received the doctor of medicine (M.D.) degree after rigorous, science-based programs of classroom and clinical education at university-affiliated medical schools.

“Because of the heterogeneity of educational experiences and the paucity of licensing examinations, physicians in America at the turn of the 20th century varied tremendously in their medical knowledge, therapeutic philosophies, and aptitudes for healing the sick,” writes Andrew H. Beck of Brown University in the Journal of the American Medical Association. Along with the science-based medicine now considered standard, schools teaching “diverse types of medicine, [including] osteopathic, homeopathic, chiropractic, eclectic, physiomedical, botanic and Thomsonian” flourished, with students and patients choosing the one they liked best. Crowded with practitioners ranging from self-taught healers to graduates of prestigious universities, medicine was not an especially well-paying occupation.

Meanwhile, science-based successes like vaccination and antiseptic surgical techniques were increasing the prestige of scientific medicine. Throughout the 19th century, university-trained M.D.s had struggled for control of treatment against competitors they considered ill-trained, ignorant quacks. Their cause’s main proponent was the AMA, established in 1847 to “elevate the standard of medical education in the United States.”


[Photo: Abraham Flexner. CREDIT: Rockefeller Foundation Archive]

Flexner-inspired state laws brought medical education and practice under strict accreditation and licensing rules and cemented the hegemony of M.D.-style scientific medicine. The nationwide Federation of State Medical Boards accepted the authority of the CME to set educational standards, essentially giving its decisions, in Starr’s words, “the force of law.” This gave organized medicine a decisive say over which schools could graduate doctors. The reforms drove practitioners of all but two of the competing types of medicine out of legitimate general practice, reducing the number of practitioners competing for patients’ dollars.

Flexner argued for reform as a public health measure rather than an economic program, and the past century’s vast improvements in health care are one powerful result of his efforts. Another is a strict limit on the number of people entering the profession and, consequently, higher incomes and status for those who do.

Bush organizes research

Vannevar Bush’s report, Science, the Endless Frontier, on the other hand, inadvertently produced a nearly opposite result: a system that lodges control of the supply of new scientists outside the profession and contains incentives that favor ever-increasing numbers. Bush wrote his report because in late 1944, as World War II neared its end, President Franklin Roosevelt sent him a letter asking, “What can the government do now and in the future to aid research activities by public and private organizations?”

Roosevelt asked because scientific research had played a decisive role in winning the war. The government’s top-secret Manhattan Project had developed and built, at breakneck speed, the atomic bombs that forced Japan’s abrupt surrender and forestalled a bloody and dreaded American land invasion of the island empire. Other wartime advances, including radar, penicillin, and the breaking of enemy machine ciphers, had saved countless Allied lives. Before the war, the U.S. research establishment had been small and scantily funded, with little government support for university-based research.

It quickly became clear that research would play a major role in the postwar world, and Bush was the obvious person to ask about it. With a doctorate in engineering awarded jointly by the Massachusetts Institute of Technology and Harvard University, he was a major figure in national science policy and had been crucial to organizing the Manhattan Project. Roosevelt died before Bush delivered his answer in July 1945, but Harry Truman, the new president, adopted the Bush plan, which has remained the framework for federal support of academic research ever since.

Bush proposed that the government award competitive grants for specific civilian projects to researchers in nongovernmental institutions such as universities. He listed five guiding principles, four of which Congress followed. First, nonpartisan experts would select and administer the projects to be funded. Second, money would go to civilian researchers “through contracts or grants to organizations outside the federal government.” Third, the institutions receiving the grants would determine “policy, personnel, and the method and scope of the research.” Fourth, the funding agencies, though “responsible to the president and Congress,” would retain “independence and freedom” with regard to the research. Bush’s fifth principle, predictable and stable science funding, never came to pass.

Under Bush’s plan, professors receiving grants would do their research aided by graduate students. Money spent on research would thereby serve dual purposes: advancing science and fostering education.

The system had many advantages. Competitive funding for fixed periods encouraged high-quality research and gave the government great flexibility in the projects it could support. It also freed the government from maintaining its own large, permanent research facilities and staff.

The system’s crucial disadvantage became apparent only decades later. The plan depends on what have been called “self-replicating” professors, who produce new Ph.D.s as byproducts of their grant-supported research and take on students (and, later, postdocs) based on the amount of funding they have. The amount of available grant funding, not the supply of career opportunities for young Ph.D.s, therefore determines how many new scientists universities train.

A real shortage of scientists existed when Bush was writing. He nonetheless advised against drawing into science more young people than the nation could effectively use. Congress, universities, and faculty members ignored that suggestion and proceeded to make funding decisions without regard to the career prospects of those ostensibly training for faculty positions. Academia grew explosively in the early postwar years, so aspiring academic scientists moved easily and directly from graduate school to tenure-track jobs. But as research funding continued to grow, faculty hiring slowed. By the 1970s, the number of new Ph.D.s so far outstripped the supply of faculty openings that postdoc positions, until then relatively rare, began proliferating to absorb the excess.

Which brings us back to Borjas and his iron law. It explains why the career prospects and incomes of M.D.s and Ph.D.s, both highly trained experts in highly technical fields, are so different: One profession controls its supply of new entrants and the other does not. It’s as simple as supply and demand.

Beryl Lieff Benderly writes from Washington, D.C.

10.1126/science.caredit.a1200049