History Supports Government Funding for Public Health

As a medical historian asked to respond to Professor Kealey’s essay, I was struck by his focus on technology rather than basic science and by his lack of discussion of research into human well-being, which, in economic terms, would presumably mean the health of the labor force. Labor appears as a given in his descriptions of what makes economies flourish, perhaps because the economist he praises, Adam Smith, and the philosopher he reviles, Francis Bacon, both wrote at a time when medicine had little power to intervene in most aspects of human health. Kealey quotes Smith as recognizing the existence of public goods, those “of such a nature, that the profit could never repay the expense to any individual or small number of individuals.” Research benefitting public health certainly qualifies as a public good, but it is also easier to take public health measures for granted than to be grateful that one did not suffer or die from cholera, typhoid, undulant fever, botulism, or lead poisoning, to name just a few.

Components of biomedical research include epidemiology, the determination of the purity and efficacy of drugs, and basic knowledge of how the body works in health and disease that can lead to interventions to prevent disease or treat individuals. Libertarians support government funding of scientific research that benefits defense, but that position implies that research benefitting soldiers could be separated from research benefitting the rest of the population. This hardly seems possible.

Epidemiological studies related to clean water and sewage management underlie the prevention of many epidemics. John Snow’s 1854 epidemiological demonstration that the Broad Street water pump in London was the focus of that year’s cholera epidemic fueled interest in preventing disease by identifying how it was transmitted, a quarter century before Pasteur and Koch demonstrated that a single microorganism causes a single disease. Government officials were persuaded to remove the pump’s handle. After the epidemic waned, however, officials were pressured to replace the handle and to reject Snow’s theory, reflecting society’s unwillingness to entertain the idea of fecal-oral transmission of disease. This situation foreshadowed future tensions, sometimes outright battles, between commercial interests, which prefer to deny the existence of disease that might threaten profits, and governments, which prefer to enforce public health regulations to contain disease spread.

Government-funded drug trials that may slow the adoption of candidate therapies are also dismissed as wasteful to taxpayers, but numerous studies have shown that pharmaceutical companies are loath to publish negative findings about potentially profitable drugs, even with government oversight. Similarly, research on vaccines that become mandatory may be anathema to many libertarians as limiting freedom of choice, but withdrawing all government support would have a devastating effect on the economies of today’s highly concentrated populations, as epidemics would recur and decimate the work force. Producing effective vaccines is extremely difficult and often not cost effective for pharmaceutical companies: for most vaccines, one or a few doses are all a patient needs, and a company may incur sizeable liability for patients who suffer severe side effects. The drugs most worth millions of dollars in R&D are those that must be taken long term, such as statins, drugs to manage stomach acid, and antidepressants. The current dearth of new antibiotics attests to the choices pharmaceutical companies make.

Kealey also argues that if government funding for science were halted, “there would be an armamentarium of private philanthropic funders of university and of foundation science by which non-market, pure research (including orphan diseases) would be funded.” I counter that this model has already been tried and found wanting, at least as it applies to medical research in the United States. Beginning early in the twentieth century, as leading scientists hoped to exploit the germ theory of infectious disease to save lives, medical research came to be regarded as an activity that could produce public good, and the private philanthropic sector was indeed first to lend support. In 1904, John D. Rockefeller opened the Rockefeller Institute in New York City; in 1911, the Otho S. A. Sprague Memorial Institute was founded in Chicago. The first experience of U.S. scientists with government-coordinated research came in 1917, when the U.S. Army created a Chemical Warfare Service to fund projects by chemists at universities and other institutions aimed at defending troops from gas attacks. This military research effort played a large role in changing the minds of U.S. scientists about whether government could support peacetime scientific research without impeding their freedom to pursue novel scientific ideas.

After the war, chemists pressed for the creation of a privately funded institution to conduct chemical research that would benefit medicine. This effort foundered on many conflicting interests: academic chemists and pharmacologists refused to associate with their industrial colleagues; existing institutes, such as the Rockefeller and the Sprague, opposed support for a competing institute; chemists, pharmacologists, and physicians disagreed about which discipline should have administrative control of any new institute; and industry was reluctant to commit resources to basic medical research. Eventually, supporters turned to the U.S. Congress, which in a 1930 act expanded and renamed an existing public health laboratory as the National Institute of Health.

After World War II, as Kealey notes, the U.S. government greatly expanded support for science through the National Science Foundation and the grants program of the National Institutes of Health (the NIH became plural with the creation of new institutes in 1948). Kealey’s argument that society would have benefitted more during the last sixty years by leaving money in the pockets of taxpayers than by investing in government-funded science rests on his belief that the private sector would have performed all the basic research needed for the public good. I offer the research underlying the response to the pandemic of acquired immune deficiency syndrome (AIDS) as a refutation of that argument.

A society can respond to epidemic disease only on the basis of the medical knowledge it has accumulated by the time the epidemic occurs. In the case of AIDS, that knowledge rested on the fields of molecular immunology and virology, which had become fruitful research areas in the 1970s. The mechanism by which the AIDS virus destroyed the immune system was not completely understood in 1981, when AIDS was first recognized as a new disease, but molecular immunology provided the mental model through which the disease was defined and initially addressed. Knowledge about human retroviruses was stunningly recent (the first two human retroviruses were definitively demonstrated only in 1979 and 1980), and without their discovery it is unlikely that physicians would even have considered the possibility that a retrovirus might cause AIDS.

With the serendipity that sometimes happens in basic medical science, much of this knowledge emerged not from infectious disease research but from U.S. government funding for cancer research. A Special Virus Cancer Program begun in the 1960s had sought to identify viruses as a cause of cancer. The program was largely shut down in the mid-1970s after no virus could be conclusively linked to a human cancer (shortly afterwards, of course, hepatitis B was linked to liver cancer and the human papillomavirus to cervical cancer). By then, however, U.S. government-funded cancer research had expanded greatly with the December 1971 enactment of the National Cancer Act. Under the auspices of this legislation, research on human retroviruses continued in National Cancer Institute (NCI) laboratories in Bethesda, Maryland. Every one of the retrovirologists involved in identifying the human immunodeficiency virus (HIV) as the cause of AIDS either trained in, or spent time working with colleagues in, the Bethesda laboratories. Furthermore, a screening program established under the National Cancer Act to test large numbers of compounds for their cancer-fighting potential was repurposed to test candidate drugs against AIDS. Research utilizing this program identified the first drugs with any effectiveness against AIDS: AZT, ddI, and ddC. Could all this work have been produced by privately funded science? Possibly, but given the uncertainty of results inherent in basic science and the impetus to pursue only activities with near-term profit potential, it is doubtful that private funders would have built the foundation of basic knowledge that allowed medicine to respond to AIDS as quickly as it did.


Also from this issue

Lead Essay

  • Terence Kealey argues that we don’t need public funding for science. Not only are many of the common historical examples of the benefits of public funding false, but the economic model of publicly funded scientific research is fundamentally flawed. Empirically, public R&D appears to have a negligible effect on economic growth. Private science is likely to be more responsive to consumers’ needs, and the costs of duplicating it are often high enough that we need not worry about free riders on the discoveries of others.

Response Essays

  • Victoria Harden offers several historical examples of successful funding for public health initiatives. These programs, including the prevention of cholera, basic research on chemical warfare agents and cancer, and the identification of the virus that causes AIDS, might conceivably have happened under purely private auspices. But she finds it implausible that private actors would have responded as quickly or effectively.

  • Patrick J. Michaels discusses the public choice aspects of scientific funding, which introduce systematic bias into research: Scientists need grant money to advance in their careers, and only the government provides it in sufficient quantities. Yet the government’s agenda is never neutral, and the scientists’ agendas tend strongly to fall into line. The result is a consensus built not on scientific fact, but on the alignment of personal interests.

  • David Guston rejects the public goods argument for scientific research. He nonetheless argues that it is essential for any government to conduct such research. Governments are constantly called upon to regulate and adjudicate disputes among scientifically and technologically savvy actors. They are obliged to make laws that take into account scientific laws. Indeed, no one would want to live under a state that predictably failed in these respects.