About this Issue
Adam Smith wrote that the state should do three things. It should repel foreign invasions, protect against harms to person and property — and also it should provide “certain public works and certain public institutions which it can never be for the interest of any individual, or small number of individuals, to erect and maintain; because the profit could never repay the expence to any individual or small number of individuals, though it may frequently do much more than repay it to a great society.”
It’s often said that one such public good is basic scientific research: We know for sure that research has costs, but the benefits of that research are never certain in advance. Moreover, it is argued, scientific discovery benefits everyone, not merely the firm or individual who performs it. As a result, we get less of it than we would if everyone could be made to pay their fair share. What’s needed, then, is a state subsidy for science.
This month, we consider whether that’s really the case.
Along the way, we’ll consider economic models of public goods, the costs of private research, bias and misconduct in science, and of course the opportunity costs of the matter. Our lead essayist this month, Terence Kealey of the University of Buckingham, argues that scientific research isn’t really a public good at all: private researchers can indeed reap the rewards of the effort, while others cannot do so effectively without a similar investment.
Is he right? We will have response essays this month from Victoria Harden, former chief historian for the National Institutes of Health; Patrick J. Michaels, director of the Cato Institute’s Center for the Study of Science; and David Guston, co-director of the Consortium for Science, Policy and Outcomes at Arizona State University.
The Case against Public Science
For libertarians, economic growth is the growth that barely speaks its name. That is because conventional opinion asserts that economic growth is the gift of government. Secondary issues such as the efficient distribution of goods and services can, we are assured, be entrusted to the market, but when it comes to the creation of those goods and services in the first place (especially the new goods and services that constitute economic growth) then—sorry dear libertarian—only government can supply those: we are rich today only because that nice Mr Obama and his equally nice predecessors in the White House and in Congress have been gracious enough and foresighted enough to confer that wealth upon us.
The conventional story is thus an awesome story of government largesse and wisdom, and it’s one that the great companies and the great universities and the great economists promote most assiduously. There is only one, small, itsy bitsy teeny weeny problem with it. It’s dead wrong.
The story of the longest-surviving intellectual error in western economic thought started in 1605 when a corrupt English lawyer and politician, Sir Francis Bacon, published his Advancement of Learning. Bacon, who was a man with a preternatural interest in wealth and power, wanted to know how Spain had become the richest and most powerful nation of his day. He concluded that Spain had done so by the exploitation of its American colonies. And how had Spain discovered those colonies? By scientific research: “the West Indies had never been discovered if the use of the mariner’s needle had not been first discovered.”
Scientific research, Bacon explained, was “the true ornament of mankind” because “the benefits inventors confer extend to the whole human race.” But, he wrote, therein lay a problem: the whole human race might benefit from inventions but the whole human race does not reimburse inventors, so invention will not be rewarded by the market. Research, therefore, is a public good that should be supplied by government: “there is not any part of good government more worthy than the further endowment of the world with sound and fruitful knowledge.”
Bacon’s argument was reinforced in our own day by three landmark papers published by three landmark economists, two of whom (Robert Solow and Kenneth Arrow) were to win Nobel prizes, and the third of whom, Richard Nelson, is recognized as being of a similar rank. And we need to look at what the economists—as opposed to the scientists—write, because the economists, being apparently systematic, influence policy more than do the scientists: it’s easy to dismiss scientists for special pleading and for anecdotage, but who can doubt the objective, dispassionate studies of economists?
The contemporary story starts with a 1957 paper by Robert Solow, which was an empirical study that confirmed that most economic growth in the modern world can indeed be attributed to technical change (as opposed, say, to capital deepening). But the story was given its dirigiste twist by the papers that Nelson and Arrow published in 1959 and 1962 respectively, in which they explained that science is a public good because copying is easier and cheaper than original research: it is easier and cheaper to copy and/or leapfrog the discoveries reported in the journals and in the patent filings (or incorporated in new products) than it was to have made the original discoveries. So no private entity will invest in innovation, because its investment will only help its competitors—competitors who, having been spared the costs of the original research, will undercut the original researcher.
The problem with the papers of Nelson and Arrow, however, was that they were theoretical, and one or two troublesome souls, on peering out of their economists’ eyries, noted that in the real world there did seem to be some privately funded research happening—quite a lot of it actually—so the conventional story has since been modified. That change was driven by three more landmark economists: Paul Romer (who focused on industrial research) and Partha Dasgupta and Paul David (who focused on academic research).
In a 1990 paper Paul Romer acknowledged that, in the real world of (i) commercial monopoly enforced by patents, and of (ii) apparent commercial secrecy enforced by commercial discipline, commercial researchers can indeed recoup some of the costs of their original research, so he created a mathematical model by which some original research would be rewarded by the market. Nonetheless, he still assumed that too little industrial science would be thus rewarded: “research has positive external effects. It raises the productivity of all future individuals who do research, but because this benefit is non-excludable, it is not reflected at all in the market price.”
Dasgupta and David in their 1994 paper reviewed the historical development of our universities, research societies, and research conventions, and they acknowledged that such social constructs did indeed foster pure science; but because advances in basic science were too unpredictable for their discoverers to profit from them in the market, such science “is in constant need of shoring up through public patronage.”
So here is the current dogma: scientific research is fundamentally a public good because new ideas, unlike private goods, cannot be monopolized for long, but in practice we can treat research as a merit good (that is, a good that requires only some of its funding from government) because conventions such as patents or industrial secrecy, to say nothing of institutions such as universities and research societies, have evolved to bolster a certain—if inadequate—degree of private funding.
But the difficulty with this new story of science as a merit good is that there is no empirical evidence that research needs any funding from government at all.
The fundamental problem that bedevils the study of the economics of science is that every contemporary actor in the story is parti pris: every contemporary actor who enters the field starts by presupposing that governments should fund science. Such actors are either industrialists looking for corporate welfare, or scholars looking to protect their universities’ income, or scientists (who, frankly, will look for money from any and every source—they are shameless), or economists who assume that knowledge is “non-rivalrous” and only “partially excludable” (which are posh ways of saying that copying is cheap and easy).
But no contemporary has ever shown empirically that governments need to fund science—the assertion has been made only on theoretical grounds. Remarkably, the one economist who did look at the question empirically found that the evidence showed that governments need not fund science, but his claim has long been ignored, because he was notoriously a libertarian—and libertarians have no traction amongst the scholars, politicians, and corporate welfarists who dominate the field. In 1776, moreover, that economist supported a revolution, so he is not only outdated but he was, presumably, subversive of the social order.
Nonetheless, if only out of antiquarian interest, let’s look at what this empiricist reported. The evidence showed, he wrote, that there were three significant sources of new industrial technology. The most important was the factory itself: “A great part of the machines made use of in manufactures … were originally the inventions of common workmen.” The second source of new industrial technology was the factories that made the machines that other factories used: “Many improvements have been made by the ingenuity of the makers of the machines.” The least important source of industrial innovation was academia: “some improvements in machinery have been made by those called philosophers [aka academics].” But our economist noted that that flow of knowledge from academia into industry was dwarfed by the size of the opposite flow of knowledge: “The improvements which, in modern times, have been made in several different parts of philosophy, have not, the greater part of them, been made in universities [i.e., they were made in industry].” Our empiricist concluded, therefore, that governments need not fund science: the market and civil society would provide.
Arguments for the subsidy of so-called public goods, moreover, were dismissed by our libertarian economist with: “I have never known much good done by those who have affected to trade for the public good.” In particular, arguments by industrialists for subsidies were dismissed with: “people of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” And our revolutionary underminer of the social order dismissed the idea that wise investment decisions could be entrusted to politicians, even to that nice Mr Obama, because he distrusted: “that insidious and crafty animal, vulgarly called a statesman or politician.”
Our long-dead economist recognized the existence of public goods, which he described as those “of such a nature, that the profit could never repay the expense to any individual or small number of individuals”, but he could not see that scientific research fell into that category.
The economist in question was, of course, Adam Smith, whose Wealth of Nations from which these quotes were drawn was published in 1776. And he is indeed long-dead. Yet the contemporary empirical evidence supports his contention that governments need not support scientific research. Consider, for example, the lack of historical evidence that government investment in research contributes to economic growth.
The world’s leading nation during the 19th century was the UK, which pioneered the Industrial Revolution. In that era the UK produced scientific as well as technological giants, ranging from Faraday to Kelvin to Darwin—yet it was an era of laissez faire, during which the British government’s systematic support for science was trivial.
The world’s leading nation during the 20th century was the United States, and it too was laissez faire, particularly in science. As late as 1940, fifty years after its GDP per capita had overtaken the UK’s, the U.S. total annual budget for research and development (R&D) was $346 million, of which no less than $265 million was privately funded (including $31 million for university or foundation science). Of the federal and state governments’ R&D budgets, moreover, over $29 million was for agriculture (to address—remember—the United States’ chronic problem of agricultural overproduction) and $26 million was for defence (which is of trivial economic benefit). America, therefore, produced its industrial leadership, as well as its Edisons, Wrights, Bells, and Teslas, under research laissez faire.
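The 1940 budget figures above imply a striking breakdown, which a few lines of arithmetic make explicit (the dollar figures are from the text; treating “over $29 million” as exactly $29 million, and the residual calculation, are my own simplifications):

```python
# 1940 U.S. R&D budget figures as given in the essay (millions of dollars;
# approximate figures are treated as exact for this back-of-envelope sketch).
total_rd = 346
private_rd = 265        # includes $31M for university or foundation science
agriculture_rd = 29     # government R&D for agriculture
defence_rd = 26         # government R&D for defence

public_rd = total_rd - private_rd                          # $81M government share
other_public_rd = public_rd - agriculture_rd - defence_rd  # $26M residual

print(f"Government R&D: ${public_rd}M of ${total_rd}M total "
      f"(~{100 * public_rd / total_rd:.0f}% of all U.S. R&D)")
print(f"Government R&D outside agriculture and defence: ${other_public_rd}M")
```

On these numbers, government money was under a quarter of U.S. R&D in 1940, and government money aimed at neither agriculture nor defence was smaller still.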
Meanwhile the governments in France and Germany poured money into R&D, and though they produced good science, during the 19th century their economies failed even to converge on the UK’s, let alone overtake it as did the US’s. For the 19th and first half of the 20th centuries, the empirical evidence is clear: the industrial nations whose governments invested least in science did best economically—and they didn’t do so badly in science either.
What happened thereafter? War. It was the First World War that persuaded the UK government to fund science, and it was the Second World War that persuaded the U.S. government to follow suit. But it was the Cold War that sustained those governments’ commitment to funding science, and today those governments’ budgets for academic science dwarf those from the private sector; and the effect of this largesse on those nations’ long-term rates of economic growth has been … zero. The long-term rates of economic growth since 1830 for the UK or the United States show no deflections coinciding with the inauguration of significant government money for research (indeed, the rates show few if any deflections at all: the long-term rate of economic growth in the lead industrialized nations has held steady at approximately 2 per cent per year for nearly two centuries, with short-term booms and busts cancelling each other out).
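As a rough illustration of what that steady long-term rate implies (my own back-of-envelope sketch, not the author’s calculation; the 2 per cent figure and the 1830 starting point are from the text, the 2015 endpoint is an assumption):

```python
import math

# A steady 2%/yr growth rate, sustained since about 1830 as the essay
# describes, compounds to a very large multiple of the starting level.
growth_rate = 0.02
years = 2015 - 1830  # roughly "nearly two centuries"
factor = (1 + growth_rate) ** years
print(f"2%/yr sustained for {years} years multiplies output ~{factor:.0f}-fold")

# A "deflection" in the long-term rate would show up as a change in the
# slope of log output against time; with a constant rate that slope is the
# same every year, which is what an undeflected trend line looks like.
annual_log_slope = math.log(1 + growth_rate)  # ~0.0198 per year, constant
```

The point of the sketch is that a deflection from such a trend, had government research funding produced one, would be easy to see on a log scale, since the trend's slope never changes.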
The contemporary economic evidence, moreover, confirms that the government funding of R&D has no economic benefit. Thus in 2003 the OECD (Organisation for Economic Co-operation and Development—the industrialized nations’ economic research agency) published its Sources of Economic Growth in OECD Countries, which reviewed all the major measurable factors that might explain the different rates of growth of the 21 leading world economies between 1971 and 1998. And it found that whereas privately funded R&D stimulated economic growth, publicly funded R&D had no impact.
The authors of the report were disconcerted by their own findings. “The negative results for public R&D are surprising,” they wrote. They speculated that publicly funded R&D might crowd out privately funded R&D, which, if true, suggests that publicly funded R&D might actually damage economic growth. Certainly both Walter Park of the American University and I had already reported that the OECD data showed that government funding for R&D does indeed crowd out private funding, to the detriment of economic growth. In Park’s words, “the direct effect of public research is weakly negative, as might be the case if public research spending has crowding-out effects which adversely affect private output growth.”
The OECD, Walter Park, and I have therefore—like Adam Smith—tested empirically the model of science as a public or merit good, and we have found it to be wrong: the public funding of research has no beneficial effects on the economy. And the fault in the model lies in one of its fundamental premises, namely that copying other people’s research is cheap and easy. It’s not. Consider industrial technology. When Edwin Mansfield of the University of Pennsylvania examined 48 products that, during the 1970s, had been copied by companies in the chemicals, drugs, electronics, and machinery industries in New England, he found that the costs of copying were on average 65 per cent of the costs of original invention. And the time taken to copy was, on average, 70 per cent of the time taken by the original invention.
Copying is lengthy and expensive because it involves the acquisition of tacit (as opposed to explicit) knowledge. Contrary to myth, people can’t simply read a paper or read a patent or strip down a new product and then copy the innovation. As scholars such as Michael Polanyi (see his classic 1958 book Personal Knowledge) and Harry Collins of Cardiff University (see his well-titled 2010 book Tacit and Explicit Knowledge) have shown, copying new science and technology is not a simple matter of following a blueprint: it requires the copier actually to reproduce the steps taken by the originator. Polanyi’s famous quote is “we can know more than we can tell” but it is often shortened to “we know more than we can tell” because that captures the kernel—in science and technology we always know more (tacitly) than we can tell (explicitly). So in 1971, when Harry Collins studied the spread of a technology called the TEA laser, he discovered that the only scientists who succeeded in copying it were those who had visited laboratories where TEA lasers were already up and running: “no-one to whom I have spoken has succeeded in building a TEA laser using written sources (including blueprints and written reports) as the sole source of information.”
One long-dead person who would have been unsurprised by this modern understanding of tacit knowledge was Adam Smith, who built his theory of economic growth upon it: he explained how the division of labor was central to economic growth because so much expertise is, in modern language, tacit. “This subdivision of employment,” Smith wrote, “improves dexterity and saves time.”
But if it costs specialists 65 per cent of the original costs to copy an innovation, think how much more it would cost non-specialists to copy it. If an average person were plucked off the street to copy a contemporary advance in molecular biology or software, they would need years of immersion before they could do so. And what would that immersion consist of? What, indeed, does the modern researcher do to keep up with the field? Research.
In a 1990 paper with the telling title of “Why Do Firms Do Basic Research With Their Own Money?” Nathan Rosenberg of Stanford University showed that the down payment that potential copiers have to make before they can even begin to copy an innovation is their own prior contribution to the field: only when their own research is credible can they understand the field. And what do credible researchers do? They publish papers and patents that others can read, and they produce goods that others can strip down. These constitute their down payment to the world of copyable science.
So the true costs of copying in a free market are 100 per cent—the 65 per cent costs of direct copying plus the initial 35 per cent down payment you have to make to sustain the research capacities and output of the potential copiers. Copyists may not pay the 100 per cent to the person whose work they copy, but because in toto the cost to them is nonetheless on average 100 per cent, the economists’ argument that copying is free or cheap is thus negated.
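The arithmetic in that paragraph can be made explicit (a trivial sketch using the essay’s own percentages; the $10 million original invention is a hypothetical of mine, purely for illustration):

```python
# Figures from the essay: Mansfield's ~65% direct copying cost, plus the ~35%
# "down payment" of prior in-house research a copier needs in order to copy.
direct_copy_pct = 65
down_payment_pct = 35
total_pct = direct_copy_pct + down_payment_pct
print(f"All-in cost of copying: {total_pct}% of the original R&D cost")

# With a hypothetical $10M original invention, a would-be copier therefore
# spends, in toto, about as much as the originator did.
original_cost_m = 10
copier_cost_m = original_cost_m * total_pct / 100
print(f"Copier's all-in cost: ${copier_cost_m:.0f}M vs originator's ${original_cost_m}M")
```

On this accounting the copier's total outlay matches the originator's, which is the essay's point: only the *direct* cost of copying is discounted, not the cost of being able to copy at all.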
That is why, as scholars from the University of Sussex have shown, some 7 per cent of all industrial R&D worldwide is spent on pure science. This is also why big companies achieve the publication rates of medium-sized universities. Equally, Edwin Mansfield and Zvi Griliches of Harvard have shown by comprehensive surveys that the more that companies invest in pure science, the greater are their profits. If a company fails to invest in pure research, then it will fail to invest in pure researchers—yet it is those researchers who are best qualified to survey the field and to import new knowledge into the company.
And it’s a myth that industrial research is secret. One of humanity’s great advances took place during the 17th century, when scientists created societies like the Royal Society in London to promote the sharing of knowledge. Before then, scientists had published secretly (by notarising discoveries and then hiding them in a lawyer’s or college’s safe—to reveal them only to claim priority when a later researcher made the same discovery) or they published in code. So Robert Hooke (1635-1703) published his famous law of elasticity as the anagram ceiiinosssttuv, which unscrambles to ut tensio sic vis (stress is proportional to strain).
Scientists did not initially want to publish fully (they especially wanted to keep their methods secret), but sharing their advances with fellow members of their research societies—the quid pro quo being that the other members published too—so advantaged them over scientists outside the societies (who had no collective store of knowledge on which to draw) that self-interest drove scientists to share their knowledge with others who had made the same compact. Today those conventions are universal, but they are only conventions; they are not inherent in the activity of research per se.
Industrial scientists have long known that sharing knowledge is useful (why do you think competitor companies cluster?) though anti-trust law can force them to be discreet. So in 1985, reporting on a survey of 100 American companies, Edwin Mansfield found that “[i]nformation concerning the detailed nature and operation of a new product or process generally leaks out within a year.” Actually, it’s not so much leaked as traded: in a survey of eleven American steel companies, Eric von Hippel of MIT’s Sloan School of Management found that ten of them regularly swapped proprietary information with rivals. In an international survey of 102 firms, Thomas Allen (also of Sloan) found that no fewer than 23 per cent of their important innovations came from swapping information with rivals. Industrial science is in practice a collective process of shared knowledge.
And Adam Smith’s contention that academic science is only a trivial contributor to new technology has moreover been confirmed. In two papers published in 1991 and 1998, Mansfield showed that the overwhelming source of new technologies was companies’ own R&D, and that academic research accounted for only 5 per cent of companies’ new sales and only 2 per cent of the savings that could be attributed to new processes. Meanwhile, contemporary studies confirm that there is a vast flow of knowledge from industry into academic science: indeed, if it was ever real, the distinction between pure and applied science is now largely defunct, and Bacon’s so-called “linear model” (by which industrial science feeds off university science) has been discredited by the economists and historians of science.
Something else that would have surprised Smith about current scholarship is the economists’ obsession with monopoly. The economists say that unless an innovator can claim, in perpetuity, 100 per cent of the commercial return on her innovation, she will underinvest in it. But that claim is a perversion born of the modern mathematical modelling of so-called “perfect” markets, which are theoretical fictions that bear no relation to economic reality. In reality, entrepreneurs make their investments in the light of the competition, and their goal is a current edge over their rivals, not some abstract dream of immortal monopoly in fictitious “perfect” markets.
The strongest argument for the government funding of science today is anecdotal: would we have the internet, say, or the Higgs boson, but for government funding? Yet anecdotage ignores crowding out. We wouldn’t have had the generation of electricity but for the private funding of Michael Faraday, and if government funding crowds out the private philanthropic funding of science (and it does, because the funding of pure science is determined primarily by GDP per capita, regardless of government largesse), then the advances we have lost thanks to government funding need a scribe—an omniscient one, because we can’t know what those lost advances were—to write them on the deficit side of the balance sheet. Which is also where the opportunity costs should be written: even if the government funding of science yields some benefit, if the benefit to society of having left that money in the pockets of the taxpayer would have been greater, then the net balance to society is negative.
What would the world look like had governments not funded science? It would look like the UK or the United States did when those countries were the unchallenged superpowers of their day. Most research would be concentrated in industry (from which a steady stream of advances in pure science would emerge) but there would also be an armamentarium of private philanthropic funders of university and of foundation science by which non-market, pure research (including on orphan diseases) would be funded.
And such laissez faire science would be more useful than today’s. Consider the very discipline of the economics of science. The important factor common to Solow, Nelson, and Arrow—the fathers of the modern economics of science—is that all three were associated with the RAND Corporation, which was the crucible of Eisenhower’s military-industrial complex. RAND (i.e., the R&D Corporation) was created in 1946 by the US Army Air Forces in association with the Douglas Aircraft Company as a think tank to justify the government funding of defence and strategic research.
RAND’s initial impetus came from the 1945 report Science, the Endless Frontier, written by Vannevar Bush, the director of the federal government’s Office of Scientific Research and Development. Bush argued the Baconian view that the federal government (which had poured funds into R&D during the war) should continue to do so in peace. Bush of course had very strong personal reasons for so arguing, but it was the Cold War and the upcoming space race (Sputnik was launched in 1957) that—incredibly—persuaded economists that the USSR’s publicly funded industrial base would overtake the United States’ unless the United States forswore its attachment to free markets in research.
It was all nonsense of course—Sputnik was based on the research of Robert ‘Moony’ Goddard of Clark University, Massachusetts, which was supported by the Guggenheims—but when RAND sponsors military-industrial complex nonsense, such nonsense has legs. That is why a potentially useful discipline such as a credible economics of science (one based on the study of optimal returns to entrepreneurs in a real, competitive market) has been forsaken for one based on public subsidies under fictitious ‘perfect’ markets.
Cui bono? Who benefits from this fictitious economics of science? It’s the economists, universities, and defence contractors who benefit, at the taxpayers’ expense.
The power of bad ideas is extraordinary. John Maynard Keynes once wrote that practical men are usually the slaves of some defunct economist, and economists do not come much more defunct than Francis Bacon. His most recent defuncting came in Sir Peter Russell’s 2000 biography of Henry the Navigator, in which Russell showed that the Iberian peninsula at the time of the great voyages of discovery was not a centre of research, only of propaganda claiming falsely to be a centre of research, which had fooled Bacon. Yet however many times Bacon is defuncted, some powerful group emerges to resurrect his false—if superficially attractive—idea that science is a public good. Unfortunately too many people have an interest in so representing science.
I hope that this little essay will be one more stake in that idea’s heart, yet I fear that this particular vampire will continue to pursue anything that smells of money for another four centuries to come.
History Supports Government Funding for Public Health
As a medical historian asked to respond to Professor Kealey’s essay, I was struck by his focus on technology rather than basic science and his lack of discussion about research for human well-being, which, in economic terms, would presumably be the health of the labor force. Labor appears as a given in his descriptions of what makes economies flourish, and perhaps that is because the economist he praises, Adam Smith, and the one he reviles, Francis Bacon, both wrote at a time when medicine had little power to intervene in most aspects of human health. He quotes Smith as recognizing the existence of public goods, those “of such a nature, that the profit could never repay the expense to any individual or small number of individuals.” Research benefitting public health certainly qualifies as a public good, but it is also easier to take public health measures for granted than it is to be grateful that one did not suffer or die from cholera, typhoid, undulant fever, botulism, and lead poisoning, to name just a few.
Components of biomedical research include epidemiology, the determination of the purity and efficacy of drugs, and basic knowledge of how the body works in health and disease that can lead to interventions to prevent disease or treat individuals. Libertarians support government funding of scientific research to benefit defense, but such a position implies that one could separate research that benefits soldiers from that benefitting the rest of the population. This hardly seems possible.
Epidemiological studies related to clean water and sewage management underlie the prevention of many epidemics. John Snow’s 1854 epidemiological demonstration that the Broad Street water pump in London was the focus of that year’s cholera epidemic fueled interest in preventing disease by identifying how it was transmitted a quarter century before Pasteur and Koch demonstrated a single microorganism as the cause of a single disease. Government officials were persuaded to remove the pump’s handle. After the epidemic waned, however, officials were pressured to replace the handle on the pump and reject Snow’s theory, reflecting society’s unwillingness to entertain the idea of fecal-oral transmission of disease. This situation foreshadowed the kind of future tensions—sometimes outright battles—between commercial interests, which prefer to deny the existence of disease that might threaten profits, and governments, which prefer to enforce public health regulations to contain disease spread.
Government-funded drug trials that may slow adoption of candidate therapies are also dismissed as wasteful to taxpayers, but numerous studies have shown that pharmaceutical companies are loath to publish negative findings about potentially profitable drugs even with government oversight. Similarly, research on vaccines that become mandatory may be anathema to many libertarians as limiting freedom of choice, but zero government support would have a devastating effect on the economies of today’s highly concentrated populations, as epidemics would recur and decimate the work force. Producing effective vaccines is extremely difficult and not often cost effective for pharmaceutical companies. For most vaccines, one or a few doses are all a patient needs, and a company may incur sizeable liability for patients who suffer severe side effects. The drugs most worth the millions of dollars of R&D are those that must be taken long term, such as statin drugs, drugs to manage stomach acid, and antidepressants. The current dearth of new antibiotic drugs attests to the choices made by pharmaceutical companies.
Kealey also argues that if government funding for science were halted, “there would also be an armamentarium of private philanthropic funders of university and of foundation science by which non-market, pure research (including on orphan diseases) would be funded.” I counter that this model has already been tried and found wanting, at least as it applies to medical research in the United States. Beginning early in the twentieth century, as leading scientists hoped to exploit the germ theory of infectious disease to save lives, medical research was regarded as an activity that could produce public good, and the private philanthropic sector was indeed first to lend support. In 1904, John D. Rockefeller opened the Rockefeller Institute in New York City; in 1911, the Otho S. A. Sprague Memorial Institute was founded in Chicago. The first experience of U.S. scientists with government-coordinated research came in 1917, when the U.S. Army created a Chemical Warfare Service to fund projects by chemists at universities and other institutions aimed at defending troops from gas attacks. This military research effort played a large role in changing the minds of U.S. scientists about whether government could support peacetime scientific research without impeding their freedom to pursue novel scientific ideas related to the work.
After the war, chemists pressed for the creation of a privately funded institution to conduct chemical research that would benefit medicine. This effort foundered because of many conflicting interests: academic chemists and pharmacologists refused to associate with their industrial colleagues. Existing institutes, such as the Rockefeller and the Sprague, opposed support for a competing institute. Chemists, pharmacologists, and physicians disagreed about which discipline should have administrative control of any institute. Industry was reluctant to commit resources to basic medical research. Eventually, supporters turned to the U.S. Congress, which in a 1930 act expanded and renamed an existing public health laboratory as the National Institute of Health.
After World War II, as Kealey notes, the U.S. government greatly expanded support for science through the National Science Foundation and the grants program of the National Institutes of Health (the NIH became plural with the creation of new institutes in 1948). Kealey’s argument that society would have benefitted more during the last sixty years by leaving money in the pockets of taxpayers than by investing in government-funded science is based on his belief that the private sector would have performed all the basic research needed for the good of the public. I would like to take the example of research that underlay the response to the pandemic of acquired immune deficiency syndrome (AIDS) as refutation of that argument.
A society can respond to epidemic disease only on the basis of medical knowledge it has accumulated by the time the epidemic occurs. In the case of AIDS, the basic medical knowledge rested on the fields of molecular immunology and virology, which had become fruitful research areas in the 1970s. The mechanism by which the AIDS virus destroyed the immune system was not completely understood in 1981, when AIDS was first recognized as a new disease, but molecular immunology provided the mental model via which the disease was defined and initially addressed. Knowledge about human retroviruses was stunningly recent (the first two human retroviruses were definitively demonstrated only in 1979 and 1980), and without their discovery, it is unlikely that physicians would even have considered the possibility that a retrovirus might be the cause of AIDS.
With the serendipity that sometimes happens in basic medical science, much of this knowledge emerged not from infectious disease research but from U.S. government funding for cancer research. A Special Virus Cancer Program begun in the 1960s had sought to identify viruses as a cause of cancer. The program was largely shut down in the mid-1970s after no virus could be conclusively linked to a human cancer. Of course, shortly afterwards, hepatitis B was linked to liver cancer and the human papilloma virus to cervical cancer. Meanwhile, U.S. government-funded cancer research had expanded greatly with the December 1971 enactment of the National Cancer Act. Under the auspices of this legislation, research on human retroviruses continued in National Cancer Institute (NCI) laboratories in Bethesda, Maryland. Every one of the retrovirologists involved in the identification of the human immunodeficiency virus (HIV) as the cause of AIDS either trained directly in, or spent time working with colleagues at, the Bethesda laboratories. Furthermore, a screening program established under the National Cancer Act to test large numbers of compounds for their cancer-fighting potential was repurposed to test candidate drugs against AIDS. Research utilizing this program identified the first drugs with any effectiveness against AIDS—AZT, ddI, and ddC. Could all this work have been produced by privately funded science? Possibly, but given the uncertainty regarding results inherent in basic science and the impetus to pursue only activities with near-term profit possibilities, it is doubtful that medicine could have responded to AIDS as quickly as it did on the basis of the basic knowledge built up through government funding.
State-Funded Science: It’s Worse Than You Think!
Terence Kealey’s insightful essay is likely to provoke a vigorous debate among libertarians on the utility of publicly funded science. He concludes that “the public funding of research has no beneficial effects on the economy.” I will argue that the situation, at least in a prominent environmental science, is worse: the more public money is disbursed, the poorer the quality of the science, and there is a direct cause-and-effect relationship.
This is counter to the reigning myth that science, as a search for pure truth, is ultimately immune from incentivized distortion. In fact, at one time James M. Buchanan clearly stated that he thought science was one of the few areas that was not subject to public choice influences. In his 1985 essay “The Myth of Benevolence,” Buchanan wrote:
Science is a social activity pursued by persons who acknowledge the existence of a nonindividualistic, mutually agreed-on value, namely truth…Science cannot, therefore, be modelled in the contractarian, or exchange, paradigm.
In reality, public choice influences on science are pervasive and enforced through the massive and entrenched bureaucracies of higher education. The point of origin is probably President Franklin Roosevelt’s November 17, 1944 letter to Vannevar Bush, who, as director of the wartime Office of Scientific Research and Development, managed and oversaw the Manhattan Project.
Roosevelt expressed a clear desire to expand the reach of the government far beyond theoretical and applied physics, specifically asking Bush, “What can the Government do now and in the future to aid research activities by public and private organizations.” In response, in July 1945, Bush published Science, The Endless Frontier, in which he explicitly acknowledged Roosevelt’s more inclusive vision, saying,
It is clear from President Roosevelt’s letter that in speaking of science that he had in mind the natural sciences, including biology and medicine…
Bush’s 1945 report explicitly laid the groundwork for the National Science Foundation, the modern incarnation of the National Institutes of Health, and the proliferation of federal science support through various federal agencies. But, instead of employing scientists directly as the Manhattan Project did, Bush proposed disbursing research support to individuals via their academic employers.
Universities saw this as a bonanza and responded by adding substantial overhead charges: a typical public university imposes a 50% surcharge on salaries and fringe benefits, and at private universities the rate can approach 70%.
These fungible funds often support faculty in the many university departments that do not recover all of their costs; thus does the Physics Department often support, say, Germanic Languages. As a result, the universities suddenly became wards of the federal government, in thrall to extensive programmatic funding. The roots of statist “political correctness” lie as much in the economic interests of the academy as they do in the political predilections of the faculty.
As an example, I draw attention to my field of expertise, which is climate change science and policy. The Environmental Protection Agency claims to base its global warming regulations on “sound” science, in which the federal government is virtually the sole provider of research funding. In fact, climate change science and policy is a highly charged political arena, and its $2 billion/year public funding would not exist save for the perception that global warming is very high on the nation’s priority list.
The universities and their federal funders have evolved a codependent relationship. Again, let’s use climate change as an example. Academic scientists recognize that only the federal government provides the significant funds necessary to publish enough original research to gain tenure in the higher levels of academia. Their careers therefore depend on it. Meanwhile, the political support for elected officials who hope to gain from global warming science will go away if science dismisses the issue as unimportant.
The culture of exaggeration and the disincentives to minimize scientific/policy problems are an unintended consequence of the way we now do science, which is itself a direct descendant of Science, The Endless Frontier.
All the disciplines of science with policy implications (and this is by far most of them) compete with each other for finite budgetary resources, resources that are often allocated via various congressional committees, such as those charged with responsibilities for environmental science, technology, or medical research. Thus each of the constituent research communities must demonstrate that its scientific purview is more important to society than those of its colleagues in other disciplines. So, to continue the example, global warming inadvertently competes with cancer research and other fields.
Imagine if a NASA administrator at a congressional hearing, upon being asked if global warming were of sufficient importance to justify a billion dollars in additional funding, replied that it really was an exaggerated issue, and the money should be spent elsewhere on more important problems.
It is a virtual certainty that such a reply would be one of his last acts as administrator.
So, at the end of this hypothetical hearing, having answered in the affirmative (perhaps more like, “hell yes, we can use the money”), the administrator gathers all of his department heads and demands programmatic proposals from each. Will any one of these individuals submit one which states that his department really doesn’t want the funding because the issue is perhaps exaggerated?
It is a virtual certainty that such a reply would be one of his last acts as a department head.
The department heads now turn to their individual scientists, asking for specific proposals on how to put the new monies to use. Who will submit a proposal with the working research hypothesis that climate change isn’t all that important?
It is a virtual certainty that such a proposal would mark his last year as a NASA scientist.
Now that the funding has been established and disbursed, the research is performed under the obviously supported hypotheses (which may largely be stated as “it’s worse than we thought”). When the results are submitted to a peer-reviewed journal, they are going to be reviewed by other scientists who, being prominent in the field of climate change by virtue of their research productivity, are funded by the same process. They have little incentive to block any papers consistent with the worsening hypothesis and every incentive to block one that concludes the opposite.
Can this really be true? After all, what I have sketched here is simply an hypothesis that public choice is fostering a pervasive “it’s worse than we thought” bias in the climate science literature, with the attendant policy distortions that must result from relying upon that literature.
It is an hypothesis that tests easily.
Let us turn to a less highly charged field in applied science to determine how to test the hypothesis of pervasive bias, namely the pedestrian venue of the daily weather forecast.
Short-range weather models and centennial-scale climate models are largely based upon the same physics derived from the six interacting “primitive equations” describing atmospheric motion and thermodynamics. The difference is that, in the weather forecasting models, the initial conditions change, being a simultaneous sample of global atmospheric pressure, temperature, and moisture in three dimensions, measured by ascending weather balloons and, increasingly, by downward-sounding satellites. This takes place twice a day. The “boundary conditions,” such as solar irradiance and the transfer of radiation through the atmosphere, do not change. In a climate model, the base variables are calculated, rather than measured, and the boundary conditions—such as the absorption of infrared radiation in various layers of the atmosphere (the “greenhouse effect”)—change over time.
It is assumed that the weather forecasting model is unbiased—without remaining systematic errors—so that each run, every twelve hours, has an equal probability of predicting, say, that it will be warmer or colder next Friday than the previous run. If this were not the case, the chances of a warmer or a colder forecast would be unequal. In fact, in the developmental process for forecast models, the biases are subtracted out and the output is forced to have a bias of zero and therefore an equal probability of a warmer or colder forecast.
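The bias removal described above can be illustrated with a toy computation. This is a minimal sketch with invented temperature values, not an actual forecast-model calibration procedure:

```python
# Toy illustration of removing systematic error ("bias") from a forecast
# model during development, as described in the text. All numbers here
# are invented for illustration only.

def remove_bias(forecasts, observations):
    """Subtract the mean forecast error so the corrected output is zero-biased."""
    errors = [f - o for f, o in zip(forecasts, observations)]
    bias = sum(errors) / len(errors)           # mean systematic error
    corrected = [f - bias for f in forecasts]  # corrected forecasts
    return corrected, bias

forecasts    = [21.0, 18.5, 25.0, 17.0, 20.5]  # hypothetical model output (deg C)
observations = [20.0, 17.5, 24.0, 16.5, 20.0]  # hypothetical verifying temps

corrected, bias = remove_bias(forecasts, observations)
# After correction the mean error is zero, so warmer and colder misses
# become equally likely -- the "unbiased" property described above.
```

After this correction, any remaining forecast errors are random rather than systematic, which is what gives successive runs an equal chance of trending warmer or colder.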
Similarly, if the initial results are unbiased, successive runs of climate models should have an equal probability of producing centennial forecasts that are warmer or colder than the previous one, or projecting more or less severe climate impacts. It is a fact that the climate change calculated by these models is not a change from current or past conditions, but is the product of subtracting the output of the model run with low greenhouse-gas concentrations from the run with higher ones. Consequently the biasing errors have been subtracted out, a rather intriguing trick. Again, the change is one model minus another, not the standard “predicted minus observed.”
The climate research community actually believes its models are zero-biased. An amicus brief in the landmark Supreme Court case Massachusetts v. EPA, by a number of climate scientists claiming to speak for the larger community, explicitly stated this as fact:
Outcomes may turn out better than our best current prediction, but it is just as possible that environmental and health damages will be more severe than best predictions…
The operative words are “just as possible,” indicating that climate scientists believe they are immune to public choice influences.
This is testable, and I ran such a test, publishing it in an obscure journal, Energy & Environment, in 2008. I, perhaps accurately, hypothesized that a paper severely criticizing the editorial process at Science and Nature, the two most prestigious general science journals worldwide, was not likely to be published in such prominent places.
I examined the 115 articles that had appeared in both of these journals during a 13-month period in 2006 and 2007, classifying them as either “worse than we thought,” “better,” or “neutral or cannot determine.” Of these, 23 were neutral and removed from consideration; 9 were “better” and 83 were “worse.” Under the hypothesis of nonbiased equiprobability, this is equivalent to tossing a coin 92 times and coming up with 9 or fewer heads or tails. The probability that this would occur in an unbiased sample can be calculated from the binomial probability distribution, and the result is striking. There would have to be 100,000,000,000,000,000 iterations of the 92 tosses for there to be merely a 50% chance that one realization of 9 or fewer heads or tails would be observed.
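The binomial arithmetic can be reproduced in a few lines. This is a sketch of the calculation only, using the counts reported above (92 non-neutral articles, 9 in the smaller class):

```python
# Probability that 92 fair coin tosses produce 9 or fewer heads,
# or 9 or fewer tails -- i.e., a split at least as lopsided as
# 9 "better" vs. 83 "worse" under the unbiased (50/50) hypothesis.
from math import comb

n, k = 92, 9
one_tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
two_tail = 2 * one_tail  # lopsided in either direction
# two_tail is on the order of 1e-16: such a split is effectively
# impossible if "better" and "worse" findings were equally likely.
```

The vanishingly small tail probability is what drives the conclusion: under the zero-bias hypothesis, a 9-to-83 split essentially never happens by chance.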
In subsequent work, I recently assembled a much larger sample of the scientific literature and, while the manuscript is in preparation, I can state that my initial result appears to be robust.
Kealey tells us that there is no relationship between the wealth of nations and the amount of money that taxpayers spend on scientific research. In reality, it is in fact “worse than he thought.” At least in a highly politicized field such as global warming science and policy, the more money the public spends, the worse is the quality of the science.
The State Will Always Need Science
I begin my defense of a role for public funding of scientific research from a counterintuitive position: I agree with Terence Kealey in the general outline, if not all the particulars, of his comments about scientific research and public goods. Nevertheless it is insufficient, practically and logically, to say that because scientific research is not a public good, it therefore does not deserve public patronage.
Like Professor Kealey, I want to set aside the work of defunct economists, but more so. Looking to Francis Bacon, or even to Adam Smith, on contemporary research funding is about as helpful as asking Georges Louis Lesage, who demonstrated an early electric telegraph in 1774, to program your iPhone.
The situation with respect to the more contemporary economists upon which Professor Kealey relies is similar. The empiricists among them of necessity look backward into a world that – in terms of the nature of scientific research and development (R&D), its role in the economy, and the importance of various sectors (especially but not limited to information and communication technology) – is neither the world we live in, nor more certainly the world we are going to live in. The assumption that the past is like the present is like the future is spread quite thin, “like butter scraped over too much bread” as the aging Bilbo Baggins puts it.
Further, I share with Professor Kealey a curiosity about who benefits; capitalists who advocate capitalism are just as suspect as scientists who advocate science. Who benefits from the lack of publicly funded research? It is large corporations who have the resources to bear the costs of research, and who also stand to threaten individual rights and liberties as much as or more than government. In this, I am also with Keynes when he writes that the reliance on markets to produce good outcomes is “the astonishing belief that the nastiest motives of the nastiest men somehow or other work for the best results in the best of all possible worlds.” The cui bono is an opening bid, not the final trump.
So why should there be public sponsorship of scientific research? Not because without it, there would be no research, but because without it, there would only be private sponsorship of research. We require non-market ways of sponsoring and setting priorities for R&D, both for supporting the development of private enterprise and for other critical purposes besides.
Even the “minimal state” (in Nozick’s sense) has important knowledge inputs that require it to support scientific research. Analytically, functions like the census and standards of weights and measures, for example, are necessary even for a minimal state, for in determining its citizenry a state needs to enumerate citizens and in being available to enforce and adjudicate contracts it needs to have standards as references. Research in forensics is a necessary function of the fair and precise administration of criminal justice (especially when standards are stringent, like “beyond a reasonable doubt” – a libertarian safeguard). And because there is – both in today’s world and in Kealey’s imagined one – a strong private sector research enterprise that keeps advancing, the government’s activities in these areas will likewise have to become more sophisticated. It is plausible to argue that, even in a minimal state, the government would have to maintain access to research at least as sophisticated as the most sophisticated research being done in the private sector in order to enforce contracts involving that research. Do Apple and Samsung really want a stupid court system?
The case for defense research is obvious, but let me add a less obvious point: To the extent that defense is included in the minimal state, health will be included, and education, as well as such technical areas as meteorology, mathematics, hydrology, materials science, many fields of engineering, and so on, because the well-being, sophistication, and ultimately the superiority of the troops are importantly related to their ability to know the weather, plot a trajectory, ford or even move a river, design armor, and so on. And note that neither the civilian nor defense functions have anything directly to do with increasing GDP.
Historically, we see many of these examples playing out in the early republic, largely supported by both Federalist and Republican factions. While the nascent American state rejected ideas about supporting, say, a national university, it still supported functions that were cutting-edge science in their time. And in this understanding of the historical role of science and the American political economy – which is generally well-documented in A. Hunter Dupree’s Science in the Federal Government: A History of Policies and Activities – is where I disagree perhaps most strongly with Professor Kealey.
The American Century was grounded not on a lack of government support for scientific research, but rather on robust, mission-oriented research in areas of particular importance to the state and the economy. It was grounded on the mapping of ports and harbors and the challenge of geodesy by the Coast and Geodetic Survey; on the exploration and physical and ethnographic mapping of the interior, first by Lewis and Clark and then by the U.S. Geological Survey; and on the addition of wisdom to government by the Library of Congress, the Smithsonian Institution, and the National Academy of Sciences. It was grounded on the expertise dedicated to defense through the Surgeon General, the uniformed Public Health Service, the Army Corps of Engineers and the Army Signal Service’s Meteorological Service; on the public dedication to technical education through the Land Grant Colleges, West Point and the Naval Academy, and all the technological developments from ironclad ships and submersibles to Gatling guns and primitive tanks introduced during the Civil War. (And these examples are only at the national level; states had a large role, especially in geology and higher education.)
This was no research laissez faire.
The Edisons, Wrights, Bells, and Teslas that Professor Kealey rightly lionizes all follow these state-led developments. Even in their stereotypical conception of self-made inventors, they all required the patent system to protect and build their work. Both immigrants, Bell and Tesla brought high-end intellectual capital (Edinburgh and UCL, and the Technical University of Graz, respectively) from Europe and developed it within the infrastructure that the United States had established. Edison, a drop-out, helped establish that infrastructure, but as the inventor of the industrial laboratory, he makes Dupree’s case that “before the rise of the universities, private foundations, and industrial laboratories, the fate of science rested more exclusively with the government than it did later.”
This 19th Century R&D funding did not crowd out private R&D because it was largely infrastructural, and besides, there was little so organized until Edison or later. While there may currently be some crowding out, there are also clear examples to the contrary, e.g., a natural experiment of sorts in 2004 when billions of dollars in corporate funds were repatriated thanks to a tax holiday: firms then spent this money on executive pay and dividends, not on R&D or jobs.
But crowding out is less important than Professor Kealey makes it seem because research differs profoundly in the contexts of sponsorship and performance. The entirety of the empirical literature on research demonstrates that context – profit or not-for-profit; intramural or extramural; mission-oriented or blue-sky – matters for the nature of the work performed. The problem with the over-emphasis on fundamental, exploratory, curiosity-driven research in the academy is not that the private sector would discover the Higgs boson if private investment in high-energy physics were not thwarted by the public expenditures. Rather, there are more socially useful purposes to which high-energy physics money might go. Some of those purposes might be other fundamental research topics (as some physicists claimed), or more mission-oriented or applied research topics, or paying down the debt, or putting money back into the taxpayers’ pockets. Making undifferentiated arguments that giving all public R&D money back to the taxpayers would result in the greatest GDP growth is rather like responding “food” to the question, “what would you like for dinner?” and expecting to have your hunger satisfied with your favorite dishes.
Similarly, as Victoria Harden nicely argues in her posting, the private sector would not bear the costs of many infrastructural R&D projects, particularly public health surveillance, monitoring, and humane interventions. To punctuate her concluding example of HIV/AIDS, one might imagine what the character of the private sector response might have been to AIDS in the early 1980s: Driven by “consumer” fear of contagion and prejudice against homosexuals and immigrants, and absent sound morbidity and mortality data from the Centers for Disease Control and the technocratic but also compassionate mindset of folks like Tony Fauci at the National Institute of Allergy and Infectious Diseases, the response would have been (and nearly was anyway) brutality toward gay men, drug users, and Haitians (and later Africans). Many thousands or tens of thousands more would have died, and not just in the gay community. We would also likely not have seen the historic emancipation – and true libertarian success – of gay people that we have since seen because so many more gay men would be in the ground and not out of the closet.
So, scientific research is not a pure public good. So what? In even a minimal state, there is an important role for a host of research activities related to particular public missions, and 19th century America was full of such research, providing the groundwork for American supremacy in the 20th century. GDP growth, however, is not all that we want out of research. If large positive changes in GDP were all we wanted, we would encourage the wanton destruction of coastal cities by hurricanes, because the GDP growth from rebuilding, which exceeds the value of the lost production, would by definition count as a benefit.
Let’s have a constructive dialogue about priorities within R&D spending, rather than a silly one about zeroing it out across the board.
Replies on Public Goods and Crowding Out
Why Philanthropy Comes Up Short
I am certainly pleased to not be categorically wrong, as Professor Kealey allows. The challenge now is to understand to what extent philanthropic research support might be able to replace public research expenditures. I am not aware of what evidence for crowding out might exist, but I would hazard to say that any such evidence would be stronger in Professor Kealey’s United Kingdom, where there might be real competition between the research councils of the UK and philanthropic giants like The Wellcome Trust and the Leverhulme Trust. In the US, however, the story is otherwise.
Historically, there were virtually no large philanthropies capable of supporting scientific research until the end of the age of the robber barons. So the support of research by the U.S. federal government that I chronicled in my opening response could not have been crowding out anything. This observation leads to an important claim: The kind of scientific work that it is important and legitimate for a government to sponsor is dependent on the nature and state of economic and social development in the country, rather than on some argument about public goods or other economic abstractions. It would not make much sense for the U.S. federal government to spend money in the 21st century on the kinds of infrastructural knowledge projects (mapping the coasts and interior) that occupied a large portion of its 19th century concern, but it probably does make sense for it to do other infrastructural work (e.g., the Internet in the recent past, GPS in the more recent past, perhaps the “Internet of things” in the future) and continue with others, for example, with high-quality weather observation, because such observation requires expensive satellites for which there is currently no complete private capacity to design, build, and safely launch. When such a capacity exists, and NASA is working closely with the private sector on building its launch capacity currently, it might – just might – make sense to spin such activities off to the private sector.
Moreover, private philanthropies – and certainly those at a large enough scale to foster any significant research enterprises – are themselves creatures of the state. Their creation and the policies governing their payouts are determined (historically and functionally) by the tax code and related public laws (many of which many libertarians probably find anathema). Thus, it is utterly unclear to me that in a coherent libertarian state, there actually would be philanthropies like the Carnegie Corporation of New York, the Sloan Foundation, the Ford Foundation, the Gates Foundation, and the relatively small number of large foundations having the wherewithal to support scientific research. I am willing to concede that there would be robust corporate philanthropy, but since Professor Kealey has conceded that the private sector cannot be entrusted to support scientific research in its entirety, and since there is no suggestion that corporate philanthropy is anything more than self-serving, I think this can safely be neglected.
Even if such philanthropies existed in the libertarian state, at what scale might they actually operate? In the United States in 2012, according to “Giving USA 2013,” total charitable giving was roughly $316 billion. Most of these contributions came from individuals, and most of them went to religious institutions—again supported by a tax credit whose very existence might be jeopardized. Foundations provided about $47 billion of these funds and corporations about $19 billion, not all of it, of course, devoted to research. On short notice I’m having trouble finding out precisely what share of philanthropic and corporate giving goes to scientific research rather than, say, human and social services, education, the arts, etc. But one hint is the amount that foundations provide to universities, which in 2010, according to the AAAS R&D budget project, was no more than $4.3 billion (this is the “other” category—other than the federal government, state and local governments, corporations, and universities themselves—which is dominated by, though not exclusively made up of, foundations). Given that most research well-enough established to attract philanthropic attention occurs in universities, it is reasonable to estimate that perhaps 10% of what private philanthropies spend goes to scientific research.
In 2012, the U.S. federal government spent roughly $62 billion on nondefense research. So in order to believe that they would make up for what the government stopped doing, philanthropies would have to increase their spending on research more than tenfold. Now it might be possible that this $62 billion is inflated, and that it would be reasonable to fund less research in addition to funding it differently. But the gap between about $4 billion and more than $60 billion strikes me as too large to fill on a supposition.
Finally, there is the question of whether it is desirable to have private philanthropies take over the role of government in funding scientific research. As I argued in the original posting, the character of research varies among patrons and performers. While I suspect that foundation funding is closer to government funding in its quality than corporate funding (and here I don’t necessarily mean “scientific quality” but rather in terms of pressure for results, freedom to publish, longer time horizons, willingness to support students, etc.), there are real differences. The federal government in the United States pays the indirect as well as the direct costs of research performed on grants—meaning averaged estimates of the additional increments of libraries, physical plant, administration, utilities, etc., required to conduct the research. Foundations often insist on lower or no such “overhead” on their research grants, putting greater financial stress on universities. Foundations’ payout is also determined by income from endowment (usually averaged over three years), and so as was the case in the recent Great Recession, when endowments performed poorly, payouts shrank as well, and even many large foundations cancelled existing commitments and stopped making grants altogether. As with corporate funding, the most desirable way to fund research is not to be caught in the business cycle.
Dialogue among Participants
The Editors are pleased to present a lightly edited e-mail exchange that took place recently among some of the participants.
Victoria Harden: David Guston writes,
[O]ne might imagine what the character of the private sector response might have been to AIDS in the early 1980s: Driven by ‘consumer’ fear of contagion and prejudice against homosexuals and immigrants, and absent sound morbidity and mortality data from the Centers for Disease Control and the technocratic but also compassionate mindset of folks like Tony Fauci at the National Institute of Allergy and Infectious Diseases, the response would have been (and nearly was anyway) brutality toward gay men, drug users, and Haitians (and later Africans). Many thousands or tens of thousands more would have died, and not just in the gay community.
Activists like Larry Kramer of course castigated the Reagan administration for its neglect of the crisis. But one should be careful not to conflate policy decisions in the White House, which affect the NIH grants program, with research done in the NIH intramural program, which has the flexibility to reorient research quickly. Also, political decisions affected medical care for people with AIDS and their civil rights, which are important issues but not directly related to the question of funding for biomedical research. The AIDS activists were critical in maintaining political pressure on Congress for AIDS funding, but they sometimes failed to understand that money alone would not produce a cure or vaccine. If that were true, cancer would have been cured long ago.
Terence Kealey: Private foundations have indeed done work on infectious diseases, even on diseases that carried with them moral disapproval – like syphilis. Howard Florey’s major funder when he was developing penicillin was the Rockefeller Foundation.
Ah ha, I hear you say, but penicillin wasn’t the first antibiotic, Salvarsan 606 was. Quite so, it was. Here is an extract from Wikipedia:
In 1906 Ehrlich became the director of the Georg Speyer House in Frankfurt, a private research foundation affiliated with his institute. Here he discovered in 1909 the first drug to be targeted against a specific pathogen: Salvarsan, a treatment for syphilis, which was at that time one of the most lethal and infectious diseases in Europe.
Victoria Harden: Yes, penicillin was initially developed with private sector funding. Once its value was demonstrated, however, Florey traveled to the United States and got A. N. Richards, head of the Committee on Medical Research of the Office of Scientific Research and Development (OSRD—Vannevar Bush’s wartime U.S. government science entity) to be the central point for information about penicillin throughout the war, and the U.S. War Production Board oversaw collaboration among the network of U.S. companies producing penicillin. For more on this, see Robert Bud, Penicillin: Triumph and Tragedy (Oxford, 2007).
And to get technical, Salvarsan was not an antibiotic; antibiotics are substances produced by or derived from certain fungi, bacteria, or other microorganisms that destroy or inhibit the growth of other microorganisms. Salvarsan was an antimicrobial chemical which Ehrlich found targeted spirochetes. The treatment was long and not pleasant, so many people never completed it. But it was viewed as a “magic bullet,” and scientists continued to search for other antimicrobial drugs. They were unsuccessful until the late 1930s, when the sulfa drugs proved effective against streptococci. All this research was funded by the private sector. World War II marked the advent of significant government funding for science in the United States.
One more story about World War II and the transition to government activity in science: In 1937, Max Theiler at the Rockefeller Institute attenuated a strain of yellow fever virus that was used to develop an effective vaccine. The Rockefeller Foundation began producing the vaccine, and a U.S. Public Health Service physician, Mason Hargett, was sent to the Foundation’s production lab in Brazil to learn the method. While there, he developed a way to make the vaccine without needing human serum to stabilize it (he called it an “aqueous-based” vaccine). After 1943, U.S. troops fell ill with “jaundice” (hepatitis B, though hepatitis viruses had not yet been isolated, so it was not identified as such) following injections of the Rockefeller yellow fever vaccine. When the contamination was traced to the human serum used in production, the U.S. government sent Hargett to the PHS Rocky Mountain Laboratory in Montana (where yellow fever mosquitoes wouldn’t survive the winter if they got loose), and he made the yellow fever vaccine without human serum for the remainder of the war. Afterwards, the government turned production over to the private sector.
Terence Kealey: First, let me thank you for the interesting history that you provided. But I might also just say that you’ve – we’ve – touched a British nerve over penicillin. We Brits think you Americans took advantage of the fact that we were then fighting World War II alone, and that we simply didn’t have the resources to develop penicillin on top of everything else, so you … er … stole it from us (wasn’t there something about patents, I can’t now remember the details?) similar to the way Roosevelt used lend-lease to requisition all our imperial bases.
I expect there’s another, more benign, explanation for the fact that we in Britain ended up paying licence and patent fees to America for decades for the use of penicillin, but if we can’t nourish the odd sense of grievance, what pleasure can we derive from life?
David Guston: Very funny, Terence. What Victoria also leaves out from the above, if I recall my penicillin history correctly, is that when war-time production dropped the cost of the drug significantly, the U.S. NIH was left with residual funds that were supposed to buy penicillin but now could be re-purposed, and thus (in part) was born the grants program at NIH.
Science in Both the Public and the Private Sector
With respect to public funding for biomedical research and its impact on morbidity and mortality, I must take issue with Terence Kealey. In the United States, government funding for science did not expand until after World War II, so its impact must be judged only within the last half century or so. The polio vaccine, which dramatically lowered morbidity and mortality from polio, was wholly developed in the private sector, with the basic research underlying it—successful cultivation of poliovirus strains—occurring in the late 1940s. There is no question that the first polio vaccine and antibiotics were developed without federal funding.
Since that time, however, three other vaccines that have had worldwide impact were developed with U.S. government funding: the rubella vaccine that significantly cut birth defects of babies born to mothers who contracted the disease in early pregnancy, the human papillomavirus (HPV) vaccine against cervical cancer, and the rotavirus vaccine against childhood diarrhea that annually saves hundreds of thousands of lives of children living in the developing world. There are numerous other improvements in human health that have flowed from government funding. I will mention only three more, to avoid sounding like a laundry list: the reduction in heart disease as demonstrated through analyses of the Framingham long-term population study, the dramatic reduction in dental caries as a result of fluoridating civic water supplies, and the Women’s Health Initiative that demonstrated hormone replacement therapy to have heart and stroke risks not previously recognized. For a long list of other publicly funded biomedical research contributions to health, consult the Selected Research Advances of NIH, 1887-2011. In addition, see the basic research contributions funded by NIH as catalogued through receipt of Nobel prizes; the Nobel website will provide details about each award. Although Professor Kealey may see these as mere facts or anecdotes, I view them as data points building a compelling argument for contributions that were highly unlikely to have been produced by private philanthropy or industry.
I want to address the point made by both Patrick J. Michaels and Terence Kealey that scientists were so afraid of having their research controlled by government that they resisted government funding. Indeed, during the legislative process for creating the National Science Foundation and the NIH grants program, scientists resisted government funding until they had negotiated a mechanism for scientists to control decisions about who received the funds—the peer review system. Peer review has its faults, just like democracy, but it has been studied, criticized, and tweaked, as may be seen in the long list of internal and external studies on the Office of NIH History’s website. Scientists may indeed feel compelled to prepare grant proposals in light of whatever biases they perceive in this system. That is no different, however, from preparing grant proposals in light of the perceived biases of industrial or philanthropic funders whose decisions are made by a single individual or by a small committee.
The real question to me is not whether funding for science must be exclusively private or public, but what sort of balance produces an optimal outcome for society. With respect to AIDS, for example, the difference between private and public annual contributions is the difference between the ability to marshal millions of dollars and billions of dollars. And again, as David Guston noted, economic and tax policies that favor industry are, in effect, government support for the entities that produce private philanthropy, and the interests of the private sector cannot be trusted to ensure the greatest good for society. With luck, both private and public institutions will continue to work towards improved human health, and crowding out will not be an issue.
Climate and HIV Science - Why the Different Trajectories?
I continue to ponder the extent to which public funding, in Terence’s words, “has introduced perverse incentives and has damaged the intellectual autonomy of the universities,” and I am pleased that he feels I am “obviously right” about that.
But, I ask, how universally does this apply? My perspective may well be biased by my own experience. My doctorate was in Ecological Climatology (University of Wisconsin, 1979), and I am ABD at the University of Chicago in Biological Sciences (Ecology). Both fields are highly politicized, with a substantial “it’s worse than we thought” leitmotif. One needs only to look at the painfully incomplete summaries of climate science regularly minted by the U.S. Global Change Research Program (USGCRP) or the writings of Stanford’s Paul Ehrlich to appreciate the depth of the bias. But what of other fields?
I mentioned the fact that programmatic science—beginning with the Manhattan Project—has replaced individualistic effort as science became, in many fields, supported by the monopoly provider of the federal government. Anyone who has attended faculty meetings knows that what individuals say privately and what the relevant department says officially are often radically different. Group science has much larger budgets and numbers of individuals to support—and, while it is unfortunate, it is only logical that in the group process individuals will work together to continue that support.
I mention the USGCRP because it is a discrete but large entity whose purview is defined largely by a single issue: climate change. If climate change is not important, neither is the USGCRP, nor its funding ($2.4 billion per year). Chip Knappenberger and I have an upcoming publication on the problems with the USGCRP here.
Again, I ask the question: how correct are my ideas? I would criticize them by noting that the degree to which federal funding biases science or scientific assessments may in fact vary from one subject area to another. While I can provide quantitative data with which to test the hypothesis of bias, and can show that such bias occurs in climate science, I don’t think the same thing happened in biomedical science with the AIDS issue.
AIDS and climate change began in much the same way—with end-of-the-world rhetoric from the research community at the beginning of the funding trajectory, visible on Sunday morning television. But something different from what happened in climate change then evolved. Instead of continuously being “worse than we thought,” research in fact began to bring the disease under some degree of control. The rhetoric changed, and the research community obviously perceived no threat from that.
Why these two issues could begin so similarly but evolve so differently is worth thinking about. My working hypothesis is that the global warming community knows there is no “cure,” only the (obviously ongoing) adaptation that ensues for many environmental fluctuations. In this case the issue remains politically viable (unsolved problems always seem to require more funding), while doomsaying on AIDS (still something that causes great suffering) could not be maintained in the light of scientific progress.
Any comments from the biomedicos out there?