The Case against Public Science

For libertarians, economic growth is the growth that barely speaks its name. That is because conventional opinion asserts that economic growth is the gift of government. Secondary issues such as the efficient distribution of goods and services can, we are assured, be entrusted to the market, but when it comes to the creation of those goods and services in the first place (especially the new goods and services that constitute economic growth) then—sorry dear libertarian—only government can supply those: we are rich today only because that nice Mr Obama and his equally nice predecessors in the White House and in Congress have been gracious enough and foresighted enough to confer that wealth upon us.

The conventional story is thus an awesome story of government largesse and wisdom, and it’s one that the great companies and the great universities and the great economists promote most assiduously. There is only one, small, itsy bitsy teeny weeny problem with it. It’s dead wrong.

The story of the longest-surviving intellectual error in western economic thought started in 1605 when a corrupt English lawyer and politician, Sir Francis Bacon, published his Advancement of Learning. Bacon, who was a man with a preternatural interest in wealth and power, wanted to know how Spain had become the richest and most powerful nation of his day. He concluded that Spain had done so by the exploitation of its American colonies. And how had Spain discovered those colonies? By scientific research: “the West Indies had never been discovered if the use of the mariner’s needle had not been first discovered.”

Scientific research, Bacon explained, was “the true ornament of mankind” because “the benefits inventors confer extend to the whole human race.” But, he wrote, therein lay a problem: the whole human race might benefit from inventions but the whole human race does not reimburse inventors, so invention will not be rewarded by the market. Research, therefore, is a public good that should be supplied by government: “there is not any part of good government more worthy than the further endowment of the world with sound and fruitful knowledge.”

Bacon’s argument was reinforced in our own day by three landmark papers published by three landmark economists, two of whom (Robert Solow and Kenneth Arrow) were to win Nobel prizes, and the third of whom, Richard Nelson, is recognized as being of a similar rank. And we need to look at what the economists—as opposed to the scientists—write, because the economists, being apparently systematic, influence policy more than do the scientists: it’s easy to dismiss scientists for special pleading and for anecdotage, but who can doubt the objective, dispassionate studies of economists?

The contemporary story starts with a 1957 paper by Robert Solow, an empirical study which confirmed that most economic growth in the modern world can indeed be attributed to technical change (as opposed, say, to capital deepening). But the story was given its dirigiste twist by the papers that Nelson and Arrow published in 1959 and 1962 respectively, in which they explained that science is a public good because copying is easier and cheaper than original research: it is easier and cheaper to copy and/or leapfrog the discoveries reported in the journals and in the patent filings (or incorporated in new products) than it was to make the original discoveries. So no private entity will invest in innovation, because its investment will only help its competitors—competitors who, having been spared the costs of the original research, will undercut the original researcher.

The problem with the papers of Nelson and Arrow, however, was that they were theoretical, and one or two troublesome souls, on peering out of their economists’ eyries, noted that in the real world there did seem to be some privately funded research happening—quite a lot of it actually—so the conventional story has since been modified. That change was driven by three more landmark economists: Paul Romer (who focused on industrial research) and Partha Dasgupta and Paul David (who focused on academic research).

In a 1990 paper Paul Romer acknowledged that, in the real world of (i) commercial monopoly enforced by patents, and of (ii) apparent commercial secrecy enforced by commercial discipline, commercial researchers can indeed recoup some of the costs of their original research, so he created a mathematical model by which some original research would be rewarded by the market. Nonetheless, he still assumed that too little industrial science would be thus rewarded: “research has positive external effects. It raises the productivity of all future individuals who do research, but because this benefit is non-excludable, it is not reflected at all in the market price.”

Dasgupta and David in their 1994 paper reviewed the historical development of our universities, research societies, and research conventions, and they acknowledged that such social constructs did indeed foster pure science; but because advances in basic science were too unpredictable for their discoverers to profit from them in the market, such science “is in constant need of shoring up through public patronage.”

So here is the current dogma: scientific research is fundamentally a public good because new ideas, unlike private goods, cannot be monopolized for long, but in practice we can treat research as a merit good (a good that requires only some of its funding from government) because conventions such as patents or industrial secrecy, to say nothing of institutions such as universities and research societies, have evolved to bolster a certain—if inadequate—degree of private funding.

But the difficulty with this new story of science as a merit good is that there is no empirical evidence that research needs any funding from government at all.

The fundamental problem that bedevils the study of the economics of science is that every contemporary actor in the story is parti pris: every contemporary actor who enters the field starts by pre-assuming that governments should fund science. Such actors are either industrialists looking for corporate welfare, or scholars looking to protect their universities’ income, or scientists (who, frankly, will look for money from any and every source—they are shameless), or economists who assume that knowledge is “non-rivalrous” and only “partially excludable” (which are posh ways of saying that copying is cheap and easy).

But no contemporary has ever shown empirically that governments need fund science—the assertion has been made only on theoretical grounds. Remarkably, the one economist who did look at the question empirically found that the evidence showed that governments need not fund science, but his claim has long been ignored, because he was notoriously a libertarian—and libertarians have no traction amongst the scholars, politicians, and corporate welfarists who dominate the field. In 1776, moreover, that economist supported a revolution, so he is not only outdated but was, presumably, subversive of the social order.

Nonetheless, if only out of antiquarian interest, let’s look at what this empiricist reported. The evidence showed, he wrote, that there were three significant sources of new industrial technology. The most important was the factory itself: “A great part of the machines made use of in manufactures … were originally the inventions of common workmen.” The second source of new industrial technology was the factories that made the machines that other factories used: “Many improvements have been made by the ingenuity of the makers of the machines.” The least important source of industrial innovation was academia: “some improvements in machinery have been made by those called philosophers [aka academics].” But our economist noted that that flow of knowledge from academia into industry was dwarfed by the size of the opposite flow of knowledge: “The improvements which, in modern times, have been made in several different parts of philosophy, have not, the greater part of them, been made in universities [i.e., they were made in industry].” Our empiricist concluded, therefore, that governments need not fund science: the market and civil society would provide.

Arguments for the subsidy of so-called public goods, moreover, were dismissed by our libertarian economist with “I have never known much good done by those who have affected to trade for the public good.” In particular, arguments by industrialists for subsidies were dismissed with “people of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public.” And our revolutionary underminer of the social order dismissed the idea that wise investment decisions could be entrusted to politicians, even to that nice Mr Obama, because he distrusted “that insidious and crafty animal, vulgarly called a statesman or politician.”

Our long-dead economist recognized the existence of public goods, which he described as those “of such a nature, that the profit could never repay the expense to any individual or small number of individuals”, but he could not see that scientific research fell into that category.

The economist in question was, of course, Adam Smith, whose Wealth of Nations, from which these quotes were drawn, was published in 1776. And he is indeed long dead. Yet the contemporary empirical evidence supports his contention that governments need not support scientific research. Consider, for example, the lack of historical evidence that government investment in research contributes to economic growth.

The world’s leading nation during the 19th century was the UK, which pioneered the Industrial Revolution. In that era the UK produced scientific as well as technological giants, ranging from Faraday to Kelvin to Darwin—yet it was an era of laissez faire, during which the British government’s systematic support for science was trivial.

The world’s leading nation during the 20th century was the United States, and it too was laissez faire, particularly in science. As late as 1940, fifty years after its GDP per capita had overtaken the UK’s, the U.S. total annual budget for research and development (R&D) was $346 million, of which no less than $265 million was privately funded (including $31 million for university or foundation science). Of the federal and state governments’ R&D budgets, moreover, over $29 million was for agriculture (to address—remember—the United States’ chronic problem of agricultural overproduction) and $26 million was for defence (which is of trivial economic benefit). America, therefore, produced its industrial leadership, as well as its Edisons, Wrights, Bells, and Teslas, under research laissez faire.
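For clarity, here is the arithmetic implied by those 1940 figures. It is only a rough reconstruction from the totals quoted above: the $81 million public total, and the residual "other" category, are inferred from those totals rather than separately reported figures.

$$
\$346\text{m (total U.S. R\&D)} = \$265\text{m (private)} + \$81\text{m (federal and state)},
\qquad
\$81\text{m} \approx \$29\text{m (agriculture)} + \$26\text{m (defence)} + \$26\text{m (other)}.
$$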

Meanwhile the governments in France and Germany poured money into R&D, and though they produced good science, during the 19th century their economies failed even to converge on the UK’s, let alone overtake it as did the US’s. For the 19th and first half of the 20th centuries, the empirical evidence is clear: the industrial nations whose governments invested least in science did best economically—and they didn’t do so badly in science either.

What happened thereafter? War. It was the First World War that persuaded the UK government to fund science, and it was the Second World War that persuaded the U.S. government to follow suit. But it was the Cold War that sustained those governments’ commitment to funding science, and today those governments’ budgets for academic science dwarf those from the private sector; and the effect of this largesse on those nations’ long-term rates of economic growth has been … zero. The long-term rates of economic growth since 1830 for the UK and the United States show no deflections coinciding with the inauguration of significant government money for research (indeed, the rates show few if any deflections in the long term: the long-term rate of economic growth in the lead industrialized nations has been steady at approximately 2 per cent per year for nearly two centuries now, with short-term booms and busts cancelling each other out in the long term).

The contemporary economic evidence, moreover, confirms that the government funding of R&D has no economic benefit. Thus in 2003 the OECD (the Organisation for Economic Co-operation and Development—the industrialized nations’ economic research agency) published its Sources of Economic Growth in OECD Countries, which reviewed all the major measurable factors that might explain the different rates of growth of the 21 leading world economies between 1971 and 1998. And it found that whereas privately funded R&D stimulated economic growth, publicly funded R&D had no impact.

The authors of the report were disconcerted by their own findings. “The negative results for public R&D are surprising,” they wrote. They speculated that publicly funded R&D might crowd out privately funded R&D, which, if true, suggests that publicly funded R&D might actually damage economic growth. Certainly both Walter Park of American University and I had already reported that the OECD data showed that government funding for R&D does indeed crowd out private funding, to the detriment of economic growth. In Park’s words, “the direct effect of public research is weakly negative, as might be the case if public research spending has crowding-out effects which adversely affect private output growth.”

The OECD, Walter Park, and I have therefore—like Adam Smith—tested empirically the model of science as a public or merit good, and we have found it to be wrong: the public funding of research has no beneficial effects on the economy. And the fault in the model lies in one of its fundamental premises, namely that copying other people’s research is cheap and easy. It’s not. Consider industrial technology. When Edwin Mansfield of the University of Pennsylvania examined 48 products that, during the 1970s, had been copied by companies in the chemicals, drugs, electronics, and machinery industries in New England, he found that the costs of copying were on average 65 per cent of the costs of original invention. And the time taken to copy was, on average, 70 per cent of the time taken for the original invention.

Copying is lengthy and expensive because it involves the acquisition of tacit (as opposed to explicit) knowledge. Contrary to myth, people can’t simply read a paper or read a patent or strip down a new product and then copy the innovation. As scholars such as Michael Polanyi (see his classic 1958 book Personal Knowledge) and Harry Collins of Cardiff University (see his well-titled 2010 book Tacit and Explicit Knowledge) have shown, copying new science and technology is not a simple matter of following a blueprint: it requires the copier actually to reproduce the steps taken by the originator. Polanyi’s famous aphorism is “we can know more than we can tell,” and it captures the kernel: in science and technology we always know more (tacitly) than we can tell (explicitly). So in 1971, when Harry Collins studied the spread of a technology called the TEA laser, he discovered that the only scientists who succeeded in copying it were those who had visited laboratories where TEA lasers were already up and running: “no-one to whom I have spoken has succeeded in building a TEA laser using written sources (including blueprints and written reports) as the sole source of information.”

One long-dead person who would have been unsurprised by this modern understanding of tacit knowledge was Adam Smith, who built his theory of economic growth upon it: he explained how the division of labor was central to economic growth because so much expertise is, in modern language, tacit. “This subdivision of employment,” Smith wrote, “improves dexterity and saves time.”

But if it costs specialists 65 per cent of the original costs to copy an innovation, think how much more it would cost non-specialists to copy it. If an average person were plucked off the street to copy a contemporary advance in molecular biology or software, they would need years of immersion before they could do so. And what would that immersion consist of? What, indeed, does the modern researcher do to keep up with the field? Research.

In a 1990 paper with the telling title of “Why Do Firms Do Basic Research With Their Own Money?” Nathan Rosenberg of Stanford University showed that the down payment that potential copiers have to make before they can even begin to copy an innovation is their own prior contribution to the field: only when your own research is credible can you understand the field. And what do credible researchers do? They publish papers and patents that others can read, and they produce goods that others can strip down. These constitute their down payment to the world of copyable science.

So the true costs of copying in a free market are 100 per cent—the 65 per cent costs of direct copying and the initial 35 per cent down payment you have to make to sustain the research capacities and output of the potential copiers. Copyists may not pay the 100 per cent to the person whose work they are copying, but because in toto the cost to them is nonetheless on average 100 per cent, the economists’ argument that copying is free or cheap is thus negated.
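Expressed as a rough accounting identity (a sketch of the argument above; the 35 per cent is simply the balance implied by Mansfield’s 65 per cent average, not an independently measured figure):

$$
\underbrace{65\%}_{\text{direct cost of copying}} + \underbrace{35\%}_{\text{prior research (the down payment)}} \approx \underbrace{100\%}_{\text{true cost of copying}}
$$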

That is why, as scholars from the University of Sussex have shown, some 7 per cent of all industrial R&D worldwide is spent on pure science. This is also why big companies achieve the publication rates of medium-sized universities. Equally, Edwin Mansfield and Zvi Griliches of Harvard have shown by comprehensive surveys that the more that companies invest in pure science, the greater are their profits. If a company fails to invest in pure research, then it will fail to invest in pure researchers—yet it is those researchers who are best qualified to survey the field and to import new knowledge into the company.

And it’s a myth that industrial research is secret. One of humanity’s great advances took place during the 17th century, when scientists created societies like the Royal Society in London to promote the sharing of knowledge. Before then, scientists had published secretly (by notarising discoveries and then hiding them in a lawyer’s or college’s safe, to reveal them only to claim priority when a later researcher made the same discovery) or they had published in code. So Robert Hooke (1635-1703) published his famous law of elasticity as the anagram ceiiinosssttuu, which decodes as ut tensio sic vis (stress is proportional to strain).

Scientists did not initially want to publish fully (they especially wanted to keep their methods secret), but the private benefit of sharing their advances with fellow members of their research societies—the quid pro quo being that the other members were also publishing—so advantaged them over scientists outside the societies (who had no collective store of knowledge on which to draw) that self-interest drove them to share their knowledge with other scientists who had made the same compact. Today those conventions are universal, but they are only conventions; they are not inherent in the activity of research per se.

Industrial scientists have long known that sharing knowledge is useful (why do you think competitor companies cluster?) though anti-trust law can force them to be discreet. So in 1985, reporting on a survey of 100 American companies, Edwin Mansfield found that “[i]nformation concerning the detailed nature and operation of a new product or process generally leaks out within a year.” Actually, it’s not so much leaked as traded: in a survey of eleven American steel companies, Eric von Hippel of MIT’s Sloan School of Management found that ten of them regularly swapped proprietary information with rivals. In an international survey of 102 firms, Thomas Allen (also of Sloan) found that no fewer than 23 per cent of their important innovations came from swapping information with rivals. Industrial science is in practice a collective process of shared knowledge.

And Adam Smith’s contention that academic science is only a trivial contributor to new technology has moreover been confirmed. In two papers published in 1991 and 1998, Mansfield showed that the overwhelming source of new technologies was companies’ own R&D, and that academic research accounted for only 5 per cent of companies’ new sales and only 2 per cent of the savings that could be attributed to new processes. Meanwhile, contemporary studies confirm that there is a vast flow of knowledge from industry into academic science: indeed, if it was ever real, the distinction between pure and applied science is now largely defunct, and Bacon’s so-called “linear model” (by which industrial science feeds off university science) has been discredited by the economists and historians of science.

Something else that would have surprised Smith about current scholarship is the economists’ obsession with monopoly. The economists say that unless an innovator can claim, in perpetuity, 100 per cent of the commercial return on her innovation, she will underinvest in it. But that claim is a perversion born of the modern mathematical modelling of so-called “perfect” markets, which are theoretical fictions that bear no relation to economic reality. In reality, entrepreneurs make their investments in the light of the competition, and their goal is a current edge over their rivals, not some abstract dream of immortal monopoly in fictitious “perfect” markets.

The strongest argument for the government funding of science today is anecdotal: would we have the internet, say, or the Higgs boson, but for government funding? Yet anecdotage ignores crowding out. We wouldn’t have had the generation of electricity but for the private funding of Michael Faraday, and if government funding crowds out the private philanthropic funding of science (and it does, because the funding of pure science is determined primarily by GDP per capita, regardless of government largesse) then the advances we have lost thanks to government funding need a scribe—an omniscient one, because we can’t know what those lost advances were—to write them on the deficit side of the balance sheet. Which is also where the opportunity costs should be written: even if the government funding of science yields some benefit, if the benefit to society of having left that money in the pockets of the taxpayer would have been greater, then the net balance to society is negative.

What would the world look like had governments not funded science? It would look like the UK or the United States did when those countries were the unchallenged superpowers of their day. Most research would be concentrated in industry (from which a steady stream of advances in pure science would emerge), but there would also be an armamentarium of private philanthropic funders of university and foundation science, supporting non-market, pure research (including research on orphan diseases).

And such laissez faire science would be more useful than today’s. Consider the very discipline of the economics of science. The important factor common to Solow, Nelson, and Arrow—the fathers of the modern economics of science—is that all three were associated with the RAND Corporation, which was the crucible of Eisenhower’s military-industrial complex. RAND (i.e., the R&D Corporation) was created in 1946 by the US Army Air Forces in association with the Douglas Aircraft Company as a think tank to justify the government funding of defence and strategic research.

RAND’s initial impetus came from the 1945 report Science, The Endless Frontier, which was written by Vannevar Bush, the director of the federal government’s Office of Scientific Research and Development. Bush argued the Baconian view that the federal government (which had poured funds into R&D during the war) should continue to do so in peace. Bush of course had very strong personal reasons for so arguing, but it was the Cold War and the upcoming space race (Sputnik was launched in 1957) that—incredibly—persuaded economists that the USSR’s publicly funded industrial base would overtake the United States’ unless the United States forswore its attachment to free markets in research.

It was all nonsense of course—Sputnik was based on the research of Robert ‘Moonie’ Goddard of Clark University, Massachusetts, which was supported by the Guggenheims—but when RAND sponsors military-industrial complex nonsense, such nonsense has legs. That is why a potentially useful discipline such as a credible economics of science (one based on the study of optimal returns to entrepreneurs in a real, competitive market) has been forsaken for one based on public subsidies under fictitious ‘perfect’ markets.

Cui bono? Who benefits from this fictitious economics of science? It’s the economists, universities, and defence contractors who benefit, at the taxpayers’ expense.

The power of bad ideas is extraordinary. John Maynard Keynes once wrote that practical men are usually the slaves of some defunct economist, and economists do not come much more defunct than Francis Bacon. His most recent defuncting came in Sir Peter Russell’s 2000 biography of Henry the Navigator, in which Russell showed that the Iberian peninsula at the time of the great voyages of discovery was not a centre of research but only of propaganda falsely claiming it to be one—propaganda that fooled Bacon. Yet however many times Bacon is defuncted, some powerful group emerges to resurrect his false—if superficially attractive—idea that science is a public good. Unfortunately too many people have an interest in so representing science.

I hope that this little essay will be one more stake in that idea’s heart, yet I fear that this particular vampire will continue to pursue anything that smells of money for another four centuries to come.
