About this Issue

Lawrence Lessig’s Code and Other Laws of Cyberspace is widely regarded as one of the foundational texts of Internet law. A book this important will always draw both defenders and critics, and Lessig himself has gone as far as to produce a free, open-source revision of the text, entitled Code Version 2.0. (We’d be remiss if we failed to point out that Lessig helped pioneer the very idea of “open source” distribution.)

In the original Code, Lessig was at pains to distance himself from cyberlibertarians; although he championed a relatively permissive regulatory regime for the Internet, Lessig insisted on the importance of politics in shaping this new area of human action. He warned that without carefully constructed regulations, corporate and other special interests stood to capture the online experience — with results that would be anything but free. His provocative final chapter was entitled “What Declan Doesn’t Get”; it called out journalist Declan McCullagh as just the type of over-optimistic cyberlibertarian who didn’t appreciate these growing threats.

In 2009, Code turns ten years old, and it is just as relevant as ever. It seemed fitting, then, to invite Declan McCullagh to help re-assess what Lessig and others were predicting at the time. We’ve invited Internet law experts Jonathan Zittrain and Adam Thierer to comment as well; each has a somewhat different perspective on the future of the Internet and how best to preserve its free and creative character. And the discussion wouldn’t be complete without Lawrence Lessig himself, who will respond to his critics and offer his own assessment of where things stand, ten years after his remarkable book.

 

Lead Essay

What Larry Didn’t Get

Science fiction fans had William Gibson’s Neuromancer, the archetypal cyberpunk novel. Beginning in the early 1970s, futurists had Alvin Toffler’s Future Shock and The Third Wave.

And ten years ago, in 1999, cyber-law buffs had Stanford University law professor Larry Lessig’s Code and Other Laws of Cyberspace. It’s difficult to overstate the influence that Lessig’s book has had on discussions of regulating technology, especially the Internet and computer software, among academics, activists, programmers, and Silicon Valley types. (It never really caught on in political circles, especially in Washington, D.C.)

Lessig wrote Code at a time of Internet turmoil, when memories of legal disputes over free speech in the form of the Communications Decency Act and political battles over mandatory eavesdropping on electronic communications were fresh. The V-Chip was new; the organization charged with Internet governance, ICANN, was just forming; encryption products without government backdoors faced the threat of a ban; and a college student named Shawn Fanning was about to write a little program called Napster.

Code offered a burgeoning protest movement this unifying theme and philosophy: Code is a form of law and can be a potent force for social liberation or control. How the Internet’s underlying architecture is shaped, Lessig argued, is important enough that lawyers, programmers, and government officials should pay close attention to choices made in software design.

This was both insightful and prescient. Long before a variant of the GNU/Linux operating system called Ubuntu found its way into Best Buy stores, Lessig was advocating the philosophical merits of free software that, if you were sufficiently clever, you could rewrite or expand yourself. He anticipated the problems of expansive copyright law and of wrapping data in layers of cryptographic copy protection, an approach Apple eventually abandoned on iTunes. And he previewed the privacy problems of the FBI tracking mobile phones, something we learned in 2005 that federal agents believe they can do without even a showing of probable cause.

No wonder Code became assigned reading for computer science and law students, and was followed by four more books, including a wiki-assisted and free-to-download revision called Codev2. Along the way, Lessig became a technopolitical celebrity, at one point flirting with a run for a vacant congressional seat.

Few lawyers or programmers would take issue with Lessig’s point that choices in Internet architecture are important. Electronic discussion forums that discourage anonymity possess a different tone than ones that encourage it, and each approach has its tradeoffs. An internal discussion list for expectant mothers at one large Silicon Valley firm strips identifying information from messages, which encourages colleagues to broach awkward topics but also inhibits a sense of community from forming.

That argument holds for traditional architecture too: If parents are designing a house, they may choose to place an infant’s room next to theirs to aid in monitoring. And if a teenage son has a habit of nocturnal adventuring that should be discouraged, his room might be better placed on a top floor rather than by the back door. While a posted speed limit might ask drivers to remain under 25 miles an hour, placing speedbumps on the road might make the rule stick (although at the cost of lives, if ambulances are delayed).

If that were the entirety of Code, this would be a short essay, and Code would be less provocative than it proved to be. But the last chapters of the book go further and claim that the choice of rules is too important to be “left to the market.” Instead, it should be entrusted to politicians and (even better) judges, who are better equipped to confront constitutional questions.

Lessig writes: “Not only can the government take these steps to reassert its power to regulate, but (it) should. Government should push the architecture of the Net to facilitate its regulation, or else it will suffer what can only be described as a loss of sovereignty… We need to be able to make political decisions at the level of the Net. A political judgment needs to be made about the kind of freedom that will be built into the Net.”

These are not exactly libertarian sentiments, and in fact Lessig goes out of his way to assail libertarianism and “policy-making by the invisible hand.” He prefers what probably could be called technocratic philosopher kings, of the breed that Plato’s Republic said would be “best able to guard the laws and institutions of our State — let them be our guardians.” These technocrats would be entrusted with making wise decisions on our behalf, because, according to Lessig, “politics is that process by which we collectively decide how we should live.”

Compare these high ideals to the actual laws that the solons in Washington, D.C. enacted over the last decade. We were blessed with the CAN-SPAM Act, which legalized bulk junk e-mail rather than doing what its name might suggest. A bipartisan majority in Congress approved the Patriot Act, its renewal, and immunity for telecommunications companies that illegally cooperated with the National Security Agency. The Real ID Act became law once it was glued onto an Iraq “emergency” appropriations bill, and Hollywood lobbyists continued to expand copyright law beyond what both liberals and libertarians might prefer. Meanwhile, bills that Lessig and his allies backed on topics such as Net neutrality, orphaned works, and copyright anti-circumvention proved uniformly unsuccessful.

So much for our elected leaders making an informed “political judgment” about “the kind of freedom that will be built into the Net.”

One response might be that the right philosopher-kings have not yet been elevated to the right thrones. But assuming perfection on the part of political systems (especially when sketching plans to expand their influence) is less than compelling. The field of public choice theory has described many forms of government failure, and there’s no obvious reason to exempt Internet regulation from its insights about rent-seeking and regulatory capture.

Meanwhile, as the federal government seized more power over the last decade — derailing the creation of an .xxx top-level domain is another example — Lessig’s predictions of private sector abuses proved to be a bit wide of the mark. Code claimed that commercial firms “will push for a certificate architecture that would enable its own form of control” and that would “enable some forms of state control.” Instead, Microsoft Passport (now Windows Live ID) died an unlamented death. Code fretted that over time, “code writing becomes commercial” and “the product of a smaller number of large companies,” which has not happened yet, and would not obviously pose a threat even if it had.

The last chapter of the 1999 edition of Code is titled “What Declan Doesn’t Get.” Of the arguments I was making at the time, it says: “There is one unifying theme to Declan’s posts: let the Net alone,” and it characterizes my plea as a plea that we “do nothing.” Lessig adds: “Do-nothingism is not an answer; something can and should be done.”

But critiquing bad policy proposals (and there were plenty of them in the late 1990s) is hardly do-nothingism. It’s possible to agree with Lessig that code is important, that architectures promoting liberty are worth defending, and that some government actions are preferable to inaction — while being suspicious of sweeping indictments of “commercial” activities and calls for expansions of government authority.

It may be telling that the updated edition of Code, published seven years after the original, strikes a more conciliatory tone. “I am not a libertarian in the sense Declan is, though I share his skepticism about government,” Lessig writes in the new version. “But we can’t translate skepticism into disengagement. We have a host of choices that will affect how the Internet develops and what values it will embed.”

That’s true enough, of course. But if the experience of the last decade has taught us anything, it’s that Internet companies have proven to be flexible and responsible in crafting code in a way that benefits their users. And we’ve learned that technocratic philosopher-kings in Washington, D.C. are very difficult to find.

Response Essays

How to Get What We All Want

OK, enough with who doesn’t get what. The arguments over cyberlibertarianism sparked by the release of Code aren’t due to gaping ignorance or even dueling ideologies. They’re more about emphasis. It didn’t have to be that way: there’s a separate, straightforward anti-libertarian case that lots of people would want to make for increased government policing of the Internet because of the bad things that can and do take place on it. This week’s example is the “Craigslist killer,” who assaulted people he met through that site. In his wake, several U.S. state attorneys general are pressuring Craigslist to shut down its “erotic services” section. There are hundreds of other examples, not least of which have been the various efforts by the music industry to shut down peer-to-peer technologies and sue users who share copyrighted songs without permission.

The debate between Larry and the libertarians is more subtle. Larry says: I’m with you on the aim — I want to maintain a free Internet, defined roughly as one in which bits can move between people without much scrutiny by the authorities or gatekeeping by private entities. Code’s argument was and is that this state of freedom isn’t self-perpetuating. Sooner or later government will wake up to the possibilities of regulation through code, and where it makes sense to regulate that way, we might give way — especially if it forestalls broader interventions. So, for example, Larry has favored government incentives to private bounty hunters to track down spammers. Declan’s been skeptical, but more because he thinks it won’t work very well. His preferred alternatives are technical measures and … suing spammers. Which, if it’s allowed, seems like another way of saying: a bounty awarded by the state to those who step forward with evidence against the bad guys.

So where do they differ the most? As Declan points out, Lessig sees value in having democratic political systems shape and ratify our technological choices, while the cyberlibertarian might just as soon let chance (which is to say, the market) take its course. On technologies that might allow people to bypass government regulation of content, Larry says:

Of course, my view is that citizens of any democracy should have the freedom to choose what speech they consume. But I would prefer they earn that freedom by demanding it through democratic means than that a technological trick give it to them for free. … If a restriction on liberty is resented by a people, let the people mobilize to remove it. (p. 309)

My guess is that the cyberlibertarian figures the freedom to choose content is worth securing by any means available, and that such freedoms shouldn’t have to be “earned” on a regular basis — that’s what a Bill of Rights is for. But by de-emphasizing the role of government — either because it’s thought to be comparatively powerless on a global Internet (as John Perry Barlow’s stirring Declaration of the Independence of Cyberspace had it in 1996, and to which Code was in part a response) or because it’s thought to be poor at achieving one’s desired outcomes (as Declan’s opener here suggests) — we take on certain risks.

The first risk is that government won’t stay powerless. For example, Code raised the possibility of a “zoned” Internet, one where your location would greatly define what you can and can’t do. If you’re in China or one of dozens of other states, there are Web sites you can’t access — and increasingly the sites themselves are cooperating with such restrictions. Thailand blocks all of YouTube over videos that mock its king, and then, to earn an unblocking, YouTube cooperates with the government to help prevent those videos from reaching Thai citizens — while leaving them available to everyone else. That some people with enough technical skill and determination can evade these blocks doesn’t do much for the vast majority who shrug and move on to other, safer content.

The second risk is that abandonment of the political arena in favor of technical means to achieve liberty cedes too much. Larry is under few illusions about how easy it is for the voices of regular citizens to be heard even by democratic governments — this is the guy who announced he would shift his intellectual efforts away from cyberlaw and towards confronting the perfectly legal corruption that has broken our political system, where the flow of even modest amounts of money results in poor and even reckless policies. Here, too, though, the differences are smaller than they might appear, since, as Declan points out, cyberlibertarians are among the first to critique bad policy proposals. (That they may be inclined to think that all policy proposals are likely bad doesn’t have to matter.) But if skepticism slides to a confident disengagement, decisions emanating from the political arena have fewer checks on them, the public at large isn’t exposed to libertarian arguments, and the means of intervening in people’s activities can be through the very companies in whom Declan places his trust.

That’s where I worry about today’s emerging technology environment. It may feel free and diverse and responsive to consumers — I too love the iPhone and Kindle and cloud app platforms like that of Facebook. But these platforms are constructed to privilege their vendors in deciding what code will run on them. I think we can get locked into these platforms as we (rightly, unfortunately) fear the wildness of the open Internet and general-purpose PC, and as we shift and accumulate more and more of our data and relationships there. After the markets coalesce around these tamer gated communities, governments can later come along and insist that these platforms be tuned towards surveillance and control far more successfully than the wilder Internet that preceded them. Thus, as Declan reported when he broke the story, cell phone mics can be used as eavesdropping tools. Our car GPS systems can be made to quietly relay everything said in the car to the authorities. And the appliances we buy for our homes can be disabled at a distance if they’re later found to be contraband. This is the future of the Internet that I want to stop, and it’s small solace that geeks can avoid it for themselves if they can’t easily bring everyone else with them.

Market-driven firms that respond to consumer demand and democratic governments that respond to voters (and campaign contributions) are not the only way to reflect our aspirations. What has made the Internet special is that it is a civic technology. By “civic” I mean its success has depended on an astounding amount of goodwill and cooperation, phenomena not completely accounted for by markets and regulations. Routers help get data to its destination by sharing what they know about what’s nearby with other routers. If just one participant in this dance chooses to lie — as one Pakistani ISP did about YouTube’s address in an attempt to filter YouTube in that country — the entire system can unravel. In that case, YouTube ended up blocked around the world. What brought it back was not anything Google or YouTube did, but quick reaction by mid-level employees at ISPs who themselves informally share information about the Internet’s health on obscure lists like NANOG.
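
A minimal sketch may make the routing point concrete. Routers generally forward traffic along the most specific prefix they have heard advertised, so a single false, narrower announcement can beat the legitimate one everywhere it propagates, which is roughly what happened in the Pakistan/YouTube incident. The Python below is illustrative only; it is not anyone’s actual router software, and the prefixes are simplified from public accounts of the 2008 event.

```python
import ipaddress

# Advertised routes, keyed by prefix. The /24 mimics the bogus, more specific
# announcement from the 2008 YouTube incident; the /22 is the legitimate block.
ROUTES = {
    "208.65.152.0/22": "YouTube (legitimate origin)",
    "208.65.153.0/24": "hijacker (bogus origin)",
}

def pick_route(destination: str) -> str:
    """Return the origin of the most specific prefix covering the destination."""
    dest = ipaddress.ip_address(destination)
    matching = [ipaddress.ip_network(p) for p in ROUTES if dest in ipaddress.ip_network(p)]
    best = max(matching, key=lambda net: net.prefixlen)  # longest prefix wins
    return ROUTES[str(best)]

# Any address inside the narrower bogus prefix is routed to the hijacker,
# even though the legitimate, wider announcement is still present.
print(pick_route("208.65.153.238"))
```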

So, too, has Wikipedia succeeded as a civic technology: it has more editors cooperating to deal with vandalism and other problems than there are people (and bots) creating them. Moreover, Wikipedia licenses all its content so that anyone can walk away with a copy of the whole encyclopedia and start a competing one at any time. Those who see Wikipedia governance as corrupt can take everyone’s ball and start anew. These enterprises are not only made possible by civic arrangements among strangers, but they give hope that people can come together for civic purposes in realspace, at a time when our social fabric is fraying. I look to projects like the unlikely CouchSurfing, or the revival of hitchhiking through, yes, Craigslist (wisely called “ride sharing” instead), as ways in which technology can cultivate new social connections. As they become more popular, they will need to continually evolve civic defense tech and social practices to deal with the bad actors who inevitably show up. These practices aren’t exactly “market,” since they don’t involve the exchange of cash; rather, they are the mutual reinforcement and implementation of goodwill.

In that sense, I get the limitations both of traditional regulation and of the classical firm-based market in producing some of the platforms we’ve come to hold dear, and in dealing with some of the problems that come up within them. That’s why I’m part of efforts to forge technologies that can help a critical mass of people contribute to some of the Net’s most pressing problems. Civic technologies seek to integrate a respect for individual freedom and action with the power of cooperation. Too often libertarians focus solely on personal freedoms rather than the serious responsibilities we can undertake together to help retain them, while others turn too soon to government regulation to preserve our values. I don’t think .gov and .com never work. We too easily underestimate the possibilities of .org — the roles we can play as netizens rather than merely as voters or consumers.

Code, Pessimism, and the Illusion of “Perfect Control”

The problem with peddling tales of a pending techno-apocalypse is that, at some point, you may have to account for your prophecies — or false prophecies as the case may be. Hence, the problem for Lawrence Lessig ten years after the publication of his seminal book, Code and Other Laws of Cyberspace.

In Code, Lessig painted an extraordinarily gloomy picture of the unfolding digital age. Early cyber-theorists such as Ithiel de Sola Pool, John Perry Barlow, George Gilder, and Nicholas Negroponte had foretold a world in which the invisible hand of code would generally be an agent of empowerment and liberation. Lessig, by contrast, viewed code as an agent of control: the prime regulator of our modern digital ecosystem. “Left to itself,” he warned, “cyberspace will become a perfect tool of control.”

Luckily for us, Lessig’s lugubrious predictions proved largely unwarranted. Code has not become the great regulator of markets or enslaver of man; it has been a liberator of both. Indeed, the story of the past digital decade has been the exact opposite of the one Lessig envisioned in Code. Cyberspace has proven far more difficult to “control” or regulate than any of us ever imagined. More importantly, the volume and pace of technological innovation we have witnessed over the past decade have been nothing short of stunning.

Had there been anything to Lessig’s “code-is-law” theory, AOL’s walled-garden model would still be the dominant web paradigm instead of search, social networking, blogs, and wikis. Instead, AOL — a company Lessig spent a great deal of time fretting over in Code — was forced to tear down those walls years ago in an effort to retain customers, and now Time Warner is spinning it off entirely. Not only are walled gardens dead, but just about every proprietary digital system is quickly cracked open and modified or challenged by open source and free-to-the-world Web 2.0 alternatives. How can this be the case if, as Lessig predicted, unregulated code creates a world of “perfect control”?

Similarly, Lessig forecast a world in which “trusted systems” (like Digital Rights Management) would give copyright holders “far more [protection] than the law did.” Ten years later, peer-to-peer piracy is rampant, DRM and micropayment schemes have failed miserably, and content creators are forced to put all their content online at ever-lower prices, if they can charge anything at all. Again, so much for code spawning “perfect control.”

Most ominously, Lessig warned of companies pushing “architectures of identity” (such as digital certificates) upon us as a condition of commerce, thus requiring the surrender of anonymity and privacy. Meanwhile, back in the real world, “technologies of evasion” continue to proliferate and anonymity remains the online norm, leaving both the private and public sectors struggling to cope with a variety of thorny online problems: spam, viruses, online harassment, copyright piracy, and so on. Some might even claim that “perfect control” has instead become something more akin to “perfect anarchy,” although I wouldn’t go quite that far.

So why have Lessig’s predictions proven so off the mark? Lessig failed to appreciate that markets are evolutionary and dynamic, and when those markets are built upon code, the pace and nature of change becomes unrelenting and utterly unpredictable. With the exception of some of the problems identified above, a largely unfettered cyberspace has left digital denizens better off in terms of the information they can access as well as the goods and services from which they can choose. Oh, and did I mention it’s all pretty much free-of-charge? Say what you want about our cyber-existence, but you can’t argue with the price!

In the preface to the second edition of Code, Lessig admits things haven’t turned out quite as miserably as he predicted they would, yet he quickly reassumes his skunk-at-the-cyber-libertarian-garden-party posture, proclaiming: “I concede that some of the predictions made there have not come to pass — yet. But I am more confident today than I was then.” More confident? Can he muster any evidence to support that assertion? I suppose we’ll have to wait another decade or so to see if Lessig’s continuing cyber-pessimism is warranted, but I remain an unrepentant techno-optimist — and, at least so far, I generally have history on my side.

Of course, if things do turn out badly, one wonders if Prof. Lessig won’t be partially to blame for inviting regulators in to play a much greater role in policing cyberspace. The central paradox of Code is that Lessig goes to great pains to prove how “regulable” cyberspace is, but it is he who would make it so through new government regulation. Lessig spends so much time trying to prove that “code is law” that he seems utterly oblivious to the fact that “law is law,” too, and has a much greater impact in shaping markets and human behavior. Yes, private code can also help shape things, but to nowhere near the extent that government force can. And when code does shape market trends, or even market power, those developments typically prove fleeting as fickle cyber-citizens “vote with their feet” — or keyboards, as the case may be. With code, escape is possible. Law, by contrast, tends to lock in and limit; spontaneous evolution is supplanted by the stagnation of top-down, one-size-fits-all regulatory schemes.

In his lead essay in this debate, Declan McCullagh cites several examples of regulatory schemes gone awry and correctly notes that the true danger of Code lies in Lessig’s apparent preference for rule by “technocratic philosopher kings.” That vividly comes through in Lessig’s chapter on speech regulation (greatly expanded in Code Version 2.0), in which he outlines how the government might mandate changes in both code and web browser functionality to label and then censor online content deemed “harmful to minors.” Lessig doesn’t bother explaining how that will be defined, apparently preferring to leave those devilish details to his technocratic philosopher kings. Regardless, again, this isn’t “code-as-law;” this is code being mandated by law (or “law-as-code”).

In the years since the book’s release, Lessig has attempted to distance himself from some of his old positions and even tried to cast himself as a cyber-libertarian at times — suggesting at one point that we should “blow up the FCC.” Alas, Lessig is no libertarian convert. He has continued to show an affinity for government intervention in certain contexts, and even when he toys with FCC abolition, he quickly follows up with a replacement plan modeled on the Environmental Protection Agency!

This brings me to what I believe is the most important impact of Code: the philosophical movement it has spawned. As Declan noted in his opening essay, Code “offered a burgeoning protest movement [a] unifying theme and philosophy” in that it was both a polemic against cyber-libertarianism and a sort of call-to-arms for cyber-collectivism. It gave this movement its central operating principle: Code and cyberspace can be bent to the will of the collective, and it often must be if we are to avoid any number of impending disasters brought on by those nefarious (or just plain incompetent) folks in corporate America. Led by a gifted, prolific set of disciples such as Jonathan Zittrain and Tim Wu, as well as increasingly influential activist groups such as Public Knowledge and Free Press, Lessig’s cyber-collectivists continue to preach skepticism regarding markets and property rights, and a general openness to — and frequent embrace of — government solutions to digital-era dilemmas.

Zittrain’s The Future of the Internet and How to Stop It is the most recent exposition of Lessigite techno-pessimism and illustrates just how pervasive Code remains among leading cyberlaw scholars and Internet activists. We are witnessing, Zittrain says, the victory of “sterile and tethered” digital technologies and networks over the more “open and generative” devices and systems of the past. The iPhone and TiVo are cast as villains in Zittrain’s drama since they apparently represent the latest manifestations of Lessig’s “perfect control” paranoia. Similarly, a Public Knowledge analyst recently likened Apple’s management of applications in its iPhone App Store to the tyranny of Orwell’s 1984. In both cases, Lessig’s influence is clear: Privately managed code is the real Big Brother that we should fear.

I’ve challenged these views repeatedly and noted, most simply, that no one puts a gun to your head and forces you to buy any of these devices or applications. Zittrain warns that “we can get locked into these platforms,” but was Steve Jobs subliminally programming him to go to the Apple Store and shell out good money for the iPhone that Zittrain now openly declares his love for? And comparing Apple to Big Brother ignores the critical distinction between private persuasion and public power. The one billion downloads of over 35,000 Apple iPhone applications in just nine months should be welcomed as a sign of healthy innovation and consumer choice, not an Orwellian nightmare.

Indeed, despite all this hand-wringing by the Lessigites, there exists a diverse spectrum of innovative digital alternatives from which to choose. Do you want wide-open, tinker-friendly devices, sites, or software? You got it. Do you want a more closed, simple, and safe online experience? You can have that, too. And there are plenty of choices in between. It sounds more like “perfect competition” than “perfect control” to me. Of course, one need not believe that the markets in code are “perfectly competitive” to accept that they are “competitive enough” — or at least, better than regulatory alternatives. That is the critical distinction between cyber-libertarians and Lessig’s cyber-collectivists.

Regardless, whether or not some of us care to admit it, Prof. Lessig and his movement are winning the battle of ideas on the cyber-front today. We have Code to thank — or blame — for that.

Continuing the Work of Code

I wrote Code to explain an academic insight. Writing Code launched me on an activist project.

The insight was a reminder (for as I said in the book, of course the point had been made throughout history): More than law regulates. And that if we find ourselves in a particularly happy moment — when the liberty and prosperity of the time make us wish that things as they are might always be — we need to remember that it’s not just law that can muck things up. John Stuart Mill was not just worried about Parliament in On Liberty. He was more worried about British norms that stifled dissent. Stanford Professor — and Reagan’s Assistant Attorney General for Antitrust — William Baxter was not just worried about backward regulation at the FCC. When he launched his effort to break up AT&T, he was also worried about market power that was stifling competition in telecommunications. French revolutionaries in the mid-19th century were not just worried about stupid edicts from a failing emperor — indeed, they thrived on such silliness. What worried them more was that Napoleon III had rebuilt Paris with wide boulevards and multiple passages, making it very difficult for them to bring the city to a standstill. What each of these actors recognized was the first point of Code: Again, that more than law regulates.

That point led to a second: That if we’re to preserve a state of liberty, we need to worry about much more than bad law. No doubt, laws might be changed to take away a liberty (think: the USA-PATRIOT Act). But so too, norms might change to make dissent costly (think: the Dixie Chicks). Markets could become concentrated, reducing the opportunity for innovation (think about the extraordinary re-concentration in telecom access to the Internet). And architecture, or “code” could change, to take away a freedom that too many had taken for granted (think: do you really know who knows what about where you go on the Internet?).

Point two then led to a final point three: That for the Internet, we (circa 1999) were paying plenty of attention to changes in law. We were not paying enough attention to changes in code. And indeed, for obvious reasons, those who controlled much of the code (what I unhelpfully called “commerce”) circa 1999 had plenty of reasons to change that code in ways that better enabled their own control, and as a byproduct (whether intended or not), control by the government. As I wrote, “Commerce, like government, fares better in a well-regulated world. Commerce would, whether directly or indirectly, help supply resources to build a well-regulated world.” (p. xiii)

When Code was published, my biggest fear was that these points were too obvious. That ten years later writers of the prominence of Adam Thierer still don’t get them at least gives me confidence that the effort was not unnecessary. (More on Adam later.) But it is clear enough from Declan’s essay that he gets all this. And any differences between his intelligent and fair criticism of Code and my own current view are very small. Indeed, had the Editors of Cato Unbound not insisted I respond, I would have been happy to let his restatement stand as my own argument, if only to let the point be understood by a broader and important community.

But if I must quibble, let me focus on the charge that I am Plato in disguise (or more accurately, a Plato-wannabe): That I “prefer,” as Declan puts it, “what probably could be called technocratic philosopher kings.”

This isn’t right. No doubt, Code argues that democratic government must take responsibility for the liberty that the Net preserves. By that I mean simply that democratic government can’t assume that the other regulators have liberty in their objective function. Corporations (at least public corporations) have maximizing shareholder value as their objective function — even corporations that promise not to be “evil.” Norms (thankfully) have no central command. And architectures (or codes) get built just as those who build them want — so the proprietary part built by commerce will follow commerce’s objective function, and the free software part built by hackers and the like will follow the norms of their own community. Thus, a liberty-loving democratic government must at least monitor to assure that the mix of regulating modalities doesn’t weaken liberty, or at least the liberty the democracy wants.

But Code grinds on endlessly about the limits in our current forms of democratic government. It explicitly argued that courts could not take this responsibility: “If there are decisions about where we should go, and choices about the values this space will include, then these are choices we can’t expect our courts to make.” (p. 319) And it emphatically concurred with the well-justified skepticism about democratic government more generally. As I wrote:

“[W]e are weary of governments. We are profoundly skeptical about the product of democratic politics. We believe, rightly or not, that these processes have been captured by special interests more concerned with individual than collective values. Although we believe that there is a role for collective judgments, we are repulsed by the idea of placing the design of something as important as the Internet into the hands of governments.” (p. 321)

But even though I shared then (and because I’ve continued to read Declan since then, I believe even more today) the view that governments as we know them are hopeless, I believed then (and even more today), that we can’t simply give up on making government work. That indeed, to allow the corruption that is government to continue would be catastrophic.

In the decade since Code, nobody makes the catastrophic point more effectively than Zittrain (The Future of the Internet and How to Stop It). The real hole in Code was the mechanism: I said there was an obvious union of interests between commerce and government. Neither liked the relative anonymity of the Internet circa 1995. Both would have an interest in layering onto the Net technologies that would make it easier to know who did what when. I was wrong about the particulars of those technologies. Digital certificates have not become ubiquitous. Certifying infrastructure is still crude.

But you’d have to be willfully oblivious to how the Net has changed not to concede that the general point was right: There were no ubiquitous and cheap technologies for deep packet sniffing in 1995. There are today. There was no simple way to identify with any confidence where in physical space someone was when someone was on the Internet. That technology is literally free (as in free speech) on the Internet today. There was no widely adopted infrastructure for tracing who did what in 1995. But the potential today, through sophisticated cookie deployment, for tracing your every move is almost unlimited, and we have no clue about the actual data-sharing agreements between the commercial entities that now “give” us the Internet.
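
To illustrate the tracing point, rather than to assert how any particular company’s systems work, here is a hypothetical sketch of the mechanism: once many otherwise unrelated sites embed the same third party, a single persistent cookie identifier is enough to assemble a cross-site profile.

```python
import uuid

class ToyTracker:
    """Hypothetical third-party tracker; not modeled on any real ad network."""

    def __init__(self):
        self.profiles = {}  # cookie id -> list of (site, page) visits

    def on_request(self, cookie_id, site, page):
        # First visit: hand the browser a persistent identifier.
        if cookie_id is None:
            cookie_id = str(uuid.uuid4())
        # Every later visit to any embedding site lands in the same profile.
        self.profiles.setdefault(cookie_id, []).append((site, page))
        return cookie_id  # the browser stores this and sends it back next time

tracker = ToyTracker()
cid = tracker.on_request(None, "news.example", "/politics")
cid = tracker.on_request(cid, "shop.example", "/cart")
cid = tracker.on_request(cid, "health.example", "/symptoms")
print(tracker.profiles[cid])  # one profile spanning three unrelated sites
```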

All of these technologies (and many others as well) have been built by “commerce.” None has been developed by the government. They all have been built to serve legitimate commercial ends. But the argument of Code was that the unintended consequence of this “new and improved” Internet (from the perspective of commerce at least) would be newly empowered governments. And governments, as Zittrain rightly shows, are slowly coming to recognize this valuable gift.

Here again, the examples are endless. We could point with self-righteousness to Yahoo! France, where a court pointed to commercially developed IP-mapping technology as a justification for a rule that required Yahoo! to filter auctions on the basis of the user’s geo-location. Or we could point to Cisco routers, said to be used by the Chinese government to better control speech in China. Or we could practice a rare bit of domestic humility, and recall that just last year the U.S. government granted telecom providers like AT&T immunity for using privately developed technology to give the U.S. government after 9/11 an almost unimaginable ability to spy on the Internet — at least, unimaginable in 1995.

So I do stand with the core argument of Code, as any non-ideologue should: The Net is massively more regulable (as I put it) in 2008 than it was in 1995 — at least for, as I also put it, the “bovine” among us. For this is the crucial qualification that many careless readers of Code overlook. As I wrote:

A fundamental principle of bovinity is operating here and elsewhere. Tiny controls, consistently enforced, are enough to direct very large animals. The controls of a certificate-rich Internet are tiny, I agree. But we are large animals. I think it is as likely that the majority of people would resist these small but efficient regulators of the Net as it is that cows would resist wire fences. This is who we are, and this is why these regulations work. (p. 73)

Substitute the current array of Net-based control technologies for “certificate-rich Internet,” and the same point is true: Whether hackers can crack the iPhone or not, the vast majority of the Internet lives life as the code permits.

But what that core argument missed — the “hole” as I put it above — was the mechanism that might move us from a better-enabled-surveillance Internet to the “dark” picture that Code warns of. And the clue is the non-bovine part of the Internet. As I have written elsewhere, we cheerleaders for the Internet have not done enough to make salient the real dangers non-bovinity presents. No doubt, great innovation comes from the non-bovine. But the point we cheerleaders suppressed was that not all great innovation is good — think spam, zombie-bots, or malware more generally.

This is the core anxiety that drives Zittrain’s fantastic book. That as the malicious on the Internet catches up with the good, it creates a new threat to liberty. At some point, Zittrain convincingly argues, the Net will face a catastrophic event caused directly or indirectly by these bad actors. That catastrophe will give the government the political will it needs to radically remake the Internet. Think about the USA-PATRIOT Act, and how it remade civil rights in America. Now think about an iPatriot Act, fundamentally remaking cyber-liberties. As I have written elsewhere, I once asked a senior government official whether there was an iPatriot Act in waiting. “Of course there is,” he told me, “and Vint Cerf is not going to like it very much.”

There is more reason today to worry about what the Net might become than there was in 1999, because there’s less reason today to trust longstanding norms of liberty or the rule of law than there was in 1999. Who would have predicted who we would become after 9/11? Who in the face of that reality could believe the Internet is any more secure? And “worry” not in the sense that we need to believe it will happen. But just in the sense that a parent worries about a car accident every time she buckles her toddler into a car seat — knowing not that an accident will happen, but just that if it does, it would have been inexcusable to have been so careless with the gift that that child is. So too do I believe about the liberty that the Internet has given us.

So of course there is reason galore for being skeptical about democratic government doing anything sensible to protect that liberty. Of course, there is an endless (and apparently never ending) list of useless (and worse) legislation that our government has enacted and threatens to enact. But the fact that there is no surgeon on hand does not mean your exploding appendix is any less dangerous. And likewise, the fact that Declan and I can agree about the uselessness of the current corruption we call government does not mean that there is no need for democracy to take responsibility for the liberty of the Internet and preserve it. As Zittrain notes, it is for that reason that I have shifted my own work away from the IP scholarship (in both the Internet Protocol and Intellectual Property senses of that term) of the last 15 years, and the activism of the last decade, to devote as much effort as I can to reforming the corruption that is our government.

One last note about Adam: I’ve argued that things aren’t quite as simple as some libertarians would suggest. That there’s not just bad law. There’s bad code. That we don’t need to worry just about Mussolini. We also need to worry about DRM or the code AT&T deploys to help the government spy upon users. That public threats to liberty can be complemented by private threats to liberty. And that the libertarian must be focused on both.

Adam translates my concern about private threats to liberty into “collectivism.” As if only Karl Marx could be concerned with the code commerce develops that might help government regulate more fiercely, or as if the Free Culture movement should be spending more time understanding the collected works of Lenin.

I still don’t buy the equation.

Collectivists want to limit individual liberty. The Free Culture movement wants to expand it. EFF fights DRM to advance its important civil liberties agenda. Neither it, nor I, aspire to a world where any “collective” gets to say how I or anyone else gets to deploy our liberty.

Likewise, I still reject the “since only law has the death penalty, we only need to worry about law’s effect on liberty” argument for giving private threats to liberty a free pass. Adam writes, “Lessig spends so much time trying to prove that ‘code is law’ that he seems utterly oblivious to the fact that ‘law is law,’ too, and has a much greater impact in shaping markets and human behavior.”

Of course, law is law. Who could be oblivious to that? And who would need a book to explain it? But the fact that “law is law” does not imply that it has a “much greater impact in shaping markets and human behavior.” Sometimes it does — especially when that “law” is delivered by a B1 bomber. But ask the RIAA whether it is law or code that is having a “greater impact in shaping markets” for music. Or ask the makers of Second Life whether the citizens of that space find themselves more constrained by the commercial code of their geo-jurisdiction or by the fact that the software code of Second Life doesn’t permit you simply to walk away (so to speak) with another person’s scepter. Whether and when law is more effective than code is an empirical matter — something to be studied, and considered, not dismissed by banalities spruced up with italics.

Finally, Adam spends an enormous chunk of his reply arguing that Code was wrong because some of the darker predictions I made in the book haven’t come true. “Trusted systems” aren’t everywhere (or they effectively are, but put that detail aside). DRM is dissolving (at least with music). And “just about every proprietary digital system is quickly cracked open and modified or challenged by open source and free-to-the-world Web 2.0 alternatives.”

But this is just to belittle the extraordinary work done by a world of activists who have over the past decade (and sometimes more) fought against these technologies of control. Replay the last decade without the EFF, the Free Software Foundation, Public Knowledge, the Open Rights Group, EPIC, and the ACLU and the story today would have been quite different. Certainly Code was not a prediction that these important groups would fail. Indeed, quite the opposite: It was the argument that their activism, and the activism of many others, was necessary if the liberty of the Internet circa 1995 was to survive.

I am very happy that these activists have proven the darkest predictions of Code wrong — so far. I am even more happy to see how many accept the basic framework of Code, and the insights and attention it demands. These insights were not mine. Mitch Kapor, William Mitchell, and Joel Reidenberg had all published points similar to mine before Code was published (as Code acknowledged). And the power of these insights has moved well beyond my work (again, see Zittrain).

But I would be happier still if we could move beyond red-baiting, and focus on a large number of difficult questions that remain. Questions not just about how to preserve the liberty of the Net against a host of threats, both public and private, but also about how to preserve the liberty of society and the Net against the ever-expanding harm caused by the captured corruption that we call democratic government.

[Editors’ note: The full text of Lessig’s Code Version 2.0 is available as a free download here. Page numbers refer to this edition.]

The Conversation

Our Conflict of Cyber-Visions

In his response to my critique of Code, Prof. Lessig attacks my reasoning on two primary grounds:

(1) First, he implies that I somehow fail to comprehend Code’s central thesis that (a) “more than law regulates” and that (b) “those who controlled much of the code… had plenty of reasons to change that code in ways that better enabled their own control, and as a byproduct (whether intended or not), control by the government.”

(2) Second, he takes issue with the “cyber-collectivism” label I have used to describe the philosophical paradigm that Code and his thinking have spawned.

I will address each in turn.

Code Failures vs. Market Failures

Prof. Lessig imagines that I just haven’t quite absorbed his digital didacticism. To the contrary, I understand the central teachings of Code perfectly well; it’s just that I don’t entirely accept them.

But let’s be clear about something: Cyber-libertarians are not oblivious to the problems Lessig raises regarding “bad code,” or what might even be thought of as “code failures.” In fact, when I wake up each day and scan TechMeme and my RSS reader to peruse the digital news of the day, I am always struck by the countless mini-market failures I am witnessing. I think to myself, for example: “Wow, look at the bone-headed move Facebook just made on privacy! Ugh, look at the silliness Sony is up to with rootkits! Geez, does Google really want to do that?” And so on. There seems to be one such story in the news every day.

But here’s the amazing thing: I usually wake up the next day, fire up my RSS reader again, and find a world almost literally transformed overnight. I see the power of public pressure, press scrutiny, social norms, and innovation by competitors combining to correct the “bad code” or “code failures” of the previous day. OK, so sometimes it takes longer than a day, a week, or a month. And occasionally legal sanctions must enter the picture if the companies or coders did something particularly egregious. But, more often than not, markets evolve and bad code eventually gives way to better code; short-term “market failures” give rise to a world of innovative alternatives.

Thus, at the risk of repeating myself, I must underscore the key principles that separate the cyber-libertarian and cyber-collectivist schools of thinking. It comes down to this: The cyber-libertarian believes that “code failures” are ultimately better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting code failure comes down to the rapidity and nimbleness of those response(s).

Of course, this assumes we can agree on a definition of “bad code” and “code failures.” What concerns me about the way Prof. Lessig approaches these issues in Code and in his subsequent work is that he is far too quick to declare the debate over by labeling short-term code hiccups as sky-is-falling market failures. The end result of such myopic techno-pessimism is the inevitable call for governments to intervene and “do something” to correct supposed code failures.

The cyber-libertarian instead counsels patience. Let’s give those other forces — alternative platforms, new innovators, social norms, public pressure, etc. — a chance to work some magic. Evolution happens, if you let it.

Moreover, if you are always running around crying “market failure!” and calling in the code cops, it creates perverse marketplace incentives by discouraging efforts to innovate or “route around” bad code or code failure. We don’t want the whole world sitting around waiting for government to regulate the mousetrap to improve it or even give everyone better access to it; we should want the world to be innovating to create better mousetraps!

To reiterate a key point I already stressed in my original essay: One need not believe that the markets in code are “perfectly competitive” to accept that they are “competitive enough” — or at least, better than regulatory alternatives.

“Regulability” and What to Do about It

But what should we make of Prof. Lessig’s concern about the “regulability” of code and cyberspace? Again, this is the notion that private players could be co-opted as henchmen of the state, or that the tools they create could be co-opted for any variety of nefarious political purposes.

Here the cyber-libertarian will occasionally find common ground with Prof. Lessig. This is a particular problem when it comes to data collection and aggregation. Generally speaking, however, a cyber-libertarian is skeptical of privacy claims based on theories of a property right in personal information. After all, as Eugene Volokh has taught us, “the right to information privacy — my right to control your communication of personally identifiable information about me — is a right to have the government stop you from speaking about me.” Moreover, the cyber-libertarian just isn’t going to get all that worked up about a private company collecting data in an attempt to sell people more goods and services. In our view, marketing isn’t mind manipulation; it’s a key part of a well-functioning capitalist system.

That being said, we are in league with Lessig when it comes to the forcible surrender of personal information or technological capabilities to government officials. When the Department of Justice comes knocking on Google’s door asking for records of our search histories to see who’s looking for online porn (or anything else), that’s a problem. The “deputization of the middleman” has long been a legitimate fear because, with the threat of liability hanging over their heads, online intermediaries could be coerced into giving the state information that leads to fines, imprisonment, censorship, or some other type of government harassment.

However, this is a problem we should handle by putting more constraints on our government(s), not by imposing more regulations on code or coders. While, as a general principle, I think it wise for companies to minimize the amount of data they collect about consumers or websurfers, we need not force that by law. And we should certainly hold companies to high standards when it comes to data security and breach. But, again, the way to deal with the “regulability” threat that Lessig and Zittrain raise is to tightly limit the powers of government to access private information through intermediaries in the first place. Most obviously, we could start by tightening up the Electronic Communications Privacy Act and other laws that limit government data access. More subtly, we must continue to defend Section 230 of the Communications Decency Act, which shields intermediaries from liability for information posted or published by users of their systems, because (among other things) such liability would make online intermediaries more susceptible to the kind of back-room coercion that concerns Lessig. If we’re going to be legislating about the Internet, we need more laws like that, not those of the “middleman deputization” model.

Labeling Philosophical Paradigms

Finally, Prof. Lessig makes it clear that he doesn’t take kindly to being called a “cyber-collectivist,” even accusing me of “red-baiting” by using the term. But the collectivism of which I speak is a more generic type, not the hard-edged Marxist brand of collectivism of modern times.

Such labels and classifications play a useful role. After all, something quite profound separates our two camps and leads to endless squabbles about nearly every aspect of technology policy. Prof. Lessig is obviously far more enamored with the potential for the state and politics to play a beneficial role in shaping the digital future. Thus, even though Lessig rejects the association, Declan McCullagh was right to point to the distant influence of Plato on Code and much of Lessig’s other work. (And there’s a bit of Rousseauian influence there, too, with Lessig’s focus on bending markets and individual desires to some amorphous general will.)

In any event, if Prof. Lessig takes offense at this label and wants to call his approach something other than cyber-collectivism, by all means be my guest! Invent a new term and I’ll use it. But as a student of political philosophy, I see his approach as just another variant of collectivism, and I’m just not sure what else to call it. (“Cyber-Social Democrat”?) This isn’t “red-baiting”; it’s simply an exercise in philosophical classification.

But Lessig is utterly dismissive of any attempt to label his thinking, even going so far as to say that “I do stand with the core argument of Code, as any non-ideologue should.” [Emphasis added.] Here he seems to imply that he stands above it all and that it is only I who brings cursed ideology to the table. But as Cato’s Jim Harper rightly noted in response to Lessig’s assertion that he is a “non-ideologue”:

It is impossible to discuss public policy without using ideology. When I hear someone self-identify as non-ideological, I take that as a confession that they are unaware of the role that ideology plays in their thinking. If Lessig fancies himself a neutral analyst — dismissing “ideologues” as such, that’s pretty fanciful indeed.

Similarly, in responding to the assertion in my earlier essay that there is a qualitative difference between law and code, Lessig argues that “Whether and when law is more effective than code is an empirical matter — something to be studied, and considered, not dismissed by banalities spruced up with italics.”

Typography aside, I’m all for studying the impact of law vs. code as “an empirical matter,” a study that, in turn, raises the question of how we define effectiveness or success. I suspect that the professor and I would have quite a “values clash” over some rather important first principles in that regard. In other words, we’d need to talk about… ideology! And yet, Lessig persists in the belief that his arguments are “obvious” and non-ideological when they are nothing of the sort. (Cato adjunct scholar David Post made a similar point a decade ago in his brilliant review of Code.)

At issue here is nothing less than a conflict of visions similar to others throughout the history of political philosophy. It is a conflict between those who put the individual and individual rights at the heart of a system of justice and governance versus those who would place “the community,” “the public” or some other amorphous grouping(s) at the center of everything. It’s a classic libertarian vs. communitarian / collectivist debate.

Whether or not Prof. Lessig cares to acknowledge the role that Code and his thinking have played in defining one side of this debate is irrelevant. It has been enormous. And I’m not sure why he is running away from it because he is winning this battle of ideas handily. Cyber-libertarianism is under attack from all quarters: from liberals and conservatives; from the smallest institutions of government to the largest; and from practically every academic and politician on the planet who wants to reengineer the Net in one way or another.

A half century ago, Richard Weaver taught us that ideas have consequences. A half century from now, I suspect we will look back and realize just how profound the consequences of Lawrence Lessig’s ideas have been for cyberspace and our cyber-freedoms. Of course, I very much hope that I am proven wrong.

Code and Common Causes

“Ideologue”: Adam is right. That was a poorly defined word. What I meant by it was someone who lets a conclusion cloud understanding. I don’t mean that anyone, myself included, makes judgments from a value-neutral space. Of course we have values. But I do mean that even if we disagree about some things, we don’t disagree about many important things. And I don’t think we should be disagreeing about a core point of Code – that there is an axis of alliance between government and (again, a poorly chosen term) “commerce” to evolve the architecture of the Net in ways that benefit both: commerce, by making the net more tractable; government, by making the net more regulable.

This point suggests another that we apparently don’t disagree about: that there can be “code failures” (as Adam nicely puts it) that threaten important values that we all should (another value) hold dear. But that point leads Adam to claim there’s a disagreement between us when I don’t think he’s got the evidence to support the claim. Adam writes:

The cyber-libertarian believes that “code failures” are ultimately better addressed by voluntary, spontaneous, bottom-up, marketplace responses than by coerced, top-down, governmental solutions. Moreover, the decisive advantage of the market-driven approach to correcting code failure comes down to the rapidity and nimbleness of those response(s).

If I say I completely agree with this statement, does this make me a cyber-libertarian? Because of course I believe this. But do I have to believe it is true always? Or is a strong presumption enough? Because again, tutored by reporting such as Declan’s, I am happy to presume code problems work themselves out in the main, more often than government actually solves anything.

But part of the argument in Code was to suggest times when that might not be true. Times when “no law” is the inducement to “bad code,” and where a “good law” would stanch evolution to bad code. Think about spam. As JZ pointed out, Declan agrees the spammers should be sued. That’s law no doubt, but not very good law. Lawsuits are expensive and slow; spammers are typically fast and cheap. Federal district court judges are not about to shoulder the extraordinary burden imposed upon the net by this scourge. And the consequence of that failure is, in my view, the deployment of lots of terrible code: blackhole boycotts, stupid filters, etc. That code has the consequence of blocking lots of legitimate mail simply to block lots of illegitimate mail.
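To make the over-blocking point concrete, here is a minimal, purely illustrative sketch of the sort of “stupid filter” at issue: a crude keyword rule (the trigger words and messages below are hypothetical, not drawn from these essays) that throws out legitimate mail along with the spam.

```python
# Hypothetical sketch of a crude keyword-based spam filter.
# The trigger words and messages are invented for illustration only.

TRIGGER_WORDS = {"free", "winner", "viagra", "refinance"}

def is_spam(message: str) -> bool:
    """Flag a message as spam if it contains any trigger word."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    return bool(words & TRIGGER_WORDS)

messages = [
    "You are a WINNER! Claim your FREE prize now!!!",            # actual spam
    "Is the conference registration free for students?",          # legitimate
    "Reminder: the winner of the essay contest speaks Friday.",   # legitimate
]

for m in messages:
    print("BLOCKED  " if is_spam(m) else "DELIVERED", m)
```

Both legitimate notes are discarded along with the spam. That over-blocking is the cost of bad code deployed to fill the gap left by bad law.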

Thus a Code-inspired suggestion is that a better law than the one Declan would rely upon might stanch the demand for bad code. Might.

High burden of proof required. Etc. And whether or not it holds in this particular case, the point of the argument is to force a methodology that considers the interaction between law and code. Libertarians should have no problem with that, since the same analysis is what libertarians rely upon all the time to justify the (admittedly minimalist) regulations they accept in real space. A government empowered to enforce domestic peace, for example, is justified by the high costs of vigilantism, etc.

Likewise are we in agreement about the dangers of “forcible surrender of personal information” to the government. A Code-based focus would simply suggest that the risk of that happening is greater or smaller depending upon the particular architecture used by, for example, search engines. And a liberty-loving sort might want to take the temptation away from government by laying down privacy principles that nudge us toward architectures that would not enable “forcible surrender[s].”
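As a purely hypothetical sketch of what such an architecture might look like (no particular search engine, and nothing in these essays, prescribes this design): a service that minimizes data at the moment of logging has little personal information left to surrender, however forcefully it is asked.

```python
# Hypothetical sketch: a query log that minimizes data at write time,
# so there is little personal information left to hand over later.
import hashlib
import time

def minimized_log_entry(ip_address: str, query: str, salt: str) -> dict:
    """Record a search query without keeping a usable identifier."""
    # Truncate the IP to its /24 network, then keep only a salted hash,
    # so individual users cannot be re-identified from the log alone.
    truncated_ip = ".".join(ip_address.split(".")[:3]) + ".0"
    pseudonym = hashlib.sha256((salt + truncated_ip).encode()).hexdigest()[:12]
    return {
        "day": time.strftime("%Y-%m-%d"),  # coarse date, not a precise timestamp
        "user": pseudonym,                 # meaningless once the salt is rotated
        "query": query,
    }

print(minimized_log_entry("203.0.113.42", "flu symptoms", salt="rotate-me-daily"))
```

The point is not this particular design but the architectural one: what a government can compel depends on what the code ever records in the first place.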

Likewise are we in agreement about the need to put “more constraints on our government” and about the need for “more laws like [section 230 of the Communications Decency Act].”

In the end, in my view, the only place we don’t agree is about the usefulness of “philosophical paradigms.” I paid my philosophy dues — three years of graduate work, on top of the work I had done as an undergraduate. I find these “paradigms” useful summaries of the last generation’s battles. And nothing is more boring than the reductionist move to turn everything into the battles our philosophy professors thought “critical.”

As I have described elsewhere, what drew me to cyberlaw was that it (originally) obscured politics. It confused intuitions. And in that confusion, people were forced to think. No crude shorthands. No summary judgment based upon a supposed set of affinities with debates almost a century old.

Take, for example, “cyber-collectivist.” Much of the work I’ve done since Code (though Code itself launched it, in the chapter on IP) has been to insist that one form of government regulation better defend itself: copyright. My aim has been to force the regulators to show why their restriction on speech is justified, why their regulation of creativity and innovation makes sense. And my sense, initially, was that most of these overregulations are the simple product of special-interest rent-seeking: interests exploiting their power in a political system too eager to regulate in favor of those with the largest campaign contributions. In this battle, I’ve been allied with some of the libertarian favorites: Richard Epstein joined us in our battle against the Sonny Bono Act, as did Eugene Volokh and David Post. And I can’t believe David and I have any real disagreements about the insanity that is now called “copyright law.”

My point is that to call that work, and those views, “collectivist” is to evacuate the word of all meaning. It is to assume a binary when no binary exists. John Stuart Mill was not a “norm-collectivist” when he railed against social norms as interfering with liberty. Neither am I a collectivist when I try to show a similar dynamic with Code.

We would have a real disagreement if Adam thought code was irrelevant to liberty. He doesn’t. And we’d have a real disagreement if Adam thought it could never make sense for a government (even a libertarian government) to take account of the restrictions on liberty effected by code and intervene in some way to remedy them. He hasn’t said that.

Instead, as I read Adam’s argument, he thinks I would intervene more than he would. Maybe. I don’t have any basis for saying one way or another. But that’s a disagreement to have about a particular case, not a disagreement of principle.

My claim is that we should not disagree about the core of Code. We should not disagree about this, not because this view is without values, and not because Code’s effect has been “enormous” (one more point upon which we disagree, but forgive the personal privilege of letting that disagreement slide). We should not disagree because we share these values, just as we share the view that markets are magical things, and that innovation is the promise of salvation. No doubt there will be things we don’t agree about. But the modest points of Code are obvious today, as they were obvious to many when first made.

What’s needed is the discipline to make them have effect.