The Gory Antigora: Illusions of Capitalism and Computers

When it comes to obfuscation through an inundation of excessive punditry, the unfortunate Internet has only one peer, and that peer is religion. Internet pundits have been a rather self-satisfied and well-paid class for over a decade and a half, and I am happy to have been one of their number. All things change, and I can’t imagine this gig will go on forever. All it might take is one small misfortune, like a downturn at Google, for Internet punditry to go out of fashion. Therefore, I cherish each remaining occasion when I am asked to comment on the Net.

The Internet as it is, rather than as we might wish it to be, is above all else an elaboration of the structure of computer software, or, more precisely, software as humans have been able to create it. And software as we know it is a brittle substance. It breaks before it bends. It is the only informational structure in human experience thus far that has this quality.

Contrast software with biological information, such as the information encoded in DNA, which can frequently be changed slightly with results that are also only modified slightly. A small change in a genotype will infrequently make a phenotype absolutely unviable; it will usually effect either no change or only a tiny change. All people have different genes, but few people have serious birth defects. That smoothness in the relationship of change in information to change in physicality is what allows the process of evolution to have a meaningful signal with which to drive incremental adaptation, despite the inevitably noisy nature of reality. Small changes in computer software, by contrast, too frequently result in crashes or glitches that teach observers nothing and cannot support smooth adaptation. The way I put this succinctly is that “Software Sucks.”
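The contrast can be made concrete with a toy sketch in Python (the function and numbers here are invented purely for illustration). A small mutation to a “genotype” nudges the “phenotype” only slightly, while deleting a single character from working source code yields not a slightly different program but no program at all:

```python
# A toy contrast between smooth genetic variation and brittle code.
# (An illustrative sketch; the function and numbers are hypothetical.)

def phenotype(genotype: float) -> float:
    """A small change in the input yields a comparably small change in the output."""
    return 2.0 * genotype + 1.0

print(phenotype(0.50))  # 2.0
print(phenotype(0.51))  # 2.02 -- a slightly different, still viable "organism"

# Software breaks before it bends: mutate the source itself and the
# result is a crash that teaches an observer nothing about the function.
source = "2.0 * genotype + 1.0"
mutated = source.replace("+", "", 1)  # delete a single character
try:
    print(eval(mutated, {"genotype": 0.5}))
except SyntaxError as err:
    print("crash:", err)
```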

Brittleness leads to the phenomenon of “lock-in”: software, once it has been built upon by subsequent developments, is harder to change than other human artifacts. Once software becomes part of the context or foundation for newer software in a network of dependencies, the older stuff becomes much more expensive to change than the newer stuff. There are severe, even existential, consequences of this quality of software.

One consequence is that situational advantage in the business side of software is overwhelmingly driven by snowballing early adoption, with Microsoft perhaps being the most celebrated example.

Software requires that a variety of human ideas, which had previously been functional in part because of their ambiguity, be stated precisely for the first time, and it becomes much harder to withdraw or modify an idea once it has been formally specified.

An example of the principle of lock-in from the technical sphere is the idea of the computer file. Until sometime in the mid-1980s, there were influential voices in computer science opposing the idea of the file because it would lead to file incompatibility. Ted Nelson, the inventor of the idea of linked digital content, and Jef Raskin, initiator of the Macintosh project at Apple, both held the view that there should be a giant field of elemental data without file boundaries. Since UNIX, Windows, and even the Macintosh—as it came out the door after a political struggle—incorporated files, the idea of the file has become digitally entrenched.

We teach files to undergraduates as if we were teaching them about photons. Indeed, I can more readily imagine physicists asking us to abandon the photon in a hundred years than computer scientists abandoning the file in a thousand. Whether the idea of files is of any consequence is an imponderable. Files have become too fundamental to reconsider. But other candidates for lock-in can and must be considered.

Neither youthful political movements nor skillful entrepreneurs can usurp, or even substantially modify, ideas that have been locked in by software design in a network. For instance, long ago, on the floor of a funky, messy apartment in Cambridge, I argued with a guy named Richard Stallman, the creator of the Free Software movement. He was a sympathetic hippy sort of guy who shared my idealism and hope for what computers could mean to people. But what was he busy creating? An open version of UNIX! Yuk! He had no choice, since this was the only thing he could build on an open basis that might be used. UNIX, again! And now we have Linux.

As it happens, I dislike UNIX and its kin, which are based on the premise that people should interact with computers through a “command line.” First the person does something, usually either by typing or by clicking with a pointing device. Then, after an unspecified period of time, the computer does something, and the cycle is repeated. That is how the Web works, and how everything works these days, because everything is based on those damned Linux servers. Even video games, which have a gloss of continuous movement, are based on an underlying logic that reflects the command line.
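The pattern is easy to state as code. Here is a minimal sketch of the cycle, in Python (the handler is a hypothetical stand-in for whatever work the system actually performs):

```python
# The command-line pattern: the person acts, the computer responds,
# and the cycle repeats. (An illustrative sketch, not any real shell.)

def handle(command: str) -> str:
    # A stand-in for whatever the system does with a request.
    return f"you said: {command!r}"

while True:
    line = input("> ")            # step 1: the person does something
    if line in ("quit", "exit"):  # an escape hatch for the demo
        break
    print(handle(line))           # step 2: the computer does something,
                                  # and then the cycle is repeated
```

An HTTP request and response has exactly the same turn-taking shape, which is the sense in which the Web, too, is a command line.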

Human cognition has been finely tuned in the deep time of evolution for continuous interaction with the world. Demoting the importance of timing is therefore a way of demoting all of human cognition and physicality except for the most abstract and least ambiguous aspects of language, the one thing we can do that is partially tolerant of timing uncertainty. It is only barely possible, but endlessly glitchy and compromising, to build Virtual Reality or other intimate conceptions of digital instrumentation (meaning those connected with the human sensory motor loop rather than abstractions mediated by language) using architectures like UNIX or Linux. But the horrible, limiting ideas of command line systems are now locked in. We may never know what might have been. Software is like the movie “Groundhog Day,” in which each day is the same. The passage of time is trivialized.

Software gradually makes more and more of politics obsolete. Consider how a civil law can change when it is implemented as software. Pre-digital laws were made of language, which can only be interpreted. Language does not directly specify or operate reality. For instance, a copyright law might forbid the unauthorized distribution of a book, but the physical instantiation of the law benefits from an indispensable ambiguity. A browsing person might read a portion of the book at a bookstore, for instance, or even at a used bookstore, or at a friend’s house. Casual, low-cost browsing is absolutely essential to core democratic or capitalistic ideas like the soapbox or the newsstand. If you had to agree to listen to either a whole soapbox rant or none of it, you’d certainly choose to skip the experience entirely. I live in Berkeley, so I can speak with authority on this point. But allowing the occasional stray bit of a rant to enter one’s ear is a small investment in the unknown, a hedge against insularity. Ambiguity in the boundaries of information access is what makes this investment inexpensive. Unfortunately, digital instantiations of law tend to make ambiguity expensive.

The degree to which human, or “natural,” language is unlike computer code cannot be overemphasized. Language can only be understood by means of interpretation, so ambiguity is central to its character and is properly understood as a strength rather than a weakness. Perfect precision would rob language of its robustness and potential for adaptation. Human language is not a phenomenon that is well understood by either science or philosophy, and it has not been reproduced by technologies. Computer code, by contrast, is perfectly precise and therefore immune to influence from context; for that very reason it lacks any deep sense of meaning. Code is nothing but a conveyance of instructions that are either followed perfectly or not at all.

The all-or-nothing quality of digital code (as we currently know how to make it) trickles down into all the systems we build with it. In the much-examined case of digital copyright, it is easy to design a closed information system with end-to-end controls. The video game market is an example. If only it were easier to browse video games, it wouldn’t be so expensive to advertise them, and it would be easier to sell less trivial games, for a browsing person learns more than a viewer of a TV ad.

A completely open system is also easy to design. The original Napster was an example. Completely open systems have their own problems. The usual criticism is that content creators are disincentivized, but the deeper problem is that their work is decontextualized. Completely open music distribution systems excel at either distributing music that was contextualized beforehand, such as classic rock, or new music that has little sense of authorship or identity, like the endless Internet feeds of bland techno mixes. (Yes, I’m making a value judgment here. One must.)

The most attractive designs, from the point of view of either democratic ideals or the profit motive, would have intermediate qualities; they would leak, but only a little. A little leakage greatly reduces the economic motivation for piracy as well as the cost of promotion. A little leakage gives context to and therefore enhances the value of everything.

Alas, it is hard to get desirable intermediate effects with digital systems. Although there are beginning to be some pale examples of leaky copyright online, the effect has not been implemented well as of this date, and whether or not an excellent leaky solution will ever be achieved is one of the most important open questions about the future of the Net.

The difficulty of achieving ambiguity in the face of digital brittleness is also central to the controversy surrounding digital or “touch screen” voting. A voting system in which there is absolute protection of voter privacy has never before existed. It is a digital phenomenon. The closed-system approach to digital voting machine design has inspired profound, even paranoid levels of distrust.

A little bit of potential leakage turns out to be necessary in order to have checks and balances and to build trust. If you can see the ballots in a box, there is a chance that once in a great while your eye might be able to follow the path of a particular ballot into the pile and you might just see how somebody voted. But most of the time you just see the overall process and are able to feel more comfortable. With a closed digital system, there might in theory be less chance that someone can spy your ballot, but there is nothing you can see to reasonably gain confidence in the system. Ultimately what makes a good digital voting system hard to design is exactly the same as what thwarts good content distribution system design. A little leakage is necessary to give things context and meaning, and digital systems abhor leakage.

Another consequence of digital brittleness and lock-in is that more niches turn out to be natural monopolies than in previous technological eras, with Microsoft once again being a celebrated example. I call these niches “Antigoras,” in contrast with the classical idea of the Agora. An Antigora is a privately owned digital meeting arena made rich by unpaid or marginally paid labor provided by people who crowd its periphery.

Microsoft is an almost ideal example, because users are dependent on its products in order to function in cooperation with each other. Businesses often require Windows and Word, for instance, because other businesses use them (the network effect) and each customer’s own history is self-accessible only through Microsoft’s formats. At the same time, users spend a huge amount of time on such things as virus abatement and glitch recovery. The connectivity offered by Microsoft is valuable enough to offset the hassle.

Traditional stock markets, or even flea markets, are a little like Antigoras, in that they are also private meeting places for business. One obvious difference resulting from the digital quality of the Antigora is a far stronger network effect; Antigoras enjoy natural monopoly status more often than physical marketplaces because it would be almost impossible for locked-in participants to choose new Antigoras.
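The arithmetic behind that claim can be roughed out in a few lines (a hypothetical model in the spirit of Metcalfe’s law, which values a network by its possible pairwise connections; the user counts are invented):

```python
# Back-of-envelope network-effect comparison.
# (A hypothetical Metcalfe-style model; the user counts are made up.)

def pairwise_links(users: int) -> int:
    """Number of possible connections among the participants."""
    return users * (users - 1) // 2

incumbent = 1_000_000   # the established Antigora
challenger = 10_000     # a rival with one percent of the participants

print(pairwise_links(incumbent) // pairwise_links(challenger))  # ~10,000
# A rival one-hundredth the size offers roughly one ten-thousandth the
# connective value -- one reason locked-in participants rarely move.
```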

Another defining characteristic is the way people are connected with the value they produce in an Antigora. Much of the effort of individuals at the periphery of an Antigora does not officially take place. Their work is performed anonymously. The reason is that the owner of an Antigora is powerful enough to get away with the enjoyment of this luxury. The potential to deny access to a locked-in digital structure gives owners a profoundly valuable “narrows of trade.”

As with any abstraction, there is no perfect actual Antigora, any more than there is any other perfect instantiation of an economic model.

Amazon and eBay are what might be called half-Antigoras, or Semigoras. They benefit from the enhanced network effect of digital systems and an extraordinarily high level of volunteer labor from their customers, in the form of general communications and design services. They differ from perfect Antigoras because (a) customer volunteer labor is not given anonymously, and (b) physical goods are usually the ultimate items being sold, so they are not locked in. A book sold on eBay or Amazon can easily escape the system and be sold later in a physical flea market.

Wal-Mart is another interesting example of a Semigora. It is a traditional retail store from a customer’s point of view, but it has also drawn an enormous worldwide web of suppliers into a proprietary digital information system. It enjoys the enhanced network effects of digital systems, and it is able to demand that suppliers adapt to its digital structures instead of the reverse.

If Google maintains its success in the long term, it will do so as an Antigora, but it isn’t there yet. The services it offers thus far, which are essentially advertising placements, are not dependent on digital lock-in. Someone else could still come up with a way of offering ads that trumps Google’s. It is only once new digital structures are built on top of Google’s structures that Google can leverage the full power of the Antigora. My guess is that Google will become an Antigora within a year or two.

Some other examples of Antigoras are Oracle and Apple’s iTunes/iPod business. The Internet and the Web would be fabulous Antigoras if they were privately owned. A hypothesis I have entertained from time to time holds that private layers of locked-in software are always dependent on public layers. There could be no Google without an Internet. It’s interesting to note that many of the public or pseudo-public layers, such as HTML and Linux, arose in a European context, where the ideal of the Agora is more influential than it is in the USA.

I should make it clear that I am not “antigoraphobic,” and indeed I have made use of many of the Antigoras mentioned here in the course of writing this essay. They are part of life.

Indeed, there are reasons to like Antigoras. The Linux community is an attempt to nurture an Agora in what has become a traditional Antigora niche. The Linux project is only a partial success, in my estimation. It is able to generate digital plumbing that gains a following and gets locked in, but the Linux market is poor at generating high-quality user interfaces or end-user experiences. These things perhaps require some degree of privileged authorship, and the owner of an Antigora is a super-privileged author. If that author, by the grace of fate, happens to have good taste, as in the case of Steve Jobs, an Antigora can deliver extraordinary value.

The phenomenon of Antigoras exemplifies the intimate and unprecedented relationship between capitalism and digital information. Because of the magic of Moore’s Law and the network effect, the Invisible Hand has come to be understood not just as an ideal distributor, smarter than any possible communist central committee, but as a creative inventor outracing human wits. At the same time, tiny situational advantages, particularly related to timing and code compatibility, are amplified by the exponential growth environment of the Net in such a way that unusual figures can suddenly emerge as successful entrepreneurs. A recent example at the time of this writing is the Baltic crew who started Skype on a shoestring, although it’s still too early to say which firm will win this Antigora prize. The resistance of digital brittleness to interventions by governments, together with the possibility that any clever person can strike it rich with minimal starting capital by being in the right place at the right time to plant the seed that grows a new Antigora, has caused libertarianism to be the house philosophy of the digital revolution.
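A toy calculation shows what that amplification of a tiny timing advantage looks like (the growth rate and head start here are invented for illustration):

```python
# Toy model: two identical competitors, one launching a single
# doubling-period earlier. (Illustrative; all numbers are invented.)

rate = 2.0     # adoption doubles each period in this toy environment
periods = 10   # how long we watch the race

early = rate ** (periods + 1)  # launched one period sooner
late = rate ** periods

print(early, late, early - late)  # 2048.0 1024.0 1024.0
# The early mover's absolute lead grows every period, even though both
# players behave identically -- a head start becomes an empire.
```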

How much efficiency are digital systems actually introducing into human affairs? By an engineering standard, not as much as some of us old-timers once hoped for. (Am I an old-timer? When I was 40, which was five years ago, a Stanford undergraduate expressed amazement that I, a classic figure, was still alive.) The unreliability and quirkiness of computer systems, which result directly from the brittle quality of software, snatch away many of the gifts that would otherwise flow from them. Every computer user spends astonishingly huge and increasing amounts of time updating software patches, visiting help desks, and performing other frustratingly tedious, ubiquitous tasks. But at the same time, there are unquestionable efficiencies, large and small, which result not so much from computer systems working as advertised as from the Antigora effect, in which vast human resources are applied, without charges recorded on the same ledger, to create the illusion that they are working as advertised. This is almost as good!

Considered as a trend, the Antigora suggests fascinating potential future trajectories. At a geopolitical level, we face the following emergent melodrama. America wants to just manipulate bits. India wants to run the help desks to descramble those bits. China wants to build the physical computers that hold the bits.

I’ll now sketch one way this casting lineup might play out in this century.

Perhaps it will turn out that India and China are vulnerable. Google and other Antigoras will increasingly lower the billing rates of help desks. Robots will probably start to work well just as China’s population is aging dramatically, in about twenty years. China and India might suddenly be out of work! Now we enter the endgame feared by the Luddites, in which technology becomes so efficient that there aren’t any more jobs for people.

But in this particular scenario, let’s say it also turns out to be true that even a person making a marginal income at the periphery of one of the Antigoras can survive, because the efficiencies make survival cheap. It’s 2025 in Cambodia, for instance, and you only make the equivalent of a buck a day, without health insurance, but the local Wal-Mart is cheaper every day and you can get a robot-designed robot to cut out your cancer for a quarter, so who cares? This is nothing but an extrapolation of the principle Wal-Mart is already demonstrating, according to some observers. Efficiencies concentrate wealth, and make the poor poorer by some relative measures, but their expenses are also brought down by the efficiencies. According to this view, the poor are only screwed up in the long term by things like health care or real estate, which Wal-Mart and its ilk do not sell.

(In fact, it has been pointed out by the editors that Wal-Mart is beginning to offer medical care, and I have no doubt the firm will find a way to address the real estate crunch sometime in the future. Perhaps customers can live in little pods in the big box stores.)

Now we are moved by the logic of the scenario from Luddite eschatology to the prophecy of H. G. Wells’s “The Time Machine.” The super-rich who own the Antigoras become so fabulously wealthy that, in the context of changing biomedical and other technologies, they effectively become a new species. Perhaps they become the immortals, or they merge with their machines. Unlike the Wells story, though, the lumpenproletariat do not revolt, because their cost of living has retreated faster than their wages. From their local perspective they are doing better and better, even as the gap between them and the rich grows at an accelerating rate.

The poor might eventually become immortals or whatever as well, but at a different time, and inevitably in a different way. It’s a little like a cross between Adam Smith and Albert Einstein; the Invisible Hand accelerating towards the Speed of Light. Each participant has a local frame in which their observations make sense, but their means to perceive each other are altered.

I have written the above scenario as a farce, because if software stays brittle, there will be a huge dampening effect on any hyper-speed takeoff plans of the digital elite. We will still need those help desks in India, and they will be able to charge well for their services. The wild card is the core nature of software. If someone can figure out a way to get rid of brittleness, then the scenario I sketched becomes possible, or even normative. (Don’t believe every computer scientist who claims to already know how to get rid of brittleness. It’s a hard problem that perversely yields a lot of promising partial results that are ultimately useless, fooling many researchers.)

I have tried to present a summary of some of the hot Net topics of the moment by building on a foundation of what I believe are key enduring ideas, like brittleness and Antigoras. But the most important potential of the Net as I understand it is not discussed much these days.

As I stated at the beginning, the Web and the Net are above all unfoldings of digital software as we know how to create it. Now consider that to an alien, a digital program is not a program at all, but random markings. As it happens, the more efficient a digital coding scheme is, the more random it appears to someone who is not given a perfect and complete decoding manual. That’s why the NSA and genomics research have Brobdingnagian budgets, and why neural firing patterns in the brain still appear random to researchers. A tree does fall in a forest if no one hears it, but only because someone will be affected somehow by some other aspect of its falling. Digital information doesn’t ever exist unless it’s decoded. A program without a computer to run it does not exist.
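The randomness claim is easy to check empirically (a rough sketch; the sample data is arbitrary). Compression strips the statistical regularities out of a text, so the compressed bytes approach the eight bits per byte of pure noise:

```python
# A rough check of the claim that efficient encodings look random.
# (An illustrative sketch; the sample data is arbitrary.)

import math
import zlib
from collections import Counter

def bits_per_byte(data: bytes) -> float:
    """Shannon entropy of the byte distribution; 8.0 is pure noise."""
    total = len(data)
    return -sum(c / total * math.log2(c / total)
                for c in Counter(data).values())

plain = " ".join(str(n * n) for n in range(20000)).encode()
packed = zlib.compress(plain, 9)

print(f"plain:      {bits_per_byte(plain):.2f} bits/byte")   # low: visible structure
print(f"compressed: {bits_per_byte(packed):.2f} bits/byte")  # near 8: looks random
```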

The claim that a program does not exist without a computer to run it might sound extreme. After all, perhaps some computer might come along in the future that can run a seemingly orphaned program. But recall the problem of digital brittleness. The match between a program and the environment in which it runs, which is made mostly of layers of locked-in software, must be perfect. Every program is therefore mortal, lasting only so long as its environment remains perfect. The odds against an environment reappearing once it is lost are astronomical. That is why NASA, for instance, cannot read much of its own digital data.

Software does not exist as a free-standing entity. This idea can be restated in political or economic terms: brittle software can only run, and can therefore only exist, with the backing of Antigoras. There must be large numbers of people tweaking the global system of digital devices so that the bits in the various pieces of software remain functional and meaningful. A market economy can only work if these people at the Antigora peripheries, like you and me, aren’t usually paid much for this service, because otherwise the software would appear too expensive to operate. In this sense, the digital economy could be said to resemble a slave economy in the abstract, but one in which almost everyone spends some time as a slave.

By contrast, the content aspect of the Web is an example of a gift economy, in that a contribution is usually identified with the person who gave it, and therefore there is some relationship between motivation and individual identity. My argument in brief is that the gift economy aspect is so good that we put up with the slave economy aspect.

A fully anonymous version of potlatch wouldn’t work, because there would be no collective memory of what anyone had done, and therefore no motivation for any particular continued behavior. In a slave economy, by contrast, the slave is motivated by the relationship with the master, not the market. In order to use a PC, a user must spend a lot of time on twiddly nonsense, downloading the latest virus killer and so on. There is no recognition for this effort, nor is there much individual latitude in how it is to be accomplished. In an Antigora, the participants at the periphery robotically engage in an enormous and undocumented amount of mandatory drudgery to keep the Antigora going. Digital systems as we know how to make them could not exist without this social order.

There is an important Thoreau-like question that inevitably comes up: What’s the point? The common illusion that digital bits are free-standing entities that would exist and remain functional even if there were no people around to use them is unfortunate. It means that people are denied the epiphany that the edifice of the Net is precisely the generosity and warmth of humanity connecting with itself.

The most technically realistic appraisal of the Internet is also the most humanistic one. The Web is neither an emergent intelligence that transcends humanity, as some (like George Dyson) have claimed, nor a lifeless industrial machine. It is a conduit of expression between people.

This perception seems to me not only beautiful, but necessary. Any idea of the human future based only on amplifying some parameter or other of human capability inevitably leads to disaster or, at best, disappointment.

Take just one current example: If a lot of people get rich in a society, eventually some nasty people will get rich. Some of them will use their wealth to do nasty things, like sponsor terrorist activities. In a world with a lot of rich people, the risk of terrorism will be set by the worst of them, not the best, or even the average, unless there is a process to prevent that from happening. But what can that process be? If it restricts personal freedom, then the core ideal of widespread wealth acquisition is defeated. If everyone must conform, what was the point of growing rich?

There is a little philosophy book by James P. Carse called Finite and Infinite Games that suggests a way out. (By the way, much of the book strikes me as silly in a “New Age” way, but that does not detract from the validity of its central point.) According to Carse, there are two kinds of games. A finite game is like a game of basketball; it has an end. An infinite game is like the overall phenomenon of basketball, which can go on forever.

A race to maximize any parameter, such as wealth, power, or longevity, must eventually come to an end. Even if the ceiling seems far, far away, there is an inevitable sense of claustrophobia in singular ambition.

The alternative to the finite game of enhancement along a single dimension is found in the infinite process of culture. Culture can always grow more meaningful, subtle, and beautiful. Culture has no ceiling. It is a process without end. It is open and hopeful.

I hope I have demonstrated that the Net only exists as a cultural phenomenon, however much it might be veiled by an illusion that it is primarily industrial or technical. If it were truly industrial, it would be impossible, because it would be too expensive to pay all the people who maintain it.

It’s often forgotten that the Web grew suddenly big in the year before it was discovered by business. There were no charismatic figures, no religious or political ideologies, no advertising, no profit motive; nothing but the notion that voluntary, high-quality connection between people on a massive scale was a good idea. This was real news, a new chapter in the unveiling of human potential.

There is an interesting way in which a connection-oriented view of the Net also addresses the future of security. There is no way for even a high-quality security agency to have enough spies to watch the whole of humanity; as soon as you have that many spies, you’ll end up with some corrupt ones. On the other hand, a highly connected world in which everybody sees a little of everybody can provide enough eyeballs to achieve a valid sense of security. This will require a compromise between open and closed net architectures, as described earlier, and will not be easy to achieve. But it seems to me to be not only possible, but the only potential solution.

Culture, including large-scale volunteer connection and boundless beautiful invention, has been somewhat forgotten amid the noisy arrival of capitalism on the Net over the last decade and a half or so. When it comes to digital systems, however, capitalism is not a complete system unto itself. Only culture is rich enough to fund the Antigora.
