About this Issue

As the Internet burgeoned and blossomed through the early to mid-nineties, visionary manifestos about its transformative social, political, and economic potential clogged VAX accounts the world over. After a solid decade of intense commercial development, much go-go nineties prophesying now seems a triumph of Utopian hope over hard reality. Does hope of Internet liberation yet remain? Or has the bright promise of the Internet been dimmed by corporate influence and government regulation? Are ideas like virtual citizenship beyond the nation-state, untraceable electronic currency, and the consciousness-expanding powers of radical interconnectivity defunct? Is there untapped revolutionary power waiting to be unleashed?

Virtual reality visionary Jaron Lanier will kick off the discussion with a mind-bending lead essay. Commentators John Perry Barlow of the Electronic Frontier Foundation, open source software guru Eric S. Raymond, Glenn “Instapundit” Reynolds, and Yale computer scientist David Gelernter will grapple with Lanier’s vision, and offer their own wisdom on what the Internet still has to offer for the future of freedom.

Lead Essay

The Gory Antigora: Illusions of Capitalism and Computers

The unfortunate Internet has only one peer when it comes to obfuscation due to an inundation of excessive punditry, and that peer is religion. Internet pundits have been a rather self-satisfied and well-paid class for over a decade and a half, and I am happy to have been one of their number. All things change, and I can’t imagine this gig will go on forever. All it might take is one small misfortune, like a downturn at Google, for Internet punditry to go out of fashion. Therefore, I cherish each remaining occasion when I am asked to comment on the Net.

The Internet as it is, rather than as we might wish it to be, is above all else an elaboration of the structure of computer software, or, more precisely, software as humans have been able to create it. And software as we know it is a brittle substance. It breaks before it bends. It is the only informational structure in human experience thus far that has this quality.

Contrast software with biological information, such as the information encoded in DNA, which can frequently be changed slightly with results that are also only modified slightly. A small change in a genotype will infrequently make a phenotype absolutely unviable; it will usually effect either no change or only a tiny change. All people have different genes, but few people have serious birth defects. That smoothness in the relationship of change in information to change in physicality is what allows the process of evolution to have a meaningful signal with which to drive incremental adaptation, despite the inevitably noisy nature of reality. Small changes in computer software, by contrast, too frequently result in crashes or glitches that teach observers nothing and cannot support smooth adaptation. The way I put this succinctly is that “Software Sucks.”
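To make the contrast concrete, here is a minimal illustrative sketch in Python (the function and values are invented for illustration, not drawn from any real system): nudging a numeric input changes the output only slightly, the way most mutations nudge a phenotype, while a one-character mutation to the source code itself is simply fatal.

# Illustrative sketch only: "smooth" parameter changes versus "brittle" code changes.

def price_with_tax(price, tax_pct=8):
    # A tiny working program: small input changes give small output changes.
    return price * (100 + tax_pct) / 100

print(price_with_tax(100, 8))   # 108.0
print(price_with_tax(100, 9))   # 109.0 -- a small, survivable "mutation"

# Now mutate the *code* by deleting a single character (a closing parenthesis).
mutated_source = "def f(price, tax_pct=8:\n    return price * (100 + tax_pct) / 100"
try:
    exec(mutated_source)
except SyntaxError as err:
    print("program is unviable:", err)   # it breaks before it bends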

Brittleness leads to the phenomenon of “Lock-in,” which means that software, once subsequent developments have been built on top of it, is harder to change than other human artifacts. Once software becomes part of the context or foundation for newer software in a network of dependencies, the older stuff becomes much more expensive to change than the newer stuff. There are severe, even existential, consequences of this quality of software.

One consequence is that situational advantage in the business side of software is overwhelmingly driven by snowballing early adoption, with Microsoft perhaps being the most celebrated example.

Software requires that a variety of human ideas that have previously been functional in part because of ambiguity be stated precisely for the first time, and it becomes much harder to withdraw or modify an idea once it has been formally specified.

An example of the principle of lock-in from the technical sphere is the idea of the computer file. Prior to sometime in the mid-1980s, there were influential voices in computer science opposing the idea of the file because it would lead to file incompatibility. Ted Nelson, the inventor of the idea of linked digital content, and Jef Raskin, initiator of the Macintosh project at Apple, both held the view that there should be a giant field of elemental data without file boundaries. Since UNIX, Windows, and even the Macintosh—as it came out the door after a political struggle—incorporated files, the idea of the file has become digitally entrenched.

We teach files to undergraduates as if we were teaching them about photons. Indeed, I can more readily imagine physicists asking us to abandon the photon in a hundred years than computer scientists abandoning the file in a thousand. Whether the idea of files is of any consequence is an imponderable. Files have become too fundamental to reconsider. But other candidates for lock-in can and must be considered.

Neither youthful political movements nor skillful entrepreneurs can usurp, or even substantially modify ideas that have been locked in by software design in a network. For instance, long ago, on the floor of a funky, messy apartment in Cambridge, I argued with a guy named Richard Stallman, the creator of the Free Software movement.[Correction: The original text stated Richard Stallman was “the first articulate evangelist of the open software movement,” when he is in fact the creator of the Free Software movement. —ed.] He was a sympathetic hippy sort of guy who shared my idealism and hope for what computers could mean to people. But what was he busy creating? An open version of UNIX! Yuk! He had no choice, since this was the only thing he could build on an open basis that might be used. UNIX, again! And now we have Linux.

As it happens, I dislike UNIX and its kin because it is based on the premise that people should interact with computers through a “command line.” First the person does something, usually either by typing or clicking with a pointing device. And then, after an unspecified period of time, the computer does something, and then the cycle is repeated. That is how the Web works, and how everything works these days, because everything is based on those damned Linux servers. Even video games, which have a gloss of continuous movement, are based on an underlying logic that reflects the command line.
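A minimal sketch in Python (purely illustrative; the function names are invented) of the two interaction patterns at issue: the first loop waits for a request and then answers it, the turn-taking rhythm of the command line; the second runs on a fixed cadence, the timing-sensitive style that gets demoted.

import time

def command_line_style():
    # Turn-taking: the person acts, then the machine acts, then the cycle repeats.
    while True:
        request = input("> ")           # everything halts until you type
        if request == "quit":
            break
        print("you said:", request)     # the reply arrives whenever it arrives

def continuous_style(duration_s=1.0, frame_s=1.0 / 60):
    # Timing matters: the system responds on a steady 60 Hz cadence,
    # closer to continuous sensory-motor interaction with the world.
    start = time.time()
    while time.time() - start < duration_s:
        frame_began = time.time()
        # ...sample inputs, update the world, render output here...
        elapsed = time.time() - frame_began
        time.sleep(max(0.0, frame_s - elapsed))

if __name__ == "__main__":
    continuous_style()   # runs for one second at a steady cadence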

Human cognition has been finely tuned in the deep time of evolution for continuous interaction with the world. Demoting the importance of timing is therefore a way of demoting all of human cognition and physicality except for the most abstract and least ambiguous aspects of language, the one thing we can do that is partially tolerant of timing uncertainty. It is only barely possible, but endlessly glitchy and compromising, to build Virtual Reality or other intimate conceptions of digital instrumentation (meaning those connected with the human sensory-motor loop rather than abstractions mediated by language) using architectures like UNIX or Linux. But the horrible, limiting ideas of command line systems are now locked in. We may never know what might have been. Software is like the movie “Groundhog Day,” in which each day is the same. The passage of time is trivialized.

Software gradually makes more and more of politics obsolete. Consider how a civil law can change when it is implemented as software. Pre-digital laws were made of language, which can only be interpreted. Language does not directly specify or operate reality. For instance, a copyright law might forbid the unauthorized distribution of a book, but the physical instantiation of the law benefits from an indispensable ambiguity. A browsing person might read a portion of the book at a bookstore, for instance, or even at a used bookstore, or at a friend’s house. Casual, low cost browsing is absolutely essential to core democratic or capitalistic ideas like the soapbox or the newsstand. If you had to agree to listen to either a whole soapbox rant, or none of it, you’d certainly choose to skip the experience entirely. I live in Berkeley, so I can speak with authority on this point. But allowing the occasional stray bit of a rant to enter one’s ear is a small investment in the unknown, a hedge against insularity. Ambiguity in the boundaries of information access is what makes this investment inexpensive. Unfortunately, digital instantiations of law tend to make ambiguity expensive.

The degree to which human, or “natural,” language is unlike computer code cannot be overemphasized. Language can only be understood by means of interpretation, so ambiguity is central to its character, and is properly understood as a strength rather than a weakness. Perfect precision would rob language of its robustness and potential for adaptation. Human language is not a phenomenon that is well understood by either science or philosophy, and it has not been reproduced by technologies. Computer code, by contrast, is perfectly precise and therefore immune to influence from context, and therefore it lacks any deep sense of meaning. Code is nothing but a conveyance of instructions that are either followed perfectly or not at all.

The all or nothing quality of digital code (as we currently know how to make it) trickles down into all systems we build with it. In the much-examined case of digital copyright, it is easy to design a closed information system with end-to-end controls. The video game market is an example. If only it were easier to browse video games, it wouldn’t be so expensive to advertise them, and it would be easier to sell less trivial games, for a browsing person learns more than a viewer of a TV ad.

A completely open system is also easy to design. The original Napster was an example. Completely open systems have their own problems. The usual criticism is that content creators are disincentivized, but the deeper problem is that their work is decontextualized. Completely open music distribution systems excel at either distributing music that was contextualized beforehand, such as classic rock, or new music that has little sense of authorship or identity, like the endless Internet feeds of bland techno mixes. (Yes, I’m making a value judgment here. One must.)

The most attractive designs, from the point of view of either democratic ideals or the profit motive, would have intermediate qualities; they would leak, but only a little. A little leakage greatly reduces the economic motivation for piracy as well as the cost of promotion. A little leakage gives context to and therefore enhances the value of everything.

Alas, it is hard to get desirable intermediate effects with digital systems. Although there are beginning to be some pale examples of leaky copyright online, the effect has not been implemented well as of this date, and whether or not an excellent leaky solution will ever be achieved is one of the most important open questions about the future of the Net.

The difficulty of achieving ambiguity in the face of digital brittleness is also central to the controversy surrounding digital or “touch screen” voting. A voting system in which there is absolute protection of voter privacy has never before existed. It is a digital phenomenon. The closed-system approach to digital voting machine design has inspired profound, even paranoid levels of distrust.

A little bit of potential leakage turns out to be necessary in order to have checks and balances and to build trust. If you can see the ballots in a box, there is a chance that once in a great while your eye might be able to follow the path of a particular ballot into the pile and you might just see how somebody voted. But most of the time you just see the overall process and are able to feel more comfortable. With a closed digital system, there might in theory be less chance that someone can spy your ballot, but there is nothing you can see to reasonably gain confidence in the system. Ultimately what makes a good digital voting system hard to design is exactly the same as what thwarts good content distribution system design. A little leakage is necessary to give things context and meaning, and digital systems abhor leakage.

Another consequence of digital brittleness and lock-in is that more niches turn out to be natural monopolies than in previous technological eras, with Microsoft once again being a celebrated example. I call these niches “Antigoras,” in contrast with the classical idea of the Agora. An Antigora is a privately owned digital meeting arena made rich by unpaid or marginally paid labor provided by people who crowd its periphery.

Microsoft is an almost ideal example, because users are dependent on its products in order to function in cooperation with each other. Businesses often require Windows and Word, for instance, because other businesses use them (the network effect) and each customer’s own history is self-accessible only through Microsoft’s formats. At the same time, users spend a huge amount of time on such things as virus abatement and glitch recovery. The connectivity offered by Microsoft is valuable enough to offset the hassle.

Traditional stock markets, or even flea markets, are a little like Antigoras, in that they are also private meeting places for business. One obvious difference resulting from the digital quality of the Antigora is a far stronger network effect; Antigoras enjoy natural monopoly status more often than physical marketplaces because it would be almost impossible for locked-in participants to choose new Antigoras.

Another defining characteristic is the way that people are connected with the value they produce in an Antigora. Much of the effort of individuals at the periphery of an Antigora does not officially take place. Their work is performed anonymously. The reason is that the owner of an Antigora is powerful enough to get away with the enjoyment of this luxury. The potential to deny access to a locked-in digital structure gives owners a profoundly valuable “narrows of trade.”

As with any abstraction, there is no perfect actual Antigora, any more than there is any other perfect instantiation of an economic model.

Amazon and eBay are what might be called half-Antigoras, or Semigoras. They benefit from the enhanced network effect of digital systems, and an extraordinarily high level of volunteer labor from their customers, in the form of general communications and design services. They differ from perfect Antigoras because, A) Customer volunteer labor is not given anonymously, and B) Physical goods are usually the ultimate items being sold, so they are not locked-in. A book sold on eBay or Amazon can easily escape the system and be sold later in a physical flea market.

Wal-Mart is another interesting example of a Semigora. It is a traditional retail store from a customer’s point of view, but it has also drawn an enormous worldwide web of suppliers into a proprietary digital information system. It enjoys the enhanced network effects of digital systems, and is able to demand that suppliers adapt to its digital structures instead of vice versa.

If Google maintains its success in the long term, it will do so as an Antigora, but it isn’t there yet. The services it offers thus far, which are essentially advertising placements, are not dependent on digital lock-in. Someone else could still come up with a way to offer ads in a way that trumps Google. It is only once new digital structures are built on top of Google’s structures that Google can leverage the full power of the Antigora. My guess is that Google will become an Antigora within a year or two.

Some other examples of Antigoras are Oracle and Apple’s iTunes/iPod business. The Internet and the Web would be fabulous Antigoras if they were privately owned. A hypothesis I have entertained from time to time holds that private layers of locked-in software are always dependent on public layers. There could be no Google without an Internet. It’s interesting to note that many of the public or pseudo-public layers, such as HTML and Linux, arose in a European context, where the ideal of the Agora is more influential than in the USA.

I should make it clear that I am not “antigoraphobic,” and indeed I have made use of many of the Antigoras I’ve mentioned here in order to write this essay. They are part of life.

Indeed there are reasons to like Antigoras. The Linux community is an attempt to nurture an Agora in what has become a traditional Antigora niche. The Linux project is only a partial success, in my estimation. It is able to generate digital plumbing that gains a following and gets locked in, but the Linux market is poor at generating high quality user interfaces or end-user experiences. These things perhaps require some degree of privileged authorship, and the owner of an Antigora is a super-privileged author. If that author, by the grace of fate, happens to have good taste, as in the case of Steve Jobs, an Antigora can deliver extraordinary value.

The phenomenon of Antigoras exemplifies the intimate and unprecedented relationship between capitalism and digital information. Because of the magic of Moore’s Law and the network effect, the Invisible Hand has come to be understood not just as an ideal distributor, smarter than any possible communist central committee, but as a creative inventor outracing human wits. At the same time, tiny situational advantages, particularly related to timing and code compatibility, are amplified by the exponential growth environment of the Net in such a way that unusual figures can suddenly emerge as successful entrepreneurs. A recent example at the time of this writing is the Baltic crew who started Skype on a shoestring, although it’s still too early to say which firm will win this Antigora prize. The resistance of digital brittleness to interventions by governments, together with the possibility that any clever person can strike it rich with minimal starting capital by being in the right place at the right time to plant the seed that grows a new Antigora, has caused libertarianism to be the house philosophy of the digital revolution.

How much efficiency are digital systems actually introducing into human affairs? By an engineering standard, not as much as some of us old-timers once hoped for. (Am I an old-timer? When I was 40, which was five years ago, a Stanford undergraduate expressed amazement that I, the classic figure, was still alive.) The unreliability and quirkiness of computer systems, which result directly from the brittle quality of software, snatch away many of the gifts that would otherwise flow from them. Every computer user spends astonishingly huge and increasing amounts of time updating software patches, visiting help desks, and performing other frustratingly tedious, ubiquitous tasks. But at the same time, there are unquestionable efficiencies, large and small, which result not so much from computer systems working as advertised, but from the Antigora effect, in which vast human resources are applied without charges recorded on the same ledger in order to create the illusion that they are working as advertised. This is almost as good!

Considered as a trend, the Antigora suggests fascinating potential future trajectories. At a geopolitical level, we face the following emergent melodrama. America wants to just manipulate bits. India wants to run the help desks to descramble those bits. China wants to build the physical computers that hold the bits.

I’ll now sketch one way this casting lineup might play out in this century.

Perhaps it will turn out that India and China are vulnerable. Google and other Antigoras will increasingly lower the billing rates of help desks. Robots will probably start to work well just as China’s population is aging dramatically, in about twenty years. China and India might suddenly be out of work! Now we enter the endgame feared by the Luddites, in which technology becomes so efficient that there aren’t any more jobs for people.

But in this particular scenario, let’s say it also turns out to be true that even a person making a marginal income at the periphery of one of the Antigoras can survive, because the efficiencies make survival cheap. It’s 2025 in Cambodia, for instance, and you only make the equivalent of a buck a day, without health insurance, but the local Wal-Mart is cheaper every day and you can get a robot-designed robot to cut out your cancer for a quarter, so who cares? This is nothing but an extrapolation of the principle Wal-Mart is already demonstrating, according to some observers. Efficiencies concentrate wealth, and make the poor poorer by some relative measures, but their expenses are also brought down by the efficiencies. According to this view, the poor are only screwed up in the long term by things like health care or real estate, which Wal-Mart and its ilk do not sell.

(In fact, it has been pointed out by the editors that Wal-Mart is beginning to offer medical care, and I have no doubt the firm will find a way to address the real estate crunch sometime in the future. Perhaps customers can live in little pods in the big box stores.)

Now we are moved by the logic of the scenario from Luddite eschatology to the prophecy of H.G. Wells’ Time Machine. The super-rich who own the Antigoras become so fabulously wealthy that in the context of changing biomedical and other technologies they effectively become a new species. Perhaps they become the immortals, or they merge with their machines. Unlike the Wells story, though, the lumpenproletariat do not revolt because their cost of living has retreated faster than their wages. From their local perspective they are doing better and better, even as the gap between them and the rich is growing at an accelerating rate.

The poor might eventually become immortals or whatever as well, but at a different time, and inevitably in a different way. It’s a little like a cross between Adam Smith and Albert Einstein; the Invisible Hand accelerating towards the Speed of Light. Each participant has a local frame in which their observations make sense, but their means to perceive each other are altered.

I have written the above scenario as a farce, because if software stays brittle, there will be a huge dampening effect on any hyper-speed takeoff plans of the digital elite. We will still need those help desks in India, and they will be able to charge well for their services. The wild card is the core nature of software. If someone can figure out a way to get rid of brittleness, then the scenario I sketched becomes possible, or even normative. (Don’t believe every computer scientist who claims to already know how to get rid of brittleness. It’s a hard problem that perversely yields a lot of promising partial results that are ultimately useless, fooling many researchers.)

I have tried to present a summary of some of the hot Net topics of the moment by building on a foundation of what I believe are key enduring ideas, like brittleness and Antigoras. But the most important potential of the Net as I understand it is not discussed much these days.

As I stated at the beginning, the Web and the Net are above all unfoldings of digital software as we know how to create it. Now consider that to an alien, a digital program is not a program at all, but random markings. As it happens, the more efficient a digital coding scheme is, the more random it appears to someone who is not given a perfect and complete decoding manual. That’s why the NSA and genomics research have Brobdingnagian budgets and why neural firing patterns in the brain still appear random to researchers. A tree does fall in a forest if no one hears it, but only because someone will be affected somehow by some other aspect of its falling. Digital information doesn’t ever exist unless it’s decoded. A program without a computer to run it does not exist.
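The point is easy to illustrate with a small sketch in Python, using only the standard library (the sample text is invented): once information is efficiently encoded, its byte statistics drift toward those of pure noise, which is exactly why it means nothing to anyone without the decoder.

import math
import zlib
from collections import Counter

def bits_per_byte(data):
    # Shannon entropy of the byte distribution; 8.0 would be indistinguishable from noise.
    counts = Counter(data)
    total = len(data)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

plain = " ".join(str(n * n) for n in range(20000)).encode()   # long but visibly patterned
packed = zlib.compress(plain, level=9)

print("plain text :", round(bits_per_byte(plain), 2), "bits/byte")    # low: only digits and spaces
print("compressed :", round(bits_per_byte(packed), 2), "bits/byte")   # near 8: looks like noise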

This might sound like an extreme claim. After all, perhaps some computer might come along in the future that can run a seemingly orphaned program. But recall the problem of digital brittleness. The match between programs and the environment in which a program runs, which is made mostly of layers of locked-in software, must be perfect. Every program is therefore mortal, lasting only so long as its environment remains perfect. The odds against an environment re-appearing once it is lost are astronomical. That is why NASA, for instance, cannot read much of its own digital data.

Software does not exist as a free-standing entity. This idea can be restated in political or economic terms: Brittle software can only run, and can therefore only exist, with the backing of Antigoras. There must be large numbers of people tweaking the global system of digital devices so that the bits in the various pieces of software remain functional and meaningful. A market economy can only work if these people at the Antigora peripheries, like you and I, aren’t usually paid much for this service, because otherwise the software would appear to be too expensive to operate. In this sense, the digital economy could be said to resemble a slave economy in the abstract, but one in which almost everyone spends some time as a slave.

By contrast, the content aspect of the Web is an example of a gift economy, in that a contribution is usually identified with the person who gave it, and therefore there is some relationship between motivation and individual identity. My argument in brief is that the gift economy aspect is so good that we put up with the slave economy aspect.

A fully anonymous version of potlatch wouldn’t work, because there would be no collective memory of what anyone had done, and therefore no motivation for any particular continued behavior. In a slave economy, by contrast, the slave is motivated by the relationship with the master, not the market. In order to use a PC, a user must spend a lot of time on twiddly nonsense, downloading the latest virus killer and so on. There is no recognition for this effort, nor is there much individual latitude in how it is to be accomplished. In an Antigora, the participants at the periphery robotically engage in an enormous and undocumented amount of mandatory drudgery to keep the Antigora going. Digital systems as we know how to make them could not exist without this social order.

There is an important Thoreau-like question that inevitably comes up: What’s the point? The common illusion that digital bits are free-standing entities that would exist and remain functional even if there were no people around to use them is unfortunate. It means that people are denied the epiphany that the edifice of the Net is precisely the generosity and warmth of humanity connecting with itself.

The most technically realistic appraisal of the Internet is also the most humanistic one. The Web is neither an emergent intelligence that transcends humanity, as some (like George Dyson) have claimed, nor a lifeless industrial machine. It is a conduit of expression between people.

This perception seems to me not only beautiful, but necessary. Any idea of the human future based only on amplifying some parameter or other of human capability inevitably leads to disaster or, at best, disappointment.

Take just one current example: If a lot of people get rich in a society, eventually some nasty people will get rich. Some of them will use their wealth to do nasty things, like sponsor terrorist activities. In a world with a lot of rich people, the risk of terrorism will be set by the worst of them, not the best, or even the average, unless there is a process to prevent that from happening. But what can that process be? If it restricts personal freedom, then the core ideal of widespread wealth acquisition is defeated. If everyone must conform, what was the point of growing rich?

There is a little philosophy book by James P. Carse called Finite and Infinite Games that suggests a way out. (By the way, much of the book strikes me as silly in a “New Age” way, but that does not detract from the validity of its central point.) According to Carse, there are two kinds of games. A finite game is like a game of basketball; it has an end. An infinite game is like the overall phenomenon of basketball, which can go on forever.

A race to maximize any parameter, such as wealth, power, or longevity, must eventually come to an end. Even if the ceiling seems far, far away, there is an inevitable sense of claustrophobia in singular ambition.

The alternative to the finite game of enhancement along a single dimension is found in the infinite process of culture. Culture can always grow more meaningful, subtle, and beautiful. Culture has no ceiling. It is a process without end. It is open and hopeful.

I hope I have demonstrated that the Net only exists as a cultural phenomenon, however much it might be veiled by an illusion that it is primarily industrial or technical. If it were truly industrial, it would be impossible, because it would be too expensive to pay all the people who maintain it.

It’s often forgotten that the Web grew suddenly big in the year before it was discovered by business. There were no charismatic figures, no religious or political ideologies, no advertising, no profit motive; nothing but the notion that voluntary, high quality connection between people on a massive scale was a good idea. This was real news, a new chapter in the unveiling of human potential.

There is an interesting way in which a connection-oriented view of the Net also addresses the future of security. There is no way for a high quality security agency to have enough spies to watch the whole of humanity. As soon as you have that many spies, you’ll end up with some corrupt ones. On the other hand, a highly connected world in which everybody sees a little of everybody can provide enough eyeballs to achieve a valid sense of security. This will require a compromise between open and closed net architectures, as described earlier, and will not be easy to achieve. But it seems to me to be not only possible, but the unique potential solution.

Culture, including large-scale volunteer connection and boundless beautiful invention, has been somewhat forgotten because of the noisy arrival of capitalism on the Net in the last decade and a half or so. When it comes to digital systems, however, capitalism is not a complete system unto itself. Only culture is rich enough to fund the Antigora.

Response Essays

Reply to Lanier

I’m finding it difficult to reply to Jaron Lanier’s essay, because I’m finding it difficult to extract an actual point from the text.

His essay starts off with a factual howler about biology — nobody who has ever seen the effect of a point mutation in the homeobox genes of a fruit fly maintains any illusion that in genetics there is “smoothness in the relationship of change in information to change in physicality.” Jaron wants us to see software as uniquely brittle compared to biological systems, but genetic defects turn out to be a very poor argument for that proposition. DNA and digital media both rely on error-correcting codes, and both codes can fail. The actual reason we don’t see more of the failures is not any putative robustness of biology but the unpleasant fact that most victims die out of sight in their mothers’ wombs.

The essay continues with a vulgar error about technology lock-in effects. I yield to few in my detestation of Microsoft and all its works, but S.J. Liebowitz and Stephen E. Margolis exploded the lock-in myth quite definitively in “The Fable of the Keys” and their follow-up book Winners, Losers, and Microsoft. Vendor “lock-in” cannot survive the end-stage of lock-in success in which the monopolist, having achieved market saturation, must push prices forever upwards on its fixed-size customer base to maintain the rising returns that Wall Street demands. Eventually the expected cost to customers will exceed their cost to transition out of the technology, and the monopoly will melt down. This is why TCP/IP is king today and proprietary networking technologies only fading memories. It has already happened to Microsoft in the financial-services sector and the movie industry, and the handwriting is on the wall elsewhere.

Jaron then takes a swing at the computer-science concept of a “file” without acknowledging a simple fact — information has to be transported. It’s all very well to speak of linked digital content and seamless webs of information, but at some point, these lovely ramified entities have to be moved between the hothouse environments in which they can flourish. At which point, willy-nilly, you will find yourself packing all their manifold complexities into a block of data with a name that you can push around as a unit. In other words, a file.
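A small illustrative sketch in Python (the note structure is hypothetical) makes the point concrete: however seamless linked content looks while it lives in memory, moving it anywhere means flattening it into one named, transportable block of bytes, which is to say a file.

import json

# A tiny web of linked notes: no file boundaries, just references between entries.
notes = {
    "intro": {"text": "Start here.",          "links": ["files", "unix"]},
    "files": {"text": "What is a file?",      "links": ["unix"]},
    "unix":  {"text": "Everything is a file.", "links": []},
}

# To ship the web to another machine, the whole tangle is serialized into a
# single named unit that can be pushed around as a block -- a file.
with open("notes.json", "w") as f:
    json.dump(notes, f, indent=2)

# The receiving side re-inflates the linked structure from that same block.
with open("notes.json") as f:
    restored = json.load(f)
assert restored == notes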

Jaron’s sally at the Unix command line is scarcely less naive. My own The Art of Unix Programming makes the technical case that this supposedly primitive form of interface retains some major advantages over GUIs and “virtual reality”. For a more humanist argument, see Neal Stephenson’s brilliant essay In the Beginning was the Command Line. It is no accident that towards the end of his life, the grand old man of the GUI (Jef Raskin) rejected icons and moved back towards text-centered gestures as the center of his work on “humane interfaces”.

By the time Jaron gets to claiming that the Web and video games are “based on an underlying logic that reflects the command line”, this assertion has been reduced almost to meaninglessness. Jaron wants to blame our inability to get virtual reality beyond the toy-and-demo stage on fixed ideas, but the real problem with VR is far more fundamental. It’s what flight students call simulator sickness — people get nauseated and disoriented when their eyeballs and their inner-ear attitude-and-motion sensors keep sending them conflicting messages. Jaron invented the label and concept of “virtual reality”; his ire at the command line seems to me to be a pure displacement of an understandable frustration that VR just won’t work on humans.

Jaron then goes on to confuse partial openness with ambiguity. In fact, partial openness is quite easy to achieve in software; many websites, for example, have both public and passworded content. Ambiguity is a little more difficult, but nowadays fuzzy logic and satisficing algorithms are so well established that they’re used in the firmware for washing machines. It isn’t that we don’t know how to do the things Jaron points at, it’s that there is not enough market demand for them to stimulate big deployments. Or, to put it slightly differently, most human beings don’t actually want them enough to pay for them.
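A minimal sketch in Python (the token store and page names are invented for illustration, not any real site’s mechanism) shows how routine partial openness is:

PUBLIC = {"about": "Anyone may read this page."}
PRIVATE = {"drafts": "Only members may read this page."}
MEMBER_TOKENS = {"secret-token-123"}    # hypothetical credential store

def fetch(page, token=None):
    # Partially open by design: some entries leak to everyone, the rest are gated.
    if page in PUBLIC:
        return PUBLIC[page]
    if page in PRIVATE and token in MEMBER_TOKENS:
        return PRIVATE[page]
    return "403: not for you"

print(fetch("about"))                        # open to all
print(fetch("drafts"))                       # refused
print(fetch("drafts", "secret-token-123"))   # allowed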

I think there is considerable value in Jaron’s concept of an “antigora”, but by the time I got to that part of the essay I had nettle marks all over me from the preceding thicket of errors. And, alas, they continue; when he talks about computer users spending “astonishingly huge and increasing amounts of time updating software patches, visiting help desks, and performing other frustratingly tedious, ubiquitous tasks” he is mistaking a contingent property of Microsoft Windows for an essential property of software. Users of Linux and MacOS know that it doesn’t have to be this way.

I’m a Linux fan myself, and I experience orders of magnitude less software pain than my Windows-using neighbors. And I observe that MacOS users experience significantly less pain than I do, if at significant cost in lost flexibility and options. These successes show that good user interfaces and robust software are not unattainable mysteries; they’re just engineering problems, albeit rather difficult ones. Thus, we should be wary of drawing larger conclusions from Microsoft’s incompetence.

We should also be wary of drawing too hard a distinction between antigoras and agoras. Human beings being what they are, they subvert antigoras to their own purposes and frequently turn them into agoras. Because I helped invent it, I know that the open-source culture Jaron uses as an example agora didn’t arise out of a vacuum; the space Linux needed to grow was wrestled out of vendor antigoras one step at a time over the two decades before 1991.

But the blurriness of the boundary between agoras and antigoras isn’t just a contingent historical fact. When Jaron talks about the “gift economy” of agoras, he’s using concepts and terminology that I introduced into the discourse of the Internet around 1999. He seems not to have noticed, unfortunately, how my analysis also shows that his “antigoras” are actually reputational agoras (Michael Goldhaber and others have since popularized this idea under the rubric of the “attention economy”).

In the other direction, agoras morph into antigoras when they need capital concentrations to keep functioning; one good example of this is IMDB. Wikipedia may be beginning a similar transition right now. This isn’t to be feared, it’s just an adaptive response — nobody, after all, is actually forced to “slave” in an antigora. I think one of the consequences of communications costs being driven towards zero is that social organizations are more likely to undergo such phase changes during their lifetimes.

Thus, what Jaron writes up as “farce” I think is a real and sober possibility, and a very hopeful one. When he says, “My argument in brief is that the gift economy aspect is so good that we put up with the slave economy aspect,” I largely agree, but add that this is so mainly because on the Internet it is easier to go from “slave” to participant in a gift exchange than Jaron admits—or, perhaps, allows himself to understand.

Or, perhaps, Jaron does understand, but hasn’t connected his understanding with his questions yet. When he says “The Web is neither an emergent intelligence that transcends humanity,[…] nor a lifeless industrial machine. It is a conduit of expression between people,” he is absolutely right. It’s nice to be able to agree with this formulation, even if Jaron’s rhetorical path to it seems to me to have been riddled with obvious mistakes.

If this were in fact the point of Jaron’s essay, we could stop there in agreement. But the actual point seems to be to maintain an opposition between capitalism and (gift) culture that I think is again mistaken. As I pointed out years ago in Homesteading the Noosphere, gift cultures rely on a hefty wealth surplus to keep them afloat. While there are many ways to concentrate such a surplus (patronage by one tyrant or a group of aristocrats can do it) capitalism is the only way to do it that scales up well. Capitalism is every gift culture’s best hope for sustainability.

Reply to Lanier

Jaron Lanier raises some interesting points–more, in fact, than I can address here. But two aspects of his essay particularly struck me because they jibe so closely with what’s going on in my life right now.

On the tension between the agora and the antigora, his points seem to have a lot to do with some things I’ve observed about the tension between big and small. Those two things aren’t always parallel: the Internet is big, but open, and plenty of clubs are small, but closed. But there seem to be some parallels. In this essay on small and big, I noted that big entities–like eBay–make it possible for lots of small businesses to form. (I expand on this point at considerable length in my forthcoming book, An Army of Davids: How Markets and Technology Empower Ordinary People to Beat Big Media, Big Government, and Other Goliaths). I’m pretty sure that eBay counts as an “antigora”–you have to join, and eBay has the authority to keep you out if it wants–but in fact it functions as something very close to an agora. (Jaron calls it a “semigora,” which actually seems about right).

In the agora of the Internet, issues of trust and communication serve as significant barriers; in the antigora of eBay, those problems are (mostly) addressed. (eBay also offers health insurance to its “power sellers,” and it’s not all that hard to become a power seller; its antigoran nature lets it use its buying power to get them better deals than people could get in the agora on their own.) On the other hand, eBay can exist only because it’s embedded in the larger open space of the Internet agora. And lots of people started using the Internet regularly because it provided access to antigoras, or semigoras, like eBay and Amazon.

This makes me wonder if the semigoras (Jaron’s neologism is already catching on!) might not prove to be very fertile places for innovation and growth on the Internet – a sort of informational tidal basin exploiting the boundary between two different zones. Or perhaps I’ve just fallen into a very 1990s sort of metaphor…

In these respects, I think I agree with Eric S. Raymond, who observed:

“antigoras” are actually reputational agoras (Michael Goldhaber and others have since popularized this idea under the rubric of the “attention economy”).

In the other direction, agoras morph into antigoras when they need capital concentrations to keep functioning; one good example of this is IMDB. Wikipedia may be beginning a similar transition right now. This isn’t to be feared, it’s just an adaptive response–nobody, after all, is actually forced to “slave” in an antigora. I think one of the consequences of communications costs being driven towards zero is that social organizations are more likely to undergo such phase changes during their lifetimes.

Eric’s also right that only a highly productive economy can support a gift economy across large numbers of people, and that echoes a theme that I’ve sounded in An Army of Davids: We may achieve the worker’s paradise, but it will be through the interplay of technology and markets, rather than via the mechanisms favored by 20th-century advocates of socialism.

Jaron also notes another theme that I’ve sounded: The empowerment of ordinary people is a good thing, but it also carries with it the dangers inherent in empowering bad people. In a world in which individuals have the powers formerly enjoyed by nation-states, an already-shrinking planet can get pretty small.

To me, this is another reason why we should favor space exploration and – more significantly, over the long run – space colonization. (As I wrote a while back, “Stephen Hawking says that humanity won’t survive the next thousand years unless we colonize space. I think that Hawking is an optimist.”) And, it happens, the empowerment of individuals and small groups that we’re seeing elsewhere is also going on here, with significant progress in space technology taking place now that it’s moving out of the hands of a government monopoly. Let’s hope it moves fast enough.

Reply to Lanier

Exactly ten years ago, I wrote a widely circulated manifesto called “A Declaration of the Independence of Cyberspace.” The current conventional wisdom on this admittedly pompous outburst is that my statements about the naturally liberating qualities of this global agora are embarrassingly naïve and that the “weary giants of flesh and steel”–the traditional nation-states and megacorporations – have imposed their wills on it without a lot of trouble.

Aside from ruing the neo-Jeffersonian hifalutination of its rhetoric as well as the techno-utopian “they just don’t get it” arrogance of its style, there is not much of substance in it that I feel compelled to retract now. The Internet continues to be an anti-sovereign social space, endowing billions with capacities for free expression that would have been unthinkable a generation ago.

Of course, it’s a very benign wind that blows no ill, and bad things have also happened as a consequence of the proliferating Internet. In 1996, I was very keen on the power the Internet might convey to the individual against the state. I think I was right about that, but I was, with my usual narcissism, imagining the likes of myself with the digital slingshot of the New David and not, say, Osama bin Laden. I also wasn’t imagining that increased freedom of expression would manifest itself primarily in uncontrollable tsunamis of spam, viruses, and really cheesy porn. One of the weaknesses we libertarians fall prey to is a sunnier view of human nature than our species often deserves.

But Jaron Lanier says that religion is the only peer to the Internet when it comes to inspiring excessive punditry, and there is a reason for that. The way one regards the Internet often is a religion, and there are many sects among us already. The traditional monotheists who created the Industrial Period and the modern nation-state believe, as a matter of religious principle, that their powers remain intact, and the emerging digital pantheists, while chastened by the Dot Bust, the continued dark empire of Microsoft, and the aforementioned spew of ugly human expressions, remain optimistic. As Anaïs Nin once said, “We don’t see things as they are. We see them as we are.”

Even if I’m not entirely pleased with what this “Civilization of Mind” is “thinking” in its contemporary state of development, one can hardly insist that the industrial “powers that were” haven’t been hit with some wrenching challenges. Moreover, I believe that the predictions I made in 1996 will seem more accurate in another 10 years than they do today. Paul Saffo of the Institute for the Future once coined the term “macromyopia” to describe a general human tendency to overestimate the short-term consequences of a profound new technology and underestimate them in the long term. As he pointed out, about half the population has to die off and get out of the way before the real social transformation such discontinuities generate can take place.

Of course, I could be wrong about the overall long-term liberating and equalizing effects of the Internet. If these dreams remain unrealized, I think it is likely that one of the reasons for their failure might be the problem that Jaron has identified in his essay: the antievolutionary nature of our current software model and the natural monopolies that form around it. As Mitch Kapor once observed, “Architecture is politics.” A communications environment will only permit the forms of expression its architecture enables.

Culture, which Jaron correctly identifies as the global ghost in the machine, is a living thing, an ecosystem of thought that, like any other ecosystem, thrives on diversity, hybridization, and free competition. In a healthy ecosystem, everything is interoperable in a sense, and the system has a very broad possibility space to explore. It has to be open, fluid, and error-tolerant. If the monopolies or, to use Jaron’s term, the antigoras that naturally form around the “brittleness” of our current software continue to dominate, they could succeed in creating a cultural domination that is as inimical to innovation as was the Catholic church during the Dark Ages or, more recently, Soviet Communism and the more rigid forms of Islam.

While I lament the gratuitous snottiness of Eric Raymond’s response to Jaron, we are of the same religion when it comes to the virtues of open source as a means of making our virtual spaces more fluid and adaptive. I’m not quite convinced that UNIX weenies ought to become our new philosopher kings; the wise men who designed the architecture of the Internet originally displayed a gift for benign societal design and selfless leadership that was a little surprising for a bunch of guys with unevenly developed social skills.

Mitch also once said that “Inside every working anarchy is an old boy network.” I worry that the old boys in this case may be getting a little too old and that they have not worked out a credible succession model. I pray that the open source community will become the emerging new boy network that replaces them.

Like Jaron, Eric, and Glenn, I am a pretty devout free marketer, but I think that monopolies are as inimical to free markets as the kind of well-intentioned governmental regulatory oppression that I believe the Internet has resisted quite effectively thus far. Some of the digital “antigoras” Jaron fears have been quite successful in preventing the innovations that a truly free market might enable. The “semigoras,” as Jaron calls them, have seemed to be getting it pretty right. eBay enabled huge new opportunities for small business. They understood that, in the era of self-organizing commercial networks, the market is the product.

Nevertheless, it’s in the nature of success to create ever more powerful homeoboxes with which to protect itself. eBay may become as stifling to innovation as Microsoft has been.

Therein lies the unaddressed paradox in Jaron’s argument (which ultimately may be no more than a clever way of re-framing the old debate between the open and the closed): new success inspires creativity. Old success tries to kill it.

To return to the biological model, I would point out that nature is really a long argument between creative chaos and grimly stable order. The evolutionary record displays what Stephen Jay Gould called “punctuated equilibrium.” Successful life-forms dominate for long, boring periods between brief exclamation points of wild experiment. The Cambrian Explosion is a case in point. For a long time, blue-green slime had cornered the market. Then, about 540 million years ago, we see a proliferation of multi-cellular organisms assembling themselves into a surreal range of possibilities before settling down to the major animal phyla that have ruled ever since, though within each of these “platforms,” there have been periodic “standards wars”–like, say, the emergence of the class Mammalia–that covered the earth with many new species in a short time.

Though I am so personally fond of Jaron that I hate to offer any criticism, it strikes me that he has used this opportunity to saddle up his old hobby horse, irritation at the command line, in the service of a new cause. If you’d asked Alan Kay, he might have told you that a great threat to Internet freedom might have been avoided if we’d adopted object-oriented programming straight out of his box at Xerox, just as Eric Raymond bangs a drum I’ve heard before, and Glenn Reynolds even finds in this discussion an opportunity to promote space exploration.

By the same token, I have been sorely tempted in this response to nominate what is still pleased to oxymoronically call itself “intellectual property” as the greatest threat to a generally enlightened future. I’m glad I resisted that impulse, not because I might not have been right, but because it would have been self-indulgent of me.

All in all, I think the revolution is proceeding rather well and, in fact, is about to enter another period as fruitful and messy as the five years that followed the introduction of Mosaic. I still believe that we are engaged in a great work that will truly liberate much of humankind. Call me an optimist, but I can live with that.

The Conversation

Kicking Off the Conversation: Replies to Comments

To those who have posted responses to the essays: What I write here does not acknowledge what you’ve written. Apologies. It’s been an insanely busy week for me and I haven’t had time to read anything but the primary essays.

To David Gelernter

Yes, I should have included Lifestreams as a sterling example of a non-file approach to computing. It is also a fine example of designing computation around human experience. The length of time it takes for ideas like Lifestreams to come into use will be a measure of how bad our legacy problem really is. I hope very much that it will turn out that I was too pessimistic, and that the adoption of ideas like Lifestreams will not be excessively delayed by the crappy software that’s already laid down.

Thank you for stating the obvious about UNIX! For God’s sake, what’s with this Linux worship going on all over the place? The UNIX approach is this old, wretched snapshot of history. Every time some Slashdotter waxes lyrical about its virtues I feel even more strongly that software lock-in has caused idea lock-in.

Regarding specific points of disagreement: I would say that gasoline can be reused if you smash the engine, just as dinosaur residue can be reused even though the dinosaurs and their ecosystems were smashed. Computer code loses its potency in a way that nothing else does.

Regarding “slaves”: The slaves I spoke of are the end users who engage in “tweakage denial.” We pretend to spend less time being humiliated by our machines than is actually spent.

Here are some recent examples, from the last two days, in fact. Two very prominent computer scientist friends, both founders of major computer science departments, deleted a half hour from their lives trying to figure out how to read email from an academic account on the East Coast. These guys have done this a lot, almost certainly more than you, dear reader, whoever you are, but the details keep changing. Then I got personal email not meant for my eyes, having nothing to do with me, forwarded to me by mistake. This was due to a server bug in the computer of an extremely famous technologist who has publicly disputed my contention that software doesn’t improve the way hardware does. I won’t use the names here, but you know who you are. Another example: Earlier this week I had dinner with the chief technologist of one of the biggest tech companies, and he was locked in combat for much of the evening with his phone/PDA gadget, trying to pry some data out of it.

During the same period of time, your humble author was plagued by an apparent side effect of a software/security update from Apple, which screwed up my music editing system. I zapped the PRAM, reset the PMU, fixed permissions (damned UNIX!), reinstalled drivers, initialized the Open Firmware (FORTH!), and now I can only say I think I fixed it. Any of you doing sound editing on OS X will recognize these terms. They mean absolutely nothing but time wasted on tangled software despite what I am certain are good intentions from the folks over at Apple.

No one wants to admit how much time one spends on mandatory digital tweakage, so we deny it. There’s something, well, unmanly about admitting you sunk a bunch of time on this kind of idiocy. So we pretend it’s only the other people who waste time this way. Go ahead, pretend you don’t do it, but you do. We all do, and we are the “slaves” while we tweak, no matter how much we deny it. (An acknowledged software engineer is not a slave at all, of course.)

To John Perry Barlow

The hobbyhorse I insist on riding, or beating, or whatever it is one does with that odd metaphor, is not so much my unhappiness with the command line, or the file, or any other particular thing, but the loss of self-awareness that seems to be taking place. If architecture is politics, then we ought to take notice when our ideas are locked in by software.

Whether these particular ideas are the important ones or not is another question. I am annoyed by the command line, it is true; but what I wish to point out is that the underlying process by which it lives or dies is different from those of other ideas or designs, because of the exaggerated legacy effect of software lock-in. When we fight over the design of digital infrastructure, the fight is extra-important because the cost to reverse a mistake rises over time to infinity.

You worry about intellectual property. Intellectual property as defined by law is tolerable. We know that because we’re all still here even though it’s been around. Intellectual property as it might someday be implemented in a network of rigid software structures would be awful and perhaps intolerable.

Hey, I’m still utopian; this essay just happened to be about other stuff. I don’t like talking about an emergent mind of mankind, because I don’t know how. But I do love talking about how the varieties of connection between people will expand. I have a whole rap about “Post-symbolic communication” that I’ve been presenting for more than twenty years, and that I still get excited about. (Briefly, this means that someday you’ll be able to improvise every aspect of a shared virtual world with other people, so you can make stuff instead of using symbols to talk about stuff. It will be a shared, waking-state, intentional dream.) Yes, I believe in a wonderful future filled with more adventure, challenge, and beauty than people can take in. And I believe technology will be an essential, enabling part of it.

So I join you in utopian reverie, but I’d like to make a comment about timeframe. I’m delighted by people who believe in the coming of the Messiah, the Apocalypse, Ragnarok, the emergent mind of the Net, or post-symbolic communication, but only so long as they don’t expect any of these things very soon. When someone expects any of that stuff in our lifetimes, then you usually have a problem. People get nutty, extremist. This is how I feel about some of the Singularity believers, for instance.

Just because I think we have a long haul ahead of us to fix digital systems, maybe to be measured in centuries, please don’t worry that I’ve lost my utopian cred.

To Glenn Reynolds

Good point about space travel. Actually my private theory about how we overcome the software legacy problem in the far future is that information systems which are spread out over deep space become desynchronized because of the long latencies involved. Finally, some of them untangle themselves from MIDI, Windows, command lines, and so on. Distance will not only provide survivability, but a level of diversity that isn’t found in mere “Globalization.”

To Eric Raymond

Raymond claims my term ‘Antigora’ is just a replay of one of his terms, but then uses my term differently than I do! Such sensitivity about whether his accomplishments are recognized! It doesn’t detract from Raymond’s reputation that other people also have thoughts.

(He seems to be accusing me of ripping off his rants. If it matters, I was ranting about this stuff noisily from around 1980. My first rant about the problems of UNIX and the command line was in 1978. Many of the ideas in the essay appeared earlier, in places like this.)

I’ll respond to a couple of specific points:

The frequency of unviable fetuses is irrelevant to the argument. There are indeed deadly mutations, but the point is that there are so many survivable ones that are similar to one another. There’s a signal-to-noise ratio for the feedback loop of any adaptive system—just like for any signal. Biological systems have enough signal. This comes about because there’s a modicum of smoothness in the system. Small changes in causes create small changes in results often enough to allow adaptation. Simple, right? The claim on the table is that software as we know it does not have enough signal in the noise, because small changes lead to big changes too often.
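
A toy illustration of this point (a sketch only, not anything from either essay; the two fitness functions below are invented purely for demonstration), in Python: a hill-climber that makes one small mutation at a time adapts steadily on a smooth, biology-like landscape, where small changes in the number produce small changes in fitness, and gets nowhere on a brittle, software-like landscape, where nearly every small change is a crash that carries no usable signal.

import random

def smooth_fitness(x):
    # Biology-like landscape: a small change in x produces a small change in fitness.
    return -abs(x - 100)

def brittle_fitness(x):
    # Software-like landscape: everything "crashes" (same terrible score)
    # except the one exactly-right value, so mutations carry no gradient.
    return 0 if x == 100 else -1000

def hill_climb(fitness, start=0, steps=2000):
    best = start
    for _ in range(steps):
        candidate = best + random.choice([-1, 1])   # one small mutation
        if fitness(candidate) >= fitness(best):     # keep it if it doesn't hurt
            best = candidate
    return best

random.seed(0)
print(hill_climb(smooth_fitness))   # climbs to 100: enough signal to adapt
print(hill_climb(brittle_fitness))  # typically just wanders: crashes teach it nothing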

While I love the free market system, and could not imagine that a weirdo like me would thrive in any other type of society, I have little sympathy for Panglossian free market fanaticism. Raymond’s analysis of seemingly every event that has occurred is that it was the result of competition in a free market, and that therefore the outcome was for the best.

He applies his one-size-fits-all theory to TCP. Wow. I remember when TCP/IP had rivals. The process by which it won was not purely libertarian. In the interest of defending the reputation of one of the few useful politicians we’ve had in our arena, I must remind everyone that there was this thing called the Gore Bill. Government stepped in to help bring order to our online communities for the mutual good. Of course there were many factors aside from that, but this idea that TCP/IP was out there in a price war is a poor telling of history. The point Raymond raises about how monopolies can die from overreach is sound, but in Microsoft’s case, I fear the code will survive the company’s fate, whatever it is.

Next up for Panglossian treatment is what I call “leaky” systems. He says the reason we don’t have them is that they lost out in the marketplace. That must be why book authors are suing Google and Sony has become a malicious PC cracker. As if someone had offered a solution and everyone decided it would be cheaper to suffer. In truth, a solution has not yet appeared. One of the reasons is that it will be technically difficult to create one.

Regarding the holy relics of the file and the command line: My point wasn’t about the text aspect of the command line, but about timing and interactivity, which are different issues entirely, with cognitive implications, and also not in any sense uniquely connected to virtual reality. You can’t play a violin with a command line. You also wouldn’t be advised to operate UNIX with a violin.

See my thoughts on the problems VR must still overcome. Motion sickness is probably well enough understood for prime-time, but there are plenty of other challenges. The point is that video game interactions are based on discrete interaction events instead of continuous engagement, in the way that sports, sex, music, language, eating, and all non-computer activities are continuous. I hope to see more profound levels of interaction in future games. Whoever tries to make that happen will have to battle against a bunch of entrenched code.
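
To make the distinction concrete, here is a minimal sketch (mine, with hypothetical names and mappings, not drawn from any real game or instrument) in Python. A discrete interface only hears about isolated events; a continuous one samples an analog gesture every frame, and its output varies smoothly with that gesture, the way a bowed string does.

# Discrete interaction: the program only hears about isolated events.
def handle_event(event):
    if event == "JUMP_PRESSED":
        return "character jumps"
    return "nothing happens"

# Continuous interaction: the program samples an analog gesture every frame,
# and the output varies smoothly with it, the way a bowed string does.
def render_frame(bow_pressure, bow_velocity):
    loudness = bow_pressure * abs(bow_velocity)        # hypothetical mapping
    brightness = bow_velocity / (1 + abs(bow_velocity))
    return loudness, brightness

print(handle_event("JUMP_PRESSED"))
for frame in range(5):                                 # five frames of one evolving gesture
    print(render_frame(0.5 + 0.1 * frame, 0.2 * frame))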

I hope one of the multiple-universe hypotheses is true and there’s an alternate universe out there where some of the most ambitious, weird, early computer science projects actually happened. In one of those universes, Ted Nelson’s Xanadu as imagined early on is streaming nodes, as well as my old dream of the Mandala operating system from 25 years ago—and Jef Raskin is making minimalist user interfaces that are widely used. I am confident that in all those universes all of us are arguing all the time. Raymond sees files as the only possible idea, but he is a creature of this universe, this narrow little place of ours that we’ve made a bit narrower.

About Jef Raskin: he was doing text-based work all along. He was never the grand old man of the GUI, but of UI minimalism. If there ever was a grand old man of graphics UI, maybe he was Ivan Sutherland (still youthful in every way that counts and productive, by the way) or Alan Kay (same). For those who don’t know, Jef passed away recently and he is sorely missed.

Raymond claims that what I dreamed up as a “farce” is to him a “hopeful” possibility. What I wrote up as farce was people abandoning the species to become Gods because of Wal-Mart.

I am curious whether the cultural gulf between Raymond and me is so profound that my words are impenetrable to him, or whether he just skimmed and made assumptions about what I said.

The main point for me is noticing the warmth and generosity of what’s happened with the net. The problem with capitalism is that it works too well and can distract people from noticing beautiful things. If you think of absolutely everything in creation, or even just in human affairs, as capitalism-in-action, then you live in an impoverished universe of your own reduction. I point out that the same is true for the spectacular economic successes of the new economy.

Be Clear. Be Crisp. Be Concise.

Jaron,

I cheerfully affirm that the ideas expressed in your essay are your own. With one or two important exceptions that I think I’ve already specified, I wouldn’t want any part of them. Nor did I suspect you for a moment of “ripping off my rants.” You are perfectly entitled to use the concepts of “gift culture” and “agora” in your intellectual toolkit without bowing in my direction; I, after all, shamelessly appropriated the former from cultural anthropology and the latter from agorist libertarianism. I’ve never claimed a patent on either, and I’m not going to start now.

Nor am I going to defend myself against your wild and (at least to me) amusing flings about “Panglossian free-market fanaticism” etcetera, because, as I understand it, this conversation is supposed to be about your ideas rather than mine. My role, as I understand it, is to call you on factual errors, to prod you into thinking more sharply and expressing yourself more clearly.

Therefore, I charge you with writing in a sufficiently confusing and vague way that I had to guess at what you meant by an “antigora.” If you don’t want me and others to use the term in ways other than you intend, you’d best be a lot clearer about what you actually do intend.

So tell us what “antigoras” and “semigoras” are, please. Try to do it without divagating all over the lot into biology and economics—your grasp on these fields seems to me sufficiently weak that the analogies you attempt don’t help you at all. Try to stick to the observable behaviors and communications patterns of “antigoras” and “semigoras.” What are people in these forms of social organization actually doing? How does it differ from what people in agoras are doing?

I meant it quite seriously when I opined that the concepts may have considerable value. I’m challenging you to discard the muddle that surrounded them in your original essay. Be clear. Be crisp. Be concise.

Keeping it Cozy

I think that everyone would like to keep a sense of community. Jaron writes:

The main point for me is noticing the warmth and generosity of what’s happened with the net.

But then he follows it with this:

The problem with capitalism is that it works too well and can distract people from noticing beautiful things. If you think of absolutely everything in creation, or even just in human affairs, as capitalism-in-action, then you live in an impoverished universe of your own reduction. I point out that the same is true for the spectacular economic successes of the new economy.

But most of the “warmth and generosity” on the net was made possible by capitalism. Those fiber lines didn’t just build themselves, and neither did the better servers and computers that allow things like podcasting, videoblogging, file-sharing, etc. (By way of contrast, how much beauty is there in the Internet’s decidedly non-capitalist predecessor, the post office? Not very much.)

As I’ve noted elsewhere, capitalism often fosters community, even while decriers decry it.

I Love Capitalism. Really!

How much love do I have to declare for capitalism before I can point out, without being pounced on by libertarians, that it isn’t the only active or worthy principle in human affairs? I love capitalism this much! (He stretches his arms out wide.) As I’ve said in this discussion and elsewhere, I can’t imagine that I, for one, would survive, much less thrive, in any other system. The free market is triumphant, although it faces interesting challenges in the coming years as many countries age, the energy cycle is forced to shift, and the biggest capitalist economy might still be run by a communist party. Even so, capitalism has earned its stripes well enough not to need defending at every turn.

To respond in a little bit more detail, I agree with Reynolds that capitalism in the broadest sense is at the very least an enabler of community. Actually, “community” is one of those words I try to avoid. Out here in the Bay Area everything is about community. The corner pizza joint is a collective that serves the community. Does that phrase mean anything? Why do they keep on saying it? So, as one member of the punditry community to another, I declare that the rise of the pre-business web I was talking about wasn’t an example of community, exactly. The people putting up web sites didn’t do much to build trust, make commitments, get to know one another, or do any of the other things that might distinguish a community from other groupings of people. That came later. My claim is that the initial big push was driven by the joy of volunteerism, a bit of braggadocio, the sense that the web was a good idea, and a kind of optimism that didn’t yet understand a profit motive.

Once again, that’s not a criticism of capitalism. Why can’t you libertarians revel in how well capitalism works? How much success do you need to feel assured? My own view is that capitalism’s future will be brighter if people learn to think of it as a great tool rather than as a universal life philosophy.

On Capitalism and Jaron’s Views on Capitalism

I consider myself a conservative Republican; I am in fact a Bush administration appointee, in a small-potatoes way. (I’m on the board of the National Endowment for the Arts, and was confirmed by Congress.) But capitalism strikes me as the spoiled brat of the political and philosophical universe.

I strongly agree with Jaron: people don’t need to declare their loyalty to capitalism every time they open their mouths. Everyone knows about capitalism’s successes; we need to spare a little attention for its failures too. The US university is one big one. Everyone knows that elite US universities occupy the wacko left of the ideological spectrum. Because they run the ed schools, they’ve gradually turned the public schools into wacko-left institutions also, where children learn every day about all the awful things (and none of the good ones) the US, western civilization, and white men in general have foisted off on the world.

Why does it work this way? In part because humanities and social science professors are paid approximately nothing. They’ve always earned less than their accountants, but nowadays I’ll bet they make an order of magnitude less. (Science profs are underpaid too, but at least we have consulting opportunities, etc.) Why shouldn’t U.S. humanities professors hate this country and hate capitalism when their mediocre-ist students routinely get rich while their professors can’t even pay their damned bills? Do we really think this is a clever way to run a country—to pay the people who have maximum influence on the attitudes of young people so little that they’re bound to be resentful and angry? Nowadays, colleges that have managed their portfolios well are swimming in money and are putting up new buildings right and left. How much of that filters down to the faculty? Zero.

And why do we want to be a nation that worships rich people anyway? Conspicuous consumption used to be bad taste. Unfortunately taste has been abolished. And students have never been so obsessed with money, and so indifferent to spiritual things. It’s not the tech industry’s fault. But the next time a multi-billionaire tech bigshot tells me how wonderful capitalism is, I’m going to throw up. Obviously they think it’s wonderful. But there’s more to life. Jaron is one of the few top technologists I know who makes an attempt to speak about the “more.”

Sticking Up For Capitalism

Dr. Gelernter, I’m a long-time fan of your writing. I normally find it crisp, incisive, and refreshingly free of either the left- or right-wing varieties of political correctness. You’ve been on my short-list of computer scientists I most admire, and have hoped to meet personally someday, for many years.

Against that background, I have to say you have disappointed me dreadfully here. Your critique of capitalism sounds uncannily like special pleading on behalf of academics, a group of which you just happen to be a member.

No real-world market is perfect. But market failure is only grounds to deprecate markets when we have reason to believe non-market allocation mechanisms can do better. Otherwise, the only aim and effect of the deprecation can be to replace an imperfect market with something worse. (Usually the “something worse” is a committee of bureaucrats.)

Can you propose a non-market allocation mechanism that would rescue academia from its present disgraceful state? Good luck with that. F. A. Hayek and David D. Friedman, among others, have shown that even a bureaucrat-god with perfect information and infinite computational capacity cannot outperform market allocation through price signals (the most accessible proof I know of this is in Friedman’s Price Theory, which I recommend).

More generally, the political culture of the West is only now beginning to recover from the memetic damage done to it from 1920 on by Soviet propaganda and Soviet agents of influence (see, for example, Stephen Koch’s Double Lives: Stalin, Willi Munzenberg and the Seduction of the Intellectuals). This memetic attack followed the prescriptions of Antonio Gramsci and other Marxist theoreticians, and was determinedly (even brilliantly) executed for over fifty years. Part of the resulting damage is manifest in what you aptly describe as the pervasive “wacko leftism” of the academic/educational world.

Where you see in humanities and social-science academia resentful victims of a system that fails to reward them properly, I see an academic establishment that swallowed not just Stalin’s bait but the hook and the line and the sinker as well (multiculturalism, postmodernism, and “world system” theory), and in doing so rendered itself largely incapable of teaching anything of value. Their economic troubles are not the cause of their political fecklessness, but its completely inevitable consequence.

I’ve said this before in public, and I will again. I think my own fumbling efforts at descriptive sociology and anthropology have earned a better press than they probably deserve, because the standards of scholarship in those fields are now so desperately bad that an outsider/amateur like me who applies even minimal rigor looks brilliant by comparison. My modest success is the flip side of the Sokal hoax, and both are less testimony to the cleverness of their authors than to the degree that the academic background of our work has become an intellectually impoverished wasteland.

You complain that nobody wants to pay decent salaries to humanities academics as if this is market failure. I think it is the market mercilessly assessing the actual value of what they teach. If anything, they’re paid far too much, and insulated from the hard choices independent scholars like me have to make.

Against this background, I’d say yes—public intellectuals (especially academics) do need to declare their loyalty to capitalism every time they open their mouths. This will continue to be necessary until the academy fully recovers from the effects of Gramscian subversion. And, by consequence, earns a decent average salary again.

Want some bright-line tests that are less starkly economic? When literary theorists are able to stop obsessing about “power relationships” and “alienation”; when sociology and anthropology abandon the stifling Durkheimian dogma of tabula rasa; when Middle-Eastern studies departments get out from under the dead hand of Edward Said and “post-colonialism”—then, maybe, declaring our loyalty to capitalism will stop being necessary therapy for a sick academia.

I’m at a complete loss to understand why a self-described “conservative Republican” (particularly one of your intelligence) should make excuses for the academic apparat.

The Road to Professorial Liberation

Eric, Regarding long-time-fandom, thanks very much and the feeling is mutual. But you haven’t described my views accurately.

I’m not pleading on behalf of academics generally; rather, on behalf of humanities and social science academics, a group of which I am not a member. As I pointed out, professors in the sciences have access to other sources of income that are generally closed to humanities scholars. Your analysis of the state of US faculties is interesting but wrong; you’ll find that the US intelligentsia had clearly formed left-wing views long before 1920. Teddy Roosevelt complained about them when he was president (and he pointed out the absurdity of pacifist condemnations of, e.g., the famous British victory against the Mahdists at Omdurman). What happened to the universities is simple: after World War II, they were taken over by intellectuals. Before that point, Yale, Harvard, Princeton et al. had been run by social (not intellectual) bigshots. Once the intellectuals took over, the political future was clear. The main determining event was the way US intellectuals turned Vietnam into a pseudo-American First World War. They didn’t want to be cheated out of the cathartic experience Euro-intellectuals enjoyed after 1918; therefore they made the absurd claim that Vietnam, like the First World War in the Keynesian view, was a doomed and pointless exercise in bloodletting.

But, anyway, your claim that English professors are merely getting what they’re worth is clearly wrong. A Harvard education is worth a lot. Not because the students learn anything much (not much that’s true, at any rate), but because a Harvard degree is clearly worth money in the job market (or is believed to be, which amounts to the same thing). Therefore students or their parents (or US taxpayers) are willing to pay huge sums to Harvard in exchange for Harvard degrees. The question is, what happens to that money? The same thing that used to happen to ticket receipts at ball games before the players got smart. It’s not that a Harvard English professor is worth what a right fielder (or whatever) makes. Just that he’s worth a lot more than he gets today.

The solution is obvious: a free market in education; a market controlled by the producers (namely the professors) and not the institutions. The road from here to there isn’t trivial, but one way or other that’s where we’re going.

Capitalism and More

One must, I think, move in fairly rarefied libertarian circles to think that capitalism is over-defended. I also think that pleas of poverty on behalf of academics are overstated. Academics make less than people who make a lot, but they make more than most Americans, for work that is pleasant, interesting, and largely free from annoying, Dilbertesque crap.

That said, I think it’s worth looking at where we’re going instead of arguing about capitalism, something that seems unnecessary when blogging under the aegis of Cato.

Indeed, one of the themes of my forthcoming book, An Army of Davids, is that technology and markets are blurring the old distinctions: We’re likely to achieve worker control of the means of production not because of anti-capitalists, but because capitalism has made many tools so cheap that anyone can afford them. Right now we’re seeing that effect mostly in areas dominated by information — music, journalism, video, etc. — but as Neil Gershenfeld notes, we’re heading toward a revolution in personal manufacturing, too. (Developments like nanotechnology are likely to accelerate that.)

The Internet will accelerate this change, of course, as it has accelerated the earlier ones. With rapid collaboration and near-instant prototyping, we’ll see learning curves grow much steeper, with global wealth accelerating dramatically. Unless, that is, efforts to channelize the Internet, or tools for censorship developed by U.S. corporations in cooperation with the Chinese government come into wider play.

Indeed, this whole discussion reminds me of this statement:

Throughout history, poverty is the normal condition of man. Advances which permit this norm to be exceeded—here and there, now and then—are the work of an extremely small minority, frequently despised, often condemned, and almost always opposed by all right-thinking people. Whenever this tiny minority is kept from creating, or (as sometimes happens) is driven out of a society, the people then slip back into abject poverty.

This is known as “bad luck.”

With the Internet explosion, the minority—though still a minority—is no longer “tiny.” This may turn out to be very significant. Or I may turn out to be too optimistic.

The Empty Future

Here’s where the Net is going, as far as I can see. The world is moving to an “Empty Computer” model of computation. In the Empty Computer world, all my digital assets (all my docs, apps, images, videos, soundtracks, mail messages, etc.) are stored in my personal data structure, afloat in the Cybersphere. I can access my personal structure from any net-connected computer in the world (obviously, modulo security checks).

Today’s computing model is a dead end, and we’re near the end of this road. People have more and more computers and quasi-computers (cell phones, iPods, etc.) in their lives. For many people, the management problem was already verging on impossible five years ago. It’s increasingly hard to remember where a file (or the latest version of a file) is stored. And, as desktop PCs get cheaper and more capable, installing a new one becomes more and more of a pain. Whatever method I choose, it takes far too long to move all the stuff I need to the new machine. And even though storage is dirt cheap, I always wind up leaving lots of stuff behind, because it’s too much of a nuisance to move it all. (Over the last few months there have been several announcements of new apps designed to synch up your files over all your various machines. One thing we know for sure: these are going nowhere. No one is going to buy them. This is exactly the kind of systems app most people hate to have anything to do with.)

Under the Empty Computer model, I buy a new computer, plug it in and my whole digital world is available on the machine as soon as I connect to the net. I can smash up my old machine with a sledgehammer and feed it to the dog; it doesn’t matter. All my digital assets are afloat in my personal data structure on the net, available to me automatically on every computer everywhere–on computers in phonebooths, supermarkets, planes, airports, classrooms, my office, etc. I log on and identify myself and there’s my stuff. Computers become viewing devices for tuning in personal data structures (which are floating out there in the cyber-cosmos like Venus).

The Net will provide distributed, reliable, fungible storage for these floating personal data structures. What will the structures look like? I claim they’ll look like “lifestreams,” the electronic timeline-journals we first implemented in the mid ’90s (everything fully indexed; with a past, present and future; the stream begins with your birth certificate). Lifestreams have various good characteristics, and people can learn how to work them in three minutes or less. (No one is willing to spend more nowadays.) But, whether or not the all-inclusive personal structure of choice turns out to be a lifestream or something else, the Empty Computer model is the way we’re going. And the Net’s future is to get us there.
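
To make the idea tangible, here is a minimal sketch of the kind of structure described above, in Python. It assumes nothing about the actual Lifestreams implementation; every name, field, and date below is hypothetical. The point it illustrates is just the shape of the thing: a single time-ordered stream holding every kind of item, searchable, with a past, present, and future, which any net-connected “empty computer” would simply fetch and view.

from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class Item:
    timestamp: datetime      # every digital asset lives on one timeline
    kind: str                # "doc", "mail", "image", "appointment", ...
    content: str

@dataclass
class Lifestream:
    items: List[Item] = field(default_factory=list)

    def add(self, item: Item):
        self.items.append(item)
        self.items.sort(key=lambda i: i.timestamp)   # the stream stays time-ordered

    def search(self, text: str):
        # "everything fully indexed" -- a naive scan stands in for a real index here
        return [i for i in self.items if text.lower() in i.content.lower()]

    def future(self, now: datetime):
        # the stream has a future, too: appointments, reminders, plans
        return [i for i in self.items if i.timestamp > now]

# Any net-connected "empty computer" would just fetch this structure and view it.
stream = Lifestream()
stream.add(Item(datetime(1950, 1, 1), "doc", "birth certificate"))        # hypothetical dates
stream.add(Item(datetime(2006, 1, 20), "mail", "re: Internet liberation"))
stream.add(Item(datetime(2030, 1, 1), "appointment", "lunch with the grandchildren"))
print(stream.search("liberation"))
print(stream.future(datetime(2006, 2, 1)))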

I Still Believe

You know, I would love to join this discussion in some useful way, particularly since the other designated contributors are all—each in his own way—heroes of mine and, at the least, a group I would love to go to dinner with.

However, starting with Jaron’s essay, and proceeding serenely from that point, we have failed to address directly the topic at hand. This was, unless I’m mistaken: Is the Internet a liberating technological force? Did the “utopian” promise of Cyberspace—as blared by Wired and other cyberoptimists like Nicholas Negroponte and myself back in those foolish ’90s—become realized in any useful way, or have the “weary giants of flesh and steel” asserted their authority over this new social space without much inconvenience?

Or, to restate the original questions as stated by Cato:

After a solid decade of intense commercial development, much go-go nineties prophesying now seems a triumph of Utopian hope over hard reality. Does hope of Internet liberation yet remain? Or has the bright promise of the Internet been dimmed by corporate influence and government regulation? Are ideas like virtual citizenship beyond the nation-state, untraceable electronic currency, and the consciousness expanding powers of radical interconnectivity defunct? Is there untapped revolutionary power waiting to be unleashed?

I gave a shot at taking up these matters, but my fellows have mysteriously decided to ignore them, except by tangential reference, and quibble instead about the relative virtues of capitalism, whether UNIX and its command-line interface are stultifying to the creation of online community, whether the concept of the “file” is inhibiting, who said what wise thing first and in what way, whether or not manned space travel is a good idea, and various other pronouncements that seem, at best, orthogonal to the questions at issue.

Gentlemen, you disappoint me.

I guess that part of growing up, which I’m still trying to do at an advanced age, is accepting the reality that one’s heroes are human, but it strikes me that you fellows are being astonishingly self-indulgent. Coming from me, this is quite a charge, since, as a self-confessed narcissist, I don’t even defend myself when people say I think it’s all about me.

I guess we’ve run out of time, but to the extent we haven’t, might I encourage you to address one question? I want to know whether you think that the Internet is a liberating phenomenon. I still do.

I Believe Too

John Perry Barlow asks:

I guess we’ve run out of time, but to the extent we haven’t, might I encourage you to address one question? I want to know whether you think that the Internet is a liberating phenomenon. I still do.

I absolutely do. And there’s no better evidence than that dictators continue to fear it, as demonstrated by this rather sad story about Google capitulating to Chinese censorship demands. But regardless of the (perhaps predictable) tendency of big corporations to sell out, I think that the liberating power of the Internet exceeds the ability of the Chinese government, and its corporate collaborators, to snuff it, and I think we’ll see that demonstrated quite clearly before the decade is out.

Academia and the Internet: Rising From the Stalinist Ashes Like the University of Phoenix

Dr. Gelernter, I think your account of Western academia’s failure and mine are different angles on the same story.

(Bear with us, folks, this will get back to Internet liberation; at the end of this rant I’ll explain how the Internet may turn out to be the lever to force constructive change in academia.)

In fact, it was partly your musings on the post-WWII failure of nerve by the U.S.’s WASP elite that started me doing the thinking and research that led to my current understanding of Stalin’s meme war. You’ve fluttered some dovecotes by observing that the replacement of WASPs by Jewish intellectuals was a leading indicator of some major negative trends in postwar American culture; what you didn’t note (an omission which surprised me) was that the real wreckers in the new elite weren’t a random selection of Jews; they were red-diaper babies from the same Ashkenazic family dynasties of the Old Left that had produced the Rosenbergs. These were Stalin’s most willing tools.

(I’m sure Dr. Gelernter knows better than to conclude from the previous paragraph that I’m anti-Semitic, but third parties reading this should know that I believe Jews found their way to central roles in Communism for the same reason they have been disproportionately important in every other reform, revolutionary movement, and conspiracy of the last three centuries; to wit, on average they’re a standard deviation brighter than Gentiles. Talent will out, even if it does so in horrible ways. It’s hardly the Jews’ fault that Gentiles are, comparatively and Gaussianly speaking, dumber.)

As you say, vapid leftism was already a common problem among intellectuals before 1920; which is just to notice that Stalin’s seduction of the Western intelligentsia built on earlier exertions by the Fabian Society and organizations going back to the First International in the 1860s.

What changed under the Soviets is that Gramscian seduction of the intelligentsia became not merely an instrument of old-fashioned Marxist evangelism but a conscious and principal weapon for corrupting and destroying Western institutions, one that actually substituted for warfare by other means. No secret was made of this; it’s right there in Party doctrine. CPUSA even went so far as to tell members to promote non-representational art so that public spaces would become ugly. (No, I’m not joking.)

One index of the difference this shift made is this: before it, Marxists (led by Marx himself) actually promoted the development of industrial capitalism in the Third World because in theory it is a necessary precondition for the next stage of the dialectic. After it, “world system” economics a la Baran opposed Third World capitalism in order to damage the economic network of the “main adversary,” the U.S.

We know much more today about Stalin’s meme war than we did before the Soviet Union fell, because some of the immediate successors of the original architects of the program are still alive and are talking. Koch’s book is good on the subject; so is In Denial: Historians, Communism and Espionage by Haynes and Klehr. These revelations have unsettling consequences even for lifelong anti-Communists like myself; I’m still trying to get over the shock of learning that Dorothy Parker was a Stalinist agent of influence!

I’m not a conservative, have never been a conservative, and don’t ever expect to become a conservative. So it spooks me how accurate all those old-time McCarthyite rants about Communist subversion turned out to be, now that we have the Venona transcripts and ex-KGB generals telling all to historians. Back in the ’60s and ’70s I thought I was as hard-core an anti-totalitarian as an American boy could be, but even I bought some of the obscuring smoke that the anti-anticommunist “liberals” were peddling. For you, a self-described “conservative Republican”, accepting the full magnitude and insidious character of the meme war should be easier.

One of the things we’ve learned from ex-Soviet informants is that academia, the press, and the entertainment industry were regarded not merely as high-value targets, but targets which by around 1980 they believed they had largely succeeded in subverting or neutralizing. They had good reason for this belief—our humiliating retreat from Vietnam, brought about by adroit and successful manipulation of domestic U.S. politics conducted through American proxies.

So: we know the Soviets aimed to apply Gramscian subversion as a war weapon against the West, we know they believed themselves to have succeeded in significant ways, and the dominant cultures of the entertainment industry, the press, and academia behave today precisely as we would expect if they had succeeded in those ways (that is, they sneer at traditional values and patriotism and exhibit pervasive left-wing and anti-American bias). Still think my analysis of academia’s decline is so wrong?

Getting back to the thread topic, otherwise intelligent people like Jaron Lanier still screw up their thinking about technology and capitalism by obsessing about “unfairness to the poor” in exactly the same wrongheaded ways the KGB found so useful in 1950, even though today’s “poor” are overweight from having too much to eat and own cars and air-conditioners and cellphones. Few things are more pathetic than Marxist cant that doesn’t know itself to be Marxist cant, but we hear it constantly—and that is Stalin’s victory, toxic memes successfully poisoning our discourse long after the despotism that spawned them has died.

(To be evenhanded, I should say that I find conservatives far from blameless in all this. Some of them got that there was a memetic war going on, but almost all settled for being peevish reactionary critics in a permanent defensive crouch. Far too many cuddled up to racists and religious absolutists. That’s how conservatism lost the young, and deserved to, until the Left became so absurd that the kids started to see through the bullshit on their own.)

On the other hand, I concede you have a point about the market value of a Harvard education. I don’t think that point generalizes outside of the Ivy League, though—about all an English degree from a second- or third-rate school qualifies anybody for these days is a job as a barista at Starbucks.

You should, therefore, consider the possibility that humanities faculty at second- or third-rate schools are in fact getting paid what their teaching is worth. It’s the Harvard/Yale/Princeton case (golden-ticket degree, but the junior professors still eat crap) that needs explaining. And I think that one is simple—there’s a surplus of would-be academics, so universities act as price seekers and wages even at the elite schools get pounded down to the same levels as those at East Nowhere U.

You say:

The solution is obvious: a free market in education; a market controlled by the producers (namely the professors) and not the institutions.

I agree completely. We programmers are seizing control of our craft back from management (nod at the ironic parallel with Marxism in passing); why shouldn’t educators do likewise?

Actually, there are good reasons why, up to now, educators have been unable to do that. And this is where the Internet comes back into the conversation. Because the thing that pinned programmers to big ugly stupid secretive management structures is the same thing that gives big ugly stupid educational institutions the whip hand over junior professors—the fixed-capital costs of the tools of the trade.

The open-source movement wasn’t possible when programming required a million-dollar mainframe. Million-dollar mainframes require big capital concentrations, which require lots of managers to run ‘em. When the PC and the Internet arrived, computation and communication costs plummeted towards zero. The need for big capital concentrations to support software development almost (though not entirely) vanished. An increase in the relative power of programmers followed as the night the day.

University campuses, school buildings, laboratories—these are academia’s equivalent of the million-dollar mainframe. We probably can’t disaggregate campuses entirely (time-shares in a cyclotron, anyone?) but to the extent the Internet helps us break apart these institutional lumps and make a more fluid market, the actual human producers will regain power over their craft.

I think adult-education schools like the University of Phoenix are leading the way here—finding creative ways to use technology and distance learning, renting space and changing curricula in rapid response to customer demand. The explosion in homeschooling at the K-12 level is worth notice, too.

Under market pressure, I think we can expect valuable education to get divorced from valueless political cant pretty rapidly. Simply ending the cross-subsidization of vacuous victim-studies departments by business and engineering schools would be a huge improvement.

But you’re not going to get to your free market in education by conceding the authoritarian/statist case that “we need to spare a little attention for [the market’s] failures too.” That, Dr. Gelernter, is called “abandoning the moral high ground to the enemy”. It’s Stalin’s toxic memes messing with your head. And it’s a mistake I respectfully submit you should know better than to make.

Liberation: It’s Here

John Perry Barlow:

I guess we’ve run out of time, but to the extent we haven’t, might I encourage you to address one question? I want to know whether you think that the Internet is a liberating phenomenon.

Is that a trick question? Of course the Internet is a liberating phenomenon—it’s liberating in so many different ways that we suffer almost epistemic confusion when trying to understand them all.

I think Google counts as liberation—from having to keep a big technical library, for starters (I hardly use my nonfiction books any more).

I think blogs count as liberation; they used to say that freedom of the press belongs to the man who owns one, and now everybody owns one.

I think plain old email counts as liberation. Distance matters vastly less to our relationships than it did when we were kids. (Cellphones helped too, of course.)

OK, so we didn’t get the exact liberation, on the exact timeframe, that we expected before the fact. Do we ever when the revolution actually comes? Human beings almost always overestimate the impact of near-term technological change and underestimate the long-term impact. So it was with the telegraph, the telephone, and the television; so, I am sure, it will be with the Internet.

The kind of radical libertarian changes you and I and our peers dreamed of aren’t here yet, but I think they’ll come eventually. Of course, we might have to go through the Singularity to get there—but the Internet is helping that along, too.

Liberated from What?

There’s something peculiar about the language of libertarians. In a modern American context, terms like “liberation” and “community” are old lefty code-words. They have become nostalgic. They lend a familiarity and warmth to affairs here in crazy Berkeley. Our archaic tropes mean exactly as much as the ubiquitous “frankly” does in D.C. It’s hard for me to imagine that our nonsense words mean something to you guys over in Policy Land, but apparently they do.

I guess the idea is that libertarians are triumphantly re-contextualizing the opponent’s symbols. It’s like using the N-word in a whole new way!

What’s with the obsession with old 60s lefty symbolism? What demons are you guys fighting? Are you still upset with the hippie chicks of old who said Yes to guys who said No? Are you suffering from Lysistrata blowback? It isn’t fair. We don’t go around saying “frankly.” You don’t hear Berkeley protesters in the streets yelling, “Frankly, we should impeach Bush!”

So that’s why it’s hard for me to address the idea of Internet liberation head on. Which oppression are we expecting to be liberated from? The idea isn’t well framed. Internet-supported self-improvement, Internet adventure, Internet-based economic development, Internet-based pleasures; these are all ideas that make sense to me.

My wife had to escape the Soviet Union when the wall was still in place, and what my parents and their parents went through under Nazis and Tsars was unspeakable. I, thankfully, have never been in a position to need liberation.

It seems to me the rhetorical charge underlying the Internet liberation question is something like: Isn’t it true that this is yet another case where the promises of the 60s Left turned out to be realizable only by Free Markets? The problem is that some of those old promises didn’t exactly make sense in the first place, so you guys don’t make sense either when you use them as points of reference.

The Next Great University Will Be Net-based

Here’s a brief reply to Eric, for closure: I completely agree with you regarding the solution to the free-market education problem. I published a couple of pieces on this topic over the last 2-3 months (one in the LA Times, one, appropriately, in Forbes), and I am now part of a project working along these lines. I don’t know whether it will succeed, but it’s clear some project along these lines will. There hasn’t been a great university founded in this country for an amazingly long time (I guess maybe Stanford and the University of Chicago are the two most recent, maybe Caltech, but I don’t know for sure). What I do know is that the next great university will be Net-based. As you note, online education is already happening, and it’s a startling success.

[On a totally irrelevant but fascinating (to weirdos like me) topic: I think you’re also right, up to a point, about the “Ashkenazic families” vs. Jewish intellectuals in general who led the Long March to the Left. I’ve never written about the topic in part because I don’t want to give people a distorted impression. Not only were Gentiles prominent on the Left, but they were often pushed forward and became the cover boys & girls–thus, e.g., Dwight MacDonald. See Mary McCarthy’s hilarious story “Portrait of the Intellectual as Yale Man.” (Of course when she wrote it, no one knew that she was part-Jewish herself. But note also that it was startling at the time that a Yale man should be an intellectual.) It’s also true, though, that nearly all US Jews are Ashkenazic. Some turned into communists, and some turned into Robert Moses–not only a Yale man and establishment icon but also a right-wing Republican. It’s really East European Jews you mean. German Jews came earlier and were more conservative. But again, East-Euro Jews were prominent on both sides of the political spectrum. Nonetheless you are right, and it’s a matter of history: Jews played a disproportionately prominent role in pushing American intellectuals and universities to the Left. That this never created any sort of anti-Semitic backlash (or hasn’t so far) is one of the most extraordinary, noble facts in modern American history.]

The Dark Side of Internet Liberation

David Gelernter wrote:

It’s really East European Jews you mean.

Quite correct. I had almost inserted that qualifier myself. It’s impossible to read any history of the American Old Left without noticing the preponderance of Litvak, Galizianer, Ukrainian, and Polish family names.

I guess my underlying error here is that when somebody says “Ashkenazim” I tend to automatically think of shtetl country rather than Vienna or Berlin. I suspect this is a common tendency among goyim who know the word “Ashkenazim” at all. Blame “Fiddler on the Roof” for it. :-)

Gelernter:

Nonetheless you are right, and it’s a matter of history: Jews played a disproportionately prominent role in pushing American intellectuals and universities to the Left. That this never created any sort of anti-Semitic backlash (or hasn’t so far) is one of the most extraordinary, noble facts in modern American history.

Now that you mention this, I wonder why it never did. My best guess is simply that the Holocaust put Jew-bashing so far beyond the pale for a generation after the war that anti-Semitic nativism couldn’t get any traction even when there were historical facts on the table to motivate it.

Today, American politics does have a still minor but troubling problem of anti-Semitism. Curiously, in view of those historical facts, it isn’t a right-wing phenomenon but a left-wing one. I’ve sometimes wondered if, among the many suicidally stupid things American leftists have done, driving away the Jews that have provided them with most of their intellectual firepower will be the one that finally drops them in the dustbin of history.

The dark side of Internet liberation slapped me in the face as I was researching these issues yesterday. If you google for “jews communism” the top hit will give you facts about the dominance of Eastern European Jews in the Old Left. They’ll even be true facts, or at least consistent with what I’ve gathered from respectable sources. The trouble is, you’ll find these true facts on Stormfront, a site run by evil neo-Nazi scum.

The most disturbing thing about this isn’t that the Internet has enabled Stormfront to peddle glossy lies as propaganda, but that it enables them to present unspeakable-in-polite-company truths in a tempting and tasty format—one which might just bait gullible people into buying their poisonous brew of hatred and racism.

Internet liberation gives more power to the bad guys as well as the good. That’s obvious. The subtler challenge here is that in an information-rich environment, even the quality of facts and argument used by evil scum rises. Neo-Nazis were easier to dismiss as sick jokes when they dealt entirely in obvious falsehoods.

This makes it more important than ever for people to be able to think critically and discriminate—not just to be able to catch lies and logical errors, but to notice even when truth and relatively sound argument are being gradually twisted to support an agenda that is neither true nor sound.

The top “jews communism” hit on Google should be a genuine and dispassionate scholarly study of the Eastern European Jewish Old Left dynasties, but it isn’t because in today’s academia you’re not allowed to go anywhere near that close to noticing that race matters. Before the Internet and search engines, neo-Nazis couldn’t really exploit that willful blindness effectively. Now they can.

The lesson here is that people of goodwill can’t allow any truth to be unspeakable any more, because on the Internet that kind of polite or politically correct suppression just hands a weapon to hatemongers.

Honest Mistakes

This last turn of the conversation towards an examination of Jews and the Left is too annoying to pass without comment.

To state the obvious: Jews were an oppressed minority in Europe. And as it happens we were also a hyperactively bookish minority. Marxism and related ideas were new, and it was perfectly reasonable to give them a try. It’s completely understandable that there would have been a huge Jewish Left movement.

It’s important to retain respect for people, particularly those who are no longer with us, who believed in things that seemed reasonable at the time. It’s also vital to learn, and to recognize when an experiment has failed. In my grandparents’ generation, the Left seemed like the place where beauty, love, comfort, compassion, and intelligence were to be found.

People learn from mistakes, and learn most from honest mistakes.