Saturday, June 15, 2013

Review: Jaron Lanier's "One-half a Manifesto"


American Prospect

3/12/01

Contra Totalism
By Harvey Blume

We associate manifestos with big ideas, combative theses itching to change the world. While the roar of the manifesto has pretty much faded from the culture at large, it can still be heard loud and clear in the digital world. Digital culture continues to foster grand ambitions; it nurtures not only the ongoing quest for the killer app but also the search for the one idea that will make sense of most everything.

Jaron Lanier's recent "One-half a Manifesto" has this heaven-storming quality. The 9,000-word document (available at www.edge.org/3rd_culture/lanier/lanier_index.html) flexes the usual manifesto muscles, but with one difference: It is dedicated not to proclaiming a new theory but to deflating one that is already fully formed and primed, in Lanier's view, to wreak havoc on the world. Lanier names that theory cybernetic totalism. It is cybernetic because the computer is at its core; and in a sense, the computer, more than any written document, is its manifesto. It is totalistic because it aspires to an intellectual synthesis loath to let much of anything escape its explanatory grasp.

Whatever you think of the contents of "One-half a Manifesto," Lanier has to be credited with nerve for issuing it. The thinkers he sets out to oppose are some of the most formidable writers and theorists of our time, including the evolutionary biologist Richard Dawkins, the philosopher Daniel Dennett, and the evolutionary psychologist Steven Pinker.

Of course, Lanier, too, is a name to reckon with. The blue-eyed, dreadlocked, multitalented visionary made his reputation as a prodigy in the mid-1980s: Still in his twenties, he coined the term "virtual reality" and launched VPL, the first business to try to implement the concept. Since then, he has consulted for major institutions such as Citibank, Kodak, and the U.S. Department of Defense. Today, Lanier is lead scientist for the National Tele-Immersion Initiative (NTII), an organization that aims to build virtual reality into the fabric of the Internet. When the Internet goes broadband, as is anticipated, Lanier says that NTII will let "users in different places interact in real time in a shared simulated environment, making them feel as if they were in the same room."

Under no circumstances, then, could Lanier be mistaken for a Luddite or a defector from the digital revolution he has helped to foment. To clear away any possible confusion on this point, he pauses early in the manifesto to declare himself "more delighted than ever to be working in computer science," and to praise the "lovely global flowering of computer culture already in place." He adds that a full manifesto, rather than the half he has composed, would be sure to "describe and promote this positive culture." Having affirmed his loyalty to the cause, he then feels free to go after the villain of the piece: the technological elite, or the "inner circle of Digerati," whose dogma of cybernetic totalism "has the potential to transform human experience more powerfully than any prior ideology, religion, or political system ever has."

What Lanier goes on to say about cybernetic totalism may sound, at first, much like other recent alarms against digital overreaching. The best-known of these is no doubt Bill Joy's article, "Why the Future Doesn't Need Us," in the April 2000 issue of Wired magazine. Joy is a co-founder of Sun Microsystems and a co-author of the Java language specification. That such a legendary hacker could suddenly be afflicted by severe doubts concerning the whole digital enterprise gives his words extra weight. Joy worries that the worst thing about some of our most outlandish digital dreams is that, unfortunately for us, they can be realized. He fears, for example, that if we do not put limits on the development of nanotechnology we will be overrun by lethal, self-replicating mechanical viruses. Not long after Joy's piece was published, the stock market began to deliver its own practical rebuke to dreams of dot-communist utopia.

Lanier joins in this post-millennial mood of second thoughts about computers and the Internet. But uniquely, above and beyond practical concerns, he insists on a philosophical point: What he objects to most about cybernetic totalism is the very fact that it is a totalism. He reserves some of his strongest language to drive this point home, writing, for example, that cybernetic totalism may well "catch on in a big way, as big as Freud or Marx did in their times. Or bigger, since these ideas might end up essentially built into the software that runs our society and our lives."

Although it's what's most distinctive about his manifesto, Lanier's determined anti-totalism has made little or no impression on respondents and reviewers, who prefer to take him up piecemeal and haggle with him over practical matters. It is as if postmodernism, with its suspicion of all-consuming syntheses, has passed digital culture by. The result is that totalism can propagate freely within the digital culture, which has barely any immune response to it. According to Lanier, the totalism to be most wary of these days is built on Darwinism. Of the triumvirate of thinkers who rode astride so much twentieth-century thought (Darwin, Marx, and Freud), only Darwin survives into the new millennium with his reputation not just intact but enhanced. While other grand narratives were being picked apart, Darwinism mutated into a totalism that makes Marxism look like minimalism.

Darwinism gives the new totalists what they take to be a bridge between nature and technology, a way of translating between genetics and cybernetics. Crucially, Darwinism offers the new totalists what any theory must have to undo the constraints of reason: the sense of mounting historical tension, the charged expectation of a watershed event--in short, an eschatology. Cybernetic eschatology focuses on the coming of an electronic species, an artificial intelligence that is nearly ready to peck its way out of the human brain. Lanier defines the new totalist creed by its "astonishing belief in an eschatological cataclysm in our lifetimes, brought about when computers become the ultra-intelligent masters of physical matter and life."

The work of Richard Dawkins plays a key role in cybernetic totalism, whether or not Dawkins himself subscribes to the full package. In books like The Selfish Gene, Dawkins shows that organic beings are no less coded entities than computer programs. So what if one kind of code takes evolution a billion years to assemble and the other can be thrown together by a generation or two of programmers? Isn't it possible--or so the thinking goes--that computer code and genetic code differ more in their details (the time involved, the material employed) than in their logic? We know that computer programs are governed by algorithms--simple, unambiguous sets of instructions that in concert allow for the complicated behavior of operating systems. Might not evolution be algorithmic, too?
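
To see what "algorithmic" means in this context, here is a minimal sketch (my illustration, not anything from Lanier or Dennett; the target, population size, and mutation rate are arbitrary) of the kind of mindless copy-mutate-select loop the totalists have in mind when they call evolution a program:

```python
import random

# Toy "evolution as algorithm": a blind loop of copying, mutation, and
# selection that drives a population of bit-strings toward a target,
# even though no step in the loop understands the goal.
TARGET = [1] * 20  # an arbitrary "fittest" genome for this demo

def fitness(genome):
    # Count the bits that match the target.
    return sum(1 for g, t in zip(genome, TARGET) if g == t)

def evolve(pop_size=50, mutation_rate=0.05, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        # Reproduction with mutation: copy each survivor, flipping bits at random.
        children = [[1 - bit if random.random() < mutation_rate else bit for bit in parent]
                    for parent in survivors]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "out of", len(TARGET))
```

Nothing in the loop plans ahead, yet fitter genomes reliably accumulate; that, in caricature, is the bridge the totalists see between Darwin and Babbage.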

For the new totalists, the answer is a resounding yes. Nowhere is this expressed more clearly than in the work of Daniel Dennett. In Darwin's Dangerous Idea, Dennett argues that evolution and software use similar strategies to build complexity out of simplicity, intelligence out of mindless routines. He writes: "The best reason for believing that robots might some day become conscious is that we human beings are conscious, and we are a sort of robot ourselves. That is, we are extraordinarily complex self-controlling, self-sustaining physical mechanisms, designed over the eons by natural selection." Dennett sets the stage for a possible encounter between Charles Darwin and Charles Babbage, the founder of computer science, in one or another of the Victorian drawing rooms they frequented. Each man's work, in Dennett's view, completes the other's. Babbage launched the study of computational algorithms while Darwin laid bare the trade secrets of nature, a mindless but famously successful engineer. Whether or not Darwin and Babbage ever compared notes, their followers have.

Dennett may be the closest thing to cybernetic totalism's Marx, harmonizing its various intellectual sources, but as yet the movement has no Lenin. Lanier observes, "Some of the most dramatic renditions have not come from scientists or engineers, but from writers such as [Wired editor] Kevin Kelly and [author of NonZero: The Logic of Human Destiny] Robert Wright, who have become entranced with broadened interpretations of Darwin. In their works, reality is perceived as a big computer program running the Darwin algorithm, perhaps headed towards some sort of Destiny." Lanier wants to rescue Darwin from this destiny. He acknowledges that "the movement to interpret Darwin more broadly, and in particular to bring him into psychology and the humanities has offered some luminous insights." He admits, further, that as a computer scientist it is impossible not to be "flattered" by narratives that put "algorithmic computation at the center of reality." On the other hand, he prefers a more circumscribed Darwinism, a Darwinism that hasn't gone nova. "While I love Darwin," he writes, "I won't count on him to write code."

Still, Lanier recognizes that today it is Darwinism rather than philosophy or theology that hosts the key debates about human nature. In London several years ago, for example, a public discussion led by Steven Pinker and Richard Dawkins reportedly drew 2,300 people and was sold out weeks in advance. Pinker and Dawkins are in basic agreement on the big questions of evolution. It is interesting to speculate about how many seats would be filled by a no-holds-barred debate between Dawkins, say, and Stephen Jay Gould, the chief opponent of the totalists in the quarrel over Darwin's legacy.

In such a face-off, Lanier would be in Gould's corner. He sees Gould as providing evolutionary support for a belief in free will, whereas Pinker, Dawkins, and Dennett would hem us in with determinism. After all, if evolution is as algorithmic as the totalists--Gould calls them "Darwinian fundamentalists"--would suppose, then a big-brained beast like Homo sapiens is well-nigh inevitable, with artificial intelligence inevitably to follow. Lanier prefers Gould's view (as argued in Full House: The Spread of Excellence from Plato to Darwin) that evolution deals more in contingency than in inevitability. Summarizing Gould, Lanier writes: "If there's an arrow in evolution, it's towards greater diversity over time, and we unlikely creatures known as humans, having arisen as one tiny manifestation of a massive, blind exploration of possible creatures, only imagine that the whole process was designed to lead to us." That such a basic issue as free will versus determinism is now being fought out on the grounds of Darwinian logic helps explain why the Darwin wars have been and will continue to be so venomous.

Lanier takes exception to the entire "cultural temperament" of totalists who have become so "intoxicated" by their system that they "seem to not have been educated in the tradition of scientific skepticism." They grow reckless when they meme-splice Darwin to Babbage, and giddy when they add Moore's Law to the mix. They take Moore's Law, according to which computer power doubles every 18 months or so, to guarantee that tomorrow's machines will have a million times the speed and memory of today's computers. With that kind of computing power driving them, machines will hardly be able to avoid being jarred into sentience, or so the theory goes. But Lanier has some bad news for totalists: Moore's Law applies only to hardware. Software can be counted on to drag the whole thing down.
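
The extrapolation behind that "million times" figure is simple arithmetic. As a back-of-the-envelope sketch (the doubling period and target factor are the commonly quoted numbers, not Lanier's own calculation):

```python
import math

# If capacity doubles every 18 months, how long until machines are a
# million times more powerful than today's?
doubling_period_years = 1.5
target_factor = 1_000_000

doublings_needed = math.log2(target_factor)   # about 19.9 doublings
years_needed = doublings_needed * doubling_period_years

print(f"doublings needed: {doublings_needed:.1f}")
print(f"years needed: {years_needed:.1f}")    # roughly 30 years
```

About twenty doublings, or roughly three decades, yields the millionfold machines the totalists are counting on; Lanier's point is that the software running on them will not keep pace.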

With tongue only somewhat in cheek, he suggests: "If anything, there's a reverse Moore's Law observable in software: As processors become faster and memory becomes cheaper, software becomes correspondingly slower and more bloated." The sad state of software, he continues, may turn out to be humanity's best defense against the coming of any cyber species. "Just as some newborn race of superintelligent robots are about to consume all humanity," he writes, "our dear old species will likely be saved by a Windows crash. The poor robots will linger pathetically, begging us to reboot them, even though they'll know it would do no good."

Bad software gives Lanier a novel spin on the Turing Test, which attempts to gauge whether machine intelligence has evolved to the point where it is indistinguishable from human intelligence. Lanier suggests there is another way for computers to get a passing grade besides becoming smarter, and that is by making people more stupid. In his view, that is just what's going on. He thinks that the Turing Test won't be decided in a single big event; instead, "miniature Turing Tests are happening all the time, every day, whenever a person puts up with stupid computer software."

Why does software improve so slowly, if at all? Lanier blames cybernetic totalism, with its peculiar mix of outsized ambition and downright complacency. If computers are rapidly advancing to the point where they can write their own code, why bother about software elegance? Computers will soon be debugging each other as naturally as monkeys groom each other's fur. Until that day, pile on the features, bring on the bloat. Moore's Law is coming to the rescue.

Still, none of this would seem commensurate with the direst warnings of "One-half a Manifesto." It's true that if machines pass -- or people fail -- the Turing Test, and human beings and computers shake hands on the common ground of the algorithm, there may be little for a humanist to celebrate. But that's no reason to raise a hue and cry about the "suffering [of] millions of people" or to compare cybernetic totalism to "history's worst ideologies," as Lanier does. For Lanier, however, the problems we have with software today give only the barest hint of the horrors in store when the computer becomes integral to human genetic engineering.

He predicts "that the hardware/software dichotomy will reappear in biotechnology, and indeed in other 21st century technologies." When genetic code "becomes more manipulatable, more like a computer's memory, then the limiting factor will be the quality of the software that governs the manipulation." With software snarled by Moore's Law in reverse, it will be expensive to rewrite DNA. Only the rich will be able to afford the really good hacks, such as longevity; only they will have access to the indisputable killer app, as it were, a genetically engineered elixir of immortality. Here Lanier joins a number of other thinkers (including some, like E.O. Wilson, on the fringes of cybernetic totalism) in fearing that we'll know the real meaning of binding Babbage and Darwin together with Moore's Law when the human race splits, roughly along the lines of rich and poor, into different species.

Will this occur? It's, of course, impossible to say. But "One-half a Manifesto" has value well beyond this or any other particular prediction. It is the warning against totalism per se that stands out in the piece, all the more so because techies and others have so adroitly overlooked it. You can't know in advance all the specific dangers that will issue from a grand synthesis; you can't forecast how much of the scenery it will devour as it gains energy. But you can be alert, as Lanier urges. And you can take the implication of "One-half a Manifesto" seriously, namely that postmodernism has been only a lull before the gathering of another totalist storm.


