Monday, February 21, 2011

Me not robot, me Person

kw: book reviews, nonfiction, manifestos, technology
Computationalism (noun): The belief that human thought processes are entirely describable as computations, or can be perfectly mimicked by computations. As defined by Jaron Lanier in his book You Are Not a Gadget: A Manifesto.
Try this. The next time you search using Google or Yahoo (or whatever), try to locate a site that answers your question and is not from Wikipedia or another Wikimedia project. In just ten years, this most successful of web 2.0 products has become the "go to" place for information. It is the exemplar of the now-popular notion that "the wisdom of crowds" is somehow greater than that of the wisest individual. Try telling that to Jaron Lanier; he'll likely ask which crowd would have produced the special or general theory of relativity, or nylon, or the Hubble space telescope.

I made a few Google searches, with the following results: the first hit for Mayflower Compact, E. coli, geomorphology, Edison, and Churchill was Wikipedia; for life insurance and bioinformatics Wikipedia was the second hit, and for metabolic engineering it was third. Almost any "what is" query put a wiki-style answer site at the top. But try the experiment yourself. You are likely to find that the non-wiki site has better content.
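If you would rather automate the tally than count by hand, here is a rough Python sketch. It assumes you paste the result URLs in yourself (the two shown are placeholders, not real search results) and simply reports the first hit that is not hosted on a wiki domain.

# A minimal sketch of the experiment above: given result URLs copied by hand
# from a search page, report the first hit not hosted by a Wikipedia/Wikimedia
# domain. The example URLs are placeholders, not real search results.
from urllib.parse import urlparse

WIKI_DOMAINS = ("wikipedia.org", "wikimedia.org", "wiktionary.org")

def first_non_wiki(urls):
    """Return (rank, url) of the first result not on a wiki domain, else None."""
    for rank, url in enumerate(urls, start=1):
        host = urlparse(url).netloc.lower()
        if not any(host == d or host.endswith("." + d) for d in WIKI_DOMAINS):
            return rank, url
    return None

results = [
    "https://en.wikipedia.org/wiki/Geomorphology",        # placeholder
    "https://www.britannica.com/science/geomorphology",   # placeholder
]
print(first_non_wiki(results))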

Lanier's book is primarily a push-back against the more dewy-eyed predictions of the promoters of web 2.0, a term that began as a near-synonym for the Semantic Web (that is, the web itself "understanding" meaning) and morphed into a looser collection of interactive services including blogs, wikis, and "the cloud". The phrase "wisdom of the crowd" underlies many web 2.0 dreams. All too often, the crowd becomes a mob, and little wisdom results.

The author first tackles the concept of lock-in. While there may be many ways to do something, only one or a few become predominant, and often not the best way(s), simply because of history. The classic example is the QWERTY keyboard. Few know that the prototype typewriter had an alphabetical keyboard. Typists soon gained enough speed to jam the keys frequently, so the keyboard was deliberately redesigned to slow them down! On QWERTY, some 75% of common words are typed entirely, or all but one letter, with the left hand. No alternative keyboard has become popular, even though several designs make typing easier and faster for someone learning for the first time. (I once spent a month learning to use the Dvorak keyboard. It was too late; I'm locked in to QWERTY.)
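The left-hand claim is easy to spot-check, for the curious. Here is a rough Python sketch; the word-list file is an assumption (any list of common English words, one per line, will do), and the "left hand" is taken to be the letters conventionally struck with it on QWERTY.

# A rough check of the left-hand claim: count how many words in a word list
# can be typed with at most one letter falling under the right hand on QWERTY.
# The file name is an assumption; substitute any list of common English words.
QWERTY_LEFT = set("qwertasdfgzxcvb")

def left_hand_heavy(word):
    """True if all letters, or all but one, are struck with the left hand."""
    letters = [c for c in word.lower() if c.isalpha()]
    right = sum(1 for c in letters if c not in QWERTY_LEFT)
    return bool(letters) and right <= 1

with open("common_words.txt") as f:      # assumed: one word per line
    words = [w.strip() for w in f if w.strip()]

share = sum(left_hand_heavy(w) for w in words) / len(words)
print(f"{share:.0%} of {len(words)} words are left-hand-only, or nearly so")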

The most telling example from computer science is the UNIX/Linux family of operating systems, which are designed around a command-line interface. The windowing systems laid on top of them (and on their close cousin MS-DOS, which underlay early versions of MS Windows) are ultimately issuing commands to a command-line-style interface. This makes it very, very hard to write truly real-time responsive software. Current systems feel responsive only because they run on CPUs that can perform a few billion operations per second. It takes a lot of overhead to turn a "command" into the few simple instructions that ask the CPU to do what the user really wants.
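To make the layering concrete, here is a small Python sketch (assuming a Unix-like system with /bin/ls). It lists the same directory two ways: once by handing a command line to a shell, which must parse it, fork a process, and format text, and once by asking the operating system directly. The exact timings will vary by machine; the extra layers are the point.

import os
import subprocess
import time

def time_it(label, fn, n=200):
    start = time.perf_counter()
    for _ in range(n):
        fn()
    print(f"{label}: {(time.perf_counter() - start) / n * 1000:.2f} ms per call")

# Route 1: hand a command line to a shell, which parses it, forks a process,
# runs /bin/ls, and formats the listing as text (discarded here).
time_it("shell command", lambda: subprocess.run(
    "ls -l", shell=True,
    stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL))

# Route 2: ask the operating system for the directory entries directly.
time_it("direct call", lambda: os.listdir("."))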

It seems that modern culture is based on the mash-up. Like "mix tracks" made of snippets of favorite songs, everything is being mashed together, with little thought for the creativity that went into the original productions. While it can be quite creative to produce a collage, the collage-maker must rely on truly creative artists to produce the works that get cut up to make it. When the collage-maker gets more recognition (and money) than the original artists, what incentive is there for the artists to produce more art? They have to get "day jobs" to keep body and soul together, and produce much less art as a consequence.

I built a forty-year career on the premise that people ought to do what people do better than machines, and machines ought to do what they do better than people. Confusing these categories is bad for both people and their machines. Lanier takes a very similar stance, and it underlies one of his principal complaints. Technological lock-in, as mentioned above, is one result of trying to make people conform to what is overly easy for machines. But things can change. For many tasks, mouse clicks are much better than keyboard commands. The recent spate of gesture-driven devices such as the iPad gives hope that innovation is continuing. And I like the Kinect attachment for the Xbox 360. For tasks where it is appropriate, particularly an immersive virtual environment, it is a large step in the right direction.

Thus, the last quarter or third of the book is more optimistic. The author, a working musician, has looked for working artists who are able to make a living by distributing their work over the internet. The number is discouragingly small. However, a trend is emerging toward a compensation scheme similar to pay-per-view on cable TV, but more equitable (and much cheaper per item). Though he does not mention it, the Netflix model of movie distribution is a step in the right direction. My brother, a self-published author, has found that Amazon is a great way to eliminate the middleman, and that a relationship with a print-on-demand publisher completes the circle of resources an author needs. As more books are published electronically to Kindles and Nooks, and to print-on-demand for those who love paper, fewer authors will need either agents or the big publishing houses to market and distribute their books. This aspect of web 2.0 seems to be largely positive.

And Lanier thinks it can be more and more positive if we step back from the wilder claims and take advantage of what the web can do for us in a practical way. The web is not just a lot of machines. Behind the machines we find hordes of very talented people. Without them the machines do nothing. Only people can have or confer meaning. Only people innovate. People make things work. Even an "inference engine" that produces (actually, reproduces) inventions was programmed by some person(s). So far, no machine-designed machine has gone beyond the capabilities of the original machine without human help. Until we understand much more deeply what consciousness is, none ever will. And maybe not even then.

As a working programmer, I can verify the author's contention that only small programs can be made to work perfectly. The larger a program gets, the more intractable bugs it will contain. There is an axiom in this business: "A large computer program that works grew out of a small program that worked." A huge system like Linux or Windows will contain thousands of programming errors and, in addition, thousands of unexpected behaviors, because no complete flow chart of stimulus and response can ever be produced.
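A back-of-envelope calculation shows why exhaustive testing is hopeless at that scale. Both figures below (fifty independent on/off settings, a million test cases per second) are made-up illustrations; real systems have far more than fifty.

# Illustrative arithmetic only: the settings count and test rate are assumptions.
settings = 50                      # independent on/off configuration options
states = 2 ** settings             # distinct configurations to exercise
tests_per_second = 1_000_000

years = states / tests_per_second / (3600 * 24 * 365)
print(f"{states:.2e} configurations -> about {years:,.0f} years to test them all")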

With that in mind, consider that the human brain contains 10 billion neurons and 100 billion glial cells. The average neuron has about 7,000 synapses to other neurons, and a smaller number to the neighboring glia. This means that an average brain has something like 100 trillion connections. A computer simulation of ten neurons with complete connectivity among them is a huge challenge for a supercomputer. And just in case nobody else noticed, the steady increase in processor clock speed ended about five years ago. The primary way to get more speed out of our computers now is to add more and more processors. The software needed to parcel a problem out to multiple processors is difficult to write and prone to bugs. We are in no danger of a computer "brain" superseding us anytime soon.
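The arithmetic alone is sobering. Here is the connection count from the figures above, plus a storage estimate; the eight bytes per connection is purely my own assumption (one numeric weight, no indexing overhead), just for illustration.

# Connection arithmetic from the paragraph above, plus a storage estimate.
# The bytes-per-connection figure is an assumption for illustration only.
neurons = 10e9                # 10 billion neurons (the figure used above)
synapses_per_neuron = 7_000
connections = neurons * synapses_per_neuron

bytes_per_connection = 8      # assumed: one double-precision weight, no indices
storage_tb = connections * bytes_per_connection / 1e12

print(f"{connections:.1e} connections")            # ~7e13, i.e. roughly 100 trillion
print(f"~{storage_tb:,.0f} TB just to hold one weight per connection")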

Computers have the potential to free us to be more human. Far too many techno-enthusiasts have been busy finding ways to make people more like machines. This violates our humanity. So does trying to make the machine "too human". Let the machines do what they do well, and the humans do what they do well. We remain irreplaceable.
