The Chip: How Two Americans Invented the Microchip and Launched a Revolution


  Although digital devices can recognize only two numbers—1 and 0—they can arrange those two digits to represent any number in the universe. Large computers have, for example, computed the value of pi to 206 billion decimal places—a number so long it would take a 4-million-page book just to print it. But the only numbers the computers had to work with were 0 and 1.

  To understand how this works, forget numbers for a moment and think about words—specifically, the word “cat.” Anyone familiar with languages knows that there is nothing inherent in the purring, four-footed feline species that requires it to be represented by the three Roman letters “C,” “A,” and “T.” You can spell the thing gato or chatte and it is still the same animal. For that matter, there’s no reason why a cat has to be represented by letters chosen from our 26-letter Roman alphabet. In the story On Beyond Zebra, Dr. Seuss tells of a boy who decided that the Roman alphabet was incomplete, so he went on past Z and invented 19 new letters, including one, “thnad,” that is used to spell the name of a typically Seussian breed of cat. An alphabet can have as many letters in as many different shapes as its users find convenient. When Samuel F.B. Morse perfected his telegraph, he found that only two letters—dot and dash—could be conveniently sent through the wires. With those two, he invented his own alphabet; telegraphers used to practice their skill by tapping out the Morse Code version of “cat”: –.–. .– –. The Japanese kana alphabet has 47 different letters; in that system “cat” is represented by the symbols ネコ, but it means the same thing as our word “cat.” The Chinese writing system includes about 25,000 different characters, including one that stands for “cat”: 猫. The 26-letter alphabet we use is just a selection of symbols that Western culture has grown used to. The word “cat” is just a convenient combination of those symbols that English speakers have settled on to represent a cat.

  The same principles apply to numbers. There’s nothing inherent in the number 206, just to choose a number at random, that requires it to be represented by the symbols 2, 0, and 6. We just happen to use that representation because of the way our number system works. Our system uses ten different symbols, or digits, for numbers:

  0 1 2 3 4 5 6 7 8 9

  To represent numbers larger than 9, we add a second column and run through the list again:

  10 11 12

  Since the turning point—the new column—comes at the number 10, our system is called the base-10, or decimal, number system. The decimal system is the most familiar counting system in the world today, but it is by no means the only one available. It would be quite simple to go On Beyond Nine and invent new symbols; a duodecimal, or base-12, number system could look like this:

  0 1 2 3 4 5 6 7 8 9 & # 10

  With twelve different symbols, the turning point—the new column—comes at the number 12.
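  To make the scheme concrete, here is a minimal Python sketch of such a duodecimal system (the names DUODECIMAL_DIGITS and to_base_12 are invented for the illustration; the symbols & and # for ten and eleven are simply the ones used in the example above):

  # Render a whole number in base-12, using & for ten and # for eleven.
  DUODECIMAL_DIGITS = "0123456789&#"

  def to_base_12(n):
      """Spell a non-negative integer with the twelve symbols above."""
      if n == 0:
          return "0"
      digits = []
      while n > 0:
          n, remainder = divmod(n, 12)
          digits.append(DUODECIMAL_DIGITS[remainder])
      return "".join(reversed(digits))

  print(to_base_12(11))   # "#"  -- still a single symbol
  print(to_base_12(12))   # "10" -- the turning point, a new column
  print(to_base_12(206))  # "152", that is, 1 gross + 5 dozen + 2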

  The great breakthrough that permitted man to count far beyond 10 with just ten different symbols was the invention of this turning point—a concept that mathematicians call positional notation. Positional notation means that each digit in a number has a particular value based on its position. In a decimal number, the first (farthest right) digit represents 1’s, the next digit 10’s, the next 100’s, and so on. The number 206 stands for six 1’s, no 10’s, and two 100’s:

  6 × 1   =   6
  0 × 10  =   0
  2 × 100 = 200

  Add it all up:

  6 + 0 + 200

  and you get 206. This number, incidentally, demonstrates why mathematicians consider the invention of a symbol that represents nothing (i.e., the number 0) to have been a revolutionary event in man’s intellectual history. Without zero, there would be no positional notation, because there would be no difference between 26 and 206 and 2,000,006. The Romans, for all their other achievements, never hit on the idea of zero and thus were stuck with a cumbersome system of M’s, C’s, X’s, and I’s which made higher math just about impossible.
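  For readers who like to see the arithmetic spelled out, a few lines of Python make the positional idea explicit (expand_decimal is an invented name, used only for this illustration):

  # Expand a decimal numeral digit by digit: the rightmost digit counts 1's,
  # the next counts 10's, the next 100's, and so on.
  def expand_decimal(numeral):
      total = 0
      for position, digit in enumerate(reversed(numeral)):
          total += int(digit) * 10 ** position
      return total

  print(expand_decimal("206"))      # 206  (six 1's, no 10's, two 100's)
  print(expand_decimal("2000006"))  # zero is what keeps the columns apart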

  With positional notation, we can use any number of different symbols to count with. We could devise a numerical alphabet with 26 different digits, or 206 different digits, or 2,006 different digits. The base-10 system we use is just a convenient method that people have settled on to represent all numbers.

  If you want to know why modern man has settled on a base-10 number system, just spread your hands and count the digits. All creatures develop a number system based on their basic counting equipment; for us, that means our ten fingers. The Mayans, who went around barefoot, used a base-20 (vigesimal) number system; their calendars employ twenty different digits. The ancient Babylonians, who counted on their two arms as well as their ten fingers, devised a base-12 number system that still lives today in the methods we use to tell time and buy eggs. Someday a diligent grad student doing interdisciplinary work in mathematics and the history of film may produce a dissertation demonstrating that the residents of E.T.’s planet use an octal number system; the movie shows plainly that E.T. has eight fingers. For earthbound humans, however, the handy counting system is base-10.

  A computer’s basic counting equipment is simpler. It is an electronic switch—a transistor—that can be either on or off. Each of these conditions represents one digit; on represents 1 and off represents 0. This two-digit number system is called the base-2, or binary, system. Just as people can count to any number, no matter how high, with just ten digits, a computer can count to any number with just two. Like people, computers do this through positional notation. Counting in binary starts out just like decimal:

  0 1

  But in a binary number, the turning point—the new column— comes at the number 2. In binary the two-digit number 10 stands for one 2 and no 1’s—that is, the quantity 2. Binary 11 means one 2 plus one 1—that is, 3. Another column must be added to write 4, another for 8:

  0 1 10 11 100 101 110 111 1000

  Things go on this way until we get to the number 1111. Reading from the right (generally the easiest way to read a binary number), 1111 stands for one 1 plus one 2 plus one 4 plus one 8, or 15. Having come to 1111, the system is out of digits again, so another column is added: the number 10000 is the binary version of 16.
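  The pattern is easy to verify with a short Python sketch; format(n, "b") is simply Python's built-in way of writing a number in binary:

  # Count from 0 to 16 in binary. New columns appear at 2, 4, 8, and 16,
  # just as new columns appear at 10, 100, and 1000 in decimal.
  for n in range(17):
      print(n, format(n, "b"))

  # Landmarks from the text:
  #  3 -> 11     (one 2 plus one 1)
  # 15 -> 1111   (8 + 4 + 2 + 1)
  # 16 -> 10000  (out of digits again, so a fifth column is added)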

  In a binary number, in other words, the first (far right) column represents 1’s, the second column 2’s, the third 4’s, the fourth 8’s, the fifth 16’s, and so on, as long as necessary. The binary number 11001110, just to choose a number at random, represents (from the right), no 1’s, one 2, one 4, one 8, no 16’s, no 32’s, one 64, and one 128:

  0 × 1   =   0
  1 × 2   =   2
  1 × 4   =   4
  1 × 8   =   8
  0 × 16  =   0
  0 × 32  =   0
  1 × 64  =  64
  1 × 128 = 128

  Add it all up

  2 + 4 + 8 + 64 + 128

  and it turns out that 11001110 is precisely the same number that is represented by 206 in the decimal system. The quantity hasn’t changed; the only thing different is the alphabet, or number system, used to represent it.
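  A brief sketch confirms the sum; int(bits, 2) is Python's built-in base-2 parser, and the variable names are chosen only for the example:

  # Read 11001110 from the right, weighting each column by 1, 2, 4, 8, ...
  bits = "11001110"
  total = sum(int(bit) * 2 ** position
              for position, bit in enumerate(reversed(bits)))

  print(total)         # 206
  print(int(bits, 2))  # 206 again: same quantity, different alphabet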

  Except for someone who has lost eight fingers in an accident, a number system based solely on 1 and 0 is not particularly useful for humans. But the binary system is perfect for digital machines, not only because all numbers can be represented by chains of on and off electronic switches, but also because binary arithmetic is the simplest possible mathematical system. Humans learning basic arithmetic in the elementary grades find that our decimal system requires them to memorize hundreds of “math facts”—facts like 2 + 2 = 4, 17 − 9 = 8, 8 × 7 = 56. If they were learning the binary system, things would be infinitely simpler. In binary arithmetic, there are only three math facts: 0 + 0 = 0, 1 + 0 = 1, and 1 + 1 = 10 (10, of course, is the binary version of decimal 2).
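  Those three facts are all an adder needs. The sketch below (add_one_bit is an invented name) adds two one-digit binary numbers and reproduces the facts exactly, including the carry that turns 1 + 1 into 10:

  # One column of binary addition: the sum digit is 1 only when the inputs
  # differ, and a carry into the next column appears only for 1 + 1.
  def add_one_bit(a, b):
      sum_bit = a ^ b
      carry = a & b
      return carry, sum_bit

  for a in (0, 1):
      for b in (0, 1):
          carry, sum_bit = add_one_bit(a, b)
          answer = f"{carry}{sum_bit}" if carry else f"{sum_bit}"
          print(f"{a} + {b} = {answer}")   # ends with: 1 + 1 = 10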

  This extreme simplicity is a boon to computer designers. It is possible, but cumbersome, to construct extensive mazes of transistors inside a machine to perform binary subtraction, multiplication, and division. Thanks to some convenient mathematical gimmickry, however, digital machines based on the binary system can carry out any numerical operation using only addition, which involves fairly simple circuitry. With an ancient trick called ones-complement subtraction, computers can solve subtraction problems by adding. Multiplication is performed the way humans did it eons ago, before they developed the multiplication table—by repeated addition. If you ask your calculator to multiply 4 × 1000, the machine steers the numbers through a series of logic gates that essentially put a binary 4 in a storage place and then add 999 more 4’s, one at a time, to get the answer. Division, similarly, becomes a series of ones-complement subtractions.
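  Both tricks can be sketched in a few lines of Python. This is only an illustration of the idea, not the actual circuitry inside a calculator; the eight-bit register width and the function names are assumptions made for the example:

  WIDTH = 8                  # an assumed register width for the example
  MASK = (1 << WIDTH) - 1    # 0b11111111

  def ones_complement_subtract(a, b):
      """Compute a - b (for 0 <= b < a < 256) using only addition."""
      total = a + (b ^ MASK)                    # add the bit-flipped subtrahend
      return (total & MASK) + (total >> WIDTH)  # fold the carry back in

  def multiply_by_repeated_addition(a, times):
      """Multiply the slow, ancient way: keep adding a to a running total."""
      total = 0
      for _ in range(times):
          total += a
      return total

  print(ones_complement_subtract(206, 78))       # 128
  print(multiply_by_repeated_addition(4, 1000))  # 4000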

  The ingenious, indeed breathtaking, insight that binary mathematics was perfectly suited to electronic computers occurred more or less simultaneously on both sides of the Atlantic to a pair of ingenious, indeed breathtaking, visionaries who had scoped out, by the late 1940s, remarkably accurate forecasts of the development of digital computers over the ensuing half century. These two cybernetic pioneers were John von Neumann and Alan M. Turing.

  Von Neumann was born in Budapest, the son of a wealthy banker, in 1903. He was recognized almost immediately as a prodigious mathematical talent, and spent his youth shuttling from one great university to another: Berlin, Zurich, Budapest, Göttingen, Hamburg. He published his first scholarly monograph at the age of eighteen and thereafter turned out key papers in a wide variety of fields. In 1930 he sailed west with a tide of refugee European scholars to Princeton, where he held a chair at the university but also became one of the first fellows—along with a fellow named Albert Einstein—of the Institute for Advanced Study. He made important contributions in pure mathematics, but also wrote major works on applications, ranging from chemical engineering and quantum physics to economics and the Theory of Games, a mathematical construct of his own for winning complex games.

  During World War II, von Neumann was involved with the development of atomic bombs, an engineering task of overwhelming scope that required, among much else, huge numbers of separate mathematical computations. On a train platform in Aberdeen, Maryland, one day in 1944, von Neumann was pondering how best to scale this mountain of mathematics when, by sheer luck, he ran into a younger mathematician, Herman Goldstine. Chatting on the train, Goldstine told von Neumann about the new ENIAC computer under way in Philadelphia—a machine that could zip through repetitive computations at unprecedented speed. This chance conversation pulled von Neumann into the new world of computers, where he immediately began making major contributions.

  A lot of people who visited ENIAC in the postwar years saw an amazing leviathan of hot, blinking tubes that could, indeed, handle huge computational problems but had no other practical use. When the farsighted von Neumann looked on ENIAC for the first time, though, he saw a future in which computing machines were universally used, with universal applications. The transistor, with its promise of fast, low-power switching, spurred him to even more ambitious theories of what computers might become. More and more, toward the end of his life, he began to see parallels between the evolution of computing machines and the evolution of the human mind. His last book, published posthumously in 1958, was titled The Computer and the Brain.

  Alan Turing, born in London in 1912, was considered a poor student with little academic promise through most of his school career. After twice failing the scholarship exam for Trinity College, Cambridge, he matriculated at King’s, another Cambridge college, and took his degree there in 1934. He became intrigued by the Entscheidungsproblem, a deep mathematical quandary posed by the German scholar David Hilbert. One difficulty in solving this problem was that a solution would take not only ingenuity but also drudgery, because it required endless repetitive calculations. But endless repetitive chores were a waste of human time and energy, Turing felt; the drudgery should be left to machines. While pondering Hilbert’s problem, Turing hit upon an extraordinary new idea: that a machine could be designed, or programmed, to perform any mathematical computation a human could carry out, as long as there was a clear set of instructions for the machine to follow. This ideal computer, with universal application as long as it was programmed correctly, came to be called the Turing Machine, and the concept served as a key inspiration for computer pioneers in Europe and the United States—among them von Neumann, whom Turing met during a stay at Princeton in the mid-thirties.

  During the war, Turing joined the team of mathematicians who gave the Allies an invaluable leg up by cracking the Germans’ Enigma military code. The work involved reading pages and pages of sheer gobbledygook, looking for repetitive patterns of letters that would reveal, under ingenious mathematical manipulation, the inner workings of the German cipher machines. To carry out the calculations, the codebreakers developed simple mathematical machines of their own—real-life variations on the abstract Turing Machine. After the war, Turing worked on the first generation of British computers. At the age of forty-one, shortly after being tried and convicted for homosexual conduct (“Accused Had Powerful Brain,” a London tabloid reported), he died from eating an apple tainted with cyanide he had been using for an experiment. People are still debating today whether that was an accident.

  Turing had gone even further than von Neumann in suggesting that electronic “brains” could eventually match those of their human builders. “One day ladies will take their computers for walks in the park,” he predicted, “and tell each other, ‘My little computer said such a funny thing this morning.’ ” In his most famous paper, “Can a Machine Think?” published in 1951, he predicted that computers would be carrying on “human” conversations with men and with other machines “in about fifty years’ time.” The filmmaker Stanley Kubrick read that monograph, did the addition, and went to work on a movie about the year 2001.

  Von Neumann, in a report to the U.S. Army Ordnance Department, and Turing, in a report for England’s National Physical Laboratory, set forth their notions of the general architecture of an electronic computer. Both agreed that the device would have to carry out four basic functions: “input,” to take in data and instructions; “memory” (in England, “store”), to keep track of the data; “processing,” to do the actual computing; and “output,” to report the answer back to the human user. And both concluded that the logical way to handle data was in the form of binary numbers.

  Since computer calculations would be performed by switches flipping from on to off and back again, Turing wrote, it was natural enough to assign the value 1 to on and 0 to off, and handle all mathematics with only those two digits. “We feel strongly in favor of the binary system for our device,” the von Neumann report agreed. The computer “is naturally adapted to the binary system since we . . . are content to distinguish [just] two states,” he wrote. “The main virtue of the binary system . . . is, however, the greater simplicity and speed with which the elementary operations can be performed.”

  For all the virtues of binary, though, there was a problem— something the mathematicians called the “conversion problem.” This was a euphemism for the fact that few humans understand the binary system and thus would find it difficult to convert a computer’s answers into a form intelligible to people.

  Two solutions were proposed. One was that the human race should drop its decimal system and learn something closer to binary. Under this arrangement, preschoolers watching Sesame Street would be indoctrinated with rhymes like this:

  Such a sharp change in human habits was obviated by a more practicable idea set forth by both Turing and von Neumann. “The one disadvantage of the binary system from the human point of view,” von Neumann’s report noted, “is the conversion problem. Since, however, it is completely known how to convert numbers from one base to another . . . there is no reason why the computer itself cannot carry out this conversion.”

  This suggestion was quickly adopted, and ever since all digital devices have included a piece of circuitry called a decoder, which translates decimal numbers into their binary equivalents. When you punch the keys to put the number 206 into your calculator, the decoder sends out electronic pulses to a chain of eight transistors so that the transistors line up this way:

  on   on   off  off  on   on   on   off
  1    1    0    0    1    1    1    0

  Thus transformed to binary format—11001110—the number 206 becomes comprehensible to a digital machine.
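  In software terms, that decoding step might look something like the sketch below (the function name and the eight-transistor width are, again, inventions for the illustration; a real decoder is a pattern of logic gates, not a program):

  # Turn a keyed-in decimal number into the on/off pattern of an
  # eight-transistor register, with on standing for 1 and off for 0.
  def decode_to_transistors(decimal_number, width=8):
      pattern = format(decimal_number, f"0{width}b")   # e.g. "11001110"
      return ["on" if bit == "1" else "off" for bit in pattern]

  print(decode_to_transistors(206))
  # ['on', 'on', 'off', 'off', 'on', 'on', 'on', 'off']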

  It was human genius on the part of von Neumann, Turing, and others like them that figured out how to use binary numbers and binary math to turn an inert chain of electronic switches into a powerful computational tool. But the computer pioneers did not stop there. They also designed a complete system of logic that permits machines to make decisions and comparisons and thus work through complex patterns, or “programs,” for manipulating words and numbers. The beauty of this logic system is that it, too, is binary; it can be implemented by integrated circuits full of transistors that do nothing but switch on and off.
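  A taste of that logic, sketched with the same two values a switch can hold (the gate functions here are illustrative stand-ins for transistor circuits):

  # The elementary logical operations over the transistor's two states,
  # 1 (on) and 0 (off).
  def AND(a, b):   # on only if both inputs are on
      return a & b

  def OR(a, b):    # on if either input is on
      return a | b

  def NOT(a):      # flips the switch
      return 1 - a

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
  print("NOT 0 is", NOT(0), "and NOT 1 is", NOT(1))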

  Which modern high-tech genius developed this binary logic? None. The logical methods that all digital devices use today were worked out about 100 years B.C. (before computers) by a British mathematician named George Boole.

  Boole was born in Lincolnshire in 1815, the son of a cobbler who was always pressed for money. The family’s lowly status dictated that the boy would enter some manual trade; he was sent to a vocational school that did not even attempt to teach Latin, the sine qua non of a professional future for any English lad of that day. Undaunted, George taught himself Latin and Greek after school. This came in handy in 1831, when the sixteen-year-old boy was forced to leave school and help support the family. He took a job as an assistant teacher, but continued to educate himself. In his seventeenth year Boole had two experiences that changed his life. He read Newton’s Principia and transferred his attention from classical languages to math. Shortly afterward, while walking alone through an open field, Boole was suddenly struck with a “flash of psychological insight” that convinced him that all human mental processes could be formulated in straightforward mathematical terms.

  It would be pleasant to report that Boole then and there dedicated his life to the explication of this great concept. Unfortunately, things were not that easy. The family was now dependent on George for support, and his job left insufficient time for complex mathematical work. His knowledge of Latin and Greek qualified him for the clergy, and he decided to train for ordination. Gradually, though, it became clear that Boole was too much a freethinker for such a career; unlike the Anglican Church, he doubted the literal truth of the Bible and believed in religious tolerance. All his life, in fact, he was suspicious of clergymen and their efforts at indoctrination. On his deathbed, according to a biographer, William Kneale, Boole requested “that his children not be allowed to fall into the hands of those who were commonly thought religious.”