  After indulging themselves for a year or two in the resentful skepticism with which academia generally greets revolutionary new concepts, physicists and electronics engineers gradually warmed to the expansive new possibilities offered by semiconductor electronics. By 1953, The Engineering Index, an annual compendium of scholarly monographs in technical fields, listed more than 500 papers on transistors and related semiconductor devices. A year later there were twice as many, a year after that even more. The titles reflected the intensity and the global sweep of academic research spurred by the new technology:

  “Interpretation of Alpha Values in p-n Junction Transistors”

  “Le Transistron dans le circuit trigger”

  “Circuito moltiplicatore del coefficiente di risonanza con transistor”

  “Tensoranalysis in Transistor-Rueckkopplungsschaltungen”

  “Perekhodnaya, chastotnaya, i fazovaya kharakteristika transistora”

  But the papers reflected as well a quite unscholarly enthusiasm among the academics:

  “Success Story—Transistor Reliability”

  “Transistors Key to Electronic Simplicity”

  “Méthodes d’Optimization Appliquées à la Microminiaturization”

  “Fabulous Midget”

  And then, as designers learned how to make use of the fabulous midget’s properties, the tyranny of numbers began to emerge. Enthusiasm gave way to frustration, even desperation. By the second half of the 1950s, when the problem was growing acute, the titles in The Engineering Index reflected a general sense of disappointment, even despair, in the technical community:

  “Switching Losses in Transistor Circuits”

  “Electronic Equipment—Weight and Volume Penalties to Flight Vehicles”

  “Comment a été résolu le problème de la fabrication des transistors”

  “Design Limitations of Semiconductor Components”

  That last title was imprecise. The “design limitations” were not inherent in the transistors or other components; they stemmed from the basic design structure of all electric circuits.

  Building a circuit is like building a sentence. There are certain standard components—nouns, verbs, adjectives in a sentence; resistors, capacitors, diodes, and transistors in a circuit—each with its own function. A resistor is a nozzle that restricts the flow of electricity, giving the circuit designer precise control of the current flow at any point. The volume control on a TV set is really a resistance control. Adjusting the volume adjusts a resistor; the nozzle tightens, restricting the flow of current to the speaker and thus reducing the sound level. A capacitor is a sponge that absorbs electrical energy and releases it, gradually or all at once, as needed. A capacitor inside a camera soaks up power from a small battery and then dumps it out in a sudden burst forceful enough to fire the flashbulb. If you have to wait until the indicator light on your camera blinks to tell you that the flash is ready to use, you’re really waiting for the capacitor inside to soak up enough energy to make the thing flash. A diode is a dam that blocks current under some conditions and opens it to let electricity flow when the conditions change. An electric eye is a beam of light focused on a diode. A burglar who steps through the beam blocks the light to the diode, opening the dam to let current flow through to a noisy alarm. A transistor is a faucet. It can turn current flow on and off—and thus send digital signals pouring through the circuitry of a computer—or turn up the flow to amplify the sound coming from a radio. Just about every circuit is made of these basic parts—the nozzle, the sponge, the dam, and the faucet—which come in various speeds and sizes. By connecting these standard components in different ways, one can get circuits, or sentences, that perform different functions.
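
  The capacitor example can be put in numbers. A capacitor charging through a resistor follows the standard formula V(t) = V_supply × (1 − e^(−t/RC)); the sketch below, with hypothetical component values chosen only for illustration, shows why the photographer has to wait for the ready light.

```python
# Why you wait for the "flash ready" light: the sponge (capacitor)
# takes time to soak up charge through a resistor.
# All component values here are hypothetical, for illustration only.
import math

R = 10_000        # ohms: resistance in the charging path
C = 120e-6        # farads: a 120-microfarad photoflash capacitor
V_SUPPLY = 300.0  # volts: photoflash capacitors charge to high voltage

def capacitor_voltage(t_seconds: float) -> float:
    """Voltage across the capacitor after charging for t seconds."""
    return V_SUPPLY * (1 - math.exp(-t_seconds / (R * C)))

for t in (0.5, 1.2, 2.4, 6.0):
    print(f"after {t:4.1f} s: {capacitor_voltage(t):6.1f} V")

# The time constant R*C is 1.2 seconds; after about five time
# constants (6 s) the capacitor is essentially full and the ready
# light comes on.
```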

  Writers of sentences are taught to keep their designs short and simple. This rule does not apply in electronics. Some of the most useful circuits are big and complicated, with hundreds of thousands of components wired together. In the era of vacuum tubes, the designers’ implicit awareness of power, heat, and size restraints set a limit to the scope of any circuit design; there was no point in designing a machine that would melt to shards a few moments after it was turned on. With the transistor, though, these fundamental design limitations disappeared. Now the designers could draw up plans for exotic communications and computer circuits—circuits that could steer a rocket to the moon or control a global network of instant mail. Circuits like that might use 50,000 or 500,000 or 5 million transistors, and similar numbers of resistors, diodes, and capacitors. And why not? You could build a circuit that big now without worrying about heat or power problems. On paper, these supercircuits could outperform anything that had been designed before. All you had to do was get the parts, wire them together, and. . . . But there was the problem. That was where the numbers barrier came in. The new circuits on the drawing boards were so big and so complex it was virtually impossible to build them. Nobody could wire that many different parts together.

  An electric circuit has to be a complete, unbroken path along which current can flow. That means that all the components of a circuit must be connected in a continuous loop: resistors wired to diodes, diodes to transistors, transistors to other resistors, and so on. Each component can have two, ten, even twenty interconnections with other parts of the circuit. Making the connections—wiring the parts together—was almost entirely hand labor: it was expensive, time-consuming, and inherently unreliable. A circuit with 100,000 components could easily require 1 million different soldered connections linking the components. The only machine that could make the connections was the human hand.

  Even if somebody—the Pentagon, for example, where price, in the depths of the Cold War, was no object—could pay for that much hand labor, there was no way humans could put together a million of anything without turning out a few that were faulty. By the late 1950s, the electronics industry had come head-to-head with this implacable limit. The Navy’s newest aircraft carriers had 350,000 electronic components, requiring millions of hand-soldered connections; the labor cost—for wiring those connections and testing each one—was greater than the total cost of the components themselves. Production of the first “second generation” (i.e., completely transistorized) computer—the Control Data 1604, containing 25,000 transistors, 100,000 diodes, and hundreds of thousands of resistors and capacitors—lagged hopelessly behind schedule because of the sheer difficulty of connecting the parts. And new computers on the drawing boards would be far more complex. At the end of the decade, people were already planning the computers that would someday guide a rocket to a landing on the moon. But those plans called for circuits with 10 million components. Who could produce a circuit like that? How could it fit into a rocket?
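
  The arithmetic behind that implacable limit is easy to sketch. The per-joint figures below are hypothetical, chosen only to show the shape of the problem: even superb hand work compounds into near-certain failure at these scales.

```python
# Why a million hand-soldered joints doomed a circuit: a tiny
# per-joint failure rate compounds ruinously across the whole loop.
# The numbers here are illustrative assumptions.

components = 100_000
joints_per_component = 10                   # each part wired to ~10 others
joints = components * joints_per_component  # 1,000,000 connections

per_joint_success = 0.9999  # one bad joint in 10,000: very good work

# If a single bad joint can be fatal to the circuit, the circuit
# works only if every joint works:
circuit_success = per_joint_success ** joints
print(f"{joints:,} joints -> chance the circuit works: {circuit_success:.1e}")
# Roughly 4e-44: effectively zero. Hand wiring could never be good
# enough for circuits this large.
```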

  “For some time now,” wrote J. A. Morton, a vice president of Bell Labs, in an article celebrating the tenth anniversary of the transistor, “electronic man has known how ‘in principle’ to extend greatly his visual, tactile, and mental abilities through the digital transmission and processing of all kinds of information. However, all these functions suffer from what has been called ‘the tyranny of numbers.’ Such systems, because of their complex digital nature, require hundreds, thousands, and sometimes tens of thousands of electron devices.” “Each element must be made, tested, packed, shipped, unpacked, retested, and interconnected one-at-a-time to produce a whole system,” Morton wrote in a later article. “Each element and its connections must operate reliably if the system is to function as a whole. . . . The tyranny of large systems sets up a numbers barrier to future advances if we must rely on individual discrete components for producing large systems.”

  In essence, the small community of engineers exploring the frontiers of electronics in the 1950s faced the same abject frustration that had confronted the small community of seamen exploring the frontiers of navigation in the 1590s. At the far western extremity of the Atlantic, hard against the shores of Central America, the explorers could look westward from the masthead and see, “with a wild surmise,” a vast new ocean, a whole new world, beckoning across the isthmus. But there was no way—no way short of the impossibly expensive, time-consuming, and unreliable voyage around the tip of South America—to get to that wonderfully promising new stretch of sea. The future was within sight, tempting, tantalizing, but out of reach. Just so for Jack Kilby, Bob Noyce, and their colleagues. A vast new electronic world was right there on the blueprints, but impossible to achieve. And so physicists and electronics engineers embarked on a great voyage of discovery, searching for a route across the numbers barrier.

  The search became a top-priority technological concern throughout the industrialized world. The Royal Radar Establishment, racing to bring the honor of this important accomplishment to Great Britain, developed a promising concept as early as 1952 but failed to make it work. The French, the Germans, and the Russians competed against one another; in the United States, the Army, the Navy, and the Air Force competed just as fiercely, each service pushing its own preferred solution, each rejecting the ideas of the others. Private firms, sensing a gold mine, poured millions of dollars and man-hours into the effort. But through most of the 1950s none of these endeavors really helped. Patrick Haggerty, the president of Texas Instruments, complained that most of the proposed solutions to the tyranny of numbers “tend to exacerbate the tyranny.”

  The multifaceted efforts to deal with the numbers problem were grouped in the technical literature under the general title “miniaturization” (or “subminiaturization,” or “microminiaturization”). It was an unfortunate term because it suggested a solution that could not work. The basic thrust of miniaturization was an effort to make electronic components extremely small, thus reducing the overall size and weight of complex electronic devices. This goal was obviously important to the military, which had to squeeze radios, radar and sonar devices, and computers into the nooks and crannies of missiles and submarines. One of the first miniaturization programs was a Navy-financed effort called Operation Tinkertoy. But there were civilian implications as well. “In civilian equipment, such as computers,” the trade journal Electronics noted, “the number of components alone makes miniaturization essential if the computer is to be housed in a reasonable-sized building.”

  But turning out transistors, resistors, and the like on Tinkertoy scale did nothing to reduce the sheer number of components and connections. The Tinkertoy business tended to exacerbate the tyranny because circuits composed of tiny parts were harder, and costlier, to build. On the assembly lines, the women who soldered circuits together—it was almost entirely women’s work, because male hands were considered too big, too clumsy, and too expensive for such intricate and time-consuming tasks—now had to pick up miniature components and minute lengths of wire with tweezers and join them under a magnifying glass with a soldering tool the size of a toothpick. Circuits made under those conditions were far more likely to end up with faulty connections. In many cases, even a single bad connection could be fatal to the entire circuit, just as a single burnt-out bulb can make an entire string of Christmas lights go dark. Electronic devices that relied on circuitry employing the “microminiature” parts were famously unreliable.

  To enhance reliability, the designers tried redundancy. Instead of building a radio with a single set of components (a typical small radio of the late fifties might have used a half dozen transistors wired to a dozen resistors, capacitors, and diodes), the electronics companies started making radios with an extra circuit built right in—like a car built with two front axles just in case one should snap in half on the road. The redundancy approach tended to exacerbate the tyranny because the extra components and extra wire required more interconnections, and thus more labor. Worse, redundant circuitry took up more space, and that was anathema, particularly to the people who built computers. Even without redundancy, electronic circuits were already too large. Large circuits undermined the single most important asset of modern electronic equipment: speed.

  As we’ll see in Chapter 9, calculators, computers, digital clocks, video games—for that matter, all digital electronic devices—are extremely dumb tools. But they are extremely fast extremely dumb tools. A computer reduces every question, every computation, every decision to the simplest possible terms (yes or no, one or zero, true or false) in the machine’s internal circuitry. These two black-or-white states are represented by switches—transistors— that are either on or off. An astrophysicist mapping the universe in the observatory needs to calculate the twenty-fourth root of arctan 245.6; to do it, he types the problem into his computer. The machine has to work through a few hundred separate yes-or-no steps—that is, transistors have to switch on and off hundreds of times—just to figure out that someone has punched its keys. To determine which keys were pushed, and then to solve the problem, will take another 100,000 steps, quite possibly more. A kid playing Super Zaxxon in the arcade needs to destroy an enemy base; to do it, he pushes the “Fire” button. The machine has to work through dozens of separate yes-or-no steps just to figure out that the button was pushed. To fire the missile, and see if it hits anything, will take another 5,000 steps, quite possibly more. The machines can get away with their absurdly convoluted way of doing things only because the transistors switch, from on to off, from off to on, quickly. At a switching speed of once per second, computers would be impossible; at 1,000 times per second, merely impractical. Switching at a million times per second, computers become important. At a billion times per second—completing one step of the problem every nanosecond—they become the foundation of a revolution that has swept the world.
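
  The arithmetic of switching speed can be made concrete. The sketch below uses the illustrative 100,000-step count from the example above to show how raw switching speed turns an absurd number of yes-or-no steps into an imperceptible delay.

```python
# Time to finish a task that takes a given number of yes-or-no
# switching steps, at different switching speeds. One step completes
# per switch, so total time = steps / rate. The step count is the
# illustrative 100,000 from the example above.

def task_time_seconds(steps: int, switches_per_second: float) -> float:
    """Seconds to complete the task at the given switching rate."""
    return steps / switches_per_second

steps = 100_000

for rate in (1, 1_000, 1_000_000, 1_000_000_000):
    t = task_time_seconds(steps, rate)
    print(f"{rate:>13,} switches/sec -> {t:>12,.4f} seconds")

# 1/sec: ~28 hours (impossible); 1,000/sec: 100 seconds (impractical);
# 1,000,000/sec: 0.1 second (useful); 1,000,000,000/sec: 0.0001 second
# (the foundation of a revolution).
```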

  “After you become reconciled to the nanosecond,” Robert Noyce observed, “computer operations are conceptually fairly simple.” In this respect, the electronic revolution of the twentieth century is the intellectual mirror image of the biological revolution of the nineteenth. Only after they became reconciled to enormously long periods of time—enough time for a Dinohippus to turn into a donkey—could Charles Darwin and his contemporaries contemplate species evolving on an evolving planet. Only after they became reconciled to enormously short periods of time—microseconds, nanoseconds, picoseconds—could the computer pioneers contemplate machines solving problems by turning switches on and off. The central concept of computer operations is that the machines operate inconceivably fast. Speed is the computer’s secret weapon. If computers did not work as fast as they do, no one could justify the time and materials required to build them. At a switching speed of 1,000 times per second, it would take a whole second, maybe two, for a computer to add 2 and 2. At that rate, it would make no sense to buy the machine. The human brain, the sublimely intricate, powerful, efficient computer that everyone gets for free, can solve the problem faster than that.

  The wires in an electric circuit tend to slow things down. The transistors in a computer switch on and off in response to electronic signals. A pulse of electricity moving through a wire reaches the transistor, and the transistor switches on; another pulse comes along, and the transistor switches off. No matter how quickly the transistor itself can switch, it cannot do so until the pulse arrives telling it what to do. The more wiring there is in a circuit, the farther these messenger pulses have to travel. In the 1950s, the limiting factor in computer speed was the travel time for those electronic signals moving through the circuit. In the biggest computers, with literally miles of wiring, it took so long for pulses to travel from one side of the circuit to the other that computation rates were seriously impaired.

  At first blush, it might appear that there were two potential solutions to this problem: either speed up the signals, so they move through a large circuit faster, or shrink the circuits. Someday, if relativity theory is displaced and the laws of twentieth-century physics are stood on their heads, the first solution may be at hand. At present, however, it is against the law. Electronic signals already move through a circuit at close to the universal speed limit—the speed of light. If modern physics is correct, nothing will ever move faster. That leaves the second solution. To increase computing speed, it was necessary to reduce the distance the messenger pulses had to travel—that is, to make the circuits smaller. But smaller circuits meant decreased capacity. The result was a paradox. In the argot of the engineers, a computer’s “power” is a measure of both its capacity to handle big problems and its speed in solving them. It was possible—at least in applications where cost and size were not serious problems—to increase problem-solving capacity by wiring in more transistors. But more transistors and more wire meant a larger circuit, which reduced computing speed. Thus the effort to build in increased computing power led to decreased computing power. It was the technological equivalent of Catch-22: the tyranny of numbers.
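
  How much time does wiring cost? A rough computation, sketched below with illustrative wire lengths and an assumed propagation speed of about 70 percent of the speed of light (a typical figure for signals in wire), shows why miles of wiring were fatal to speed.

```python
# Rough signal travel times through circuit wiring.
# Assumption: pulses propagate at about 70% of the speed of light,
# a typical figure for electrical signals in wire; the exact fraction
# depends on the wiring.

SPEED_OF_LIGHT = 299_792_458  # meters per second
PROPAGATION_FACTOR = 0.7      # illustrative assumption

def travel_time_ns(wire_length_m: float) -> float:
    """Nanoseconds for a pulse to cross the given length of wire."""
    return wire_length_m / (SPEED_OF_LIGHT * PROPAGATION_FACTOR) * 1e9

for meters in (0.01, 1.0, 1609.34):  # a centimeter, a meter, a mile
    print(f"{meters:>10,.2f} m of wire -> {travel_time_ns(meters):>10,.2f} ns")

# A centimeter costs ~0.05 ns; a mile costs ~7,700 ns (7.7 microseconds).
# A machine trying to complete one step every nanosecond cannot wait
# microseconds for its pulses; the only cure is a smaller circuit.
```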

  “It was a situation where, quite clearly, size dictated performance,” Bob Noyce recalled. “Not just performance, in the sense of limiting computing speed, but the size and complexity of electronic circuits dictated cost, reliability, utility.”

  “The things that you could see that you wanted to do were going to take so many transistors, so many parts, that it would just be prohibitive, from a cost standpoint, from a size standpoint, any way you wanted to look at it,” Jack Kilby remembered.