Domestication of Radical Rhetoric
By mid 1961, the Air Force raised concerns about the promised progress of the molecular electronics program. The major criticism concerned the ‘slowness in proceeding with manufacturing development’.16 Although Westinghouse succeeded in making eight prototypes for the 1960 status meeting, manufacturing these in large quantities was quite another story. The relationship between Westinghouse and the Air Force began to sour as Westinghouse experienced difficulties in turning custom-made prototypes into mass-produced devices. By late 1961, Westinghouse had still failed to deliver the products, fueling growing suspicion within the Air Force as to whether Westinghouse ‘ha[d] a coherent, vibrant organization working on molecular electronics’ that could bring the contract to the promised conclusion.17
Complaints went both ways. From Westinghouse’s perspective, the Air Force did not hold up its end of the bargain by ‘allow[ing] the various phases of our contract to sort of dissipate in the mist’.18 For example, the Air Force’s demand for sample products ‘without financial remuneration but with the hope that this would get for us this large manufacturing methods contract’ forced Westinghouse to ‘spread thin over a large number of functional blocks’. This impaired Westinghouse’s ability to aggressively bring laboratory prototypes into manufacturing. Compared with other military contractors who ‘have military funding for specific items’, Westinghouse had to struggle with a large number of functional blocks ‘essentially with
[their] own funds’.19
As it turned out, the centerpiece of molecular electronics that won Westinghouse the Air Force contract became the major bottleneck. In April 1962, Gerath wrote: ‘A review of molecular engineering contracts [reveals] that our progress which is slower than anticipated is partially related to materials processing and techniques. It is realized that the development of proven processes is a time consuming operation’.20 This is perhaps not surprising given the relative lack of experience at Westinghouse, and the inherent difficulties of controlling material properties at a molecular level. It was at this juncture that radical rhetoric surrendered to a more incremental approach.
As Westinghouse struggled with manufacturing issues, Air Force enthusiasm toward molecular electronics began to subside. As early as May 1961, a trade publication reported that the ‘USAF [was] hedging molec-tronics bets’ by using integrated circuits and discrete components as an ‘interim step’ toward full ‘molecularization’ of electronic equipment.21 When the central technique that defined the identity of molecular electronics proved unfeasible as a manufacturing process, the term rapidly lost its appeal not only for its patrons in the Air Force but also for Westinghouse engineers. In 1962, a new plant was established in the suburbs of Baltimore in Elkridge, MD, consolidating ‘molecular electronic activities that had been carried on in laboratories in several other locations’. However, according to a Westinghouse in-house magazine, the process that was implemented was an ‘epitaxial diffused planar process’, which Fairchild and Western Electric had developed in 1960. Perhaps more significantly for our purposes, Westinghouse began phasing out the term ‘molecular electronics’ by, for instance, calling their operation ‘the new art of molecular electronics, also called integrated circuits’.22
In 1963, when the editor of The Tool and Manufacturing Engineer visited the Westinghouse Elkridge plant, the first manufacturing step was to ‘lap and polish both sides of the silicon wafers’, which clearly indicates that Westinghouse had abandoned the dendritic approach to preparing semiconductor materials. Although the final products were called ‘molecular circuits’, the structure and manufacturing process were very similar to the integrated circuits made at Fairchild and Texas Instruments (see Fig. 5, which bears a striking resemblance to contemporary patent illustrations for planar integrated circuits). It was no surprise that the Air Force ‘hedged’ its bet and went for the integrated circuits made by these firms that offered their products at a lower price.
By the mid 1960s, the term ‘molecular electronics’ had largely disappeared from the technical literature. Westinghouse’s semiconductor operation continued under the rubric of ‘molecular electronics’ until the mid 1960s. However, this only indicates the difficulty of ousting a term once it had made its way onto the organizational chart of a large bureaucracy. At Westinghouse and elsewhere, silicon integrated circuitry became the standard terminology – and practice – for the microelectronics industry. To be sure, Westinghouse made outstanding contributions in defense and especially space electronics
[Figure credit: From Black (1963: 79–84). Copyright 1963 by the Society of Manufacturing Engineers. All rights retained. This image appears with permission from Manufacturing Engineering, the official publication of the Society of Manufacturing Engineers (SME).]
well into the 1970s.23 And through its foray into molecular electronics, Westinghouse obtained valuable knowledge and skills in areas such as clean room techniques and scanning electron microscopy.24 But the company failed to achieve equal success in the commercial market. By the end of the decade, the center of electronics had already migrated across the country to what is now known as Silicon Valley.
Silicon Scaling and the Rebirth of Molecular Electronics
The downfall of the first venture into molecular electronics, then, was that it was sold both as a transition to a wholly new kind of electronics platform (that is, it would fill the next stage in the sequence of vacuum tube→discrete transistor→integrated circuit→?) and as a manufacturable product that could be mass-produced in great enough quantities and at low enough cost to compete with silicon ICs in the Air Force’s short time horizon. When faced with the contractual (and competitive) obligation to deliver a product, Westinghouse chose the route of known manufacturability (the silicon IC) rather than that of an untried, possibly unmanufacturable, new platform.
Yet the desire for a new electronics platform beyond silicon ICs never wholly disappeared. Many proposals for radical new platforms have appeared over the years: spintronics, DNA computing, Josephson computing, quantum computing, cellular automata, and so on.25 Partly by chance and partly because of a genealogical link to Westinghouse, ‘molecular electronics’ (in a variety of related guises) has continually reappeared since the mid 1970s as a proposed post-silicon platform. The attraction – but also the death knell – of all these revolutionary platforms lies in the tremendous profit and influence that the industry built around silicon. Any person/organization/nation that could develop the next microelectronics platform after silicon ICs could potentially control an industry with a quarter of a trillion US dollars in sales.26
Before a radical new platform could supplant silicon, however, it would have to be as manufacturable as silicon ICs – someone would have to make mass quantities of chips based on the new platform that would be faster and cheaper than silicon or provide some other advantage. One strategy for a revolutionary platform proponent would be to develop their technology for some niche application. This would provide the proponent with resources and time to work out the new platform until it is manufacturable enough to compete with silicon. A few exotic forms of microelectronics have, in fact, survived this way.27 Gallium arsenide integrated circuits, for instance, were widely tipped to displace silicon in the 1980s. They have yet to do so, but today they are commonly used in cellular telephones – a sizeable enough market that manufacturing knowledge about gallium arsenide can continue to grow, with the potential to eventually displace silicon.
Now we can ask: Since the 1970s, what kinds of people and organizations have been attracted to radical platforms such as molecular electronics? One possibility would be an academic lab group or a small start-up company.28 In fact, academics did become interested in molecular electronics (as well as other post-silicon platforms) in the 1980s, and a few start-ups emerged in the 1990s. These organizations proved adept at making a molecular device (or, usually, some part thereof); but so far no such group has acquired the manufacturing know-how on its own to make many molecular devices integrated as a single chip, much less to make thousands or millions of such chips.
A second possibility would be the firms in Silicon Valley or their closest international competitors. Since the early 1970s, though, the innovation regime of such firms has discouraged a leap away from silicon to a radical new platform. Among such firms, chip production has been a multi-organizational affair. A piece of silicon goes through well over a hundred process steps on the way to becoming a chip. Each process step involves at least one large, multi-million dollar machine (made to order by one or several equipment suppliers) and numerous smaller tools (photoresists, polishing pads, slurries, and so on) manufactured by an array of materials companies (Hatch & Mowery, 1998). All these materials and pieces of equipment are very precisely engineered and highly dependent on each other; any small change to a process step ramifies through many other steps and therefore affects the practices of a large number of organizations.
For instance, moving from aluminum to copper interconnects (the small ‘wires’ between transistors in a chip) was predicted for 20 years to enable 10–20% improvement in performance, yet it took a decade of intense negotiation and engineering through the 1990s to make the switch because even this seemingly small modification of one process step had
implications for a further 25 other steps.29 Such changes require a great amount of coordination, either through standards-setting by a dominant firm (for example Intel) or road-mapping through a trade association or quango. These bodies can manage incremental improvements to process steps, but they are extraordinarily averse to large discontinuities in platform that would require rebuilding their infrastructure from scratch.
If not universities, start-ups, or Silicon Valley, then who? The history of molecular electronics since the 1950s points to two types of organization that have both been attracted to radical changes in the microelectronics platform and believed they had the wherewithal to make the new platform manufacturable. Those organizations are large, vertically integrated firms and national security research bureaucracies. The latter usually do not have their own manufacturing capacity, but they have unique requirements (for example cryptographic supercomputing or radiation hardening) that make them wary of Silicon Valley’s consumer-oriented innovation paradigm; and if their requirements are urgent enough they have the funding to experiment with new platforms.
As for vertically integrated monopolies or near-monopolies (for example Westinghouse, IBM, or AT&T), until the 1990s these firms had large research arms, substantial manufacturing capacity, and exotic requirements that, again, made them leery of Silicon Valley’s way of doing things. For instance, as Rebecca Henderson (1995) has pointed out, where Silicon Valley firms have been extremely reluctant to discard optical lithography in patterning transistors onto their chips (and have extended the life of that technology more than 20 years beyond expectations), firms such as IBM and AT&T were the first to develop (and lobby for the widespread adoption of) exotic lithographies such as X-ray, electron beam, and extreme ultraviolet.30
Such skepticism extended not just to one piece of the silicon integrated circuit platform (optical lithography) but to the platform as a whole. Indeed, these firms were among the first to call attention to the eventual demise of incremental improvements to silicon ICs. In the early 1970s, Robert Noyce, Carver Mead (1972; Mead & Rem, 1979), Gordon Moore (1975), and others associated with Intel were articulating an open-ended ‘Moore’s Law’ of miniaturization. At exactly the same time, Robert Keyes (1972, 1975, 1977), IBM’s microelectronics guru, was announcing that silicon ICs would cease to get any smaller within just a few years.31 While manufacturers such as Intel – always tightly networked with and mutually dependent on an array of suppliers – saw no presumptive anomaly in silicon, vertically integrated firms such as IBM thought otherwise and believed they could reinvent their microelectronics manufacturing infrastructure from scratch.
Of course, IBM continued to plow money and people into improving silicon technology; but its research arm was easily enticed into adventurous explorations of post-silicon technologies. For instance, from 1969 to 1983 IBM spent more than US$100 million (in 1970s dollars) to develop a supercomputer based on superconducting materials such as niobium or lead rather than traditional semiconductors such as silicon.32 The company
tried to develop all aspects of this computer – from the exotic chips to the refrigerators needed to keep them cool to mundane equipment such as cables and printers. And even though IBM researchers proved adept at making small quantities of superconducting logic elements, by the time they could even assess the manufacturing obstacles to making the millions of such elements needed for a supercomputer, silicon’s slow, steady improvement in cost and speed had erased much of the superconducting chip’s hypothetical advantage.
The rebirth of molecular electronics was enabled by IBM’s ambivalent pursuit of both better silicon technology and a post-silicon microelectronics platform. Even as it explored alternatives, IBM was committed to developing the advanced materials needed to make smaller, faster, cheaper silicon integrated circuits. In the early 1970s, one piece of this effort was Bruce Scott’s group at IBM’s Yorktown Heights lab that was trying to develop new lithographic resists used in patterning of silicon. Resists are lacquer-like organic chemicals that, like photographic film, change their chemical character when exposed to light, x-rays, electron beams or other lithographic beams; this means that when they are exposed to the image of a pattern of transistors, an acid can then etch away the areas that have been exposed to the beam, leaving behind a solid negative of the transistor pattern. Further etches can then be used to transfer that pattern directly into the silicon.
Scott was interested in seeing whether a class of materials known as organic conductors, which had been discovered in the late 1960s, might be used as lithographic resists. Ordinarily, organic compounds are very poor conductors of electricity, but certain charge-transfer salts such as tetrathiafulvalene-tetracyanoquinodimethane (TTF-TCNQ) had been found to be reasonably good conductors. These are compounds made up of alternating layers of an electron donor molecule (for example TTF) and an electron acceptor (for example TCNQ). By moving along an alternating stack of donors and acceptors, an electron is able to pass through the material encountering relatively little resistance.
Scott believed that if a charge-transfer salt could be designed which gained or lost its ability to conduct electrons after it had been exposed to a lithographic beam, then it would make an excellent resist. He therefore tasked the group’s synthetic chemist, Ari Aviram, with making a series of charge-transfer salts for the other members of the group (largely physicists) to characterize. But Aviram began to formulate a more far-reaching vision for charge-transfer salts.33 As a young father with a growing family to feed, Aviram could see three obstacles to career advancement that such a vision might overcome. First, as a synthetic chemist with a master’s degree he felt at a disadvantage among the physics- and PhD-chauvinists who were Yorktown’s cultural and managerial elite.34 Second, his current work was largely auxiliary: he made samples to order for other people to build theories and experiments around. Finally, charge-transfer salts were seen as relevant to somewhat low-status applications at Yorktown. At best, they could be used in Scott’s photoresists, but more likely they would end up in parts for IBM’s line of photocopiers.
By 1970, therefore, Aviram had decided to get his PhD, and for his dissertation research he planned to develop a theory for using charge-transfer salts not for photocopiers but for the kind of radical new microelectronics platform that was bound to grab attention within IBM Research. So, that autumn, he walked into the office of Mark Ratner, an assistant professor in theoretical chemistry at New York University, and persuaded Ratner to supervise his dissertation on electron propagation in organic molecules.35 Ratner – 3 years Aviram’s junior – agreed to this unusual and forward request partly because Aviram had convinced Scott that IBM should pay his tuition as well as bring Ratner in to consult on organic conductor research.
He also agreed because he could see an interesting theoretical question in Aviram’s proposal. Aviram, in preparing bulk quantities of charge-transfer salts, had begun thinking about the properties of a single molecule of a compound such as TTF-TCNQ. This molecule would have a functional unit (TTF) rich in electrons and another unit (TCNQ) poor in electrons. This made the molecule similar to a traditional semiconductor microelectronic component called a diode, in which an electron-poor region of semiconductor is electrically adjacent to an electron-rich region. When a voltage is placed across the diode such that electrons run from the electron-rich region to the electron-poor one, a substantial current is created; when the voltage is reversed, electrons pass poorly through the electron-poor region and little current is created. The theoretical issue for Ratner was whether a single organic molecule could be designed that would have a similar current-versus-voltage graph to that of a semiconductor diode.
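The rectifying asymmetry described above can be caricatured with the textbook Shockley equation for an ideal junction diode. The sketch below is purely illustrative, with generic textbook parameter values; it models a conventional semiconductor diode, not any measurement of a molecular rectifier:

```python
import math

def diode_current(v, i_s=1e-12, v_t=0.02585):
    """Ideal (Shockley) diode current in amperes.

    i_s is the saturation current and v_t the thermal voltage at
    room temperature -- generic textbook values, assumed here only
    for illustration.
    """
    return i_s * (math.exp(v / v_t) - 1.0)

# Forward bias: current grows exponentially with applied voltage.
forward = diode_current(0.6)

# Reverse bias: current saturates near -i_s, essentially blocking flow.
reverse = diode_current(-0.6)

print(forward, reverse)
```

The milliamp-scale forward current against a picoamp-scale reverse leakage is the asymmetric current-versus-voltage curve that Ratner and Aviram hoped a single donor-acceptor molecule could reproduce.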
The pragmatic issue for Aviram was to take his and Ratner’s theory and promote it to his managers as the basis for a new ‘molecular’ electronics. Abstractly, the step from a molecular diode to a molecular transistor is small. A transistor (especially the bipolar junction transistors on which IBM’s machines then depended) is basically two diodes back-to-back – that is, a sandwich of electron rich–poor–rich regions (or poor–rich–poor). The main difference is that the middle region of this sandwich (the ‘gate’) is used to control current flow across the whole transistor (by the addition or subtraction of a very small voltage on the gate).36
Aviram believed that, with both his PhD and a theory of molecular diodes in hand, IBM would allow him to build a program to take the next steps: design and synthesize a molecular transistor, build small devices from these molecules, and eventually wire together millions of these transistors into a full-fledged microprocessor. He framed this program as a radical leap in miniaturization not just beyond Silicon Valley firms, but right to the conceivable limits of microelectronics.
Thus far, the components which carry out the processing of electrical energy have moved through three ‘generations’: (1) the vacuum-tube ... (2) the transistor ... and (3) integrated circuits which at increasing levels of miniaturization combine a host of electronic devices ... on single ‘chips.’ [Aviram and Ratner] have suggested a drastic reduction in component size far below present-day levels of circuit fabrication. ... [T]hey have proposed
the design of individual molecules which would be able to act as functioning electronic devices in circuitry.37
Note how this reiterates the notion of molecular electronics as the fourth (and final) generation of microelectronics that is captured in Figs 1 and 2.
Aviram and Ratner (1974) published a now-famous paper on ‘Molecular Rectifiers’ describing how a modified charge-transfer salt (Ratner added a small barrier between donor and acceptor) could operate in a circuit.38 And, as Aviram had hoped, this research did spark considerable discussion within IBM. For Ratner, Scott, and Aviram’s other colleagues, though, the paper was a theoretical curiosity which could not be tested experimentally, much less scaled up to a product. Aviram’s charismatic vision had its moment at IBM – it was taken seriously by Scott, Philip Seiden (director of Physical Sciences at IBM) and the Yorktown semiconductor establishment, some of whom (for example Sokrates Pantelides) eventually defected from semiconductors to molecular electronics in the late 1990s. It even found its way into the mainstream media (Time, 1974).
Even Aviram, though, had no answer to the problem of manufacturability. At the time, he could not even synthesize the molecular rectifier he and Ratner proposed, much less put it into a functioning circuit – let alone wire together millions of such molecules! IBM was already throwing hundreds of millions of dollars at a disruptive new form of microelectronics (superconducting computing) that looked much closer to manufacturability. At the same time, it was investing billions into somewhat less disruptive improvements to silicon manufacturing (for example x-ray lithography). In that environment, a research group such as Scott’s could afford to pursue the esoteric questions about electron transport that had caught Ratner’s interest, but there was little basis to take Aviram’s lead and move directly into molecular computing.
Thus, for a few years Aviram was allowed to develop his ideas, and to explore new materials for molecular devices such as conducting polymers (a new kind of organic conductor discovered in 1973). By the late 1970s, though, Aviram’s group had dispersed – Scott into administration at IBM headquarters, others to IBM Almaden (in California). Aviram, in Ratner’s words, was ‘exiled’ to work on printer inks, and molecular computing at IBM went into hibernation. The ‘Molecular Rectifiers’ paper – today seen as the founding statement of modern molecular electronics – sank virtually without trace until Aviram returned to the topic in 1988. As Ratner says, ‘nobody read it, and it just laid there for years’ (Wolinsky, 2004). Yet in that time, largely independent of Aviram and Ratner, a molecular electronics community – dispersed across regions and organizations and disciplines – came into being for the first time.
Forrest Carter as Transitional and Catalytic Figure
Aviram and Ratner never actually used the phrase ‘molecular electronics’. However, their paper is seen as the origin point of the modern field with
that name, because it gestured to the general features of what now counts as molecular electronics: the substitution of more-or-less discrete single organic molecules for integrated silicon transistors in a microelectronic circuit. The appropriation of the Westinghouse program’s name for this research area and (in large part) the mobilization of practitioners to work on it, however, fell to Forrest Carter, a chemist at the Naval Research Laboratory, in the late 1970s and early 1980s. Aviram and Ratner’s 1974 paper was important in spurring Carter – he was one of the few to cite it before 1988, and he included both men in his early community-building. Yet Carter’s program developed in parallel with, and was initially much more successful than, Aviram’s. In the rest of this paper we follow the molecular electronics community that first nucleated around Carter and then, in the early 1990s, re-formed (with Aviram’s help) as a subsidiary of, rather than a competitor to, silicon microelectronics.
Critically, Carter’s interest in molecular computing grew out of an institutional and disciplinary environment similar to Aviram’s, as well as a personal curiosity dating to his graduate training at Caltech. There, he had studied organometallic chemistry under Howard Lucas, graduating in 1956 (Carter, 1956). His Caltech mentors also included Linus Pauling and Richard Feynman; indeed, by Carter’s account, he attended parties at Feynman’s house and played bongos and talked science with the older man. It is interesting to note that Carter knew of and was influenced by Feynman’s (1960) famous ‘Room at the Bottom’ speech – much more so
than most other early nanotechnologists.39
Moreover, Carter incorporated elements of the Feynman persona into his own presentation of self, developing an expansive, charismatic style that helped him promote bold visions and gather protégés, but which also led to institutional conflict. Like Feynman, he had a taste for exotic hobbies (hot rods, motorcycles, fencing, platform diving, salsa dancing); and, like Feynman, he became known for extraordinary parties, risqué banter, and a coterie of young acolytes. Carter’s striking appearance, rumbling voice, and colorful banter (cited to this day by skeptics and believers alike) personalized molecular electronics as neither Aviram nor the Westinghouse engineers had before him.40
From Caltech, Carter moved to Westinghouse in 1957. While not directly involved in the molecular electronics program, he was aware of the project and even worked on semiconductor materials in collaboration with the Army Signal Corps (Ryan et al., 1962).41 In 1964, he moved to the Naval Research Laboratory (NRL), where he became one of the laboratory’s specialists in x-ray photoelectron spectroscopy (a very recently developed surface analysis technique). As Bruce Hevly (1987) has shown, the postwar NRL portrayed itself as a ‘university of applied research’ where scientists were expected to contribute broadly to Navy-relevant questions, but were also free (and expected) to pursue basic research questions of interest to academic colleagues. Individual scientists often worked on several projects at once, covering a spectrum of time horizons over which their research would evolve into relevance for the Navy. In this environment, Carter managed to balance projects of immediate interest to the Navy with work of indirect or long-term relevance – using the former to build approval for the latter.
[Figure: Forrest Carter explaining molecular electronics. Photograph by Charles O’Rear, courtesy of the National Geographic Society.]
Carter saw the new organic conductors as materials that could be parlayed both for medium-term research directly relevant to Navy applications and
for more speculative explorations of a post-silicon microelectronics platform. Institutionally, he was well placed to take advantage of this view. By the mid 1970s, the hot area of organic conductor research was conducting polymers, and many researchers (including Aviram) had shifted from charge-transfer salts to polymers such as polyacetylene and polysulfur nitride.42 As it happened, the funding that enabled the initial attention-getting research on conducting polymers had come from Kenneth Wynne, a grant officer at the Office of Naval Research; through the 1970s, Wynne and the Navy were instrumental in mobilizing and coordinating practitioners in this field.43
In 1976, Wynne established a Navy Committee on Advanced Polymers to explore naval applications of these materials. Conducting polymers held enormous promise for the Navy – plastic electronics could make sensors and communication equipment cheaper, lighter, more durable, and more resistant to corrosion (always a problem at sea and a major area at the NRL), and might enable far-out applications such as advanced batteries for submarines or super-efficient solar cells for satellites. In conjunction with Wynne’s committee, Fred Saalfeld, superintendent of the Chemistry Division at the NRL, organized an Electroactive Polymers program and brought the leaders in the field (most of them Wynne’s grantees) in to consult.
Conducting polymer samples are usually prepared as thin films, meaning that surface effects are important. And since Forrest Carter ‘owned’ the only x-ray photoelectron/Auger electron spectrometers (two key surface analytic tools) in the NRL’s Surface Chemistry branch, he could contribute key data to the Electroactive Polymers program. While some researchers resented being dragooned into the project by Saalfeld, Carter took to it with relish.44 This can be seen particularly in the reports on the program’s annual symposia for 1978 and 1979 (Lockhart, 1979; Fox, 1980), where Carter provided the program’s broad theoretical outlines (Carter, 1980a), many of its empirical findings (Brant et al., 1980), and – critically – the outlines of a long-range vision linking conducting polymers to molecular computing (Carter, 1979, 1980b). In a high-profile program supported by powerful managers, Carter got results and, therefore, accrued significant latitude to pursue longer term, less directly Navy-relevant research. By 1978, Carter was spending this social capital on a futuristic vision that extended Aviram and Ratner’s work.
Carter took Aviram and Ratner’s picture of discrete organic molecules substituting one-to-one for silicon diodes and transistors and began filling in the hypothetical details. For instance, their rectifier had simply been a free-floating molecule; Carter now outlined ways that such components could be anchored to a solid substrate. Their rectifier had not been connected to anything; now Carter began sketching a theory for ‘molecular wires’ to bridge components. At the same time, the raw materials for Carter’s speculations were the new conducting polymers, such that he could justify his work as merely the logical next step in NRL’s Electroactive Polymers program. For instance, his molecular wires were merely unbranched chains of a conducting polymer such as polysulfur nitride. Polysulfur nitride is made from repeating units of sulfur nitride (SN); so he
envisioned anchoring one end of the polymer to a substrate of silicon, then adding SN units as needed and ending the wire with a molecular component: Si–N=SN–SN–SN– ... SN–SN–SN–molecular transistor.
Organizationally as well, he represented this work as a mere extension of the Electroactive Polymers program. Thus, his first effort at building a community around his vision for molecular computing was a symposium in 1981 at (and financed by) NRL on ‘Molecular Electronic Devices’ (Carter, 1982). This was explicitly modeled on the Electroactive Polymers symposia and included many of the same participants (Kenneth Wynne’s conducting polymer grantees), and the talks emphasized those technical problems common to both electroactive polymers and molecular computing, such as ensuring electronic connection between organic and inorganic components of a circuit and the need for an improved theory of carrier mobility in organic molecules.
At the same time, to put Carter’s vision for molecular computing into practice clearly required a long leap away from the Electroactive Polymers program, and a substantial leap beyond the NRL’s mission and organizational boundaries. Thus, he needed some way to justify molecular computing’s relevance to the Navy. Concurrently (and unlike Aviram) he saw the need for an eye-catching rhetoric that would attract influential patrons beyond his own organization. By the early 1980s, therefore, he had found that rhetorical hook by reframing molecular computing to emphasize its implications for national security and economic competitiveness.
It’s important to remember how panicky the American state, microelectronics industry, and media were about the Japanese capture of certain semiconductor markets in the late 1970s (Prestowitz, 1988). The incremental progress of Silicon Valley firms was now seen as a liability, because they relied on an innovation pathway for silicon that supposedly uncreative Japanese firms could (so it was argued) simply copy without introducing real high-tech breakthroughs. Carter tapped into these fears by arguing that only a truly radical change, based on advances in fields well outside silicon microelectronics, could deliver a circuit so small, so fast, and so technologically disruptive (and therefore not easily copied) that it would vault American firms back into the lead and put foreign competitors at a permanent disadvantage. In so doing, Carter explicitly justified his molecular electronics work by pointing to the military’s mandate to confront all threats to national security and competitiveness, including disruptive economic threats.
This was appealing rhetoric at the time, and the Navy was initially very supportive of Carter’s molecular electronics. The NRL was eager to sponsor work that would (as one of Carter’s bosses put it) ‘lead to the discovery of important new phenomena and exciting new technologies’ (Jarvis, 1980) – that is, research that contributed both to national security objectives and to basic research. Like the Air Force and Westinghouse (and, to a lesser extent, IBM), the Navy was at the periphery of the microelectronics industry, but in the face of an external threat it was open to an institutional entrepreneur like Carter convincing it that its particular expertise (electroactive polymers) could form the basis of a radical alternative to silicon. As an associate director of NRL, Albert Schindler, put it:
We are all familiar with the revolution in computer power as exemplified by the hand-held computer. NRL recognizes that this [Molecular Electronic Devices] workshop may lead to another revolution of equal if not greater importance. While we are involved in ... the development of high speed, very large-scale [silicon] integrated circuitry, this workshop may point the way to a quantum jump advancement rather than just incremental improvements. (Schindler, 1982)
Finding common ground with his patrons in the language of revolution, Carter began to steer his vision of molecular computing in more speculative, radical directions. His early proposals still bear some family resemblance to traditional microelectronics and computing; even if the wires, diodes, and transistors are ‘molecular’, they still act like components that anyone can buy at Radio Shack. Later, though, he emphasized the possibilities of less recognizable kinds of molecular computing, particularly so-called cellular automata. For this, he envisioned depositing a periodic array of molecules, with each molecule capable of two states (zero or one) and chemically ‘programmed’ to change its state depending on the states of its neighboring molecules according to some prefixed set of rules. A set of inputs to this array would chemically cause various changes to the automata molecules until they would reach an end state and yield a set of outputs. Such an array, Carter believed, would not be constrained to process information the way a silicon computer does; instead, his molecular cellular automata would represent a leap to a much more human kind of processing:
If the data input to such an array of automata is a two-dimensional array of picture elements or pixels then pattern recognition routines and pattern motion could be parallel processed. ... Such a three-dimensional array processor could reduce data in a manner comparable to the optic nerve. (Carter, 1984c)
Such an array could then ‘Discern between Sedans, Trucks, Tanks ... between Ships, Boats, Canoes ... between Bombers, 707’s [sic], Birds’. With promises like this, Carter found an enthusiastic constituency among generals and admirals beyond the borders of the NRL.
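Carter’s envisioned array can be loosely caricatured with an ordinary two-state cellular automaton, in which every cell updates in parallel from a fixed rule applied to its own and its neighbors’ states. The sketch below uses the well-known elementary Rule 110 purely as a stand-in; Carter never specified his chemically ‘programmed’ rule tables at this level of detail:

```python
RULE = 110  # the fixed update rule, chosen arbitrarily for illustration

def step(cells, rule=RULE):
    """Advance a 1-D two-state automaton by one generation.

    Each cell's next state depends only on its left neighbor, itself,
    and its right neighbor -- the 'prefixed set of rules' Carter
    described, here encoded as the bits of an 8-bit rule number.
    The row wraps around at the edges.
    """
    n = len(cells)
    nxt = []
    for i in range(n):
        left = cells[(i - 1) % n]
        right = cells[(i + 1) % n]
        index = (left << 2) | (cells[i] << 1) | right
        nxt.append((rule >> index) & 1)
    return nxt

# A single 'on' molecule in a row of 'off' ones: the pattern that
# unfolds over successive generations is the array's parallel,
# input-driven computation.
row = [0] * 11
row[5] = 1
for _ in range(3):
    row = step(row)
print(row)
```

The point of the caricature is that no central processor touches the array: every cell computes simultaneously from purely local information, which is why Carter saw such arrays as closer to the massively parallel processing of the optic nerve than to a conventional silicon computer.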