Haigh and Ceruzzi 2021


== The computer becomes a personal plaything ==
One influential hobbyist project was the “TV-Typewriter,” designed by Don Lancaster and published in Radio-Electronics in September 1973. This device allowed one to display alphanumeric characters, encoded in ASCII, on an ordinary television set. It presaged the advent of video displays and keyboards as the primary input-output devices for personal c…
Page 7
1974 was the annus mirabilis of personal computing. In January, Hewlett-Packard introduced its HP-65 programmable calculator. That summer Intel announced the 8080, an improved microprocessor. In July Radio-Electronics described the Mark-8. In late December subscribers to Popular Electronics received their January 1975 issue, with a prototype of the “Altair” minicomputer on th…
Page 7
CP/M was the final piece of the puzzle that, when assembled, made personal computers a practical reality. A personal computer’s DOS had little to do with mainframe operating systems such as Multics. There was no need to schedule and coordinate the jobs of many users: an Altair had one user. There was no need to drive a roomful of chain printers, card punches, and tape drives: a personal computer had only a couple of ports to worry about. What was needed was rapid and accurate storage and retrieval of files from a floppy disk. A typical file would in fact be stored as a set of fragments, inserted wherever free space was available on the disk. It was the job of the operating system to find those free spaces, store data fragments there, track them, and reassemble them when needed. Doing that gave the user an illusion that the disk was just like a traditional file cabinet filled with paper fil…
Page 14
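
A minimal sketch (in Python, purely for illustration and not CP/M's actual data structures) of the bookkeeping described above: the system scatters a file's fragments into whatever sectors are free, records where they went, and reassembles them on demand.

<syntaxhighlight lang="python">
# Toy illustration of how a simple DOS stores a file as scattered fragments
# and reassembles them, giving the user the illusion of a tidy file cabinet.

SECTOR_SIZE = 128          # bytes per sector on our imaginary floppy
DISK_SECTORS = 16

disk = [None] * DISK_SECTORS          # the raw sectors
directory = {}                        # filename -> ordered list of sector numbers

def write_file(name, data):
    """Split data into sector-sized fragments and drop each one into any free sector."""
    chunks = [data[i:i + SECTOR_SIZE] for i in range(0, len(data), SECTOR_SIZE)]
    free = [i for i, s in enumerate(disk) if s is None]
    if len(free) < len(chunks):
        raise IOError("disk full")
    placed = []
    for chunk, sector in zip(chunks, free):
        disk[sector] = chunk          # fragments land wherever space happens to be
        placed.append(sector)
    directory[name] = placed          # the system, not the user, remembers the scatter

def read_file(name):
    """Reassemble the fragments in order."""
    return b"".join(disk[sector] for sector in directory[name])

write_file("LETTER.TXT", b"A" * 300)   # occupies three scattered sectors
assert read_file("LETTER.TXT") == b"A" * 300
</syntaxhighlight>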
In 1977 three companies, Commodore, Apple, and Tandy, began to produce relatively affordable and polished personal computers intended to expand the market beyond electronics hobbyists to computer-curious consumers. Each included video circuitry to drive a television or monitor, a cassette interface, and a keyboard. This integration of standard hardware and a shift to new, cheaper, and higher-capacity dynamic RAM chips greatly reduced the cost of a usable computer system. Burning BASIC onto ROM chips made it faster and easier to start using the computer after turning it on.
Page 15
The original PET’s chief drawback was its calculator-style keyboard; its main strength was a powerful built-in version of BASIC. Several generations of improved PETs were introduced over the next five years.
Page 15
Colorful cellophane strips placed over the monochrome video screens enlivened the simple graphics of Breakout and other games of the era. It has also been remembered for the efficiency of its electronic design: early in their partnership Steve Jobs, then working at Atari, submitted an astonishingly efficient hardware design produ…
Page 20-21
The idea of a standard model was tried and failed: a consortium of Japanese companies worked with Microsoft to replicate the success of their VHS videotape standard by introducing a home computer standard called MSX in 1983. At least twenty consumer electronics companies produced compatible machines. Most vanished quickly, although implementations by local producers sold well in Japa…
Page 26
The dividing line between a “personal computer” and a “home computer” was initially determined by marketing and customer response. The Atari 800 personal computer, introduced in 1979 as a competitor to the Apple II, was better built, faster, and had more standard features, although it offered less scope for expansion. However, because of Apple’s head start with small business and education users, and the Atari’s superior chips for sound and animated graphics, the Atari 800 was treated as a gaming machine and sold almost exclusively to home users.
Page 26
Homes and computers both had long, almost entirely separate, histories. For the new idea of a “home computer” to make sense, the computer obviously had to change, becoming cheaper, smaller, and less intimidating. Less obviously, the home itself had to be reimagined as a place that needed a computer. Computer enthusiasts and advertisers struggled to do this plausibly. One early idea was to apply the digital control functions of computers, proven in industrial and laboratory settings, to the home. With extra hardware, computers could control heating systems, turn lights on and off, and open garage doors. Such projects appealed only to electronics hobby…
Page 27
Neither was well equipped for applications such as word processing.
Page 32
Sales peaked in 1984, but Commodore was still building the Commodore 64 when it declared bankruptcy a decade later. More than twelve million were produced, making it the bestselling desktop computer model in history.
Page 35
In practice, the most compelling and widely used applications for home computers were video games. Many of the most popular programs for personal computers were recreations of popular arcade games like Space Invaders, Frogger, and Asteroids.
Page 35
The proliferation of home computers changed the way people were first exposed to computer technology. In the 1940s and 1950s, most of the people hired as programmers had their first experience of computing on the job. In the late 1960s, as computer science developed as a field within universities, students might program for the first time in a science or engineering class and then decide to major in computing. In the 1970s, more students encountered programming in high school, usually via a timesharing system. In each case, access to computers was limited and took place outside the home. Until the mid-1980s, the proportion of computer science students who were women was rising, following the pattern in other technical and professional subjects. In the 1980s, however, the trend reversed in computing.
Page 37
The home computer market collapsed further and faster, as the whole idea of the home computer as a separate class of machine dwindled in the mid-1980s. When domestic sales of personal computers really began to take off again in the early 1990s, people were buying cheap versions of computers designed for business use.
== The computer becomes office equipment ==
…during the 1950s and 1960s computerizing meant shifting work out of offices and into data processing centers. Most office workers never even saw the computer. Offices sent paper forms to the data processing department. Every week, month, or quarter they received back stacks of fanfold paper printout with information on sales, accounts, and everything else the computer was trac…
Page 1
The same microprocessors, RAM chips, video interfaces, small printers, and floppy disk drives that made enthusiast and home computing possible also produced computers cheap enough to sit on the desks of office workers. Two application areas were particularly important in laying the groundwork for IBM to introduce a general-purpose personal computer of its own. The first was word processing, which assembled those same technologies to produce buttoned-down office machines rather than hobbyist personal computers. The second was the invention of the spreadsheet, the first compelling business application for regular personal computers. Both helped to change perceptions of personal computers, shifting their main market from enthusiasts and home users to office…
Page 2
Word processing is a concept with a complicated history. Before a word processor was a software package, like Microsoft Word, it was a special kind of computer. But before even that, a word processor was an office worker and word processing was an idea about typing pools that ran like factory assembly lines. That idea gained traction after the American Management Association and an obscure publication called Administrative Management began to promote it. Companies had invested large sums in specialized equipment to make their manufacturing workers more productive. Office work, in contrast, remained inefficient. According to the American Management Association, word processing could solve this. Personal secretaries would be eliminated and their work transferred to word processors in a central typing pool. This higher volume of work would justify investment in expensive technology, further boosting their productivity.
Page 2
The word processing idea was tied to machinery, but not originally to computers. In 1971 IBM began to call its dictating machines and automatic typewriters “word processing machines” in its advertisements.
Page 3
The term was not widely used until the early 1970s, taking off just as Cuisinart’s food processors began to appear in American kitchens. By this point, the falling cost of interactive computing was making it more cost effective to use computers to store, edit, and print various kinds of text.
Page 3
Legal documents were the first big market for computer text editing, as they were complicated, went through many drafts, and had lots of money attached to them.
Page 3
The market for text editing systems designed for office work grew separately but parallel to the enthusiast market for personal computers. The template was set in 1973 by Vydec, a start-up led by former Hewlett-Packard engineers, which offered the first system able to display a full page of text on screen, store it on floppy disk, and print it. Its small and relatively affordable daisywheel printer, a recent invention, was named after a disk that rotated to punch the correct letter. This produced typewriter-quality output, albeit slowly a…
Page 3
Thanks to the arrival of microprocessors and the rapidly falling cost of RAM chips, many other firms had entered the market for video screen word processors by 1977, including NBI (“Nothing But Initials”) in Colorado, Lanier in Atlanta, and CPT in Minneapolis. Lanier was initially the most successful, but the lion’s share of the corporate word processing market was eventually taken by Wang Labs.
Page 3
The Wang Word Processing System (WPS), shown in figure 8.1, was unveiled at a trade show in New York in June 1976 and, according to some accounts, nearly caused a riot.
Page 4
CP/M remained popular well into the 1980s for cheaper personal computers, particularly portable systems. The first successful portable was the Osborne 1, released in 1981. It looked a lot like a sewing machine: a bulky box with a handle on one end (figure 8.2). Releasing catches detached a keyboard to reveal two floppy disk drives and a tiny five-inch screen. Its portability was limit…
Page 6
There was exactly one great reason for a business user to get hold of an Apple. VisiCalc launched in October 1979. Its creators were Daniel Bricklin and Robert Frankston, who had met while working on Project MAC at MIT. Bricklin had worked for Digital Equipment Corporation and in the late 1970s attended the Harvard Business School. There he came across the calculations that generations of business school students had to master: performing arithmetic on spreadsheets, rows and columns of numbers typically documenting a company’s performance for a set of months, quarters, or years. He recalled one of his professors posting, changing, and analyzing such tables on the blackboard, using figures that his assistant had calculated by hand the night bef…
Page 8
Some software publishers worked like book publishers, paying royalties to the authors of programs, and others purchased the rights for a flat fee. They began as tiny operations, duplicating disks and packing them into ziplock bags, to be sold with ads in specialist magazines or through the network of dealers that sprang up to handle the new machin…
Page 8
VisiCalc played wonderfully to the Apple’s strengths and minimized its weaknesses. Fylstra noted that “the Apple II was essential to VisiCalc.” Spreadsheets were small, so its limited memory capacity and disk storage were not a handicap, as they would be for database work. They used text mostly for labels, so the all-caps display was not a problem (figure 8.3). The screen served as a scrollable window onto a larger spreadsheet, so the forty-column display worked much better for spreadsheets than word processing. Because the Apple drove the display directly rather than sending text to a terminal like a CP/M machine or timesharing system, the spreadsheet experience was smoother on it than it would have been on a more expensive platform. This fluidity encouraged users to play around with models and data to answer what-if questi…
Page 9
The apparently objective computer output and attractive charts helped spreadsheet users to present their ideas forcefully, but as Levy noted, spreadsheets had an important difference from earlier modeling software: they hid the formulas created by users. Printed output showed the numbers produced by the model but not the assumptions used to generate them, making it easy to tweak the formulas to get the desi…
Page 10
VisiCalc symbolized a shift toward packaged application software as the driving force behind personal computing. Unlike mainframe users, companies buying a personal computer were not usually going to hire a team of programmers to write custom software for it. Neither could most users realistically satisfy their needs by writing their own programs in BASIC. Hardware was getting cheaper all the time, but programmers only got more expensive. The future lay in packages like this that could sell hundreds of thousands, and eventually millions, of copies and so spread their development cost over a huge use…
Page 10
Since the 1950s, capable floating-point hardware support had been the defining characteristic of large scientifically oriented computers. The 8088 used in the original PC did not support floating point and its performance on technical calculations was me…
Page 15
Hard disks introduced new complexities into personal computing, requiring users to manage their directory structures, and opened new markets for hardware and software to back up their contents. The popular Norton Utilities package, created by Peter Norton, included programs to restore accidentally deleted files, navigate directory structures, and optimize hard disk performan…
Page 16
With the PC’s announcement, IBM also announced the availability of word processing, accounting, games software, and a version of VisiCalc. Mitch Kapor, who had previously developed add-ins for VisiCalc and knew exactly how it could be improved, partnered with an experienced programmer, Jonathan Sachs, to start a rival firm, the Lotus Development Corporati…
Page 17
Lotus 1-2-3 was so popular that it inspired several clones, which copied the Lotus menu structure and macro command language. This raised a novel legal question: could copyright law be stretched to protect the look and feel of a program as well as its actual code? Lotus was initially successful when it sued the makers of a blatant clone, called The Twin, but eventually lost in another case (Lotus v. Borland) that established that command menus were not covered by copyright protection.
Page 18
WordStar was the most popular word processing program for the IBM PC for the first few years. As with VisiCalc, it was a straight conversion, in this case from CP/M, which did not take full advantage of the capabilities of the PC.
Page 18
WordPerfect release 4.2 in 1986 set the standard for the rest of the 1980s and finally overtook WordStar in sales. At its peak around 1990, WordPerfect controlled around half the market for word processing software.
Page 19
Software companies ran campaigns to discourage piracy. Some hoped that the hefty manuals supplied with their packages and the telephone support they provided to registered users would discourage piracy. A flood of independent guidebooks and the increasing ubiquity of photocopiers made that less of a problem. Lotus and several other leading firms turned to copy protection, introducing deliberate errors into floppy disks that users would be unable to reproduce with an ordinary disk drive. The floppy disk was needed even when the program was loaded from a hard drive. That was unpopular with users—the special disks didn’t always work, and if they were lost or damaged the program would be useless. Software companies eventually abandoned these schemes in the face of complaints from large companies forced to manage thousands of key di…
Page 20
Although almost all of today’s personal computers and most servers are the direct descendants of the IBM PC, not a single one of the billion or so IBM-compatible machines sold from 2015 to 2019 was made by IBM. What began in 1981 as a single proprietary machine had by the late 1980s become the basis for a worldwide industry of thousands of companies that collectively produced millions of PCs every…
Page 21
In the long term, only MS-DOS computers that were fully compatible with the IBM PC could survive. Producing a compatible PC was harder than licensing MS-DOS. The core of what made a computer an IBM PC was the BIOS code stored on a ROM chip. IBM owned that code. It relied on copyright, which protects written works, rather than patents, which protect inventions, to prevent the duplication of its …
Page 23
AST wrote its own BIOS, but even that became unnecessary after Phoenix Technologies reverse engineered the IBM BIOS and started selling compatible chips as a standard part. The PC motherboard became just one more commodity available from a dozen different suppliers. The floodgates opened for PC …
Page 24
The most portable computer with a real keyboard was Radio Shack’s TRS-80 Model 100, developed by Kyocera of Japan (figure 8.7). It ran for about 20 hours off standard batteries and weighed only three pounds. Achieving those goals involved some significant compromises—no built-in disk drives, only 8 to 32 KB of memory, and a screen limited to eight lines of text. Its most enthusiastic users were journalists, who had previously dictated copy over telephone lines.
Page 32
The PC’s position at the end of the 1980s was unassailable. The IBM PC had evolved from a single model to the basis for a new kind of computing.
Page 34
By the end of the 1980s, most PC companies purchased standard parts and screwed them together.
Page 35
Only 15 percent of American households owned a computer in 1990. Among African American households, the figure was 7 percent. Even among the richest 20 percent of households, two thirds had not yet made the purchase.
== The computer becomes a graphical tool ==
On January 24, 1984, Steve Jobs, wearing a double-breasted navy blazer and garish green bow tie, took the stage at De Anza College (close to Apple’s headquarters) and pulled a tiny Macintosh computer out of a bag. The computer sprang to life, proclaiming itself “insanely great” before running a slide show on its crisp monochrome screen to demonstrate its new graphical user interface (GUI). In the popular imagination this moment divides the history of personal computing into two eras: the dark ages of text-based computing, inherited from timesharing systems, versus the enlightened world of windows and g…
Page 1
Conventional personal computers could display graphics as well as text, but although that power was exploited by individual programs such as video games and charting software, it was ignored by MS-DOS.
Page 1
Throughout the 1980s, computers with graphical user interfaces had only a tiny share of the market and were much more expensive than mainstream personal computers, which is why we could tell the story of mainstream office computing through 1989 without mentioning them.
Page 1
The most obvious new feature of the Macintosh was its graphical user interface (GUI). Its key elements were invented over the course of a few years in the mid-1970s by a small team working in a single research facility, Xerox’s Palo Alto Research Center (PARC). But, less obviously, graphical computing as developed at PARC depended on new hardware capabilities—powerful processors, large memories (the lack of which crippled the first Macintosh), and high-resolution screens. To explain the diffusion of graphical user interfaces, we must understand the spread of those capabilities, initially to a new generation of microprocessor-based personal computers marketed as graphics workstati…
Page 2
Rather than perfect timesharing, the PARC team was determined to develop a new kind of interactive computing experience. Development of the hardware and software for a new computer, the Alto, was at the center of the lab’s work from 1972 onward.
Page 3
Much of the architecture of personal computers powerful enough to support graphical user interfaces came from minicomputers such as the DEC VAX. But even when equipped with specialized graphics hardware, VAX machines were never intended for personal use. The Xerox PARC team had started by designing and building what was essentially a personal minicomputer. Each Alto coupled high-resolution graphics hardware directly to a powerful processor with, by the standards of the day, an absurdly large me…
Page 3
In fact, the Alto had a novel architecture in which processor capabilities were spread around the machine rather than clustered on one circuit board. Each Alto had its own hard drive with a removable platter, like those used with IBM mainframes.
Page 3
Researchers at PARC refined the mouse and coupled it with a unique high-resolution screen, arranged in portrait orientation to mimic a sheet of paper. This was bitmapped, so that its almost half-million pixels could be manipulated by flipping bits in memory.
Page 4
Unlike an ordinary book, it would be dynamic, which to Kay meant it had to be highly interactive but easy and fun, unlike existing systems such as Doug Engelbart’s NLS.
Page 4
Smalltalk was designed with flexibility and interactivity in mind, to put graphical objects of different kinds on screen and interact with them. Traditional programming languages assumed a text-based user interface. Applications coded with them were controlled with typed commands or selections from text menus. The program would print a list of options and wait for users to push a key to select one. Kay wanted the Dynabook to feel personal and interactive, displaying pictures for its users to interact wi…
Page 4
As well as a new kind of user interface, Smalltalk codified and began to spread a new approach to programming languages called object-oriented programming.
Page 5
Traditional languages define data structures separately from the code that manipulates them. The new approach let programmers produce highly modularized code, in which data structures are defined together with the operations that programmers use to access their values or update their contents. These hybrid bundles of data and code were called objects by Kay. Each object was an instance of a standard class. New classes could be defined as special cases of existing ones, with additional capabilities or characteristics.
Page 5
Because all data was held inside objects, it could be manipulated only by using the methods explicitly provided in the code defining the corresponding classes. That enforced modularity and made it easier to reuse code between systems and to maintain systems. Smalltalk conceptualized the interactions between these objects as a kind of dialog achieved through the exchange of messages, an idea captured in the name Kay gave the lang…
Page 5
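
The object-oriented ideas described here can be sketched in a few lines of modern Python rather than Smalltalk; the class names below are invented for illustration. Data lives inside objects, is reached only through the methods their classes define, and new classes are declared as special cases of existing ones.

<syntaxhighlight lang="python">
# A sketch of object-oriented programming: state held inside objects, touched only
# through methods ("messages"), with new classes defined as special cases of old ones.

class Shape:
    def __init__(self, x, y):
        self._x, self._y = x, y      # state held inside the object

    def move(self, dx, dy):          # the only sanctioned way to change that state
        self._x += dx
        self._y += dy

    def describe(self):
        return f"{type(self).__name__} at ({self._x}, {self._y})"

class Circle(Shape):                 # a new class defined as a special case of Shape
    def __init__(self, x, y, radius):
        super().__init__(x, y)
        self.radius = radius

    def describe(self):              # extra behaviour layered on the inherited kind
        return super().describe() + f" with radius {self.radius}"

c = Circle(0, 0, 5)
c.move(2, 3)                         # "sending a message" to the object
print(c.describe())                  # Circle at (2, 3) with radius 5
</syntaxhighlight>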
Traditional interface methods were, to use a term popularized by Kay, modal. Users issued the desired command, which put the system into a mode. What it did in response to their next input would depend on the mode. For example, in delete mode, selecting a file would delete it. In edit mode, the same action would open it for editing. Kay favored a different interface style, in which users would first select the object they wanted to work on and then manipulate it to accomplish the desired operation. Providing that kind of open-ended interaction in a conventional programming language would be frustrating and inefficient—the program would have to be structured as a loop that constantly checked whether the user had just carried out each of a huge number of possible actions. In Smalltalk, the programmer could specify the code to run when a particular region of the screen, button, or scroll bar was triggered, and then the system itself would figure out what objects should be alerted in response to a particular click. This was called event-driven code.
Page 5-6
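
A rough sketch, again in Python with invented control names, of the event-driven style being described: handlers are registered for particular controls, and a dispatcher, rather than a hand-written polling loop, decides which code to run when something is triggered.

<syntaxhighlight lang="python">
# Minimal event-driven dispatch: register what should run when a control is
# triggered, and let the system route each event to the right handler.

handlers = {}                          # control name -> function to run

def on(control):
    """Register a handler for a named control (button, scroll bar, screen region)."""
    def register(func):
        handlers[control] = func
        return func
    return register

@on("save_button")
def save_clicked():
    print("document saved")

@on("scroll_bar")
def scrolled():
    print("window scrolled")

def dispatch(event):
    """The system decides which code to alert in response to a particular click."""
    if event in handlers:
        handlers[event]()

dispatch("save_button")               # -> document saved
dispatch("scroll_bar")                # -> window scrolled
</syntaxhighlight>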
Smalltalk went beyond Lisp by providing what was later called an integrated development environment (IDE), which included a text editor, a browser to explore the hierarchies of classes defined in code, and debugging tools to examine the current state of objects as programs executed.
Page 7
Object-oriented programming was harder to grasp than some of the other novel features of Alto, such as mice and graphical controls, and spread more slowly. Some high-profile languages of the late 1970s, such as Niklaus Wirth’s follow-up to Pascal, Modula-2, were designed to support increased modularity, but the full object-oriented approach was little known outside PARC until an article about it appeared in the August 1981 issue of Byte ma…
Page 7
…the Gypsy text editor, produced by Larry Tesler and Timothy Mott in 1976. Gypsy took the capabilities of a previous program, Bravo, developed by a group including Butler Lampson and Charles Simonyi, and reworked it with the first user interface to resemble that of now-standard systems such as Microsoft Word. For example, to add text, users simply used the mouse to set an insertion point and then typed. To copy text, one highlighted it with the mouse and then pushed the Copy key. Xerox researchers, following Kay, called this style of operation modeless because the results of triggering a function were consistent and did not depend on a previously selected command mode. Like Bravo, Gypsy exploited the graphical screen of the Alto to display text with different fonts, accurate spacing of letters, embedded graphics, and formatting features such as bold and italic text. Computerized publishing expert Jonathan Seybold dubbed this what you see is what you get (WYSIWYG), repurposing a catchphrase of Flip Wilson, the first African American comedian to make regular television appearances. Wilson used the phrase in character as Geraldine Jones, a brashly self-confident woman, as winking acknowledgement of the tension between his cross-gender performance and Geraldine’s lack of pretense. The PARC staff borrowed it to define a simpler form of representational fidelity: the printed output would match the visual content of the screen as close…
Page 8
This was made possible by another PARC invention, the laser printer. This merged the printing and paper handling mechanisms from a high-end Xerox copier with a powerful embedded computer able to draw high resolution images onto the copier drum with a laser, replacing the usual optical mechanism used to create an impression from the source document.
Page 8
By the late 1970s, a new buzzword, distributed computing, had emerged to describe the idea of having big and little computers work together over computer networks—for example, using a minicomputer or personal computer to…
Page 8
By 1978, a program called Laurel had been developed for the Altos. This introduced what later became the standard way of working with email: users downloaded their messages to their personal computers to file and read them. Replies were uploaded back to the ser…
Page 9
Approaches of this kind were called client-server computing—a program running on one computer (the client) made a request for a program running on another computer (the server) to do something, that is, to provide a service.
Page 9
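
A bare-bones illustration of that client-server division of labour, using Python's standard socket library; the "service" (uppercasing a string) and the port number are arbitrary choices for the sketch, not anything from the systems described here.

<syntaxhighlight lang="python">
# One program (the client) asks another program (the server) to perform a service
# over a network connection.

import socket, threading, time

def server():
    with socket.socket() as srv:
        srv.bind(("127.0.0.1", 5050))
        srv.listen(1)
        conn, _ = srv.accept()             # wait for one client request
        with conn:
            request = conn.recv(1024)
            conn.sendall(request.upper())  # perform the service and reply

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                            # give the server a moment to start listening

with socket.socket() as client:
    client.connect(("127.0.0.1", 5050))    # the client asks the server to do something
    client.sendall(b"please shout this")
    print(client.recv(1024).decode())      # PLEASE SHOUT THIS
</syntaxhighlight>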
Workstation companies targeted small markets that would not support the cost of developing new technologies. Instead, they depended on what was called an open systems approach—using standard processors, memory chips, networking standards, peripheral connections, and so on. Combined with the inherent price-performance advantages of microprocessor-based systems over minicomputers, this gave them a huge price-performance adva…
Page 12
Lisa had exactly the core capabilities that would define the most powerful personal computers of the next decade: hard disks, networking, a graphical user interface, and slots for expansion. Users could load several applications simultaneously, cutting and pasting between their windows. That wasn’t quite multitasking, as background applications were suspended, but the operating system did prevent applications from overwriting eac…
Page 14
The desktop publishing industry began in 1985 with the launch of Aldus PageMaker, designed by Paul Brainerd (see figure 9.4). Brainerd had previously developed computerized production systems used by newspapers, and he recognized that a large potential market had opened up now that personal computers with the capabilities needed for page design were available. PageMaker let amateurs tinker with fonts and graphics until their newsletters or posters looked just right (to them, if not to trained designers). Professionals could produce slick-looking pages more rapidly than ever before.
Page 17
PageMaker worked with the new Apple LaserWriter printer. This cost $6,995, far more than the Macintosh it plugged into, yet still aggressively low by Apple’s standards because rendering pages described in Adobe’s new PostScript language required the printer to hold a more powerful processor and more memory than the computer did. As one reviewer concluded, “I can’t count the number of times I’ve shown someone my Macintosh and they’ve said: ‘But it’s just a toy. . . .’ Now at least I can show PageMaker to them and say ‘Let’s see your IBM do that.’” Thanks to PageMake…
Page 17
Macintosh, unlike Lisa or Star, offered a compelling business case to a small but well-defined group of users. Graphical computing was still too expensive for general office use, but for people who needed to produce high-quality printed output, it was a bargain if it eliminated the cost and delays of working with a traditional print shop…
== The PC becomes a minicomputer ==
By the late 1990s, the PC had killed the minicomputer and the graphics workstation. Yet from the viewpoint of technology and architecture, the situation is the reverse: the personal computer as we know it today was invented over the course of the 1990s, not in 1981 with IBM’s first model or in 1977 by Apple. The PC architectures of the 2000s have more in common with those of 1980s minicomputers than they do with MS-DOS or CP/M. Since 2000, Windows has been based on an operating system designed by a former DEC engineer and patterned after a minicomputer system. From this perspective, the minicomputer never died. Rather, minicomputers shrank and replaced PCs without their users ever realiz…
Page 1
One obvious limitation of DOS was that it forced programmers who wanted to take advantage of the increasingly powerful graphical capabilities of PCs to bypass it to deal directly with the underlying hardware.
Page 2
Windows and GEM were designed to work with specially written applications, raising a further problem: because programs were written for DOS, few users ran Windows or GEM; but because few users ran Windows or GEM, most programs were written for DOS.
Page 4
Windows 3.0 was a breakout hit, the product that finally shifted mainstream computer users into the age of the graphical user interface (figure 10.1). Windows was still not as elegant as the Macintosh system, but Apple charged a hefty premium. Someone looking for a new computer could get a bigger hard drive, larger screen, and more memory by choosing a Windows computer. Windows worked well enough to get work done with a growing number of powerful application programs that closely resembled their Macintosh coun…
Page 4
By the end of the decade, Intel controlled the evolution of the PC hardware platform almost as completely as IBM had controlled it in the mid-1980s. Intel used its new dominance to speed the adoption of some new technologies, such as the universal serial bus (USB), by building them into its chipsets. USB was a boon to computer users, replacing custom connectors and controllers for peripherals such as scanners, printers, keyboards, mice, and external disk drives with a single compact and flexible socket. Intel used the same power to derail the adoption of other technologies, such as high-speed IEEE 1394 (FireWire) peripheral connecti…
Page 18
Since the 1960s, it had been common practice for the processor to implement complex instructions with microcode. When a programmer asked the VAX to evaluate a polynomial, that triggered a long series of simpler internal steps. Above all, complex instruction sets were supposed to make computers run faster, in part by reducing the number of times the computer had to fetch and decode new co…
Page 20
Those assumptions had been long accepted, but in the mid-1970s John Cocke of IBM argued that a computer using more and simpler instructions to complete a given task would outperform one with fewer and more complex instructions.
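
The trade-off can be made concrete with a toy comparison, sketched in Python: the VAX-style approach treats "evaluate a polynomial" as one complex operation, while a machine with only simple instructions reaches the same answer through a longer sequence of multiplies and adds (Horner's rule). The step counts are illustrative, not cycle-accurate.

<syntaxhighlight lang="python">
# One complex "instruction" versus a sequence of simple ones for the same task.
# Coefficients are ordered from the highest power down to the constant term.

def poly_complex(coeffs, x):
    """Stand-in for a single complex instruction: 'evaluate this polynomial'."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

def poly_simple_steps(coeffs, x):
    """The same job expressed as an explicit sequence of simple operations."""
    steps = []
    acc = 0
    for c in coeffs:
        steps.append(("MUL", acc, x))   # acc = acc * x
        acc = acc * x
        steps.append(("ADD", acc, c))   # acc = acc + c
        acc = acc + c
    return acc, steps

value, trace = poly_simple_steps([2, -3, 1], x=4)   # 2x^2 - 3x + 1 at x = 4
assert value == poly_complex([2, -3, 1], 4) == 21
print(len(trace), "simple instructions instead of one complex one")  # 6
</syntaxhighlight>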
== The computer becomes a universal media device ==
From the 1980s to the early 2000s, two processes ran in parallel. On one track, the personal computer gained new capabilities. With them, it inched closer to becoming a universal media device—making telephone calls, playing and storing audio files, playing movies, storing and editing photographs, and playing games. On the other, less visible track, computers were making their way inside music players, televisions, cameras, and musical instruments. They dissolved the technologies inside but left the husk intact.
Page 1
A theoretical breakthrough came in 1965, with the publication by James Cooley and John Tukey of a method of carrying out a Fourier transform of a signal that was much faster and thus more practical than classic methods. In the words of computer scientist Allen Newell, the discovery of the fast Fourier transform (FFT) “created the field of digital signal processing and thus penetrated the major bastion of analog computation.” The FFT allowed one to decompose a complex signal into combinations of basic periodic frequencies, just as a musical chord played on a piano is the result of hammers hitting several strings, plus their harmonics. Once decomposed, a computer can process the signal in any number of ways. Over time, these techniques migrated from large, expensive computers, like those used to handle communications with space probes, into cheap personal computers and consumer electroni…
Page 2
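
A small demonstration of that decomposition using NumPy's FFT routines: a signal built from two "notes" is pulled apart into its component frequencies, much as a chord can be resolved into the strings that produced it. The sample rate and frequencies are arbitrary.

<syntaxhighlight lang="python">
# Build a signal from two sine waves, then recover those frequencies with the FFT.

import numpy as np

rate = 1000                                   # samples per second
t = np.arange(0, 1, 1 / rate)                 # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(signal))        # strength of each frequency component
freqs = np.fft.rfftfreq(len(signal), 1 / rate)

peaks = freqs[spectrum > 100]                 # pick out the dominant components
print(peaks)                                  # -> [ 50. 120.]
</syntaxhighlight>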
Homer Dudley demonstrated a keyboard-driven speech synthesizer at the 1939 World’s Fair. By the 1960s, researchers had built computerized speech synthesizers able to automatically turn text into recognizable speech.
Page 5
There was nothing new about the electronic transmission of pictures. Since the 1920s, photojournalists had used wire transmission, over public telephone lines, to rush images from and to newspaper offices. Those analog machines fixed the photograph to a drum and scanned it in a spiral pattern. As historian Jonathan Coopersmith has shown, entrepreneurs had been trying just as long to turn facsimile transmission into a general-purpose method for delivering business documents. By the 1960s Xerox had a viable service, but because its analog machines were built around high-precision components, they remained too expensive to really take…
Page 14
The Group 3 digital coding scheme was devised in 1977 in Japan around the potential of cheap microprocessors, in what Coopersmith called “the most important event in fax history since 1843.” Group 3 compressed each scanned page so that it could be transmitted digitally in as little as fifteen seconds, much faster than earlier analog fax machines, which took up to six minutes. When it was formally accepted in 1980, about 250,000 fax machines were in use in the United States. By 1990 there were five million. In Japan, fax was even more widely used, as written Japanese was hard to represent in telegram, telex, or email but easy to transmit as an image.
Page 15
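
Group 3's actual coding (Modified Huffman run lengths, later Modified READ) is more elaborate than this, but the core intuition can be shown with plain run-length encoding: a scanned line that is mostly white collapses into a handful of counts.

<syntaxhighlight lang="python">
# A mostly blank scanned row compresses to a few (pixel, run length) pairs.

def run_length_encode(line):
    """Turn a row of pixels ('W'/'B') into (pixel, run length) pairs."""
    runs, current, count = [], line[0], 0
    for pixel in line:
        if pixel == current:
            count += 1
        else:
            runs.append((current, count))
            current, count = pixel, 1
    runs.append((current, count))
    return runs

scan_line = "W" * 60 + "B" * 5 + "W" * 35        # one mostly blank scanned row
encoded = run_length_encode(scan_line)
print(encoded)                                   # [('W', 60), ('B', 5), ('W', 35)]
print(len(scan_line), "pixels ->", len(encoded), "runs")
</syntaxhighlight>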
Scanners were a popular, but expensive, part of desktop publishing operations in the late 1980s, along with Macintosh computers, PageMaker software, and laser printers. By the mid-1990s, the price of color scanners had dropped to a few hundred dollars and hard drives were big enough to hold large image collections. Scanners became popular consumer add-ons, and families began to digitize their photo coll…
Page 16
Industrial grade scanners let businesses scan and destroy incoming paperwork, converting it to electronic images. Specialist scanners, fitted with devices to turn pages, were used to digitize entire library collections by groups such as the Internet Archive and Google’s Books project.
Page 16
In 1945, working on the First Draft EDVAC design, John von Neumann was fascinated by the potential of the iconoscope, an electronic tube then used in television cameras, as a storage device. Even the term pixel, introduced with the transition to digital images, was a contraction of picture element, a term used since the early days of experimental television.
Page 16-17
Because it was very compact and power efficient, high-capacity flash memory was a crucial enabling technology for the creation of new portable devices.
Page 18
Early memory cards held only a few megabytes, needing aggressive compression to hold even a dozen images. That was provided by a new image format, the JPEG (named for the Joint Photographic Experts Group). In 1991, when libjpeg, a widely used open source code module for JPEG compression, was released, it took a powerful PC to create these files. By the late 1990s, the necessary computer power could be put into a camera, although early models would be tied up for several seconds processing each image. Once the memory card was full, users moved the files onto a computer. Digital photography was another of the practices made possible by the arrival of PCs with voluminous hard drives as a standard feature of middle-class households.
Page 19
When digital video disc (DVD) players arrived in 1997, initially priced around $1,000, they became the fastest-adopted consumer devices in American history. By 2003, half the homes in the United States had a DVD player, and players could be purchased for as little as $50. DVD was, in effect, the extension of CD technology to play digital video as well as audio. The discs were the same size, and DVD players could also handle…
Page 20
The convergence of computer and television technology was complete. Televisions had the same range of digital inputs as computer monitors, displayed similar resolutions, and were built from the same technologies. In fact, televisions were themselves computers. As the cost of powerful computer chips fell, even affordable televisions began to incorporate smart TV features. They had USB ports to play videos and music from hard disk drives, Ethernet ports, and Wi-Fi connections to access computer networks, and they let users download and run appl…
Page 21
What made it practical for users to start building up music libraries was the spread of effective compression technology. The MP3 file format could compress a music CD to perhaps 20 MB. That sacrificed audio quality, but it still sounded better than a tape copy.
Page 22
Launching that model, Steve Jobs was able to boast that Apple had sold 110 million iPods. It was the firm’s most popular computer, outselling all Macintosh models combined more than ten times over.
Page 26
The creation of mobile devices, many built around licensed ARM processor cores, was made easier by the maturation of another technology: general-purpose field programmable gate array (FPGA) chips that could be programmed electronically for particular applications. This was a much cheaper process than producing custom silicon and was ideal for prototype devices or equipment with small production runs, for which conventional ASIC chips would not be v…
Page 27
Doom introduced the concept of the game engine, by separating the code needed to manage events in the game world and present them to players from the “assets” such as objects, monsters, and tunnels stored in data files. Infocom and Sierra On-Line had taken a similar approach to adventure games, but the high-performance action games had previously integrated the functions closely. Doom required elaborate and highly reusable graphics code, making the engine approach to software engineering (already established in areas such as expert systems, databases, and graphics rendering) highly effecti…
== The computer becomes a publishing platform ==
By June 1993, Andreessen and Eric Bina, a Unix staff specialist at the center, had released a test version of a browser that they later named Mosaic. Mosaic’s seamless integration of text and images made the potential of the Web instantly apparent (see figure 12.3). The first Mosaic users were people who already had powerful Unix workstations and fast Internet connections, found mostly in universities and research labs. Its availability accelerated the Web’s growth. A web crawler program created by an MIT student discovered only 130 active Web servers in mid-1993, but 623 when it was run again at the end of that…
Page 6
The Web was just a thin layer on top of the Internet’s existing infrastructure. Because there was no central database of hyperlinks, users could follow links out from a page but not go the other way to see everything that linked to a page. Between the time a link was created and clicked, the page to which it pointed might have been edited to remove relevant information or deleted completely. Most of the external links on Web pages eventually stop working. Ted Nelson and Doug Engelbart were among the Web’s harshest critics. Nelson didn’t even consider the Web to be true hypertext. Xanadu was supposed to hold old versions of a page forever, so that the linked material would always be available. Even Tim Berners-Lee complained that only half of his vision had come true with commercial browsers like Netscape. He initially wanted a Web that was as easy to write to as it was to surf.
Page 10
Google’s big advantage came in figuring out how to rank the pages that held a search term, which it did by favoring websites that had been linked to by large numbers of other sites. Spam pages were unlikely to be linked to and therefore fell to the bottom of the rankings. This method was inspired by a system for the retrieval of scientific information developed by Eugene Garfield, called the Science Citation Index. It indexed scientific papers and ranked their impact by noting how many other papers referenced them.
Page 13
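
A toy version of that link-based ranking, sketched in Python over an invented five-page web. Real PageRank includes many refinements beyond what is shown here, but even this simplified power iteration (with the usual damping factor) shows pages that nobody links to sinking toward the bottom.

<syntaxhighlight lang="python">
# Each page repeatedly shares its score among the pages it links to, so
# heavily linked-to pages float upward and unlinked pages sink.

links = {                        # page -> pages it links to (a hypothetical tiny web)
    "home": ["news", "shop"],
    "news": ["home"],
    "shop": ["home", "news"],
    "spam": ["home"],            # spam pages link out, but no one links to them
    "spam2": ["home"],
}

damping = 0.85
rank = {page: 1 / len(links) for page in links}

for _ in range(50):                              # iterate until the scores settle
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:6s} {score:.3f}")              # home and news outrank the spam pages
</syntaxhighlight>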
Everything that makes publishing to the Web easy makes indexing or cataloging it hard. Whether with humans, as Yahoo used to do, or with algorithms, as Google does, that is an enormous task requiring vast amounts of money and human talent.
Page 14
The economics of conventional publishing were comparatively straightforward: publishers made money on each book or record sold. Selling more copies meant making more money, so that each hit underwrote the cost of many flops. In contrast, a popular website ran up huge bills for network bandwidth and servers without receiving any income from readers to cover this expense. Grabbing more readers meant bigger losses, not bigger profits.
Page 14
Although Windows NT could do a creditable job serving Web pages from cheap, standard PC hardware, it never dominated the market for servers the same way it did for desktop operating systems. Most early websites ran on Unix servers or on BSD, which had evolved from a package of Unix upgrades to a free-standing alternative with no AT&T code. Unix systems were expensive, which drove up the cost of operating Internet sites. Instead of shifting to Windows and PC hardware to reduce costs, Web companies saved even more by relying on the free Linux operating sy…
Page 28
By the early 2000s the other key software components of a Web application server were also increasingly likely to be free software. The first to gain dominance was Apache, which has been the most widely used Web server since 1996.
Page 28
Most Web applications run on the stack of software called LAMP, which stands for Linux, Apache, MySQL, and PHP, a language that gradually replaced Perl as the default choice for coding Web applications.
Page 29
Microsoft was never able to turn the Web into a proprietary system because it couldn’t match its domination of the browser side of the Web with similar control over the servers that generated Web pages. If Microsoft’s Internet Information Server had also held a market share of over 90 percent, then Microsoft could have gradually shifted the Web from a system based on open standards to a system in which an all-Microsoft stack of software would suffice. As most websites used free software, even the success of Internet Explorer did not give Microsoft the power to unilaterally set Web standards for its …
Page 30
Firefox was the first open source desktop computer application widely used by Windows and Macintosh users. Its triumph signaled a shift in the computing landscape. Microsoft’s hold on desktop operating systems and office applications remained secure, but the firm’s attempts to dominate and enclose the Internet were visibly crumbling.
== The computer becomes a network ==
Rather than retrieving and displaying pages stored as static files, browsers have become a universal interface for online applications running in the cloud—a distributed network of gigantic data centers each composed of thousands of computers.
Page 1
By the 1990s, many mainframe and minicomputer firms like Unisys (the heir to Univac) and Data General had reoriented themselves to sell powerful servers based on standard processor chips. The spread of the Internet expanded this market. Major websites soon outstripped the capabilities of any single server, even a late-1990s flagship Unisys server with thirty-two Intel processors. Companies set up farms of servers running Web applications, with a load balancing system to route each new request to the least busy server. Storage area networks provided ultra-high-speed connections between servers and disk pools. The technological lines separating mainframes, minicomputers, and personal computers were st…
Page 2
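
A sketch of the routing policy described above, with invented server names: each new request goes to whichever machine in the farm currently has the fewest requests in flight.

<syntaxhighlight lang="python">
# Least-busy load balancing across a small server farm.

servers = {"web-1": 0, "web-2": 0, "web-3": 0}   # server -> requests in flight

def route(request_id):
    """Send the request to the least busy server and return its name."""
    target = min(servers, key=servers.get)
    servers[target] += 1
    return target

def finish(server):
    """Mark one of the server's requests as complete."""
    servers[server] -= 1

assignments = [route(i) for i in range(5)]
print(assignments)        # requests spread across web-1, web-2, web-3
finish("web-1")
print(route(5))           # the freed-up server is chosen next -> web-1
</syntaxhighlight>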
The new approach was pioneered by Google. It became one of the world’s most valuable companies by providing much better results than its competitors in two areas: Web search and Web advertising. Its success is usually attributed to superior algorithms, particularly the PageRank algorithm its founders created as graduate students. That gives only a part of the picture. Google’s algorithms provided better search results to its users, and its advertising system made more money by selecting relevant ads that users might click. But running those clever algorithms consumed more processor cycles and RAM than the simpler approaches of its comp…
Page 3
Conventional servers were expensive because they used more reliable, higher-performance components. Google achieved reliability and performance with an extra layer of software.
Page 3
More processor power and network bandwidth are now devoted to transmitting and decompressing streaming video than to any other task.
Page 12
Microsoft’s rivals had met little success competing with Windows, but Java raised the possibility of taking down the personal computer itself. Sun, Oracle, and IBM joined together to promulgate a new standard for the network computer. This was a hybrid between a personal computer and a terminal. Like a terminal, it worked only when connected to a network and had no disk drive of its own. Like a personal computer, it had a capable processor to run Java programs locally rather than rely entirely on the processor power of the server as terminals d…
Page 19
Historians, having long memories, like to quibble about exactly how new the model really was. Back in the 1970s, for example, timesharing companies were popular more for the access they offered to online applications than for simple access to an interactive computer. Terminals did not need to have any software installed on them. With a longer perspective, enthusiasm for freestanding personal computers in the 1980s and for client-server applications in the 1990s may look like an odd departure from the historical norm. But as the story of Java shows, it took considerable work to remake Web browsers into a smooth and capable interface for online applications, able to serve as a modern replacement for the text terminals of t…
Page 21
Back in the 1950s, coding had been identified as the most routine, and worst paid, aspect of programming. That work was soon automated by software tools, and the job title went out of use during the 1960s. Title inflation followed—programmers were called analysts or software engineers. The programming staff at firms like Google are usually called engineers, despite efforts by the traditional engineering professions to reserve the title for people achieving the status of professional engineer (a four-year accredited degree and professional examination, followed by a period of supervised work experience, culminating in a state licen…
== The computer is everywhere and nowhere ==
By that point, a smaller, cheaper handheld computer had appeared. The Palm Pilot, released in 1996, was less technically ambitious than the MessagePad in every way. This was clearest in the handwriting recognition. Newton tried to recognize cursive text written anywhere on the screen. Palm required users to write characters one at a time in the input box, letters on the left, numbers on the right. They weren’t even ordinary letters: each was replaced by a stylized representation in a new alphabet called Graffiti. Once users had adjusted to this system, text entry was reliab…
Page 4
GPS is operated and controlled by the US Air Force. The US military reserves some capabilities for its own use, but regular GPS service is free to all users, laying the foundation for commercial exploitation by companies like Apple, Google, and Uber. In this regard, GPS is, like the Internet itself, a government-sponsored technology that became the foundation for huge private wealth. The European Union, Russia, and China have each developed or planned to develop similar satellite-based systems (Galileo, GLONASS, and BeiDou, respec…
== Epilogue: A Tesla in the valley ==
Once computers became part of every infrastructure, the idea of the computer as a machine in the tradition of ENIAC, a self-contained device whose users tackled different jobs by creating new programs, has become less relevant. The conceptual problem with the idea of a universal solvent was always that, if any such substance was ever concocted, no flask could contain it. Our protagonist, which dissolved so much in the world that once seemed permanent, has finally dissolved its…


Haigh and Ceruzzi, A New History of Modern Computing (2021)

== Becoming Universal: Introducing a New History of Computing ==

The wholesale shift of video and music reproduction to digital technologies likewise challenges us to integrate media history into the long history of computing. Since the original book was written, the computer had become something new, which meant that the book also had to become something n…

Yet this discussion is rarely grounded in the longer and deeper history of computer technology.

Our aim here is to integrate Internet and Web history into the core narrative of the history of computing, along with the history of iPods, video game consoles, home computers, digital cameras, and smartphone apps.

The computer has a relatively short history, which for our purposes begins in the 1940s.

Computer scientists have adopted a term from Alan Turing, the universal machine, to describe the remarkable flexibility of programmable computers. To prove a mathematical point he described a class of imaginary machines (now called Turing machines) that processed symbols on an unbounded tape according to rules held in a table. By encoding the rules themselves on the tape, Turing’s universal machine was able to compute any number computable by a more specialized machine of the same ilk. Computer scientists came to find this useful as a model of the ability of all programmable computers to carry out arbitrary sequences of operations, and hence (if unlimited time and storage were available) to mimic each other by using code to replicate missing h…
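
A minimal simulation of the kind of imaginary machine Turing described, assuming an invented toy rule table that simply inverts a string of bits: symbols on a tape, a read/write head, and rules that say what to write, which way to move, and what state to enter next.

<syntaxhighlight lang="python">
# A tape, a head, and a table of rules; the rules drive everything the machine does.

rules = {
    # (state, symbol read): (symbol to write, head move, next state)
    ("invert", "0"): ("1", +1, "invert"),
    ("invert", "1"): ("0", +1, "invert"),
    ("invert", " "): (" ",  0, "halt"),
}

def run(tape, state="invert", head=0):
    tape = list(tape) + [" "]                 # a blank cell marks the end of the input
    while state != "halt":
        write, move, state = rules[(state, tape[head])]
        tape[head] = write
        head += move
    return "".join(tape).rstrip()

print(run("10110"))                           # -> 01001
</syntaxhighlight>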

Today about half the world’s inhabitants use hand-held computers daily to facilitate almost every imaginable human task.

Computers will never do everything, be used by everyone, or replace every other technology, but they are more nearly universal than any other technology. In that broader sense the computer began as a highly specialized technology and has moved toward universality and ubiquity. We think of this as a progression toward practical universality, in contrast to the theoretical universality often claimed for computers as embodiments of Turing machines.

To the extent that it has become a universal machine, the computer might also be called a universal solvent, achieving something of that old dream of alchemy by making an astounding variety of other technologies vanish into itself. Maps, filing cabinets, video tape players, typewriters, paper memos, and slide rules are rarely used now, as their functions have been replaced by software running on personal computers, smartphones, and networks. We conceptualize this convergence of tasks on a single platform as a dissolving of those technologies and, in many cases, their business models by a device that comes ever closer to the status of universal technological sol…

In many cases the computer has dissolved the insides of other technologies while leaving their outward forms intact.

Decades ago, when the scope of computing was smaller, it made sense to see electronic computing as a continuation of the tradition of scientific computation. The first major history of computing, The Computer from Pascal to von Neumann by computing pioneer Herman Goldstine, concluded in the 1940s with the invention of the modern …

In A History of Computing Technology, published in 1985, Michael Williams started with the invention of numbers and reached electronic computers about two thirds of the way through. By the 1990s the importance of computer applications to business administration was being documented by historians, so it was natural for Martin Campbell-Kelly and William Aspray, when writing Computer: A History of the Information Machine, to replace discussion of slide rules and astrolabes with mechanical office machines, filing cabinets, and administrative proce…

The breadth of technologies displaced by the computer and practices remade around it makes it seem arbitrary to begin with chapters that tell the stories of index cards but not of televisions; of slide rules but not of pinball machines; or of typewriters but not of the postal system. But to include those stories, each of our chapters would need to become a long book of its own, written by different experts.

ENIAC is usually called something like the “first electronic, general purpose, programmable computer.”

Electronic distinguishes it from electromechanical computers whose logic units worked thousands of times more slowly. Often called relay calculators, these computers carried out computations one instruction at a time under the control of paper tapes. They were player pianos that produced numbers rather than music…

General purpose and programmable separated ENIAC from special purpose electronic machines whose sequence of operations was built into hardware and so could not be reprogrammed to carry out fundamentally different tasks.

== Inventing the computer ==

This was not exactly the beginning of the computer age.

ENIAC’s place in computer history rests on more than being the first device to merit check marks for electronic and programmable on a comparison sheet of early machines. It fixed public impressions of what a computer looked like and what it could do. It even inspired the practice of naming early computers with five- or six-letter acronyms ending with AC. During a period of about five years as the only programmable electronic computer available for scientific use, ENIAC lived up to the hype by pioneering applications such as Monte Carlo simulation, numerical weather prediction, and the modeling of supersonic air f…

Earlier meanings of program included a concert program, the program of study for a degree, and the programming of radio stations. In each case the program defined a sequence of actions over time.

Discussion of programming a computer first appeared in the ENIAC project. By 1945 it had settled on something like its modern meaning: a computer program was a configuration that carried out the operations needed for a job. The act of creating it was called programming.

ENIAC was not the first programmable computer, but it was the first to automate the job of deciding what to do next after a sequence of operations finished.

Producing the entire table by hand took months of work. Hard as the computers worked, their backlog of work grew ever larger. New guns were being shipped to Europe without the tables needed to operate them.

Because ENIAC was both electronic and general purpose, its designers faced a unique challenge. In Mauchly’s words, “Calculations can be performed at high speed only if instructions are supplied at high speed.” That required a control method faster than paper tape. It also meant avoiding frequent stops for human intervention.

Mauchly sketched out several possible mechanisms to select automatically between different preset courses of action depending on the values ENIAC had already calculated. Computer scientists call this conditional branching and view it as a defining feature of the modern co…
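
Not from the book: a minimal sketch of what the conditional branch buys. The loop below decides for itself, based on a value it has just computed, whether to repeat a step or stop, with no human intervention between steps.

<syntaxhighlight lang="python">
# Illustrative sketch: repeat an integration step until a computed value
# crosses a threshold. Deciding automatically whether to repeat or stop
# is the conditional branch.
def fall_time(height_m, dt=0.01, g=9.8):
    t, h, v = 0.0, height_m, 0.0
    while h > 0.0:          # the branch: keep going, or move on?
        v += g * dt
        h -= v * dt
        t += dt
    return t

print(round(fall_time(100.0), 2))  # roughly 4.5 seconds
</syntaxhighlight>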

Altogether, ENIAC was not so much a single computer as a kit of forty modules from which a different computer was constructed for each problem.

In the end, ENIAC spent something like 15 percent of its production time calculating them and the rest on other, varied jobs, including many to aid the Los Alamos and Argonne laboratories in the development of nuclear weapons and reactors.

ENIAC was also a workplace of around two thousand square feet. Its panels were arranged in a U shape, working like a set of room dividers to enclose an inner space in which its operators worked.

Data went in and out of ENIAC on punched cards: small rectangles of cardboard each able to store 80 digits as a pattern of holes. The women spent much of their time punching input data onto cards and running output cards through an IBM tabulating machine to print their contents.

ENIAC used twenty-eight vacuum tubes to hold each decimal digit. That approach would not scale far. It took years of engineering frustrations to make delay line memory work reliably, but the idea was simple and compelling. Pulses representing several hundred digits moved through a fluid-filled tube. Signals received at one end were immediately retransmitted at the other end, so that the same sequence was cycling constantly. Whenever a number reached the end of the tube it was available to be copied to the computer’s processor …

Von Neumann’s First Draft of a Report on the EDVAC described logical structures rather than the specifics of hardware. One of its most novel features was that, as the team had decided by September 1944, coded instructions were stored in the same storage devices used to hold d…

As it took more than a year for anyone to get around to filing a patent on ENIAC, this disclosure of the new architecture put the modern computer into the public domain.

The idea of loading a program into main memory was important and set EDVAC apart from existing computer designs. However, following work done by Haigh in collaboration with Mark Priestley and Crispin Rope, we prefer to separate the enormous influence of EDVAC into three clusters of ideas (or paradigm…

The first of these, the EDVAC hardware paradigm, specified an all-electronic machine with a large high-speed memory using binary number storage (figure 1.2).

The second was the von Neumann architecture paradigm.

Storage and arithmetic were binary, using what would soon be called bits (a contraction of binary digits) to encode information. Each 32-bit chunk of memory (soon to be called a word) was referenced with an address number.

The third cluster of ideas was a system of instruction codes: the modern code paradigm. The flow of instructions and data mirrored the way humans performed scientific calculations as a series of mathematical operations, using mechanical calculators, books of tables, and pencil and paper. Even Eckert and Mauchly credited von Neumann with devising the proposed instruction code. It represented each instruction with an operation code, usually followed by parameters or an…

Most computers today harness several processor cores running in parallel, but the concept of processing a stream of instructions from an addressable memory remains the most lasting of all the First Draft’s contributions. Computer scientist Alan Perlis remarked that “sometimes I think the only universal in the computing field is the fetch-execute cycle.”
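
Not from the book: a toy stored-program machine to make the fetch-execute cycle concrete. Instructions and data share one addressable memory; the processor repeatedly fetches the instruction at the program counter, decodes it, and executes it. The instruction set here is invented for illustration.

<syntaxhighlight lang="python">
# Toy stored-program machine: one memory holds instructions and data,
# and a loop fetches, decodes, and executes one instruction at a time.
memory = [
    ("LOAD", 8),     # 0: acc = memory[8]
    ("ADD", 9),      # 1: acc += memory[9]
    ("STORE", 10),   # 2: memory[10] = acc
    ("HALT", None),  # 3: stop
    None, None, None, None,
    5,               # 8: data
    7,               # 9: data
    0,               # 10: result goes here
]

pc, acc = 0, 0
while True:
    op, addr = memory[pc]   # fetch
    pc += 1
    if op == "LOAD":        # decode and execute
        acc = memory[addr]
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break

print(memory[10])  # 12
</syntaxhighlight>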

…would-be computer builders to see ENIAC and learn more about the ideas contained in the First Draft, in the summer of 1946 the Moore School and the US military co-sponsored a course on the “Theory and Techniques for Design of Electronic Digital Computers.”

Early computers had to transmit digital pulses lasting perhaps one hundred thousandth of a second through miles of wire with complete reliability. They relied on digital logic circuits of huge complexity. They included thousands of vacuum tubes, which under normal use would be expected to fail often enough to make any computer useless. Building one meant soldering hundreds of thousands of electrical joints. But the biggest challenge was producing a stable memory able to hold tens of thousands of bits long enough to run a program.

The device popularly known as the Williams tube, after engineer Freddy Williams, stored bits as charges on a cathode ray tube (CRT) similar to those used in televisions and radar sets of that period. This produced a pattern of visible dots on the tube. By adding a mechanism to read charged spots as well as write them, a single tube could be used to store two thousand bits. The challenge was to read them reliably and to constantly write them back to the screen before they faded a…

Each computer project launched in the 1940s was an experiment, and designers tried out many variations on the EDVAC theme. Even computers modeled on von Neumann’s slightly later IAS design, built at places like Los Alamos, the Bureau of Standards, and the RAND Corporation, diverged by, for example, using different memory technologies.

For example, many programs worked on data held in a matrix structure (essentially a table). The programmer defined a loop to repeat a sequence of operations for each cell in the matrix.

Storing programs and data in addressable memory was a hallmark of the EDVAC approach. A single memory location was a word of memory. But computer designers made different choices about how large each word should be and how instructions should be encoded within it. The original EDVAC design called for thirty-two bits in each word. Early computers used word lengths between seventeen (EDSAC) and forty (the IAS computer and the Manchester Mark 1) bits.

One crucial engineering decision was whether to move all the bits in a word sequentially on a single wire or send them together along a set of parallel wires. This was the original meaning of serial and parallel in computer design.

Babbage’s efforts a hundred years earlier to build a mechanical computer were remarkable but had no direct influence on work in the 1940s; the ENIAC team didn’t know about them and even Howard Aiken, who helped to revive Babbage’s reputation as a computer pioneer, designed his computer in ignorance of the details of Babbage’s wo…

While von Neumann was aware of, and intrigued by, Turing’s concept of a “universal machine,” we see no evidence that it shaped his design for EDVAC.

General purpose computers can do many things. The disadvantage to that flexibility is that getting any particular task done takes minutely detailed programming. It did not take Grace Hopper long to realize that reusing pieces of Mark 1 code for new problems could speed this work. Her group built up a paper tape library of standard sequences, called subroutines, for routine operations such as calculating logarithms or converting numbers between decimal and binary format.

The arrival of EDVAC-like computers opened up new possibilities for automating program preparation. The computer itself could be programmed to handle the chores involved in reusing code, such as renumbering memory addresses within each subroutine according to its eventual position in memory. These new tools were called assemblers because they assembled subroutines and new code into a single executable program. Assemblers quickly picked up another function. Humans found it easier to refer to instructions by short abbreviations, called mnemonics.

The list of instruction mnemonics and parameters was called assembly language. The assembler translated each line into the corresponding numerical instruction that the computer could execute.

Of all the 1940s computers, EDSAC had the most convenient programming system. Every time the machine was reset, code wired into read-only memory was automatically triggered to read an instruction tape, translating mnemonics on the fly and loading the results into memory. David Wheeler developed an elegant and influential way of calling subroutines, so that the computer could easily jump back to what it was doing previously when the subroutine finished. EDSAC users built a robust library of subroutine tapes and published their code in the first textbook on computer programming.

Symbolic assemblers let programmers use labels rather than numbers to specify addresses, which eliminated the need to edit the code every time locations changed.
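
Not from the book: a small sketch of the assembler's job as described above. Pass one records the address of each symbolic label; pass two turns mnemonics and labels into numeric instructions, so nothing needs renumbering by hand when code moves. The mnemonics and opcode values are made up for illustration.

<syntaxhighlight lang="python">
# Minimal two-pass assembler sketch: mnemonics and labels in, numbers out.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "JUMP": 4, "HALT": 9}

source = [
    "start: LOAD  x",
    "       ADD   y",
    "       STORE x",
    "       JUMP  start",
    "x:     HALT",   # stands in for a data location in this toy example
    "y:     HALT",
]

labels = {}                                  # pass 1: label -> address
for addr, line in enumerate(source):
    if ":" in line:
        labels[line.split(":")[0].strip()] = addr

program = []                                 # pass 2: emit numeric code
for line in source:
    body = line.split(":")[-1].split()
    opcode = OPCODES[body[0]]
    operand = labels.get(body[1], 0) if len(body) > 1 else 0
    program.append((opcode, operand))

print(program)  # [(1, 4), (2, 5), (3, 4), (4, 0), (9, 0), (9, 0)]
</syntaxhighlight>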

Eckert and Mauchly, with the help of about a dozen technical employees of their division, designed and built the Univac in a modest factory at 3747 Ridge Avenue in Philadelphia (see figure 1.3).

Aiken could not imagine that “the basic logics of a machine designed for the numerical solution of differential equations [could] coincide with the basic logics of a machine intended to make bills for a department store.” Eckert and Mauchly knew otherwise. Univac inaugurated the era of large computers for what were later called “data processing” applicati…

The closest technology in widespread administrative use was punched card machines, the core product of IBM.

Punched card machines were often called unit record equipment because a single card encoded information about one thing, such as a sales transaction or employee. A typical small installation consisted of several key punches to get the data onto cards, plus several specialized devices such as tabulators, sorters, and collators. Each machine was configured to carry out the same operation on every card in the deck.

For most customers, what was revolutionary about the Univac was the use of tape in place of punched cards. Univac could scan through a reel of tape, reading selected records, performing some process, and writing the results to another tape. Carrying out all the operations needed for a job meant carrying decks of cards around the room, running them through one machine after another. That made punched card processing labor-intensive. In contrast, the Univac could perform a long sequence of automatic operations before fetching the next record from memory. It replaced not only existing calculating machines but also the people who tended them. Customers regarded the Univac as an information processing system, not a calculator. Published descriptions of the Univac nearly always referred to it as a “tape” machine. For General Electric, for example, “the speed of computing” was “of tertiary import…

Univac #1 was used to cross tabulate census data for four states. Data, initially punched onto eleven million cards (one for each person), was transferred to tape for processing by the Univac. The machine was also used for tabulating another subset of the population involving about five million households. Each problem took several months to compl…

Its uniprinter, based on a Remington Rand electric typewriter, could print only about ten characters per second. That proved a poor match for the high speed tape and processor, but in 1954 Remington Rand delivered the Univac High Speed Printer, able to print a full 130-character line at a time.

On Friday, October 15, 1954, the GE Univac first produced payroll checks for the Appliance Park employees. Punched card machines had been doing that job for years, but for an electronic digital computer, which recorded data as invisible magnetic spots on reels of tape, it was a significant milestone.

== The computer becomes a scientific supertool ==

By 1956 IBM had installed more large computers than Univac. That owed much to its 704 computer, announced in 1954 as the successor to the 701 but incorporating three key improvements.

Above all, core memory provides random access, taking a small and consistent amount of time to retrieve any word from memory.

IBM and other manufacturers switched over to core memory during the mid- 1950s. Magnetic core memories were retrofitted to ENIAC and MIT’s Whirlwind computer in the summer of 1953.

Early computers wasted much of their incredibly expensive time waiting for data to arrive from peripherals. Magnetic tapes and drums supplied information much faster than punched cards, but not nearly quickly enough to keep processors busy. Programs that processed data from tape usually spent most of their time running loops of code to repeatedly check if the next chunk of data had arrived. Printers were even slower.

In the late 1930s in what may have been the first attempt to build an electronic digital computer, John V. Atanasoff had the idea of using a rotating drum as temporary memory, storing data on 1600 capacitors, arrayed in 32 rows. After World War II, the drum re-emerged as a reliable, rugged, inexpensive but slow memory device.

Finding a reliable memory was by far the hardest part of putting together a computer in the early 1950s.

The Librascope/General Precision LGP-30, delivered in 1956, had a repertoire of only sixteen instructions and looked like an oversized office desk. Centering the design on a 4096-word drum simplified the rest of the machine hugely. It needed only 113 vacuum tubes and 1,350 diodes, against the Univac’s 5,400 tubes and 18,000 diodes. At $30,000 for a complete system, including a Flexowriter for input and output, it was also one of the cheapest early computers. More than 400 were sold. It provided a practical choice for customers unable or unwilling to pay for a large computer.

Scientific users drove the development and adoption of more ambitious programming tools, called compilers. Whereas assemblers made writing machine instructions faster and more convenient, compilers could translate mathematical equations and relatively easy-to-understand code written in high-level languages into code that the computer could exec…

… for a particular problem.” In those days the ideas of assembling, linking, and compiling code were not rigorously separated. Each term referred to the idea of knitting together program code and library subroutines to produce a single executable program.

Many factors contributed to the success of Fortran. One was that its syntax—the choice of symbols and the rules for using them—was as close as possible to that of algebra, given the difficulty of indicating superscripts or subscripts on punched cards. Engineers liked its familiarity; they also liked the clear, concise, and easy-to-read user’s manual. Perhaps the most important factor was performance. The Fortran compiler generated machine code that was as efficient and fast as code written by hum…

SHARE soon developed an impressive library of routines that each member could use, many of them for mathematical tasks such as inverting matrices. The working practices adopted by SHARE had much in common with later open source projects. There were mechanisms for distributing code and documents, bug reporting procedures, and ad hoc groups of programmers from different companies who pooled their efforts to develop particular sys…

When the IBM 709 arrived in 1956, SHARE launched the SOS (defined variously as standing for Share Operating System and SHARE 709 System) project to develop an ambitious successor to the GM system. SOS aimed to automate much of the work carried out by operators. For that reason, it was the first piece of software to be called an operating system.

By the 1960s Fortran programming was an increasingly vital skill for scientists and engineers. Computer analysis and simulation underpinned scientific breakthroughs in X-ray crystallography, particle physics, and radio astronomy. Engineers used computers to simulate supersonic airflow, model the stresses on bridges and buildings, and design more efficient en…

In a stack, the item most recently stored is the first to be retrieved—as in a stack of papers accumulating on a desk.
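
Not from the book, just the idea in a few lines of code: last stored, first retrieved.

<syntaxhighlight lang="python">
# A stack: the most recently stored item comes back first (LIFO).
stack = []
stack.append("first memo")   # push
stack.append("second memo")  # push
stack.append("third memo")   # push
print(stack.pop())  # "third memo" comes off the top of the pile
print(stack.pop())  # then "second memo"
</syntaxhighlight>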

The introduction of the transistor as a replacement for the vacuum tube mirrors the story of core memory. Transistors could serve the same roles as vacuum tubes in digital logic circuits but were smaller, more reliable, and initially more expensive. They used less power and could be driven faster.

Most programmers never touched the machine that ran their programs. They wrote programs in pencil on special coding sheets, which they gave to keypunch operators who typed the code onto cards. The deck of punch cards holding the source code was read by a small IBM 1401 computer and transferred to a reel of tape. The operator took this tape and mounted it on a tape drive connected to the mainframe. The programmer had to wait until a batch was run to get her results. Usually these indicated a mistake or need to further refine the problem. She submitted a new deck and endured another long wait. That method of operation was a defining characteristic of the mainframe era.

To pack data more efficiently into memory, Stretch adopted a flexible word length of anything between 1 and 64 bits. Stretch engineer Werner Buchholz introduced the idea of the byte—the smallest unit of information a computer could retrieve and process. Bytes too were originally variable length, up to 8 bits. On later machines, however, IBM standardized the byte at 8 bits. Combining several bytes when needed let computers manipulate words of 16, 24, 32, or 64 bits. This approach was eventually adopted throughout the computer…
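
Not from the book: a sketch of the byte-to-word combination described above, packing four 8-bit bytes into a 32-bit word and splitting them out again.

<syntaxhighlight lang="python">
# Combining 8-bit bytes into a larger word, and splitting it back apart.
def bytes_to_word(byte_values):
    word = 0
    for b in byte_values:        # most significant byte first
        word = (word << 8) | b
    return word

def word_to_bytes(word, count):
    return [(word >> (8 * i)) & 0xFF for i in reversed(range(count))]

word = bytes_to_word([0x12, 0x34, 0x56, 0x78])   # four bytes -> 32-bit word
print(hex(word))                                 # 0x12345678
print([hex(b) for b in word_to_bytes(word, 4)])  # back to the four bytes
</syntaxhighlight>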

Virtual memory is a way to make a computer’s fast main memory seem bigger than it is, by swapping data with a slower but larger storage medium such as a disk. A user of the Atlas saw a machine with a virtual memory of one million, 48-bit words. Special hardware translated requests to read or write from these virtual addresses into actions on the machine’s much smaller physical memory, a capability known as dynamic address translation. When there was not enough physical memory to hold all the pages, it looked for pages that were not currently in use to swap from core memory out to drum memory. Optimizing this process was crucial. Whenever a program needed to work with a page of memory that had been swapped out, Atlas would have to copy it back from drum memory, which slowed things down enormously. Atlas was designed for multiprogramming, so that it could get on with another program while the pa…
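
Not from the book, and not the Atlas's real mechanism or parameters: a rough sketch of dynamic address translation with demand paging. A virtual address splits into a page number and an offset; a page table maps pages to physical frames; a missing page is brought in from the drum, evicting a page that has not been touched recently.

<syntaxhighlight lang="python">
# Rough sketch of dynamic address translation with demand paging.
PAGE_SIZE = 512          # words per page (illustrative)
NUM_FRAMES = 4           # tiny "core" memory of four physical frames

page_table = {}          # virtual page number -> physical frame number
pages_in_core = []       # oldest first; crude stand-in for "not in use"
drum = {}                # backing store for swapped-out pages

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    if page not in page_table:                 # page fault: fetch from drum
        if len(pages_in_core) == NUM_FRAMES:
            victim = pages_in_core.pop(0)      # swap out the oldest page
            drum[victim] = f"contents of page {victim}"
            frame = page_table.pop(victim)
        else:
            frame = len(pages_in_core)
        page_table[page] = frame
        pages_in_core.append(page)
    return page_table[page] * PAGE_SIZE + offset

print(translate(5 * PAGE_SIZE + 7))   # first touch of page 5 faults it in
print(translate(5 * PAGE_SIZE + 8))   # second touch translates directly
</syntaxhighlight>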

== The computer becomes a data processing device ==

Business data processing was a largely separate field from scientific computing, with its own journals, associations, and professional identities. In the computer’s march toward universality this was the second major community of computing to develop. From the late 1950s to the 1980s it was unquestionably the largest.

Page 1

Nowhere did we see a computer that was doing jobs to a regular planned schedule.” Companies were slow to adapt to the potential of the new technology, instead trying to transfer existing punched card procedures directly to the computer.

Page 2

…clerical automation soon overtook scientific calculation as the biggest market for computer technology. Computers designed specifically for administrative use ran from relatively affordable machines intended for use alongside existing punched card installations to multimillion dollar computers built to handle the core administrative tasks of large corporations. Data processing spurred developments including the adoption of disk drives, the development of IBM’s dominant System/360 mainframe architecture, and the emergence of softw…

Page 2

…a general-purpose computer storing programs and data on a magnetic drum using ERA technology. The 650 was launched in 1954. It was the first mass-produced computer, with almost two thousand manufactured. Like ENIAC, the 650 had to be used with a full set of conventional punched card machines to prepare, process, and print cards…

Page 4

Thomas Watson Jr. directed that IBM allow universities to acquire a 650 at up to a 60% discount, if the university agreed to offer courses in business data processing or scientific computing. Many universities took up this offer, making the 650 the machine on which thousands of scientists learned to program. Donald Knuth later dedicated his monumental series, The Art of Computer Programming, to “the Type 650 computer once installed at Case Institute of Technology, in remembrance of many pleasant even…

Page 4

The 1401’s success owed a lot to a piece of peripheral equipment that IBM introduced with it. The type 1403 printer moved a continuous chain of characters laterally across the page. Magnetically driven hammers struck the chain at the precise place where a character was to be printed. In one minute it printed 600 lines, far more than anything else on the market, and could stand up under heavy…

Page 4

By 1962, just eleven years after IBM leased its first computer, the firm was receiving more income from computers than from punched card machines. By 1966 IBM had installed an estimated 11,300 systems from the 1400 family, accounting for about a third of all US-built comp…

Page 5

At nearly every university computer center, someone figured out a sequence that would sound out the school’s fight song when sent to the printer. Printer music, recorded by his father, underpinned Icelandic composer Jóhann Jóhannsson’s acclaimed orchestral work IBM 1401, A User’s Manual.

Page 5

An array of spinning disks stored more data at a lower cost than its older cousin, the magnetic drum. Drums had a line of fixed read and write heads, one for each track on the drum. They were rugged and fast but expensive. Disk drives used a single movable head for each side of a disk.

Page 5

In 1956 IBM publicly announced the Model 350 disk storage unit. It used a stack of fifty aluminum disks, each 24 inches in diameter, rotating at 1,200 rpm. Total storage capacity was five million characters. The disk was attached to a small drum computer, the 305, better known as the RAMAC (random access method of accounting and control).

Page 5

What IBM meant by “random access” was that data could be fetched rapidly from any part of a disk, in contrast to the sequential operation of a deck of punched cards or a reel of tape. RAMAC made it feasible to work one record at a time. It would have taken a big IBM system six minutes to search a tape and retrieve one value. The relatively inexpensive RAMAC could locate a record and start printing out its contents in less than a…

Page 7

Many companies, particularly those installing smaller computers, expanded and upgraded their existing punched card departments to data processing departments. The terminology came from IBM, which wanted to tie business computing to its existing strength in punched card machines. Its punched card machines became data processing equipment, and its computers electronic data processing systems.

Page 8

There were five main kinds of work in data processing departments. In ascending order of status and pay, these were key punch work, machine operation, programming, systems analysis, and management. Not coincidentally, the proportion of women doing each job fell in exactly the same order. All program code and input data had to be punched onto cards. Because this work had strong parallels with typing it was almost invariably given to women. Keypunch work was the biggest and worst paid job category in data processing. Operating other punched card equipment was treated as men’s work by most American companies, although practices varied. Computer operation was seen as an extension of the same labor, so operators were a mix of men and women. Systems analysis was the job of looking at business procedures to redesign them around the new technology. Analysts were not expected to write programs, but they were expected to produce very detailed specifications for computer processes. In big companies this was an extension of the existing work of the overwhelmingly male systems and procedures groups, which prior to computerization had taken on jobs such as documenting procedures and redesigning forms. After computerization it remained a job for men.

Page 9

Programming was the only job without a clear parallel in existing administrative punched card work. It was slotted in conceptually between the existing work of machine operation and systems analysis. Programmers would take the specifications written by the analysts and convert them into computer instructions. Within the ideology of data processing this was seen as a less creative task than analysis—successful programmers would aspire to become analysts and eventually managers, leaving the machine itself further behind as their careers progressed. Early descriptions sometimes mention the lower-status job of coder, turning instruction mnemonics into numerical codes, but that task was soon automated by increasingly powerful assembl…

Page 9

The openness of programming jobs to women during the 1950s is often exaggerated.

Page 9

Bigger computers relied on magnetic tape. The IBM 705 could store five million characters on a reel of magnetic tape. At 15,000 characters per second, it would take about six minutes to process an entire tape.

Page 10

Sorting records held on punched cards was easy, if slow. Each card generally held a single record, which is why unit record equipment was one of IBM’s euphemisms for punched card machines. Decks dropped into the sorter were separated into ten output trays based on the value of a selected digit. Running each output deck through the machine again sorted on the next digit. Eventually the records would all be in order and the deck could be reassembled. It wasn’t practical to cut a tape into tiny pieces and splice them back together, so finding an effective way to sort tape files was crucial for the efficient application of computers to business.
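
Not from the book: the card-sorter procedure just described is what would now be called a least-significant-digit radix sort. A sketch:

<syntaxhighlight lang="python">
# The card sorter's method: deal records into ten trays on one digit,
# restack the trays in order, and repeat for the next digit.
def card_sort(records, digits=3):
    for place in range(digits):                  # rightmost digit first
        trays = [[] for _ in range(10)]
        for record in records:
            digit = (record // 10 ** place) % 10
            trays[digit].append(record)          # drop the card in its tray
        records = [r for tray in trays for r in tray]   # reassemble the deck
    return records

print(card_sort([329, 57, 657, 839, 436, 720, 355]))
# [57, 329, 355, 436, 657, 720, 839]
</syntaxhighlight>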

Page 10

In a similar way, programmers in data processing installations started producing generalized sort routines rather than constantly rewriting the same code for different applications.

Page 11

Producing a new report in a computerized system meant writing a program. Fred Gruenberger, who worked on the system, noted that managers were “vitriolic” when told it would cost thousands of dollars to tweak a report, because on a tabulator “all you have to do to change your report is move five wires.” The new system automatically generated a report program when fed cards describing the desired report format.

Page 11

…the US government announced that it would not purchase or lease computer equipment unless it could handle COBOL. As a result, COBOL became one of the first languages to be sufficiently standardized that a program could be compiled on computers from different vendors and run to produce the same results. That occurred in December 1960, when almost identical programs ran on a Univac II and an RCA 50…

Page 13

Part of COBOL’s ancestry can be traced to Grace Hopper, who in 1956 had developed a compiler sometimes called FLOW-MATIC, which was geared toward business applications.

Page 13

It was from Hopper that COBOL inherited its most famous attribute, long variable and command names intended to make the code read as English, in contrast with Fortran, which aspired to mimic mathematical notation. COBOL code usually centered on retrieving data from files and manipulating it…

Page 13

These product lines were justified by a notion that business and scientific users had fundamentally different hardware needs, which no longer held up. Business customers were expected to work with large sets of data on which they performed simple arithmetic, whereas scientific customers did advanced calculation on small sets of data. In reality, scientists and engineers were handling large data sets for applications like finite-element analysis, a technique developed for building complex aerospace structures. And business applications were growing increasingly ambitious. In its final report, the SPREAD Committee recommended a single unified product line for scienc…

Page 14

The 360 used a combination of software and the microprogrammed instructions for what Larry Moss of IBM called emulation of older machines, implying that it was “as good as” (or even better than) the original, rather than mere “simulation” or worse, “imitation.” The 360 Model 65 sold especially well because of its ability to emulate the large business 7070 computer. IBM made sure that the low-end models 30 and 40 effectively emulated the 1401. Their faster circuits could run old programs as much as ten times faster than a real 1401. By 1967, according to some estimates, over half of all 360 jobs were run in emulation mode. Despite its name, software is more permanent than hardware. Decades later, 1401 programs were still running routine payroll and other data processing jobs on computers from a variety of suppliers. The programmers who coded these had no idea how long-lived their work w…

Page 15

The 360’s word length was 32 bits, a slight reduction from earlier large scientific computers that let IBM standardize on 8-bit bytes. Four bytes fit neatly into one word. A byte could encode 256 different combinations: representing uppercase and lowercase letters, the decimal digits one to ten, punctuation, accent marks, and control codes with room to spare. Since 4 bits were adequate to encode a decimal or hexadecimal digit (the 360 supported both) one could pack two into each byte.

Page 15
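
Not from the book: a quick sketch of the packing just described, with two 4-bit digits sharing one 8-bit byte.

<syntaxhighlight lang="python">
# Packing two 4-bit decimal or hexadecimal digits into one byte.
def pack_digits(high, low):
    assert 0 <= high <= 15 and 0 <= low <= 15
    return (high << 4) | low            # one byte holds both digits

def unpack_digits(byte):
    return byte >> 4, byte & 0x0F

b = pack_digits(4, 2)
print(bin(b), unpack_digits(b))         # 0b1000010 (4, 2)
</syntaxhighlight>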

IBM represented characters using EBCDIC (extended binary coded decimal interchange code), an extension of a code developed for punched card equipment. It was well designed and offered room for future expansion, but it was not compatible with the ASCII standard adopted by the American National Standards Institute in 1963. ASCII standardized only seven bits, not eight. Punched paper tape was still in common use, and the committee felt that punching eight holes across a standard piece of tape would weaken it too much. The computing world split, with EBCDIC used at IBM and ASCII …

Page 16

System/360 was designed for technical computing as well as business data processing. In both markets, IBM’s fast and rugged chain printers were a crucial selling point.

Page 18

In the end, IBM settled on two operating systems. Most users ran DOS, the disk operating system. Created in just a year by a separate team, it threw out the grand goals of OS/360 in order to provide efficient batch mode processing on the smaller System/360 machines that took over from 1401s as the workhorses of data processing. DOS quickly became the world’s most widely used operating system.

Page 18

Although System/360 was intended to work equally well for scientific and data processing applications, it was much more successful for data processing.

Page 19

Computers had been sold to American business by people like General Electric’s Roddy Osborn as the basis of a managerial revolution. In reality they were usually put to work to speed up jobs already being carried out with punched card machines.

Page 19

Handling data on disk was much more complicated than working with tape. Imagine that a disk holds one hundred thousand customer records. The big benefit of disk storage is random access—a program can request data from any part of the disk. But how does the program know where to find the desired record without having to read its entire contents? Pioneers came up with a variety of methods for structuring and indexing records held on disk, such as hashing, inverted files, and linked lists. Each was hard for a typical programmer to understand and implement.
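
Not from the book, and not any of the pioneers' actual schemes: a toy sketch of why hashing helps. A hash of the customer number picks a bucket, so a lookup reads one small bucket instead of scanning all one hundred thousand records.

<syntaxhighlight lang="python">
# Toy hashed file: the key chooses a bucket, so a lookup touches one
# bucket rather than reading every record on the "disk".
NUM_BUCKETS = 1024
buckets = [[] for _ in range(NUM_BUCKETS)]

def store(customer_id, record):
    buckets[hash(customer_id) % NUM_BUCKETS].append((customer_id, record))

def lookup(customer_id):
    for key, record in buckets[hash(customer_id) % NUM_BUCKETS]:
        if key == customer_id:
            return record
    return None                          # not found

for i in range(100_000):
    store(i, f"customer record {i}")

print(lookup(73_519))   # found after reading a single small bucket
</syntaxhighlight>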

Page 22

The software industry started when companies providing programming services, reusing bits of code from one project to the next, realized that they could produce generalized programs able to handle the needs of multiple companies.

Page 27

In 1969, the firm announced its intention to unbundle its software packages and customer education from its hardware by charging for them separately.

== The computer becomes a real-time control system ==

ENIAC had been built to calculate the trajectories of shells in order to produce printed firing tables.

Page 2

Digital computers processed data encoded as digits stored in their registers and memory units. They were programmed to execute sequences of mathematical and logical operations needed for a particular task—adding, subtracting, multiplying, and dividing the numbers and shuffling them around in memory.

Page 2

SAGE introduced more fundamentally new features to computing than any other project of its era, including networking computers and sensors and the development of interactive computer graphics.

Page 5

DEC not only permitted modification by its customers, it encouraged it. The tiny company could not afford to develop the specialized interfaces, installation hardware, and software that were needed to turn a general-purpose computer into a useful product. Its customers welcomed the opportunity. DEC soon began publishing detailed specifications about the inner workings of its products and it distributed them widely. Stan Olsen said he wanted the equivalent of “a Sears Roebuck catalog” for Digital’s products, with plenty of tutorial information on how to hook them up to industrial or laboratory equipment. DEC printed these manuals on cheap newsprint, distributing them liberally to potential custom…

Page 10

Its limited memory steered programmers away from high-level programming languages, toward assembly language or even machine code. Yet the simplicity of the PDP-8’s architecture, coupled with DEC’s policy of making information freely available, made it an easy computer to underst…

Page 11

The first steps in miniaturization involved printed circuit boards, which were used to eliminate wires and pack components more closely together by etching a pattern on a plastic board covered with copper or some other electrical conductor.

Page 15

Printed circuits found their first civilian applications in hearing aid production, where miniaturization and portability had long been crucial.

Page 15

For the PDP-8 it relied on flip-chip modules: small printed circuit boards on which transistors, resistors, and other components were mounted. These in turn were plugged into a hinged chassis that opened like a book. The result was a system consisting of a processor, a control panel, and core memory in a package small enough to be embedded into other equip…

Page 15

For Noyce the invention of the IC was less the result of a sudden flash of insight than the result of a gradual build-up of engineering knowledge at Fairchild about materials, fabrication, and circuits since the company’s founding in 1957.

Page 16

The relationship between printing, photography, and microelectronics has been a close one.

Page 18

Apollo software was written on mainframes and translated into binary data. The code was wired into read-only core ropes: long strings of wires woven through magnetic cores, which stored a binary one if a wire went through it, or a zero if the wire went around it. Hamilton described the technique of scrubbing the code as the Auge Kugel method: the term was (incorrect) German for eyeball. In other words, one looked at the code and decided if it was correct or not. NASA was not sure whether the academic culture of MIT was disciplined enough for the job. It had the Instrumentation Lab print out the code listings on reams of fan-fold paper and ship them to TRW in Los Angeles, where John Norton, a TRW employee, would scrutinize the code and indicate any anomalies he found. Some at MIT resented Norton’s intrusion, but he did manage to find problems with the code. Hamilton was known as the “rope mother.” She kept a sense of humor, calling some of the anomalies in the code FLTs or “funny little things,” and the women who wired the ropes at the Raytheon plant in suburban Boston, LOLs—“little old ladies.” Many of them had worked at nearby Waltham Watch, part of a long tradition of female labor in precision manufacturi…

Page 22

In the early 1960s, aerospace needs for powerful, light, and miniaturized electronics drove dramatic improvements in the state of the art. New technologies such as integrated circuits found their first applications in missiles and space rockets. However, once those technologies were adopted by other industries, space applications became more conservat…

Page 24

To support the Shuttle and Apollo programs NASA pioneered fly-by-wire technology. Existing aircraft ran hydraulic lines from controls, such as the throttle, all the way to the engines, rudder, and other controls. NASA flew the first digital fly-by-wire system in 1972. Computer control meant sending digital signals down wires, to circuits controlling electr…

== The computer becomes an interactive tool ==

A prototype for what they called the compatible time-sharing system (CTSS) was working by the end of 1961 on MIT’s IBM 709. With no disk drive, they had to dedicate a tape drive to each user. That limited the system to four simultaneous users, each operating a modified Flexowri…

Page 6

Users each received a private area of the disk, known as a directory, to store files between sessions.

Page 7

CTSS provided the main system for student computing at MIT until 1969. MIT users developed many new applications to exploit the possibilities of timesharing. They built text editors, so that programs could be written and edited interactively on computer terminals instead of punched onto cards, and created a range of programming tools and languages optimized for online u…

Page 7

Timesharing’s spread was helped by the arrival in the mid-1960s of cheap and reliable teletype terminals based on the new ASCII standard for character encoding–a contrast with the expensive, special-purpose terminal equipment built for systems like SAGE and SABRE. Timesharing systems and interactive minicomputers were often used with a new device from the Teletype Corporation, the Model 33 ASR (automatic send-receive), shown in figure 5.4…

Page 8

The Model 33 was cheaper, simpler, and more rugged than the Flexowriter used by earlier small computers. It functioned at up to ten characters a second, working either as a typewriter that sent key codes directly to a computer or in offline mode to punch those codes onto paper tape. It came to symbolize the minicomputer era and the beginning of the personal computer era that followed it. The Control and Escape keys still found on keyboards today owe their ubiquity to the Model 33.

Page 8

Dartmouth initially used a General Electric 235 computer connected to a smaller GE Datanet computer, which controlled Teletype terminals across the campus. Computer assignments were incorporated into the core curriculum, particularly first-year mathematics courses. Dartmouth built up a large library of shared programs, including popular games such as a foo…

Page 10

The most fundamental challenge in the development of any timesharing system was handling the unpredictable demands made by its users. Trying to support more users and capabilities made this dramatically harder.

Page 13

and written into textbooks for the emerging discipline of computer science, such as Denning’s own Operating Systems Theory (with E.

Page 14

…in the 1960s, the processor of a machine like the GE 645 was not a chip or even a board. It filled perhaps a hundred circuit boards spread over many cabinets, stuffed with thousands of electronic components connected by miles of wire. Even the architectural modifications made for timesharing meant rewiring large sections of the machine and adding cabinets holding circuit boards for things like dynamic address translation. Adding a second processor meant wiring the central parts of an entire second computer to a shared bank of core me…

Page 15

The GE 645 and IBM Model 360/67 were not the first computers to support multi-processor configurations. But timesharing was the first compelling application for it.

Page 15

The most popular successor to the single processor Cray 1 was the dual processor Cray X-MP. This was the world’s fastest computer from its launch in 1982 to 1985, when the Cray 2, with four processors, was introduced. Supercomputers could really be justified only for organizations like Los Alamos or the National Center for Atmospheric Research with huge individual jobs. Harnessing that power forced programmers to split these application programs into separate parts, called threads, that could run simultaneously on different processors, communicating with each other to coordinate their work. Like the other architectural innovations pioneered by supercomputers, that approach eventually made its way from supercomputers to minicomputers, workstations, personal computers, and eventu…

Page 15

The timesharing community was small, and developers and code both moved between installations. For example, all timesharing systems needed an online text editor. Dennis Ritchie later noted that the editor code used at Bell Labs in the late 1970s could be traced back to Lampson and Deutsch’s QED for the SDS-940. QED had also been adapted for CTSS at MIT and for Multics.

Page 16

Using a PDP-10 could not only be fun but addictive: it was no accident that it was the computer on which Adventure—perhaps the longest-lived of all computer games—was written by Will Crowther at BBN.

Page 18

This almost exclusive focus on what would later be called systems software may surprise readers who are used to thinking of software as a synonym for computer program. This was not the case in the 1960s. Software first became part of the computing lexicon in the early 1960s, to complement hardware by describing the other “wares” sold to computer use…

Page 20

The name Unix, pronounced almost identically with “eunuchs,” humorously signaled a stripped-down or “castrated” substitute for Multics.

Page 24

Unix had short commands, quick to type on a slow teletype, a compact kernel to leave lots of memory free for user programs, and a pervasive stress on efficiency. A lot of ideas from Multics were reimplemented in Unix using much simpler mechanisms. These included its hierarchical file system, the idea of a separate program (called the “shell”) to interact with users and interpret their commands, and aspects of its approaches to input and output.

Page 25

Once Unix was rewritten in C it was easier to port it to other computers. Instead of writing a whole operating system, all that was needed was a C compiler able to generate code in the new machine’s language and some work to tweak the Unix kernel and standard libraries to accommodate its q…

Page 26

C was optimized for operating systems programming. C code can do almost anything that assembly language can but is easier to write and structure.

== The computer becomes a communications platform ==

MIT’s Mail command had been proposed in a staff planning memo at the end of 1964 and was implemented in mid-1965 when Tom Van Vleck and Noel Morris, junior members of the Institute’s research staff, took the initiative to write the necessary code. Although Mail, and other systems of the 1960s, could send messages only to other users of the same computer, this was not quite as restrictive as it sounds.

Page 2

By the late 1960s, electronic mail was an almost universal feature of timesharing operating systems.

Page 2

The Plato IV terminals still included microfiche projectors, but they also supported exceptionally crisp computer-generated graphics with a resolution of 512×512 pixels. Unlike the vector graphics systems we discussed earlier, they used bitmap graphics, treating the screen as a grid of pixel dots. That was made possible by a unique display technology developed by the project’s leader, Donald Bitzer. Holding the screen image in memory, so that programs could update it by changing the values stored there, would have required at least 32 KB of RAM. Chip memory was not available when Bitzer started his design, and even in 1972 it would have taken 256 of Intel’s new memory chips to hold a screen full of data. Some computers in the 1950s had used display tubes as memory. Bitzer took a similar approach, designing a plasma display whose orange pixels could be read as well as written, so that the screen served in effect as its own m…
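
Checking the arithmetic behind the 32 KB figure (my note, assuming the memory chips in question held one kilobit each, which is consistent with the 256-chip count):

<syntaxhighlight lang="python">
# One bit per pixel on a 512 x 512 bitmap display.
bits = 512 * 512              # 262,144 bits
bytes_needed = bits // 8      # 32,768 bytes = 32 KB
chips_needed = bits // 1024   # 256 chips, assuming 1-kilobit chips
print(bits, bytes_needed, chips_needed)
</syntaxhighlight>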

Page 4

Conventional networking followed the template of the telephone systems: automated switches made a connection between two points, usually a computer and a terminal, for the duration of the session. The ARPANET was the first large-scale trial of the packet switching approach to networking. That meant breaking down communication into a series of self-contained packets of data, each including address information specifying its source and destina…

Page 8

Splicing the file transfer code into the popular mail program Sndmsg that BBN had developed for the PDP-10 provided a simple but effective system for network mail. Now that electronic mail could go anywhere on the ARPANET, it was necessary to specify the timesharing system of recipients as well as their usernames. Tomlinson chose the @ sign, a standard but underused part of the teletype keyboard, to separate the two.

Page 9

…the ARPANET worked on common protocols, not common code. Because the network interconnected many computer models, running many operating systems, users could not demand that the recipients of their mail run the same program that they were using. They would still be able to swap messages if both systems stuck to communications protocols defining things like how the message would be addressed, the sequence of signals exchanged to begin and end transmission, and how text would be enc…

Page 10

These new radio networks were incompatible with each other and with the fast-growing ARPANET. That spurred a group led by Vinton Cerf and Robert Kahn to work on internetworking to interconnect them. It developed a new transmission control protocol (TCP) that was suitable for unreliable radio networks as well as the leased telephone lines used by the ARPA…

Page 13

Changes during the 1970s and 1980s opened up telephone networks in the US, and many other countries, for use by data services. AT&T made two far-reaching decisions, the first of which was not to discriminate between voice and data sent over its lines. Both voice and modem signals were in the audio range and in principle no different, even though data might sound funny to a person listening in on the line. The second was to introduce a jack, the RJ11, to connect telephones without the need for a company technician to visit. That allowed a user to connect a modem directly to the phone network, rather than having to dial a number, wait for a high-pitched tone indicating a connection, and place the receiver into a cradle that acoustically coupled the modem to the phone l…

Page 19

Plans were even drawn up to convert the Internet over to X.25 and the other OSI protocols. That never happened, of course. Instead TCP/IP and other Internet technologies unexpectedly moved beyond the niche markets they dominated in the 1980s to do the job for which OSI was created: standardizing data communications around nonproprietary standards.

== The computer becomes a personal plaything ==

One influential hobbyist project was the “TV-Typewriter,” designed by Don Lancaster and published in Radio-Electronics in September 1973. This device allowed one to display alphanumeric characters, encoded in ASCII, on an ordinary television set. It presaged the advent of video displays and keyboards as the primary input-output devices for personal c…

Page 7

1974 was the annus mirabilis of personal computing. In January, Hewlett-Packard introduced its HP-65 programmable calculator. That summer Intel announced the 8080, an improved microprocessor. In July Radio-Electronics described the Mark-8. In late December subscribers to Popular Electronics received their January 1975 issue, with a prototype of the “Altair” minicomputer on th…

Page 7

CP/M was the final piece of the puzzle that, when assembled, made personal computers a practical reality. A personal computer’s DOS had little to do with mainframe operating systems such as Multics. There was no need to schedule and coordinate the jobs of many users: an Altair had one user. There was no need to drive a roomful of chain printers, card punches, and tape drives: a personal computer had only a couple of ports to worry about. What was needed was rapid and accurate storage and retrieval of files from a floppy disk. A typical file would in fact be stored as a set of fragments, inserted wherever free space was available on the disk. It was the job of the operating system to find those free spaces, store data fragments there, track them, and reassemble them when needed. Doing that gave the user an illusion that the disk was just like a traditional file cabinet filled with paper fil…
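
Not from the book, and not CP/M's actual data structures: a simplified sketch of the illusion described above. The operating system scatters a file across whatever free blocks exist, records where the pieces went, and reassembles them on demand.

<syntaxhighlight lang="python">
# Simplified disk: fixed-size blocks, a free list, and a per-file table
# recording which blocks hold each fragment, in order.
BLOCK_SIZE = 128
disk = [None] * 64                       # 64 blocks of storage
free_blocks = list(range(64))
directory = {}                           # filename -> ordered block list

def write_file(name, data):
    blocks = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = free_blocks.pop(0)       # take any free block, wherever it is
        disk[block] = data[i:i + BLOCK_SIZE]
        blocks.append(block)
    directory[name] = blocks

def read_file(name):
    return "".join(disk[block] for block in directory[name])

write_file("LETTER.TXT", "Dear Sir, " * 40)          # spans several blocks
print(directory["LETTER.TXT"])                       # where the fragments went
print(read_file("LETTER.TXT") == "Dear Sir, " * 40)  # reassembled intact: True
</syntaxhighlight>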

Page 14

In 1977 three companies, Commodore, Apple, and Tandy began to produce relatively affordable and polished personal computers intended to expand the market beyond electronics hobbyists to computer-curious consumers. Each included video circuitry to drive a television or monitor, a cassette interface, and keyboard. This integration of standard hardware and a shift to new, cheaper, and higher capacity dynamic RAM chips greatly reduced the cost of a usable computer system. Burning BASIC onto ROM chips made it faster and easier to start using the computer after turning it on.

Page 15

The original PET’s chief drawback was its calculator-style keyboard; its main strength was a powerful built-in version of BASIC. Several generations of improved PETs were introduced over the next five years.

Page 15

Colorful cellophane strips placed over the monochrome video screens enlivened the simple graphics of Breakout, and other games of the era. It has also been remembered for the efficiency of its electronic design: early in their partnership Steve Jobs, then working at Atari, submitted an astonishingly efficient hardware design produ…

Page 20

The idea of a standard model was tried and failed: a consortium of Japanese companies worked with Microsoft to replicate the success of their VHS videotape standard by introducing a home computer standard called MSX in 1983. At least twenty consumer electronics companies produced compatible machines. Most vanished quickly, although implementations by local producers sold well in Japa…

Page 26

The dividing line between a “personal computer” and a “home computer” was initially determined by marketing and customer response. The Atari 800 personal computer, introduced in 1979 as a competitor to the Apple II, was better built, faster, and had more standard features, although it offered less scope for expansion. However, because of Apple’s head start with small business and education users, and the Atari’s superior chips for sound and animated graphics, the Atari 800 was treated as a gaming machine and sold almost exclusively to home users.

Page 26

Homes and computers both had long, almost entirely separate, histories. For the new idea of a “home computer” to make sense, the computer obviously had to change, becoming cheaper, smaller, and less intimidating. Less obviously, the home itself had to be reimagined as a place that needed a computer. Computer enthusiasts and advertisers struggled to do this plausibly. One early idea was to apply the digital control functions of computers, proven in industrial and laboratory settings, to the home. With extra hardware, computers could control heating systems, turn on and off lights, and open garage doors. Such projects appealed only to electronics hobby…

Page 27

Neither was well equipped for applications such as word processing.

Page 32

Sales peaked in 1984, but Commodore was still building the Commodore 64 when it declared bankruptcy a decade later. More than twelve million were produced, making it the bestselling desktop computer model in history.

Page 35

In practice, the most compelling and widely used applications for home computers were video games. Many of the most popular programs for personal computers were recreations of popular arcade games like Space Invaders, Frogger, and Asteroids.

Page 35

The proliferation of home computers changed the way people were first exposed to computer technology. In the 1940s and 1950s, most of the people hired as programmers had their first experience of computing on the job. In the late 1960s, as computer science developed as a field within universities, students might program for the first time in a science or engineering class and then decide to major in computing. In the 1970s, more students encountered programming in high school, usually via a timesharing system. In each case, access to computers was limited and took place outside the home. Until the mid-1980s, the proportion of computer science students who were women was rising, following the pattern in other technical and professional subjects. In the 1980s, however, the trend reversed in computing.

Page 37

The home computer market collapsed further and faster, as the whole idea of the home computer as a separate class of machine dwindled in the mid-1980s. When domestic sales of personal computers really began to take off again in the early 1990s, people were buying cheap versions of computers designed for business use.

== The computer becomes office equipment ==

…during the 1950s and 1960s computerizing meant shifting work out of offices and into data processing centers. Most office workers never even saw the computer. Offices sent paper forms to the data processing department. Every week, month, or quarter they received back stacks of fanfold paper printout with information on sales, accounts, and everything else the computer was trac…

Page 1

The same microprocessors, RAM chips, video interfaces, small printers, and floppy disk drives that made enthusiast and home computing possible also produced computers cheap enough to sit on the desks of office workers. Two application areas were particularly important in laying the groundwork for IBM to introduce a general-purpose personal computer of its own. The first was word processing, which assembled those same technologies to produce buttoned-down office machines rather than hobbyist personal computers. The second was the invention of the spreadsheet, the first compelling business application for regular personal computers. Both helped to change perceptions of personal computers, shifting their main market from enthusiasts and home users to office…

Page 2

Word processing is a concept with a complicated history. Before a word processor was a software package, like Microsoft Word, it was a special kind of computer. But before even that, a word processor was an office worker and word processing was an idea about typing pools that ran like factory assembly lines. That idea gained traction after the American Management Association and an obscure publication called Administrative Management began to promote it. Companies had invested large sums in specialized equipment to make their manufacturing workers more productive. Office work, in contrast, remained inefficient. According to the American Management Association, word processing could solve this. Personal secretaries would be eliminated and their work transferred to word processors in a central typing pool. This higher volume of work would justify investment in expensive technology, further boosting their productivity.

Page 2

The word processing idea was tied to machinery, but not originally to computers. In 1971 IBM began to call its dictating machines and automatic typewriters “word processing machines” in its advertisements.

Page 3

The term was not widely used until the early 1970s, taking off just as Cuisinart’s food processors began to appear in American kitchens. By this point, the falling cost of interactive computing was making it more cost effective to use computers to store, edit, and print various kinds of text.

Page 3

Legal documents were the first big market for computer text editing, as they were complicated, went through many drafts, and had lots of money attached to them.

Page 3

The market for text editing systems designed for office work grew separately but parallel to the enthusiast market for personal computers. The template was set in 1973 by Vydec, a start-up led by former Hewlett-Packard engineers, which offered the first system able to display a full page of text on screen, store it on floppy disk, and print it. Its small and relatively affordable daisywheel printer, a recent invention, was named after a disk that rotated to punch the correct letter. This produced typewriter-quality output, albeit slowly a…

Page 3

Thanks to the arrival of microprocessors and the rapidly falling cost of RAM chips, many other firms had entered the market for video screen word processors by 1977, including NBI (“Nothing But Initials”) in Colorado, Lanier in Atlanta, and CPT in Minneapolis. Lanier was initially the most successful, but the lion’s share of the corporate word processing market was eventually taken by Wang Labs.

Page 3

The Wang Word Processing System (WPS), shown in figure 8.1, was unveiled at a trade show in New York in June 1976 and, according to some accounts, nearly caused a riot.

Page 4

CP/M remained popular well into the 1980s for cheaper personal computers, particularly portable systems. The first successful portable was the Osborne 1, released in 1981. It looked a lot like a sewing machine: a bulky box with a handle on one end (figure 8.2). Releasing catches detached a keyboard to reveal two floppy disk drives and a tiny five-inch screen. Its portability was limit…

Page 6

There was exactly one great reason for a business user to get hold of an Apple. VisiCalc launched in October 1979. Its creators were Daniel Bricklin and Robert Frankston, who had met while working on Project MAC at MIT. Bricklin had worked for Digital Equipment Corporation and in the late 1970s attended the Harvard Business School. There he came across the calculations that generations of business school students had to master: performing arithmetic on spreadsheets: rows and columns of numbers, typically documenting a company’s performance for a set of months, quarters, or years. He recalled one of his professors posting, changing, and analyzing such tables on the blackboard, using figures that his assistant had calculated by hand the night bef…

Page 8

Some software publishers worked like book publishers, paying royalties to the authors of programs, and others purchased the rights for a flat fee. They began as tiny operations, duplicating disks and packing them into ziplock bags, to be sold with ads in specialist magazines or through the network of dealers that sprang up to handle the new machin…

Page 8

VisiCalc played wonderfully to the Apple’s strengths and minimized its weaknesses. Fylstra noted that “the Apple II was essential to VisiCalc.” Spreadsheets were small, so its limited memory capacity and disk storage was not a handicap, as it would be for database work. They used text mostly for labels, so the all-caps display was not a problem (figure 8.3). The screen served as a scrollable window onto a larger spreadsheet, so the forty-column display worked much better for spreadsheets than word processing. Because the Apple drove the display directly rather than sending text to a terminal like a CP/M machine or timesharing system, the spreadsheet experience was smoother on it than it would have been on a more expensive platform. This fluidity encouraged users to play around with models and data to answer what if questi…

Page 9

The apparently objective computer output and attractive charts helped spreadsheet users to present their ideas forcefully, but as Levy noted, spreadsheets had an important difference from earlier modeling software: they hid the formulas created by users. Printed output showed the numbers produced by the model but not the assumptions used to generate them, making it easy to tweak the formulas to get the desi…

Page 10

VisiCalc symbolized a shift toward packaged application software as the driving force behind personal computing. Unlike mainframe users, companies buying a personal computer were not usually going to hire a team of programmers to write custom software for it. Neither could most users realistically satisfy their needs by writing their own programs in BASIC. Hardware was getting cheaper all the time, but programmers only got more expensive. The future lay in packages like this that could sell hundreds of thousands, and eventually millions, of copies and so spread their development cost over a huge use…

Page 10

Since the 1950s, capable floating-point hardware support had been the defining characteristic of large scientifically oriented computers. The 8088 used in the original PC did not support floating point and its performance on technical calculations was me…

Page 15

Hard disks introduced new complexities into personal computing, requiring users to manage their directory structures, and opened new markets for hardware and software to back up their contents. The popular Norton Utilities package, created by Peter Norton, included programs to restore accidentally deleted files, navigate directory structures, and optimize hard disk performan…

Page 16

With the PC’s announcement, IBM also announced the availability of word processing, accounting, games software, and a version of VisiCalc. Mitch Kapor, who had previously developed add-ins for VisiCalc and knew exactly how it could be improved, partnered with an experienced programmer, Jonathan Sachs, to start a rival firm, the Lotus Development Corporati…

Page 17

Lotus 1-2-3 was so popular that it inspired several clones, which copied the Lotus menu structure and macro command language. This raised a novel legal question: could copyright law be stretched to protect the look and feel of a program as well as its actual code? Lotus was initially successful when it sued the makers of a blatant clone, called The Twin, but eventually lost in another case (Lotus v. Borland) that established that command menus were not covered by copyright protection.

Page 18

WordStar was the most popular word processing program for the IBM PC for the first few years. As with VisiCalc, it was a straight conversion, in this case from CP/M, which did not take full advantage of the capabilities of the PC.

Page 18

WordPerfect release 4.2 in 1986 set the standard for the rest of the 1980s and finally overtook WordStar in sales. At its peak around 1990, WordPerfect controlled around half the market for word processing software.

Page 19

Software companies ran campaigns to discourage piracy. Some hoped that the hefty manuals supplied with their packages and the telephone support they provided to registered users would discourage piracy. A flood of independent guidebooks and the increasing ubiquity of photocopiers made that less of a problem. Lotus and several other leading firms turned to copy protection, introducing deliberate errors into floppy disks that users would be unable to reproduce with an ordinary disk drive. The floppy disk was needed even when the program was loaded from a hard drive. That was unpopular with users—the special disks didn’t always work, and if they were lost or damaged the program would be useless. Software companies eventually abandoned these schemes in the face of complaints from large companies forced to manage thousands of key di…

Page 20

Although almost all of today’s personal computers and most servers are the direct descendants of the IBM PC, not a single one of the billion or so IBM-compatible machines sold from 2015 to 2019 was made by IBM. What began in 1981 as a single proprietary machine had by the late 1980s become the basis for a worldwide industry of thousands of companies that collectively produced millions of PCs every…

Page 21

In the long term, only MS-DOS computers that were fully compatible with the IBM PC could survive. Producing a compatible PC was harder than licensing MS-DOS. The core of what made a computer an IBM PC was the BIOS code stored on a ROM chip. IBM owned that code. It relied on copyright, which protects written works, rather than patents, which protect inventions, to prevent the duplication of its …

Page 23

AST wrote its own BIOS, but even that became unnecessary after Phoenix Technologies reverse engineered the IBM BIOS and started selling compatible chips as a standard part. The PC motherboard became just one more commodity available from a dozen different suppliers. The floodgates opened for PC …

Page 24

The most portable computer with a real keyboard was Radio Shack’s TRS-80 Model 100, developed by Kyocera of Japan (figure 8.7). It ran for about 20 hours off standard batteries and weighed only three pounds. Achieving those goals involved some significant compromises—no built-in disk drives, only 8 to 32 KB of memory, and a screen limited to eight lines of text. Its most enthusiastic users were journalists, who had previously dictated copy over telephone lines.

Page 32

The PC’s position at the end of the 1980s was unassailable. The IBM PC had evolved from a single model to the basis for a new kind of computing.

Page 34

By the end of the 1980s, most PC companies purchased standard parts and screwed them together.

Page 35

Only 15 percent of American households owned a computer in 1990. Among African American households, the figure was 7 percent. Even among the richest 20 percent of households, two thirds had not yet made the purchase.

== The computer becomes a graphical tool ==

On January 24, 1984, Steve Jobs, wearing a double-breasted navy blazer and garish green bow tie, took the stage at De Anza College (close to Apple’s headquarters) and pulled a tiny Macintosh computer out of a bag. The computer sprang to life, proclaiming itself “insanely great” before running a slide show on its crisp monochrome screen to demonstrate its new graphical user interface (GUI). In the popular imagination this moment divides the history of personal computing into two eras: the dark ages of text-based computing, inherited from timesharing systems, versus the enlightened world of windows and g…

Page 1

Conventional personal computers could display graphics as well as text, but although that power was exploited by individual programs such as video games and charting software, it was ignored by MS-DOS.

Page 1

Throughout the 1980s, computers with graphical user interfaces had only a tiny share of the market and were much more expensive than mainstream personal computers, which is why we could tell the story of mainstream office computing through 1989 without mentioning them.

Page 1

The most obvious new feature of the Macintosh was its graphical user interface (GUI). Its key elements were invented over the course of a few years in the mid-1970s by a small team working in a single research facility, Xerox’s Palo Alto Research Center (PARC). But, less obviously, graphical computing as developed at PARC depended on new hardware capabilities—powerful processors, large memories (the lack of which crippled the first Macintosh), and high-resolution screens. To explain the diffusion of graphical user interfaces, we must understand the spread of those capabilities, initially to a new generation of microprocessor-based personal computers marketed as graphics workstati…

Page 2

Rather than perfect timesharing, the PARC team was determined to develop a new kind of interactive computing experience. Development of the hardware and software for a new computer, the Alto, was at the center of the lab’s work from 1972 onward.

Page 3

Much of the architecture of personal computers powerful enough to support graphical user interfaces came from minicomputers such as the DEC VAX. But even when equipped with specialized graphics hardware, VAX machines were never intended for personal use. The Xerox PARC team had started by designing and building what was essentially a personal minicomputer. Each Alto coupled high-resolution graphics hardware directly to a powerful processor with, by the standards of the day, an absurdly large me…

Page 3

In fact, the Alto had a novel architecture in which processor capabilities were spread around the machine rather than clustered on one circuit board. Each Alto had its own hard drive with a removable platter, like those used with IBM mainframes.

Page 3

Researchers at PARC refined the mouse and coupled it with a unique high-resolution screen, arranged in portrait orientation to mimic a sheet of paper. This was bitmapped, so that its almost half-million pixels could be manipulated by flipping bits in memory.

Page 4

Unlike an ordinary book, it would be dynamic, which to Kay meant it had to be highly interactive but easy and fun, unlike existing systems such as Doug Engelbart’s NLS.

Page 4

Smalltalk was designed with flexibility and interactivity in mind, to put graphical objects of different kinds on screen and interact with them. Traditional programming languages assumed a text-based user interface. Applications coded with them were controlled with typed commands or selections from text menus. The program would print a list of options and wait for users to push a key to select one. Kay wanted the Dynabook to feel personal and interactive, displaying pictures for its users to interact wi…

Page 4

As well as a new kind of user interface, Smalltalk codified and began to spread a new approach to programming languages called object-oriented programming.

Page 5

Traditional languages define data structures separately from the code that manipulates them. The new approach let programmers produce highly modularized code, in which data structures are defined together with the operations that programmers use to access their values or update their contents. These hybrid bundles of data and code were called objects by Kay. Each object was an instance of a standard class. New classes could be defined as special cases of existing ones, with additional capabilities or characteristics.

Page 5

Because all data was held inside objects, it could be manipulated only by using the methods explicitly provided in the code defining the corresponding classes. That enforced modularity and made it easier to reuse code between systems and to maintain systems. Smalltalk conceptualized the interactions between these objects as a kind of dialog achieved through the exchange of messages, an idea captured in the name Kay gave the lang…

Page 5
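
The two preceding passages describe the core ideas of object-oriented programming. As a rough illustration (a minimal sketch in Python rather than Smalltalk, with invented class names), data and the code that manipulates it are bundled into classes, new classes are defined as special cases of existing ones, and an object's state is touched only through its methods:

<syntaxhighlight lang="python">
# Minimal sketch (Python, not Smalltalk) of the object-oriented ideas described above.

class Shape:
    """Base class: every shape holds its position and knows how to describe itself."""

    def __init__(self, x, y):
        self._x = x          # state is held inside the object ...
        self._y = y

    def move(self, dx, dy):  # ... and changed only through its methods
        self._x += dx
        self._y += dy

    def describe(self):
        return f"{type(self).__name__} at ({self._x}, {self._y})"


class Circle(Shape):
    """A special case of Shape, adding an extra attribute and capability."""

    def __init__(self, x, y, radius):
        super().__init__(x, y)
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2


c = Circle(0, 0, 2)
c.move(3, 4)                 # "send the move message" to the circle
print(c.describe(), c.area())
</syntaxhighlight>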

Traditional interface methods were, to use a term popularized by Kay, modal. Users issued the desired command, which put the system into a mode. What it did in response to their next input would depend on the mode. For example, in delete mode, selecting a file would delete it. In edit mode, the same action would open it for editing. Kay favored a different interface style, in which users would first select the object they wanted to work on and then manipulate it to accomplish the desired operation. Providing that kind of open-ended interaction in a conventional programming language would be frustrating and inefficient—the program would have to be structured as a loop that constantly checked whether the user had just carried out each of a huge number of possible actions. In Smalltalk, the programmer could specify the code to…

Page 5

run when a particular region of the screen, button, or scroll bar was triggered, and then the system itself would figure out what objects should be alerted in response to a par- ticular click. This was called event-driven code.

Page 6
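
A small sketch of what "event-driven code" means in practice: instead of one big loop polling for every possible user action, handlers are registered for named controls and a dispatcher routes each click to the right one. The control names and dispatcher below are illustrative, not PARC's actual design.

<syntaxhighlight lang="python">
# Event-driven dispatch sketch: register handlers, let the system route each event.

handlers = {}

def on(control, callback):
    """Register the code to run when a control is triggered."""
    handlers[control] = callback

def dispatch(event):
    """Called once per user action; routes it to whichever handler was registered."""
    callback = handlers.get(event)
    if callback:
        callback()

on("save_button", lambda: print("saving document"))
on("scroll_bar", lambda: print("scrolling view"))

for click in ["scroll_bar", "save_button"]:   # simulated user input
    dispatch(click)
</syntaxhighlight>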

Smalltalk went beyond Lisp by providing what was later called an integrated development environment (IDE), which included a text editor, a browser to explore the hierarchies of classes defined in code, and debugging tools to examine the current state of objects as programs executed.

Page 7

Object-oriented programming was harder to grasp than some of the other novel features of Alto, such as mice and graphical controls, and spread more slowly. Some high-profile languages of the late 1970s, such as Niklaus Wirth’s follow-up to Pascal, Modula-2, were designed to support increased modularity, but the full object-oriented approach was little known outside PARC until an article about it appeared in the August 1981 issue of Byte ma…

Page 7

Gypsy text editor produced by Larry Tesler and Timothy Mott in 1976. Gypsy took the capabilities of a previous program, Bravo, developed by a group including Butler Lampson and Charles Simonyi, and reworked it with the first user interface to resemble that of now-standard systems such as Microsoft Word. For example, to add text, users simply used the mouse to set an insertion point and then typed. To copy text, one highlighted it with the mouse and then pushed the Copy key. Xerox researchers, following Kay, called this style of operation modeless because the results of triggering a function were consistent and did not depend on a previously selected command mode. Like Bravo, Gypsy exploited the graphical screen of the Alto to display text with different fonts, accurate spacing of letters, embedded graphics, and formatting features such as bold and italic text. Computerized publishing expert Jonathan Seybold dubbed this what you see is what you get (WYSIWYG), repurposing a catchphrase of Flip Wilson, the first African American comedian to make regular television appearances. Wilson used the phrase in character as Geraldine Jones, a brashly self-confident woman, as winking acknowledgement of the tension between his cross-gender performance and Geraldine’s lack of pretense. The PARC staff borrowed it to define a simpler form of representational fidelity: the printed output would match the visual content of the screen as close…

Page 8

This was made possible by another PARC invention, the laser printer. This merged the printing and paper handling mechanisms from a high-end Xerox copier with a powerful embedded computer able to draw high resolution images onto the copier drum with a laser, replacing the usual optical mechanism used to create an impression from the source document.

Page 8

By the late 1970s, a new buzzword, distributed computing, had emerged to describe the idea of having big and little computers work together over computer networks—for example, using a minicomputer or personal computer to

Page 8

By 1978, a program called Laurel had been developed for the Altos. This introduced what later became the standard way of working with email: users downloaded their messages to their personal computers to file and read them. Replies were uploaded back to the ser…

Page 9

Approaches of this kind were called client-server computing—a program running on one computer (the client) made a request for a program running on another computer (the server) to do something, that is, to provide a service.

Page 9
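
A minimal sketch of the client-server pattern using only Python's standard library (this illustrates the general idea, not the PARC or Laurel software): one program provides a service over the network, and another requests it.

<syntaxhighlight lang="python">
# Client-server sketch: a tiny HTTP "mailbox" server and a client that fetches from it.

import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MailboxHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"1 new message: 'Lunch at noon?'"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)          # the service: hand the messages back

server = HTTPServer(("localhost", 0), MailboxHandler)   # port 0 = pick a free port
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client requests its messages from the server, in the style of Laurel-like mail clients.
with urllib.request.urlopen(f"http://localhost:{port}/inbox") as response:
    print(response.read().decode())

server.shutdown()
</syntaxhighlight>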

Workstation companies targeted small markets that would not support the cost of developing new technologies. Instead, they depended on what was called an open systems approach—using standard processors, memory chips, networking standards, peripheral connections, and so on. Combined with the inherent price-performance advantages of microprocessor-based systems over minicomputers, this gave them a huge price-performance adva…

Page 12

Lisa had exactly the core capabilities that would define the most powerful personal computers of the next decade: hard disks, networking, a graphical user interface, and slots for expansion. Users could load several applications simultaneously, cutting and pasting between their windows. That wasn’t quite multitasking, as background applications were suspended, but the operating system did prevent applications from overwriting eac…

Page 14

The desktop publishing industry began in 1985 with the launch of Aldus PageMaker, designed by Paul Brainerd (see figure 9.4). Brainerd had previously developed computerized production systems used by newspapers, and he recognized that a large potential market had opened up now that personal computers with the capabilities needed for page design were available. PageMaker let amateurs tinker with fonts and graphics until their newsletters or posters looked just right (to them, if not to trained designers). Professionals could produce slick-looking pages more rapidly than ever before.

Page 17

PageMaker worked with the new Apple LaserWriter printer. This cost $6,995, far more than the Macintosh it plugged into, yet still aggressively low by Apple’s standards because rendering pages described in Adobe’s new PostScript language required the printer to hold a more powerful processor and more memory than the computer did. As one reviewer concluded, “I can’t count the number of times I’ve shown someone my Macintosh and they’ve said: ‘But it’s just a toy. . . .’ Now at least I can show PageMaker to them and say ‘Let’s see your IBM do that.’” Thanks to PageMake…

Page 17

Macintosh, unlike Lisa or Star, offered a compelling business case to a small but well-defined group of users. Graphical computing was still too expensive for general office use, but for people who needed to produce high-quality printed output, it was a bargain if it eliminated the cost and delays of working with a traditional print shop…

== The PC becomes a minicomputer ==

By the late 1990s, the PC had killed the minicomputer and the graphics workstation. Yet from the viewpoint of technology and architecture, the situation is the reverse: the personal computer as we know it today was invented over the course of the 1990s, not in 1981 with IBM’s first model or in 1977 by Apple. The PC architectures of the 2000s have more in common with those of 1980s minicomputers than they do with MS-DOS or CP/M. Since 2000, Windows has been based on an operating system designed by a former DEC engineer and patterned after a minicomputer system. From this perspective, the minicomputer never died. Rather, minicomputers shrank and replaced PCs without their users ever realiz…

Page 1

One obvious limitation of DOS was that it forced programmers who wanted to take advantage of the increasingly powerful graphical capabilities of PCs to bypass it to deal directly with the underlying hardware.

Page 2

Windows and GEM were designed to work with special Maci applications, raising a further problem: because programs were written for DOS, few users ran Windows or GEM; but because few users ran Windows or GEM, most programs were written for DOS.

Page 4

Windows 3.0 was a breakout hit, the product that finally shifted mainstream computer users into the age of the graphical user interface (figure 10.1). Windows was still not as elegant as the Macintosh system, but Apple charged a hefty premium. Someone looking for a new computer could get a bigger hard drive, larger screen, and more memory by choosing a Windows computer. Windows worked well enough to get work done with a growing number of powerful application programs that closely resembled their Macintosh coun…

Page 4

By the end of the decade, Intel controlled the evolution of the PC hardware platform almost as completely as IBM had controlled it in the mid-1980s. Intel used its new dominance to speed the adoption of some new technologies, such as the universal serial bus (USB), by building them into its chipsets. USB was a boon to computer users, replacing custom connectors and controllers for peripherals such as scanners, printers, keyboards, mice, and external disk drives with a single compact and flexible socket. Intel used the same power to derail the adoption of other technologies, such as high-speed IEEE 1394 (FireWire) peripheral connecti…

Page 18

Since the 1960s, it had been common practice for the processor to implement complex instructions with microcode. When a programmer asked the VAX to evaluate a polynomial, that triggered a long series of simpler internal steps. Above all, complex instruction sets were supposed to make computers run faster, in part by reducing the number of times the computer had to fetch and decode new co…

Page 20
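
As a worked illustration of the "long series of simpler internal steps" hidden behind a complex instruction such as the VAX's polynomial-evaluation instruction, the hypothetical Python sketch below reduces the same job to repeated multiply-and-add steps (Horner's rule), which is roughly what the microcode stepped through internally:

<syntaxhighlight lang="python">
# One "complex instruction" worth of work, spelled out as simple steps.

def poly(x, coefficients):
    """Evaluate c0 + c1*x + c2*x**2 + ... using repeated multiply-and-add."""
    result = 0.0
    for c in reversed(coefficients):
        result = result * x + c      # one simple step per coefficient
    return result

# 2 + 3x + x^2 evaluated at x = 5 -> 42.0
print(poly(5.0, [2.0, 3.0, 1.0]))
</syntaxhighlight>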

Those assumptions had been long accepted, but in the mid-1970s John Cocke of IBM argued that a computer using more and simpler instructions to complete a given task would outperform one with fewer and more complex instructions.

== The computer becomes a universal media device ==

From the 1980s to the early 2000s, two processes ran in parallel. On one track, the personal computer gained new capabilities. With them, it inched closer to becoming a universal media device—making telephone calls, playing and storing audio files, playing movies, storing and editing photographs, and playing games. On the other, less visible track, computers were making their way inside music players, televisions, cameras, and musical instruments. They dissolved the technologies inside but left the husk intact.

Page 1

A theoretical breakthrough came in 1965, with the publication by James Cooley and John Tukey of a method of carrying out a Fourier transform of a signal that was much faster and thus more practical than classic methods. In the words of computer scientist Allen Newell, the discovery of the fast Fourier transform (FFT) “created the field of digital signal processing and thus penetrated the major bastion of analog computation.” The FFT allowed one to decompose a complex signal into combinations of basic periodic frequencies, just as a musical chord played on a piano is the result of hammers hitting several strings, plus their harmonics. Once decomposed, a computer can process the signal in any number of ways. Over time, these techniques migrated from large, expensive computers, like those used to handle communications with space probes, into cheap personal computers and consumer electroni…

Page 2
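
A small sketch of the decomposition described above, using numpy's FFT routines (the library and the example signal are assumptions of this illustration, not something from the book): a signal built from two pure tones is resolved back into its component frequencies, much as a chord resolves into the individual strings that produced it.

<syntaxhighlight lang="python">
# FFT sketch: recover the two frequencies that were mixed into a signal.

import numpy as np

sample_rate = 1000                       # samples per second
t = np.arange(0, 1, 1 / sample_rate)     # one second of samples
signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

spectrum = np.fft.rfft(signal)           # fast Fourier transform
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The two strongest components come out at 50 Hz and 120 Hz.
strongest = freqs[np.argsort(np.abs(spectrum))[-2:]]
print(sorted(float(f) for f in strongest))   # -> [50.0, 120.0]
</syntaxhighlight>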

Homer Dudley demonstrated a keyboard-driven speech synthesizer at the 1939 World’s Fair. By the 1960s, researchers had built computerized speech synthesizers able to automatically turn text into recognizable speech.

Page 5

There was nothing new about the electronic transmission of pictures. Since the 1920s, photojournalists had used wire transmission, over public telephone lines, to rush images from and to newspaper offices. Those analog machines fixed the photograph to a drum and scanned it in a spiral pattern. As historian Jonathan Coopersmith has shown, entrepreneurs had been trying just as long to turn facsimile transmission into a general-purpose method for delivering business documents. By the 1960s Xerox had a viable service, but because its analog machines were built around high-precision components, they remained too expensive to really take…

Page 14

The Group 3 digital coding scheme was devised in 1977 in Japan around the potential of cheap microprocessors, in what Coopersmith called “the most important event in fax history since 1843.” Group 3 compressed each scanned page to transmit digitally in as little as fifteen seconds, much faster than earlier analog fax machines taking up to six minutes. When it was formally accepted in 1980, about 250,000 fax machines were in use in the United States. By 1990 there were five million. In Japan, fax was even more widely used, as written Japanese was hard to represent in telegram, telex, or email but easy to transmit as an image.

Page 15
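
Group 3 actually Huffman-codes run lengths; the simplified run-length sketch below, with made-up data, shows why a mostly white scan line compresses so dramatically.

<syntaxhighlight lang="python">
# Why a scanned page compresses well: a fax line is mostly long runs of white pixels,
# so storing (value, count) runs takes far less space than storing every pixel.

def run_lengths(pixels):
    """Turn a row of 0/1 pixels into (value, count) runs."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1
        else:
            runs.append([p, 1])
    return runs

# A 1728-pixel scan line (standard fax width) with one short black mark.
line = [0] * 800 + [1] * 12 + [0] * 916
print(run_lengths(line))                          # -> [[0, 800], [1, 12], [0, 916]]
print(len(line), "pixels ->", len(run_lengths(line)), "runs")
</syntaxhighlight>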

Scanners were a popular, but expensive, part of desktop publishing operations in the late 1980s, along with Macintosh computers, PageMaker software, and laser printers. By the mid-1990s, the price of color scanners had dropped to a few hundred dollars and hard drives were big enough to hold large image collections. Scanners became popular consumer add-ons, and families began to digitize their photo coll…

Page 16

Industrial grade scanners let businesses scan and destroy incoming paperwork, converting it to electronic images. Specialist scanners, fitted with devices to turn pages, were used to digitize entire library collections by groups such as the Internet Archive and Google’s Books project.

Page 16

In 1945, working on the First Draft EDVAC design, John von Neumann was fascinated by the potential of

Page 16

the iconoscope, an electronic tube then used in television cameras, as a storage device. Even the term pixel, introduced with the transition to digital images, was a contraction of picture element, a term used since the early days of experimental television.

Page 17

Because it was very compact and power efficient, high-capacity flash memory was a crucial enabling technology for the creation of new portable devices.

Page 18

Early memory cards held only a few megabytes, needing aggressive compression to hold even a dozen images. That was provided by a new image format, the JPEG (named for the Joint Photographic Experts Group). In 1991, when libjpeg, a widely used open source code module for JPEG compression, was released, it took a powerful PC to create these files. By the late 1990s, the necessary computer power could be put into a camera, although early models would be tied up for several seconds processing each image. Once the memory card was full, users moved the files onto a computer. Digital photography was another of the practices made possible by the arrival of PCs with voluminous hard drives as a standard feature of middle-class households.

Page 19
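
As a rough illustration of the quality-versus-size trade-off behind in-camera JPEG compression, the sketch below uses the Pillow library (which relies on libjpeg) on a synthetic image; the sizes it prints are illustrative only, and real photographs compress less dramatically than a smooth gradient does.

<syntaxhighlight lang="python">
# JPEG size/quality sketch: lower quality settings squeeze the same image into fewer bytes.

import io
from PIL import Image

# Build a 640x480 RGB test image (a simple gradient) to stand in for a photograph.
img = Image.new("RGB", (640, 480))
img.putdata([(x % 256, y % 256, 128) for y in range(480) for x in range(640)])

print("uncompressed:", 640 * 480 * 3, "bytes")
for quality in (95, 75, 25):
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)   # lossy compression, tunable quality
    print(f"JPEG quality {quality}: {len(buf.getvalue())} bytes")
</syntaxhighlight>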

When digital video disc (DVD) players arrived in 1997, initially priced around $1,000, they became the fastest-adopted consumer devices in American history. By 2003, half the homes in the United States had a DVD player, and players could be purchased for as little as $50. DVD was, in effect, the extension of CD technology to play digital video as well as audio. The discs were the same size, and DVD players could also handle…

Page 20

The convergence of computer and television technology was complete. Televisions had the same range of digital inputs as computer monitors, displayed similar resolutions, and were built from the same technologies. In fact, televisions were themselves computers. As the cost of powerful computer chips fell, even affordable televisions began to incorporate smart TV features. They had USB ports to play videos and music from hard disk drives, Ethernet ports, and Wi-Fi connections to access computer networks, and they let users download and run appl…

Page 21

What made it practical for users to start building up music libraries was the spread of effective compression technology. The MP3 file format could compress a music CD to perhaps 20 MB. That sacrificed audio quality, but it still sounded better than a tape copy.

Page 22

Launching that model, Steve Jobs was able to boast that Apple had sold 110 million iPods. It was the firm’s most popular computer, outselling the combined sales of all Macintosh models more than ten times over.

Page 26

The creation of mobile devices, many built around licensed ARM processor cores, was made easier by the maturation of another technology: general-purpose field programmable gate array (FPGA) chips that could be programmed electronically for particular applications. This was a much cheaper process than producing custom silicon and was ideal for prototype devices or equipment with small production runs, for which conventional ASIC chips would not be v…

Page 27

Doom introduced the concept of the game engine, by separating the code needed to manage events in the game world and present them to players from the “assets” such as objects, monsters, and tunnels stored in data files. Infocom and Sierra On-Line had taken a similar approach to adventure games, but the high-performance action games had previously integrated the functions closely. Doom required elaborate and highly reusable graphics code, making the engine approach to software engineering (already established in areas such as expert systems, databases, and graphics rendering) highly effecti…

== The computer becomes a publishing platform ==

By June 1993, Andreessen and Eric Bina, a Unix staff specialist at the center, had released a test version of a browser that they later named Mosaic. Mosaic’s seamless integration of text and images made the potential of the Web instantly apparent (see figure 12.3). The first Mosaic users were people who already had powerful Unix workstations and fast Internet connections, found mostly in universities and research labs. Its availability accelerated the Web’s growth. A web crawler program created by an MIT student discovered only 130 active Web servers in mid-1993, but 623 when it was run again at the end of that…

Page 6

The Web was just a thin layer on top of the Internet’s existing infrastructure. Because there was no central database of hyperlinks, users could follow links out from a page but not go the other way to see everything that linked to a page. Between the time a link was created and clicked, the page to which it pointed might have been edited to remove relevant information or deleted completely. Most of the external links on Web pages eventually stop working. Ted Nelson and Doug Engelbart were among the Web’s harshest critics. Nelson didn’t even consider the Web to be true hypertext. Xanadu was supposed to hold old versions of a page forever, so that the linked material would always be available. Even Tim Berners-Lee complained that only half of his vision had come true with commercial browsers like Netscape. He initially wanted a Web that was as easy to write to as it was to surf.

Page 10

Google’s big advantage came in figuring out how to rank the pages that held a search term, which it did by favoring websites that had been linked to by large numbers of other sites. Spam pages were unlikely to be linked to and therefore fell to the bottom of the rankings. This method was inspired by a system for the retrieval of scientific information developed by Eugene Garfield, called the Science Citation Index. It indexed scientific papers and ranked their impact by noting how many other papers referenced them.

Page 13
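
A toy version of link-based ranking in the spirit of PageRank (the tiny link graph and damping constant below are invented for illustration; this is not Google's production algorithm): pages that are linked to by other well-linked pages rise to the top, while an unlinked "spam" page sinks.

<syntaxhighlight lang="python">
# Toy link-based ranking: repeatedly pass each page's score along its outgoing links.

links = {                      # page -> pages it links to (hypothetical graph)
    "news":   ["search", "shop"],
    "blog":   ["search", "news"],
    "shop":   ["search"],
    "spam":   ["shop"],        # nothing links to the spam page
    "search": ["news"],
}

damping = 0.85
rank = {page: 1 / len(links) for page in links}

for _ in range(50):            # iterate until the scores settle
    new_rank = {page: (1 - damping) / len(links) for page in links}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page:7s} {score:.3f}")    # well-linked pages score high, spam scores lowest
</syntaxhighlight>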

Everything that makes publishing to the Web easy makes indexing or cataloging it hard. Whether with humans, as Yahoo used to do, or with algorithms, as Google does, that is an enormous task requiring vast amounts of money and human talent.

Page 14

The economics of conventional publishing were comparatively straightforward: publishers made money on each book or record sold. Selling more copies meant making more money, so that each hit underwrote the cost of many flops. In contrast, a popular website ran up huge bills for network bandwidth and servers without receiving any income from readers to cover this expense. Grabbing more readers meant bigger losses, not bigger profits.

Page 14

Although Windows NT could do a creditable job serving Web pages from cheap, standard PC hardware, it never dominated the market for servers the same way it did for desktop operating systems. Most early websites ran on Unix servers or on BSD, which had evolved from a package of Unix upgrades to a free-standing alternative with no AT&T code. Unix systems were expensive, which drove up the cost of operating Internet sites. Instead of shifting to Windows and PC hardware to reduce costs, Web companies saved even more by relying on the free Linux operating sy…

Page 28

By the early 2000s the other key software components of a Web application server were also increasingly likely to be free software. The first to gain dominance was Apache, which has been the most widely used Web server since 1996.

Page 28

Most Web applications run on the stack of software called LAMP, which stands for Linux, Apache, MySQL, and PHP, a system that gradually replaced Perl as the default choice for coding Web applications.

Page 29

Microsoft was never able to turn the Web into a proprietary system because it couldn’t match its domination of the browser side of the Web with similar control over the servers that generated Web pages. If Microsoft’s Internet Information Server had also held a market share of over 90 percent, then Microsoft could have gradually shifted the Web from a system based on open standards to a system in which an all-Microsoft stack of software would suffice. As most websites used free software, even the success of Internet Explorer did not give Microsoft the power to unilaterally set Web standards for its …

Page 30

Firefox was the first open source desktop computer application widely used by Windows and Macintosh users. Its triumph signaled a shift in the computing landscape. Microsoft’s hold on desktop operating systems and office applications remained secure, but the firm’s attempts to dominate and enclose the Internet were visibly crumbling.

== The computer becomes a network ==

Rather than retrieving and displaying pages stored as static files, browsers have become a universal interface for online applications running in the cloud—a distributed network of gigantic data centers each composed of thousands of computers.

Page 1

By the 1990s, many mainframe and minicomputer firms like Unisys (the heir to Univac) and Data General had reoriented themselves to sell powerful servers based on standard processor chips. The spread of the Internet expanded this market. Major websites soon outstripped the capabilities of any single server, even a late-1990s flagship Unisys server with thirty-two Intel processors. Companies set up farms of servers running Web applications, with a load balancing system to route each new request to the least busy server. Storage area networks provided ultra-high-speed connections between servers and disk pools. The technological lines separating mainframes, minicomputers, and personal computers were st…

Page 2
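
A minimal sketch of the "route each new request to the least busy server" idea, with made-up server names and an in-memory request count standing in for real load measurement:

<syntaxhighlight lang="python">
# Least-busy load balancing sketch for a small server farm.

active = {"web1": 0, "web2": 0, "web3": 0}   # requests currently in flight per server

def route(request_id):
    server = min(active, key=active.get)     # pick the least busy server
    active[server] += 1
    print(f"request {request_id} -> {server}  (load now {active})")
    return server

def finished(server):
    active[server] -= 1                      # called when a server completes a request

for i in range(5):
    route(i)
finished("web1")
route(5)                                     # goes to web1, which just freed up
</syntaxhighlight>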

The new approach was pioneered by Google. It became one of the world’s most valuable companies by providing much better results than its competitors in two areas: Web search and Web advertising. Its success is usually attributed to superior algorithms, particularly the PageRank algorithm its founders created as graduate students. That gives only a part of the picture. Google’s algorithms provided better search results to its users, and its advertising system made more money by selecting relevant ads that users might click. But running those clever algorithms consumed more processor cycles and RAM than the simpler approaches of its comp…

Page 3

Conventional servers were expensive because they used more reliable, higher performance components. Google achieved reliability and performance with an extra layer of software.

Page 3

More processor power and network bandwidth is now devoted to transmitting and decompressing streaming video than to any other task.

Page 12

Microsoft’s rivals had met little success competing with Windows, but Java raised the possibility of taking down the personal computer itself. Sun, Oracle, and IBM joined together to promulgate a new standard for the network computer. This was a hybrid between a personal computer and a terminal. Like a terminal, it worked only when connected to a network and had no disk drive of its own. Like a personal computer, it had a capable processor to run Java programs locally rather than rely entirely on the processor power of the server as terminals d…

Page 19

Historians, having long memories, like to quibble about exactly how new the model really was. Back in the 1970s, for example, timesharing companies were popular more for the access they offered to online applications than for simple access to an interactive computer. Terminals did not need to have any software installed on them. With a longer perspective, enthusiasm for freestanding personal computers in the 1980s and for client-server applications in the 1990s may look like an odd departure from the historical norm. But as the story of Java shows, it took considerable work to remake Web browsers into a smooth and capable interface for online applications, able to serve as a modern replacement for the text terminals of t…

Page 21

Back in the 1950s, coding had been identified as the most routine, and worst paid, aspect of programming. That work was soon automated by software tools, and the job title went out of use during the 1960s. Title inflation followed—programmers were called analysts or software engineers. The programming staff at firms like Google are usually called engineers, despite efforts by the traditional engineering professions to reserve the title for people achieving the status of professional engineer (a four-year accredited degree and professional examination, followed by a period of supervised work experience, culminating in a state licen…

== The computer is everywhere and nowhere ==

By that point, a smaller, cheaper handheld computer had appeared. The Palm Pilot, released in 1996, was less technically ambitious than the MessagePad in every way. This was clearest in the handwriting recognition. Newton tried to recognize cursive text written anywhere on the screen. Palm required users to write characters one at a time in the input box, letters on the left, numbers on the right. They weren’t even ordinary letters: each was replaced by a stylized representation in a new alphabet called Graffiti. Once users had adjusted to this system, text entry was reliab…

Page 4

GPS is operated and controlled by the US Air Force. The US military reserves some capabilities for its own use, but regular GPS service is free to all users, laying the foundation for commercial exploitation by companies like Apple, Google, and Uber. In this regard, GPS is, like the Internet itself, a government-sponsored technology that became the foundation for huge private wealth. The European Union, Russia, and China have each developed or planned to develop similar satellite-based systems (Galileo, GLONASS, and BeiDou, respec…

== Epilogue: A Tesla in the valley ==

Once computers became part of every infrastructure, the idea of the computer as a machine in the tradition of ENIAC, a self-contained device whose users tackled different jobs by creating new programs, has become less relevant. The conceptual problem with the idea of a universal solvent was always that, if any such substance was ever concocted, no flask could contain it. Our protagonist, which dissolved so much in the world that once seemed permanent, has finally dissolved its…