Haigh and Ceruzzi 2021


Haigh and Ceruzzi, A New History of Modern Computing (2021)

Becoming Universal: Introducing a New History of Computing

The wholesale shift of video and music reproduction to digital technologies likewise challenges us to integrate media history into the long history of computing. Since the original book was written, the computer had become something new, which meant that the book also had to become something new…

Yet this discussion is rarely grounded in the longer and deeper history of computer technology.

Our aim here is to integrate Internet and Web history into the core narrative of the history of computing, along with the history of iPods, video game consoles, home computers, digital cameras, and smartphone apps.

The computer has a relatively short history, which for our purposes begins in the 1940s.

Computer scientists have adopted a term from Alan Turing, the universal machine, to describe the remarkable flexibility of programmable computers. To prove a mathematical point he described a class of imaginary machines (now called Turing machines) that processed symbols on an unbounded tape according to rules held in a table. By encoding the rules themselves on the tape, Turing’s universal machine was able to compute any number computable by a more specialized machine of the same ilk. Computer scientists came to find this useful as a model of the ability of all programmable computers to carry out arbitrary sequences of operations, and hence (if unlimited time and storage were available) to mimic each other by using code to replicate missing h…
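
A rough Python sketch of the mechanism (the rule table, symbols, and names here are invented for illustration, not taken from the book): a Turing machine reads the symbol under its head, consults a table of rules, writes a replacement, moves, and changes state.

```python
# A tiny Turing machine: symbols on an unbounded tape are rewritten according
# to rules held in a table. The rule table below is an invented example that
# flips a string of bits and then halts.
def run_turing_machine(rules, tape, state="start"):
    cells = dict(enumerate(tape))            # unbounded tape as a sparse dict
    head = 0
    while state != "halt":
        symbol = cells.get(head, " ")        # read the cell under the head
        new_symbol, move, state = rules[(state, symbol)]
        cells[head] = new_symbol             # write the replacement symbol
        head += 1 if move == "R" else -1     # move the head one cell
    return "".join(cells[i] for i in sorted(cells)).strip()

flip_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),      # blank cell: stop
}
print(run_turing_machine(flip_bits, "10110"))  # -> 01001
```

Encoding a rule table like flip_bits onto the tape itself, as data, is what lets the universal machine imitate any more specialized machine of the same kind.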

Today about half the world’s inhabitants use hand-held computers daily to facilitate almost every imaginable human task.

Computers will never do everything, be used by everyone, or replace every other technology, but they are more nearly universal than any other technology. In that broader sense the computer began as a highly specialized technology and has moved toward universality and ubiquity. We think of this as a progression toward practical universality, in contrast to the theoretical universality often claimed for computers as embodiments of Turing machines.

To the extent that it has become a universal machine, the computer might also be called a universal solvent, achieving something of that old dream of alchemy by making an astounding variety of other technologies vanish into itself. Maps, filing cabinets, video tape players, typewriters, paper memos, and slide rules are rarely used now, as their functions have been replaced by software running on personal computers, smartphones, and networks. We conceptualize this convergence of tasks on a single platform as a dissolving of those technologies and, in many cases, their business models by a device that comes ever closer to the status of universal technological sol…

In many cases the computer has dissolved the insides of other technologies while leaving their outward forms intact.

Decades ago, when the scope of computing was smaller, it made sense to see electronic computing as a continuation of the tradition of scientific computation. The first major history of computing, The Computer from Pascal to von Neumann by computing pioneer Herman Goldstine, concluded in the 1940s with the invention of the modern …

In A History of Computing Technology, published in 1985, Michael Williams started with the invention of numbers and reached electronic computers about two thirds of the way through. By the 1990s the importance of computer applications to business administration was being documented by historians, so it was natural for Martin Campbell-Kelly and William Aspray, when writing Computer: A History of the Information Machine, to replace discussion of slide rules and astrolabes with mechanical office machines, filing cabinets, and administrative proce…

The breadth of technologies displaced by the computer and practices remade around it makes it seem arbitrary to begin with chapters that tell the stories of index cards but not of televisions; of slide rules but not of pinball machines; or of typewriters but not of the postal system. But to include those stories, each of our chapters would need to become a long book of its own, written by different experts.

ENIAC is usually called something like the “first electronic, general purpose, programmable computer.”9

Electronic distinguishes it from electromechanical computers whose logic units worked thousands of times more slowly. Often called relay calculators, these computers carried out computations one instruction at a time under the control of paper tapes. They were player pianos that produced numbers rather than music…

General purpose and programmable separated ENIAC from special purpose electronic machines whose sequence of operations was built into hardware and so could not be reprogrammed to carry out fundamentally different tasks.

Inventing the computer

This was not exactly the beginning of the computer age.

ENIAC’s place in computer history rests on more than being the first device to merit check marks for electronic and programmable on a comparison sheet of early machines. It fixed public impressions of what a computer looked like and what it could do. It even inspired the practice of naming early computers with five- or six-letter acronyms ending with AC. During a period of about five years as the only programmable electronic computer available for scientific use, ENIAC lived up to the hype by pioneering applications such as Monte Carlo simulation, numerical weather prediction, and the modeling of supersonic air f…

Earlier meanings of program included a concert program, the program of study for a degree, and the programming of radio stations. In each case the program defined a sequence of actions over time.

Discussion of programming a computer first appeared in the ENIAC project. By 1945 it had settled on something like its modern meaning: a computer program was a configuration that carried out the operations needed for a job. The act of creating it was called programming.3

ENIAC was not the first programmable computer, but it was the first to automate the job of deciding what to do next after a sequence of operations finished.

Producing the entire table by hand took months of work. Hard as the computers worked, their backlog of work grew ever larger. New guns were being shipped to Europe without the tables needed to operate them.

Because ENIAC was both electronic and general purpose, its designers faced a unique challenge. In Mauchly’s words, “Calculations can be performed at high speed only if instructions are supplied at high speed.”6 That required a control method faster than paper tape. It also meant avoiding frequent stops for human intervention.

Mauchly sketched out several possible mechanisms to select automatically between different preset courses of action depending on the values ENIAC had already calculated. Computer scientists call this conditional branching and view it as a defining feature of the modern co…
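
A minimal sketch of conditional branching in a modern language (the example is invented, not from the book): the next course of action is selected automatically from a value the machine has already calculated, with no stop for human intervention.

```python
# Keep adding terms of a series until the computed result stops changing:
# the decision to continue or stop is a branch on an already-calculated value.
def sum_until_stable(term, tolerance=1e-6):
    total, previous = 0.0, float("inf")
    n = 1
    while abs(total - previous) >= tolerance:   # conditional branch
        previous = total
        total += term(n)                        # one more term of the series
        n += 1
    return total

print(sum_until_stable(lambda n: 1 / n**2))  # converges toward pi**2 / 6
```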

Altogether, ENIAC was not so much a single computer as a kit of forty modules from which a different computer was constructed for each problem.

In the end, ENIAC spent something like 15 percent of its production time calculating them and the rest on other, varied jobs, including many to aid the Los Alamos and Argonne laboratories in the development of nuclear weapons and reactors.

ENIAC was also a workplace of around two thousand square feet. Its panels were arranged in a U shape, working like a set of room dividers to enclose an inner space in which its operators worked.

Data went in and out of ENIAC on punched cards: small rectangles of cardboard each able to store 80 digits as a pattern of holes. The women spent much of their time punching input data onto cards and running output cards through an IBM tabulating machine to print their contents.

ENIAC used twenty-eight vacuum tubes to hold each decimal digit. That approach would not scale far. It took years of engineering frus- trations to make delay line memory work reliably, but the idea was simple and compel- ling. Pulses representing several hundred digits moved through a fluid-filled tube. Signals received at one end were immediately retransmitted at the other end, so that the same sequence was cycling constantly. Whenever a number reached the end of the tube it was available to be copied to the computer’s processor …
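
A conceptual sketch of the circulation idea only, not of the acoustic hardware (the class and values are invented): the stored digits cycle constantly, and each one is available to the processor only at the instant it reaches the end of the line.

```python
from collections import deque

# Conceptual sketch of delay line storage: a fixed sequence of digits
# circulates constantly; a digit can be read only when it arrives at the
# receiving end, where it is immediately retransmitted into the line.
class DelayLine:
    def __init__(self, digits):
        self.line = deque(digits)

    def tick(self):
        digit = self.line.popleft()   # pulse arrives at the receiving end
        self.line.append(digit)       # ...and is retransmitted at the other end
        return digit                  # only now is the value visible

memory = DelayLine([3, 1, 4, 1, 5, 9])
print([memory.tick() for _ in range(8)])  # [3, 1, 4, 1, 5, 9, 3, 1]
```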

Von Neumann’s First Draft of a Report on the EDVAC described logical structures rather than the specifics of hardware. One of its most novel features was that, as the team had decided by September 1944, coded instructions were stored in the same storage devices used to hold d…

As it took more than a year for anyone to get around to filing a patent on ENIAC, this disclosure of the new architecture put the modern computer into the public domain.

The idea of loading a program into main memory was important and set EDVAC apart from existing computer designs. However, following work done by Haigh in collaboration with Mark Priestley and Crispin Rope, we prefer to separate the enormous influence of EDVAC into three clusters of ideas (or paradigm…

The first of these, the EDVAC hardware paradigm, specified an all-electronic machine with a large high-speed memory using binary number storage (figure 1.2).

The second was the von Neumann architecture paradigm.

Storage and arithmetic were binary, using what would soon be called bits (a contraction of binary digits) to encode information. Each 32-bit chunk of memory (soon to be called a word) was referenced with an address number.

The third cluster of ideas was a system of instruction codes: the modern code paradigm. The flow of instructions and data mirrored the way humans performed scientific calculations as a series of mathematical operations, using mechanical calculators, books of tables, and pencil and paper. Even Eckert and Mauchly credited von Neumann with devising the proposed instruction code. It represented each instruction with an operation code, usually followed by parameters or an…

Most computers today harness several processor cores running in parallel, but the concept of processing a stream of instructions from an addressable memory remains the most lasting of all the First Draft’s contributions. Computer scientist Alan Perlis remarked that “sometimes I think the only universal in the computing field is the fetch-execute cycle.”
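
A minimal sketch of that fetch-execute cycle (the instruction format and program here are invented for illustration): instructions and data share one addressable memory, and the processor repeatedly fetches the instruction at the program counter, decodes it, and executes it.

```python
# Invented miniature machine: fetch, decode, execute, repeat.
def run(memory):
    accumulator, pc = 0, 0
    while True:
        opcode, operand = memory[pc]          # fetch the next instruction
        pc += 1
        if opcode == "LOAD":                  # decode and execute
            accumulator = memory[operand]
        elif opcode == "ADD":
            accumulator += memory[operand]
        elif opcode == "STORE":
            memory[operand] = accumulator
        elif opcode == "HALT":
            return memory

program = {
    0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
    10: 2, 11: 3, 12: 0,                      # data lives in the same memory
}
print(run(program)[12])  # -> 5
```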

…would-be computer builders to see ENIAC and learn more about the ideas contained in the First Draft, in the summer of 1946 the Moore School and the US military co-sponsored a course on the “Theory and Techniques for Design of Electronic Digital Computers.”

Early computers had to transmit digital pulses lasting perhaps one hundred-thousandth of a second through miles of wire with complete reliability. They relied on digital logic circuits of huge complexity. They included thousands of vacuum tubes, which under normal use would be expected to fail often enough to make any computer useless. Building one meant soldering hundreds of thousands of electrical joints. But the biggest challenge was producing a stable memory able to hold tens of thousands of bits long enough to run a program.

The device popularly known as the Williams tube, after engineer Freddy Williams, stored bits as charges on a cathode ray tube (CRT) similar to those used in televisions and radar sets of that period. This produced a pattern of visible dots on the tube. By adding a mechanism to read charged spots as well as write them, a single tube could be used to store two thousand bits. The challenge was to read them reliably and to constantly write them back to the screen before they faded a…

Each computer project launched in the 1940s was an experiment, and designers tried out many variations on the EDVAC theme. Even computers modeled on von Neumann’s slightly later IAS design, built at places like Los Alamos, the Bureau of Standards, and the RAND Corporation, diverged by, for example, using different memory technologies.

For example, many programs worked on data held in a matrix structure (essentially a table). The programmer defined a loop to repeat a sequence of operations for each cell in the matrix.
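
In modern notation the pattern is just a nested loop (a Python sketch, not period code):

```python
# Repeat the same sequence of operations for each cell of a matrix:
# here, every entry is scaled by two.
matrix = [
    [1.0, 2.0, 3.0],
    [4.0, 5.0, 6.0],
]
for row in range(len(matrix)):
    for col in range(len(matrix[row])):
        matrix[row][col] *= 2.0   # same operations applied to every cell
print(matrix)  # [[2.0, 4.0, 6.0], [8.0, 10.0, 12.0]]
```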

Storing programs and data in addressable memory was a hallmark of the EDVAC approach. A single memory location was a word of memory. But computer designers made different choices about how large each word should be and how instructions should be encoded within it. The original EDVAC design called for thirty-two bits in each word. Early computers used word lengths between seventeen (EDSAC) and forty (the IAS computer and the Manchester Mark 1) bits.

One crucial engineering decision was whether to move all the bits in a word sequentially on a single wire or send them together along a set of parallel wires. This was the original meaning of serial and parallel in computer design.

Babbage’s efforts a hundred years earlier to build a mechanical computer were remarkable but had no direct influence on work in the 1940s; the ENIAC team didn’t know about them and even Howard Aiken, who helped to revive Babbage’s reputation as a computer pioneer, designed his computer in ignorance of the details of Babbage’s wo…

While von Neumann was aware of, and intrigued by, Turing’s concept of a “universal machine,” we see no evidence that it shaped his design for EDVAC.

General purpose computers can do many things. The disadvantage to that flexibility is that getting any particular task done takes minutely detailed programming. It did not take Grace Hopper long to realize that reusing pieces of Mark 1 code for new problems could speed this work. Her group built up a paper tape library of standard sequences, called subroutines, for routine operations such as calculating logarithms or converting numbers between decimal and binary format.

The arrival of EDVAC-like computers opened up new possibilities for automating program preparation. The computer itself could be programmed to handle the chores involved in reusing code, such as renumbering memory addresses within each subroutine according to its eventual position in memory. These new tools were called assemblers because they assembled subroutines and new code into a single executable program. Assemblers quickly picked up another function. Humans found it easier to refer to instructions by short abbreviations, called mnemonics.

The list of instruction mnemonics and parameters was called assembly language. The assembler translated each line into the corresponding numerical instruction that the computer could execute.

Of all the 1940s computers, EDSAC had the most convenient programming system. Every time the machine was reset, code wired into read-only memory was automatically triggered to read an instruction tape, translating mnemonics on the fly and loading the results into memory. David Wheeler developed an elegant and influential way of calling subroutines, so that the computer could easily jump back to what it was doing previously when the subroutine finished.25 EDSAC users built a robust library of subroutine tapes and published their code in the first textbook on computer programming.26

Symbolic assemblers let programmers use labels rather than numbers to specify addresses, which eliminated the need to edit the code every time locations changed.
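
A toy sketch of both jobs of an assembler, with an invented instruction set (the mnemonics, opcode numbers, and addresses are made up): mnemonics are translated into numeric operation codes, and symbolic labels are resolved to whatever addresses the code ends up occupying, so nothing has to be renumbered by hand when locations change.

```python
# Invented instruction set for illustration only.
OPCODES = {"LOAD": 1, "ADD": 2, "STORE": 3, "JUMP": 4, "HALT": 5}

def assemble(lines):
    labels, instructions = {}, []
    for line in lines:                        # pass 1: note where labels fall
        if line.endswith(":"):
            labels[line[:-1]] = len(instructions)
        else:
            instructions.append(line.split())
    machine_code = []
    for mnemonic, *operand in instructions:   # pass 2: translate to numbers
        address = labels.get(operand[0], operand[0]) if operand else 0
        machine_code.append((OPCODES[mnemonic], int(address)))
    return machine_code

source = ["start:", "LOAD 10", "ADD 11", "STORE 12", "JUMP start"]
print(assemble(source))  # [(1, 10), (2, 11), (3, 12), (4, 0)]
```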

Eckert and Mauchly, with the help of about a dozen technical employees of their division, designed and built the Univac in a modest factory at 3747 Ridge Avenue in Philadelphia (see figure 1.3).

Aiken could not imagine that “the basic logics of a machine designed for the numerical solution of differential equations [could] coincide with the basic logics of a machine intended to make bills for a department store.”38 Eckert and Mauchly knew otherwise. Univac inaugurated the era of large computers for what were later called “data processing” applicati…

The closest technology in widespread administrative use was punched card machines, the core product of IBM.

Punched card machines were often called unit record equipment because a single card encoded information about one thing, such as a sales transaction or employee. A typical small installation consisted of several key punches to get the data onto cards, plus several specialized devices such as tabulators, sorters, and collators. Each machine was configured to carry out the same operation on every card in the deck.

For most customers, what was revolutionary about the Univac was the use of tape in place of punched cards. Univac could scan through a reel of tape, reading selected records, performing some process, and writing the results to another tape.39 With punched card equipment, carrying out all the operations needed for a job meant carrying decks of cards around the room, running them through one machine after another.40 That made punched card processing labor-intensive. In contrast, the Univac could perform a long sequence of automatic operations before fetching the next record from memory. It replaced not only existing calculating machines but also the people who tended them. Customers regarded the Univac as an information processing system, not a calculator. Published descriptions of the Univac nearly always referred to it as a “tape” machine. For General Electric, for example, “the speed of computing” was “of tertiary import…
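
The tape-oriented style amounts to a simple record loop; a sketch with invented payroll fields (nothing here is from the book):

```python
# Read each record from one tape, do all the processing for it, and write
# the result to another tape before moving on to the next record.
input_tape = [
    {"employee": "A. Smith", "hours": 40, "rate": 2.15},
    {"employee": "B. Jones", "hours": 38, "rate": 1.90},
]
output_tape = []
for record in input_tape:                     # scan through the reel in order
    gross = record["hours"] * record["rate"]
    net = round(gross * 0.95, 2)              # e.g. a flat 5 percent deduction
    output_tape.append({"employee": record["employee"], "net_pay": net})
print(output_tape)
```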

Univac #1 was used to cross tabulate census data for four states. Data initially punched onto eleven million cards (one for each person) was transferred to tape for processing by the Univac.43 The machine was also used for tabulating another subset of the population involving about five million households. Each problem took several months to compl…

Its uniprinter, based on a Remington Rand electric typewriter, could print only about ten characters per second. That proved a poor match for the high speed tape and processor, but in 1954 Remington Rand delivered the Univac High Speed Printer, able to print a full 130-character line at a time.

On Friday, October 15, 1954, the GE Univac first produced payroll checks for the Appliance Park employees.49 Punched card machines had been doing that job for years, but for an electronic digital computer, which recorded data as invisible magnetic spots on reels of tape, it was a significant milestone.

The computer becomes a scientific supertool

By 1956 IBM had installed more large computers than Univac.3 That owed much to its 704 computer, announced in 1954 as the successor to the 701 but incorporating three key improvements.

Above all, core memory provides random access, taking a small and consistent amount of time to retrieve any word from memory.

IBM and other manufacturers switched over to core memory during the mid- 1950s. Magnetic core memories were retrofitted to ENIAC and MIT’s Whirlwind computer in the summer of 1953.

Early computers wasted much of their incredibly expensive time waiting for data to arrive from peripherals. Magnetic tapes and drums supplied information much faster than punched cards, but not nearly quickly enough to keep processors busy. Programs that processed data from tape usually spent most of their time running loops of code to repeatedly check if the next chunk of data had arrived. Printers were even slower.
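
A sketch of that wasteful pattern with a made-up device interface: the expensive processor spends its time in a loop checking whether the next chunk of data has arrived.

```python
import random

# Invented stand-in for a slow peripheral.
class SlowTapeDrive:
    def data_ready(self):
        return random.random() < 0.001   # data arrives rarely, like slow I/O
    def read_record(self):
        return "next record"

def wait_for_record(tape):
    wasted_checks = 0
    while not tape.data_ready():         # busy-wait: no useful work gets done
        wasted_checks += 1
    return tape.read_record(), wasted_checks

record, wasted = wait_for_record(SlowTapeDrive())
print(record, "after", wasted, "wasted checks")
```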

In the late 1930s in what may have been the first attempt to build an electronic digital computer, John V. Atanasoff had the idea of using a rotating drum as temporary memory, storing data on 1600 capacitors, arrayed in 32 rows.9 After World War II, the drum re-emerged as a reliable, rugged, inexpensive but slow memory device.

Finding a reliable memory was by far the hardest part of putting together a computer in the early 1950s.

The Librascope/General Precision LGP-30, delivered in 1956, had a repertoire of only sixteen instructions and looked like an oversized office desk. Centering the design on a 4096-word drum simplified the rest of the machine hugely. It needed only 113 vacuum tubes and 1,350 diodes, against the Univac’s 5,400 tubes and 18,000 diodes. At $30,000 for a complete system, including a Flexowriter for input and output, it was also one of the cheapest early computers. More than 400 were sold.17 It provided a practical choice for customers unable or unwilling to pay for a large computer.

Scientific users drove the development and adoption of more ambitious programming tools, called compilers. Whereas assemblers made writing machine instructions faster and more convenient, compilers could translate mathematical equations and relatively easy-to-understand code written in high-level languages into code that the computer could exec…

… for a particular problem.”21 In those days the ideas of assembling, linking, and compiling code were not rigorously separated. Each term referred to the idea of knitting together program code and library subroutines to produce a single executable program.

Many factors contributed to the success of Fortran. One was that its syntax—the choice of symbols and the rules for using them—was as close as possible to that of algebra, given the difficulty of indicating superscripts or subscripts on punched cards. Engineers liked its familiarity; they also liked the clear, concise, and easy-to-read user’s manual. Perhaps the most important factor was performance. The Fortran compiler generated machine code that was as efficient and fast as code written by hum…

SHARE soon developed an impressive library of routines that each member could use, many of them for mathematical tasks such as inverting matrices. The working practices adopted by SHARE had much in common with later open source projects. There were mechanisms for distributing code and documents, bug reporting procedures, and ad hoc groups of programmers from different companies who pooled their efforts to develop particular sys…

When the IBM 709 arrived in 1956, SHARE launched the SOS (defined variously as standing for Share Operating System and SHARE 709 System) project to develop an ambitious successor to the GM system. SOS aimed to automate much of the work carried out by operators. For that reason, it was the first piece of software to be called an operating system.

By the 1960s Fortran programming was an increasingly vital skill for scientists and engineers. Computer analysis and simulation underpinned scientific breakthroughs in X-ray crystallography, particle physics, and radio astronomy. Engineers used computers to simulate supersonic airflow, model the stresses on bridges and buildings, and design more efficient en…

In a stack, the item most recently stored is the first to be retrieved—as in a stack of papers accumulating on a desk.
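
The same last-in, first-out behavior in a few lines of Python, using a list as the stack:

```python
stack = []
for paper in ["memo", "invoice", "report"]:
    stack.append(paper)          # each new item goes on top of the pile
while stack:
    print(stack.pop())           # report, invoice, memo: most recent first
```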

The introduction of the transistor as a replacement for the vacuum tube mirrors the story of core memory. Transistors could serve the same roles as vacuum tubes in digital logic circuits but were smaller, more reliable, and initially more expensive. They used less power and could be driven faster.

Most programmers never touched the machine that ran their programs. They wrote programs in pencil on special coding sheets, which they gave to keypunch operators who typed the code onto cards. The deck of punch cards holding the source code was read by a small IBM 1401 computer and transferred to a reel of tape. The operator took this tape and mounted it on a tape drive connected to the mainframe. The programmer had to wait until a batch was run to get her results. Usually these indicated a mistake or the need to further refine the problem. She submitted a new deck and endured another long wait. That method of operation was a defining characteristic of the mainframe era.

To pack data more efficiently into memory, Stretch adopted a flexible word length of anything between 1 and 64 bits. Stretch engineer Werner Buchholz introduced the idea of the byte—the smallest unit of information a computer could retrieve and process.45 Bytes too were originally variable length, up to 8 bits. On later machines, however, IBM standardized the byte at 8 bits. Combining several bytes when needed let computers manipulate words of 16, 24, 32, or 64 bits. This approach was eventually adopted throughout the computer…
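
A small sketch of combining fixed 8-bit bytes into a larger word (modern Python, purely illustrative):

```python
# Four 8-bit bytes taken together form one 32-bit word.
def bytes_to_word(byte_values):
    word = 0
    for b in byte_values:
        word = (word << 8) | b        # shift the word left and append one byte
    return word

print(hex(bytes_to_word([0x12, 0x34, 0x56, 0x78])))  # -> 0x12345678
```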

Virtual memory is a way to make a computer’s fast main memory seem bigger than it is, by swapping data with a slower but larger storage medium such as a disk. A user of the Atlas saw a machine with a virtual memory of one million 48-bit words. Special hardware translated requests to read or write from these virtual addresses into actions on the machine’s much smaller physical memory, a capability known as dynamic address translation.47 When there was not enough physical memory to hold all the pages, it looked for pages that were not currently in use to swap from core memory out to drum memory. Optimizing this process was crucial. Whenever a program needed to work with a page of memory that had been swapped out, Atlas would have to copy it back from drum memory, which slowed things down enormously. Atlas was designed for multiprogramming, so that it could get on with another program while the pa…
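
A much-simplified sketch of dynamic address translation (the page size and table contents are invented): a virtual address splits into a page number and an offset, and a page table maps each virtual page either to a physical page in core or to a marker meaning it has been swapped out to the drum.

```python
PAGE_SIZE = 512                       # invented page size

# virtual page -> physical page in core, or None if swapped out to drum
page_table = {0: 3, 1: None, 2: 0}

def translate(virtual_address):
    page, offset = divmod(virtual_address, PAGE_SIZE)
    physical_page = page_table[page]
    if physical_page is None:         # the slow case: a page fault
        raise RuntimeError("page swapped out: copy it back from drum first")
    return physical_page * PAGE_SIZE + offset

print(translate(1030))   # virtual page 2, offset 6 -> physical address 6
```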

The computer becomes a data processing device

The computer becomes a real-time control system

The computer becomes an interactive tool

The computer becomes a communications platform

The computer becomes a personal plaything