
Haigh and Ceruzzi, A New History of Modern Computing (2021)

Becoming Universal: Introducing a New History of Computing

The wholesale shift of video and music reproduction to digital technologies likewise challenges us to integrate media history into the long history of computing. Since the original book was written, the computer had become something new, which meant that the book also had to become something n…

Yet this discussion is rarely grounded in the longer and deeper history of computer technology.

Our aim here is to integrate Internet and Web history into the core narrative of the history of computing, along with the history of iPods, video game consoles, home computers, digital cameras, and smartphone apps.

The computer has a relatively short history, which for our purposes begins in the 1940s.

Computer scientists have adopted a term from Alan Turing, the universal machine, to describe the remarkable flexibility of programmable computers. To prove a mathematical point he described a class of imaginary machines (now called Turing machines) that processed symbols on an unbounded tape according to rules held in a table. By encoding the rules themselves on the tape, Turing’s universal machine was able to compute any number computable by a more specialized machine of the same ilk. Computer scientists came to find this useful as a model of the ability of all programmable computers to carry out arbitrary sequences of operations, and hence (if unlimited time and storage were available) to mimic each other by using code to replicate missing h…

Today about half the world’s inhabitants use hand-held computers daily to facilitate almost every imaginable human task.

Computers will never do everything, be used by everyone, or replace every other technology, but they are more nearly universal than any other technology. In that broader sense the computer began as a highly specialized technology and has moved toward universality and ubiquity. We think of this as a progression toward practical universality, in contrast to the theoretical universality often claimed for computers as embodiments of Turing machines.

To the extent that it has become a universal machine, the computer might also be called a universal solvent, achieving something of that old dream of alchemy by making an astounding variety of other technologies vanish into itself. Maps, filing cabinets, video tape players, typewriters, paper memos, and slide rules are rarely used now, as their functions have been replaced by software running on personal computers, smartphones, and networks. We conceptualize this convergence of tasks on a single platform as a dissolving of those technologies and, in many cases, their business models by a device that comes ever closer to the status of universal technological sol…

In many cases the computer has dissolved the insides of other technologies while leaving their outward forms intact.

Decades ago, when the scope of computing was smaller, it made sense to see electronic computing as a continuation of the tradition of scientific computation. The first major history of computing, The Computer from Pascal to von Neumann by computing pioneer Herman Goldstine, concluded in the 1940s with the invention of the modern …

In A History of Computing Technology, published in 1985, Michael Williams started with the invention of numbers and reached electronic computers about two thirds of the way through. By the 1990s the importance of computer applications to business administration was being documented by historians, so it was natural for Martin Campbell-Kelly and William Aspray, when writing Computer: A History of the Information Machine, to replace discussion of slide rules and astrolabes with mechanical office machines, filing cabinets, and administrative proce…

The breadth of technologies displaced by the computer and practices remade around it makes it seem arbitrary to begin with chapters that tell the stories of index cards but not of televisions; of slide rules but not of pinball machines; or of typewriters but not of the postal system. But to include those stories, each of our chapters would need to become a long book of its own, written by different experts.

ENIAC is usually called something like the “first electronic, general purpose, programmable computer.”9

Electronic distinguishes it from electromechanical computers whose logic units worked thousands of times more slowly. Often called relay calculators, these computers carried out computations one instruction at a time under the control of paper tapes. They were player pianos that produced numbers rather than music…

General purpose and programmable separated ENIAC from special purpose electronic machines whose sequence of operations was built into hardware and so could not be reprogrammed to carry out fundamentally different tasks.

Inventing the computer

This was not exactly the beginning of the computer age.

ENIAC’s place in computer history rests on more than being the first device to merit check marks for electronic and programmable on a comparison sheet of early machines. It fixed public impressions of what a computer looked like and what it could do. It even inspired the practice of naming early computers with five- or six-letter acronyms ending with AC. During a period of about five years as the only programmable electronic computer available for scientific use, ENIAC lived up to the hype by pioneering applications such as Monte Carlo simulation, numerical weather prediction, and the modeling of supersonic air f…

Earlier meanings of program included a concert program, the program of study for a degree, and the programming of radio stations. In each case the program defined a sequence of actions over time.

Discussion of programming a computer first appeared in the ENIAC project. By 1945 it had settled on something like its modern meaning: a computer program was a configuration that carried out the operations needed for a job. The act of creating it was called programming.3

ENIAC was not the first programmable computer, but it was the first to automate the job of deciding what to do next after a sequence of operations finished.

Producing the entire table by hand took months of work. Hard as the computers worked, their backlog of work grew ever larger. New guns were being shipped to Europe without the tables needed to operate them.

Because ENIAC was both electronic and general purpose, its designers faced a unique challenge. In Mauchly’s words, “Calculations can be performed at high speed only if instructions are supplied at high speed.”6 That required a control method faster than paper tape. It also meant avoiding frequent stops for human intervention.

Mauchly sketched out several possible mechanisms to select automatically between different preset courses of action depending on the values ENIAC had already calculated. Computer scientists call this conditional branching and view it as a defining feature of the modern co…

Altogether, ENIAC was not so much a single computer as a kit of forty modules from which a different computer was constructed for each problem.

In the end, ENIAC spent something like 15 percent of its production time calculating them and the rest on other, varied jobs, including many to aid the Los Alamos and Argonne laboratories in the development of nuclear weapons and reactors.

ENIAC was also a workplace of around two thousand square feet. Its panels were arranged in a U shape, working like a set of room dividers to enclose an inner space in which its operators worked.

Data went in and out of ENIAC on punched cards: small rectangles of cardboard each able to store 80 digits as a pattern of holes. The women spent much of their time punching input data onto cards and running output cards through an IBM tabulating machine to print their contents.

ENIAC used twenty-eight vacuum tubes to hold each decimal digit. That approach would not scale far. It took years of engineering frustrations to make delay line memory work reliably, but the idea was simple and compelling. Pulses representing several hundred digits moved through a fluid-filled tube. Signals received at one end were immediately retransmitted at the other end, so that the same sequence was cycling constantly. Whenever a number reached the end of the tube it was available to be copied to the computer’s processor …

Von Neumann’s First Draft of a Report on the EDVAC described logical structures rather than the specifics of hardware. One of its most novel features was that, as the team had decided by September 1944, coded instructions were stored in the same storage devices used to hold d…

As it took more than a year for anyone to get around to filing a patent on ENIAC, this disclosure of the new architecture put the modern computer into the public domain.

The idea of loading a program into main memory was important and set EDVAC apart from existing computer designs. However, following work done by Haigh in collaboration with Mark Priestley and Crispin Rope, we prefer to separate the enormous influence of EDVAC into three clusters of ideas (or paradigm…

The first of these, the EDVAC hardware paradigm, specified an all-electronic machine with a large high-speed memory using binary number storage (figure 1.2).

The second was the von Neumann architecture paradigm.

Storage and arithmetic were binary, using what would soon be called bits (a contraction of binary digits) to encode information. Each 32-bit chunk of memory (soon to be called a word) was referenced with an address number.

The third cluster of ideas was a system of instruction codes: the modern code paradigm. The flow of instructions and data mirrored the way humans performed scientific calculations as a series of mathematical operations, using mechanical calculators, books of tables, and pencil and paper. Even Eckert and Mauchly credited von Neumann with devising the proposed instruction code. It represented each instruction with an operation code, usually followed by parameters or an…

Most computers today harness several processor cores running in parallel, but the concept of processing a stream of instructions from an addressable memory remains the most lasting of all the First Draft’s contributions. Computer scientist Alan Perlis remarked that “sometimes I think the only universal in the computing field is the fetch-execute cycle.”
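To make the fetch-execute idea concrete, here is a minimal sketch (not from the book) of a toy machine in C: instructions and data share one addressable memory, and the processor repeatedly fetches the word at the program counter, decodes an operation code and an address parameter, and executes it. The opcodes, the opcode*100+address word format, and the sample program are all invented for illustration; the JUMP_IF_ZERO case shows the conditional branching idea mentioned earlier.

/* A minimal, invented sketch of the fetch-execute cycle: instructions and
   data share one addressable memory, and each word packs an opcode and an
   address as opcode * 100 + address. */
#include <stdio.h>

enum { HALT, LOAD, ADD, STORE, JUMP_IF_ZERO };   /* hypothetical opcodes */

int main(void) {
    int mem[32] = {
        [0] = LOAD  * 100 + 20,          /* acc = mem[20]            */
        [1] = ADD   * 100 + 21,          /* acc = acc + mem[21]      */
        [2] = STORE * 100 + 22,          /* mem[22] = acc            */
        [3] = HALT  * 100,
        [20] = 7, [21] = 35,             /* data words               */
    };
    int pc = 0, acc = 0, running = 1;

    while (running) {                    /* the fetch-execute cycle   */
        int word = mem[pc++];            /* fetch                     */
        int op = word / 100, addr = word % 100;   /* decode           */
        switch (op) {                    /* execute                   */
        case LOAD:  acc = mem[addr];                 break;
        case ADD:   acc += mem[addr];                break;
        case STORE: mem[addr] = acc;                 break;
        case JUMP_IF_ZERO: if (acc == 0) pc = addr;  break;   /* conditional branch */
        case HALT:  running = 0;                     break;
        }
    }
    printf("mem[22] = %d\n", mem[22]);   /* prints 42 */
    return 0;
}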

…would-be computer builders to see ENIAC and learn more about the ideas contained in the First Draft, in the summer of 1946 the Moore School and the US military co-sponsored a course on the “Theory and Techniques for Design of Electronic Digital Computers.”

Early computers had to transmit digital pulses lasting perhaps one hundred thousandth of a second through miles of wire with complete reliability. They relied on digital logic circuits of huge complexity. They included thousands of vacuum tubes, which under normal use would be expected to fail often enough to make any computer useless. Building one meant soldering hundreds of thousands of electrical joints. But the biggest challenge was producing a stable memory able to hold tens of thousands of bits stable long enough to run a program.

The device popularly known as the Williams tube, after engineer Freddy Williams, stored bits as charges on a cathode ray tube (CRT) similar to those used in televisions and radar sets of that period. This produced a pattern of visible dots on the tube. By adding a mechanism to read charged spots as well as write them, a single tube could be used to store two thousand bits. The challenge was to read them reliably and to constantly write them back to the screen before they faded a…

Each computer project launched in the 1940s was an experiment, and designers tried out many variations on the EDVAC theme. Even computers modeled on von Neumann’s slightly later IAS design, built at places like Los Alamos, the Bureau of Standards, and the RAND Corporation, diverged by, for example, using different memory technologies.

For example, many programs worked on data held in a matrix structure (essentially a table). The programmer defined a loop to repeat a sequence of operations for each cell in the matrix.
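A tiny C sketch of that loop-over-a-matrix pattern (the matrix contents and the doubling operation are invented, not an example from the book):

/* Invented illustration: a loop repeats the same operation for every cell. */
#include <stdio.h>

int main(void) {
    double matrix[3][4] = {
        { 1,  2,  3,  4 },
        { 5,  6,  7,  8 },
        { 9, 10, 11, 12 },
    };
    for (int row = 0; row < 3; row++)         /* outer loop over rows    */
        for (int col = 0; col < 4; col++)     /* inner loop over columns */
            matrix[row][col] *= 2.0;          /* same operation per cell */

    printf("%f\n", matrix[2][3]);             /* prints 24.000000 */
    return 0;
}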

Storing programs and data in addressable memory was a hallmark of the EDVAC approach. A single memory location was a word of memory. But computer designers made different choices about how large each word should be and how instructions should be encoded within it. The original EDVAC design called for thirty-two bits in each word. Early computers used word lengths between seventeen (EDSAC) and forty (the IAS computer and the Manchester Mark 1) bits.

One crucial engineering decision was whether to move all the bits in a word sequentially on a single wire or send them together along a set of parallel wires. This was the original meaning of serial and parallel in computer design.

Babbage’s efforts a hundred years earlier to build a mechanical computer were remarkable but had no direct influence on work in the 1940s; the ENIAC team didn’t know about them and even Howard Aiken, who helped to revive Babbage’s reputation as a computer pioneer, designed his computer in ignorance of the details of Babbage’s wo…

While von Neumann was aware of, and intrigued by, Turing’s concept of a “universal machine,” we see no evidence that it shaped his design for EDVAC.

General purpose computers can do many things. The disadvantage to that flexibility is that getting any particular task done takes minutely detailed programming. It did not take Grace Hopper long to realize that reusing pieces of Mark 1 code for new problems could speed this work. Her group built up a paper tape library of standard sequences, called subroutines, for routine operations such as calculating logarithms or converting numbers between decimal and binary format.

The arrival of EDVAC-like computers opened up new possibilities for automating program preparation. The computer itself could be programmed to handle the chores involved in reusing code, such as renumbering memory addresses within each subroutine according to its eventual position in memory. These new tools were called assemblers because they assembled subroutines and new code into a single executable program. Assemblers quickly picked up another function. Humans found it easier to refer to instructions by short abbreviations, called mnemonics.

The list of instruction mnemonics and parameters was called assembly language. The assembler translated each line into the corresponding numerical instruction that the computer could execute.

Of all the 1940s computers, EDSAC had the most convenient programming system. Every time the machine was reset, code wired into read-only memory was automatically triggered to read an instruction tape, translating mnemonics on the fly and loading the results into memory. David Wheeler developed an elegant and influential way of calling subroutines, so that the computer could easily jump back to what it was doing previously when the subroutine finished.25 EDSAC users built a robust library of subroutine tapes and published their code in the first textbook on computer programming.26

Symbolic assemblers let programmers use labels rather than numbers to specify addresses, which eliminated the need to edit the code every time locations changed.
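As a rough illustration of what an assembler does with mnemonics and symbolic labels, the following sketch (invented, echoing the toy opcode*100+address word format used in the fetch-execute example above) makes two passes over a small source program: the first records the address each label ends up at, the second emits numeric instruction words with labels replaced by those addresses.

/* Invented two-pass assembler sketch: mnemonics and labels in, numbers out. */
#include <stdio.h>
#include <string.h>

static const char *mnemonics[] = { "HALT", "LOAD", "ADD", "STORE", "JZ" };

static int opcode(const char *m) {
    for (int i = 0; i < 5; i++)
        if (strcmp(m, mnemonics[i]) == 0) return i;
    return -1;                                   /* not a mnemonic: a data word */
}

int main(void) {
    /* Source program: optional label, mnemonic (or data value), operand label. */
    struct { const char *label, *op, *arg; } src[] = {
        { "start", "LOAD",  "x"     },
        { NULL,    "ADD",   "y"     },
        { NULL,    "STORE", "x"     },
        { NULL,    "JZ",    "start" },
        { NULL,    "HALT",  NULL    },
        { "x",     "7",     NULL    },           /* data words with labels */
        { "y",     "35",    NULL    },
    };
    int n = sizeof src / sizeof src[0];
    const char *names[16]; int addrs[16], nlabels = 0;

    /* Pass 1: each line gets the next address; remember where labels fall. */
    for (int i = 0; i < n; i++)
        if (src[i].label) { names[nlabels] = src[i].label; addrs[nlabels++] = i; }

    /* Pass 2: emit numeric words, replacing label references with addresses. */
    for (int i = 0; i < n; i++) {
        int op = opcode(src[i].op), word = 0;
        if (op < 0) {
            sscanf(src[i].op, "%d", &word);      /* pass a data word through */
        } else {
            int addr = 0;
            for (int j = 0; j < nlabels; j++)
                if (src[i].arg && strcmp(src[i].arg, names[j]) == 0) addr = addrs[j];
            word = op * 100 + addr;
        }
        printf("%2d: %4d\n", i, word);
    }
    return 0;
}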

Eckert and Mauchly, with the help of about a dozen technical employees of their division, designed and built the Univac in a modest factory at 3747 Ridge Avenue in Philadelphia (see figure 1.3).

Aiken could not imagine that “the basic logics of a machine designed for the numerical solution of differential equations [could] coincide with the basic logics of a machine intended to make bills for a department store.”38 Eckert and Mauchly knew otherwise. Univac inaugurated the era of large computers for what were later called “data processing” applicati…

The closest technology in widespread administrative use was punched card machines, the core product of IBM.

Punched card machines were often called unit record equipment because a single card encoded information about one thing, such as a sales transaction or employee. A typical small installation consisted of several key punches to get the data onto cards, plus several specialized devices such as tabulators, sorters, and collators. Each machine was configured to carry out the same operation on every card in the deck.

For most customers, what was revolutionary about the Univac was the use of tape in place of punched cards. Univac could scan through a reel of tape, reading selected records, performing some process, and writing the results to another tape.39 Carrying out all the operations needed for a job meant carrying decks of cards around the room, running them through one machine after another.40 That made punched card processing labor-intensive. In contrast, the Univac could perform a long sequence of automatic operations before fetching the next record from memory. It replaced not only existing calculating machines but also the people who tended them. Customers regarded the Univac as an information processing system, not a calculator. Published descriptions of the Univac nearly always referred to it as a “tape” machine. For General Electric, for example, “the speed of computing” was “of tertiary import…

Univac #1 was used to cross tabulate census data for four states. Data, initially punched onto eleven million cards (one for each person), was transferred to tape for processing by the Univac.43 The machine was also used for tabulating another subset of the population involving about five million households. Each problem took several months to compl…

Its uniprinter, based on a Remington Rand electric typewriter, could print only about ten characters per second. That proved a poor match for the high speed tape and processor, but in 1954 Remington Rand delivered the Univac High Speed Printer, able to print a full 130-character line at a time.

On Friday, October 15, 1954, the GE Univac first produced payroll checks for the Appliance Park employees.49 Punched card machines had been doing that job for years, but for an electronic digital computer, which recorded data as invisible magnetic spots on reels of tape, it was a significant milestone.

The computer becomes a scientific supertool

By 1956 IBM had installed more large computers than Univac.3 That owed much to its 704 computer, announced in 1954 as the successor to the 701 but incorporating three key improvements.

Above all, core memory provides random access, taking a small and consistent amount of time to retrieve any word from memory.

IBM and other manufacturers switched over to core memory during the mid- 1950s. Magnetic core memories were retrofitted to ENIAC and MIT’s Whirlwind computer in the summer of 1953.

Early computers wasted much of their incredibly expensive time waiting for data to arrive from peripherals. Magnetic tapes and drums supplied information much faster than punched cards, but not nearly quickly enough to keep processors busy. Programs that processed data from tape usually spent most of their time running loops of code to repeatedly check if the next chunk of data had arrived. Printers were even slower.

In the late 1930s in what may have been the first attempt to build an electronic digital computer, John V. Atanasoff had the idea of using a rotating drum as temporary memory, storing data on 1600 capacitors, arrayed in 32 rows.9 After World War II, the drum re-emerged as a reliable, rugged, inexpensive but slow memory device.

Finding a reliable memory was by far the hardest part of putting together a computer in the early 1950s.

The Librascope/General Precision LGP-30, delivered in 1956, had a repertoire of only sixteen instructions and looked like an oversized office desk. Centering the design on a 4096-word drum simplified the rest of the machine hugely. It needed only 113 vacuum tubes and 1,350 diodes, against the Univac’s 5,400 tubes and 18,000 diodes. At $30,000 for a complete system, including a Flexowriter for input and output, it was also one of the cheapest early computers. More than 400 were sold.17 It provided a practical choice for customers unable or unwilling to pay for a large computer.

Scientific users drove the development and adoption of more ambitious programming tools, called compilers. Whereas assemblers made writing machine instructions faster and more convenient, compilers could translate mathematical equations and relatively easy-to-understand code written in high-level languages into code that the computer could exec…

… for a particular problem.”21 In those days the ideas of assembling, linking, and compiling code were not rigorously separated. Each term referred to the idea of knitting together program code and library subroutines to produce a single executable program.

Many factors contributed to the success of Fortran. One was that its syntax—the choice of symbols and the rules for using them—was as close as possible to that of algebra, given the difficulty of indicating superscripts or subscripts on punched cards. Engineers liked its familiarity; they also liked the clear, concise, and easy-to-read user’s manual. Perhaps the most important factor was performance. The Fortran compiler generated machine code that was as efficient and fast as code written by hum…

SHARE soon developed an impressive library of routines that each member could use, many of them for mathematical tasks such as inverting matrices. The working practices adopted by SHARE had much in common with later open source projects. There were mechanisms for distributing code and documents, bug reporting procedures, and ad hoc groups of programmers from different companies who pooled their efforts to develop particular sys…

When the IBM 709 arrived in 1956, SHARE launched the SOS (defined variously as standing for Share Operating System and SHARE 709 System) project to develop an ambitious successor to the GM system. SOS aimed to automate much of the work carried out by operators. For that reason, it was the first piece of software to be called an operating system.

By the 1960s Fortran programming was an increasingly vital skill for scientists and engineers. Computer analysis and simulation underpinned scientific breakthroughs in X-ray crystallography, particle physics, and radio astronomy. Engineers used computers to simulate supersonic airflow, model the stresses on bridges and buildings, and design more efficient en…

In a stack, the item most recently stored is the first to be retrieved—as in a stack of papers accumulating on a desk.
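A minimal last-in, first-out stack, sketched in C for illustration (the array-based layout and the example values are invented, not the hardware stacks the book discusses):

/* Invented sketch of a LIFO stack: the most recently pushed item pops first. */
#include <stdio.h>

#define STACK_MAX 16

static int stack[STACK_MAX];
static int top = 0;                        /* number of items on the stack */

static void push(int value) { if (top < STACK_MAX) stack[top++] = value; }
static int  pop(void)       { return top > 0 ? stack[--top] : -1; }

int main(void) {
    push(1); push(2); push(3);
    printf("%d\n", pop());                 /* 3: last in, first out */
    printf("%d\n", pop());                 /* 2 */
    printf("%d\n", pop());                 /* 1 */
    return 0;
}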

The introduction of the transistor as a replacement for the vacuum tube mirrors the story of core memory. Transistors could serve the same roles as vacuum tubes in digital logic circuits but were smaller, more reliable, and initially more expensive. They used less power and could be driven faster.

Most programmers never touched the machine that ran their programs. They wrote programs in pencil on special coding sheets, which they gave to keypunch operators who typed the code onto cards. The deck of punch cards holding the source code was read by a small IBM 1401 computer and transferred to a reel of tape. The operator took this tape and mounted it on a tape drive connected to the mainframe. The programmer had to wait until a batch was run to get her results. Usually these indicated a mistake or need to further refine the problem. She submitted a new deck and endured another long wait. That method of operation was a defining characteristic of the mainframe era.

To pack data more efficiently into memory, Stretch adopted a flexible word length of anything between 1 and 64 bits. Stretch engineer Werner Buchholz introduced the idea of the byte—the smallest unit of information a computer could retrieve and process.45 Bytes too were originally variable length, up to 8 bits. On later machines, however, IBM standardized the byte at 8 bits. Combining several bytes when needed let computers manipulate words of 16, 24, 32, or 64 bits. This approach was eventually adopted throughout the computer…

Virtual memory is a way to make a computer’s fast main memory seem bigger than it is, by swapping data with a slower but larger storage medium such as a disk. A user of the Atlas saw a machine with a virtual memory of one million, 48-bit words. Special hardware translated requests to read or write from these virtual addresses into actions on the machine’s much smaller physical memory, a capability known as dynamic address translation.47 When there was not enough physical memory to hold all the pages, it looked for pages that were not currently in use to swap from core memory out to drum memory. Optimizing this process was crucial. Whenever a program needed to work with a page of memory that had been swapped out, Atlas would have to copy it back from drum memory, which slowed things down enormously. Atlas was designed for multiprogramming, so that it could get on with another program while the pa…
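A much simplified sketch of dynamic address translation and swapping, with invented sizes and no eviction policy (nothing here reflects Atlas’s actual design details): a virtual address is split into a page number and an offset, a page table maps the page to a physical frame, and touching a page that is not resident triggers a copy in from the slower backing store before the access proceeds.

/* Invented sketch of dynamic address translation with demand paging. */
#include <stdio.h>

#define PAGE_SIZE   256                  /* words per page (illustrative)  */
#define NUM_PAGES   8                    /* virtual pages                  */
#define NUM_FRAMES  4                    /* physical frames in "core"      */

static int core[NUM_FRAMES][PAGE_SIZE];  /* small, fast physical memory    */
static int drum[NUM_PAGES][PAGE_SIZE];   /* large, slow backing store      */

struct page_entry { int resident; int frame; };
static struct page_entry page_table[NUM_PAGES];
static int next_frame = 0;               /* trivial frame allocator        */

static int *translate(int vaddr) {
    int page = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE;
    if (!page_table[page].resident) {    /* "page fault": bring page in    */
        int frame = next_frame++ % NUM_FRAMES;      /* no eviction logic   */
        for (int i = 0; i < PAGE_SIZE; i++)
            core[frame][i] = drum[page][i];         /* slow copy from drum */
        page_table[page].resident = 1;
        page_table[page].frame = frame;
    }
    return &core[page_table[page].frame][offset];
}

int main(void) {
    drum[5][10] = 99;                    /* data sitting out on the drum   */
    int *word = translate(5 * PAGE_SIZE + 10);      /* virtual address 1290 */
    printf("%d\n", *word);               /* prints 99 after the swap-in    */
    return 0;
}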

The computer becomes a data processing device

Business data processing was a largely separate field from scientific computing, with its own journals, associations, and professional identities. In the computer’s march toward universality this was the second major community of computing to develop. From the late 1950s to the 1980s it was unquestionably the largest.

Pages 1-2

Nowhere did we see a computer that was doing jobs to a regular planned schedule.” Companies were slow to adapt to the potential of the new technology, instead trying to transfer existing punched card procedures directly to computer.4

Page 2

…clerical automation soon overtook scientific calculation as the biggest market for computer technology. Computers designed specifically for administrative use ran from relatively affordable machines intended for use alongside existing punched card installations to multimillion-dollar computers built to handle the core administrative tasks of large corporations. Data processing spurred developments including the adoption of disk drives, the development of IBM’s dominant System/360 mainframe architecture, and the emergence of softw…

Page 2

…a general-purpose computer storing programs and data on a magnetic drum using ERA technology.9 The 650 was launched in 1954. It was the first mass-produced computer, with almost two thousand manufactured. Like ENIAC, the 650 had to be used with a full set of conventional punched card machines to prepare, process, and print cards…

Page 4

Thomas Watson Jr. directed that IBM allow universities to acquire a 650 at up to a 60% discount, if the university agreed to offer courses in business data processing or scientific computing.11 Many universities took up this offer, making the 650 the machine on which thousands of scientists learned to program. Donald Knuth later dedicated his monumental series, The Art of Computer Programming, to “the Type 650 computer once installed at Case Institute of Technology, in remembrance of many pleasant even…

Page 4

The 1401’s success owed a lot to a piece of peripheral equipment that IBM introduced with it. The type 1403 printer moved a continuous chain of characters laterally across the page. Magnetically driven hammers struck the chain at the precise place where a character was to be printed. In one minute it printed 600 lines, far more than anything else on the market, and could stand up under heavy…

Page 4

By 1962, just eleven years after IBM leased its first computer, the firm was receiving more income from computers than from punched card machines. By 1966 IBM had installed an estimated 11,300 systems from the 1400 family, accounting for about a third of all US-built comp…

Page 5

At nearly every university computer center, someone figured out a sequence that would sound out the school’s fight song when sent to the printer. Printer music, recorded by his father, underpinned Icelandic composer Jóhann Jóhannsson’s acclaimed orchestral work IBM 1401, A User’s Manual.

Page 5

An array of spinning disks stored more data at a lower cost than its older cousin, the magnetic drum. Drums had a line of fixed read and write heads, one for each track on the drum. They were rugged and fast but expensive. Disk drives used a single movable head for each side of a disk.

Page 5

In 1956 IBM publicly announced the Model 350 disk storage unit. It used a stack of fifty aluminum disks, each 24 inches in diameter, rotating at 1,200 rpm. Total storage capacity was five million characters. The disk was attached to a small drum computer, the 305, better known as the RAMAC (random access method of accounting and control).

Page 5

What IBM meant by “random access” was that data could be fetched rapidly from any part of a disk, in contrast to the sequential operation of a deck of punched cards or a reel of tape. RAMAC made it feasible to work one record at a time. It would have taken a big IBM system six minutes to search a tape and retrieve one value. The relatively inexpensive RAMAC could locate a record and start printing out its contents in less than a…

Page 7

Many companies, particularly those installing smaller computers, expanded and upgraded their existing punched card departments to data processing departments. The terminology came from IBM, which wanted to tie business computing to its existing strength in punched card machines. Its punched card machines became data processing equipment, and its computers electronic data processing systems.

Page 8

There were five main kinds of work in data processing departments. In ascending order of status and pay, these were key punch work, machine operation, programming, systems analysis, and management. Not coincidentally, the proportion of women doing each job fell in exactly the same order. All program code and input data had to be punched onto cards. Because this work had strong parallels with typing it was almost invariably given to women. Keypunch work was the biggest and worst paid job category in data processing. Operating other punched card equipment was treated as men’s work by most American companies, although practices varied. Computer operation was seen as an extension of the same labor, so operators were a mix of men and women. Systems analysis was the job of looking at business procedures to redesign them around the new technology. Analysts were not expected to write programs, but they were expected to produce very detailed specifications for computer processes. In big companies this was an extension of the existing work of the overwhelmingly male systems and procedures groups, which prior to computerization had taken on jobs such as documenting procedures and redesigning forms. After computerization it remained a job for men.

Page 9

Programming was the only job without a clear parallel in existing administrative punched card work. It was slotted in conceptually between the existing work of machine operation and systems analysis. Programmers would take the specifications written by the analysts and convert them into computer instructions. Within the ideology of data processing this was seen as a less creative task than analysis—successful programmers would aspire to become analysts and eventually managers, leaving the machine itself further behind as their careers progressed. Early descriptions sometimes mention the lower-status job of coder, turning instruction mnemonics into numerical codes, but that task was soon automated by increasingly powerful assembl…

Page 9

The openness of programming jobs to women during the 1950s is often exaggerated.

Page 9

Bigger computers relied on magnetic tape. The IBM 705 could store five million characters on a reel of magnetic tape. At 15,000 characters per second, it would take about six minutes to process an entire tape.

Page 10

Sorting records held on punched cards was easy, if slow. Each card generally held a single record, which is why unit record equipment was one of IBM’s euphemisms for punched card machines. Decks dropped into the sorter were separated into ten output trays based on the value of a selected digit. Running each output deck through the machine again sorted on the next digit. Eventually the records would all be in order and the deck could be reassembled. It wasn’t practical to cut a tape into tiny pieces and splice them back together, so finding an effective way to sort tape files was crucial for the efficient application of computers to business.
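The card-sorter procedure described above is, in modern terms, a least-significant-digit radix sort. Here is a sketch with invented data: keys are dropped into ten “output trays” by one decimal digit per pass, and the deck is reassembled in tray order before the next pass.

/* Invented sketch of digit-by-digit (radix) sorting, mirroring the card sorter. */
#include <stdio.h>

#define N 6

int main(void) {
    int deck[N] = { 503, 87, 512, 61, 908, 170 };
    int trays[10][N], counts[10];

    for (int divisor = 1; divisor <= 100; divisor *= 10) {   /* one pass per digit */
        for (int t = 0; t < 10; t++) counts[t] = 0;
        for (int i = 0; i < N; i++) {                 /* drop cards into trays    */
            int digit = (deck[i] / divisor) % 10;
            trays[digit][counts[digit]++] = deck[i];
        }
        int k = 0;                                    /* reassemble the deck      */
        for (int t = 0; t < 10; t++)
            for (int i = 0; i < counts[t]; i++) deck[k++] = trays[t][i];
    }
    for (int i = 0; i < N; i++) printf("%d ", deck[i]);  /* 61 87 170 503 512 908 */
    printf("\n");
    return 0;
}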

Page 10

In a similar way, programmers in data processing installations started producing generalized sort routines rather than constantly rewriting the same code for different applications.

Page 11

Producing a new report in a computerized system meant writing a program. Fred Gruenberger, who worked on the system, noted that managers were “vitriolic” when told it would cost thousands of dollars to tweak a report, because on a tabulator “all you have to do to change your report is move five wires.” The new system automatically generated a report program when fed cards describing the desired report format.26

Page 11

…the US government announced that it would not purchase or lease computer equipment unless it could handle COBOL.29 As a result, COBOL became one of the first languages to be sufficiently standardized that a program could be compiled on computers from different vendors and run to produce the same results. That occurred in December 1960, when almost identical programs ran on a Univac II and an RCA 50…

Page 13

Part of COBOL’s ancestry can be traced to Grace Hopper, who in 1956 had developed a compiler sometimes called FLOW-MATIC, which was geared toward business applications.

Page 13

It was from Hopper that COBOL inherited its most famous attribute, long variable and command names intended to make the code read as English, in contrast with Fortran, which aspired to mimic mathematical notation. COBOL code usually centered on retrieving data from files and manipulating it…

Page 13

These product lines were justified by a notion that business and scientific users had fundamentally different hardware needs, which no longer held up. Business customers were expected to work with large sets of data on which they performed simple arithmetic, whereas scientific customers did advanced calculation on small sets of data. In reality, scientists and engineers were handling large data sets for applications like finite-element analysis, a technique developed for building complex aerospace structures. And business applications were growing increasingly ambitious. In its final report, the SPREAD Committee recommended a single unified product line for scienc…

Page 14

The 360 used a combination of software and the microprogrammed instructions for what Larry Moss of IBM called emulation of older machines, implying that it was “as good as” (or even better than) the original, rather than mere “simulation” or worse, “imitation.” The 360 Model 65 sold especially well because of its ability to emulate the large business 7070 computer. IBM made sure that the low-end models 30 and 40 effectively emulated the 1401.39 Their faster circuits could run old programs as much as ten times faster than a real 1401. By 1967, according to some estimates, over half of all 360 jobs were run in emulation mode. Despite its name, software is more permanent than hardware. Decades later, 1401 programs were still running routine payroll and other data processing jobs on computers from a variety of suppliers. The programmers who coded these had no idea how long-lived their work w…

Page 15

The 360’s word length was 32 bits, a slight reduction from earlier large scientific computers that let IBM standardize on 8-bit bytes. Four bytes fit neatly into one word. A byte could encode 256 different combinations: representing uppercase and lowercase letters, the decimal digits one to ten, punctuation, accent marks, and control codes with room to spare. Since 4 bits were adequate to encode a decimal or hexadecimal digit (the 360 supported both) one could pack two into each byte.

Pages 15-16
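A small worked illustration of the arithmetic in that passage (the helper names are invented): an 8-bit byte has 2^8 = 256 possible values, and because a decimal digit needs only 4 bits, two digits can be packed into one byte.

/* Invented illustration of packing two 4-bit decimal digits into one byte. */
#include <stdio.h>

static unsigned char pack(int high_digit, int low_digit) {
    return (unsigned char)((high_digit << 4) | low_digit);
}

int main(void) {
    unsigned char byte = pack(4, 2);                    /* the digits 4 and 2 */
    printf("stored value: %d\n", byte);                 /* 0x42 = 66          */
    printf("digits: %d %d\n", byte >> 4, byte & 0x0F);  /* unpacks to 4 2     */
    printf("combinations per byte: %d\n", 1 << 8);      /* 256                */
    return 0;
}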

IBM represented characters using EBCDIC (extended binary coded decimal interchange code), an extension of a code developed for punched card equipment. It was well designed and offered room for future expansion, but it was not compatible with the ASCII standard adopted by the American National Standards Institute in 1963. ASCII standardized only seven bits, not eight. Punched paper tape was still in common use, and the committee felt that punching eight holes across a standard piece of tape would weaken it too much. The computing world split, with EBCDIC used at IBM and ASCII …

Page 16

System/360 was designed for technical computing as well as business data processing. In both markets, IBM’s fast and rugged chain printers were a crucial selling point.

Page 18

In the end, IBM settled on two operating systems. Most users ran DOS, the disk operating system. Created in just a year by a separate team, it threw out the grand goals of OS/360 in order to provide efficient batch mode processing on the smaller System/360 machines that took over from 1401s as the workhorses of data processing. DOS quickly became the world’s most widely used operating system.

Page 18

Although System/360 was intended to work equally well for scientific and data processing applications, it was much more successful for data processing.

Page 19

Computers had been sold to American business by people like General Electric’s Roddy Osborn as the basis of a managerial revolution. In reality they were usually put to work to speed up jobs already being carried out with punched card machines.

Page 19

Handling data on disk was much more complicated than working with tape. Imagine that a disk holds one hundred thousand customer records. The big benefit of disk storage is random access—a program can request data from any part of the disk. But how does the program know where to find the desired record without having to read its entire contents? Pioneers came up with a variety of methods for structuring and indexing records held on disk, such as hashing, inverted files, and linked lists. Each was hard for a typical programmer to understand and implement.
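Of the indexing methods named, hashing is the easiest to sketch. In this invented example the customer number is hashed to a bucket address, so a lookup touches one bucket (plus any overflow probes) instead of scanning the whole file; the “disk” is just an array, and the record layout, hash function, and collision handling are illustrative only.

/* Invented sketch of hashed record lookup: hash the key to a bucket address. */
#include <stdio.h>
#include <string.h>

#define BUCKETS 101                        /* a prime number of buckets      */

struct record { int customer_no; char name[24]; };
static struct record disk[BUCKETS];        /* stand-in for a disk file       */

static int bucket_for(int customer_no) { return customer_no % BUCKETS; }

static void store(int customer_no, const char *name) {
    int b = bucket_for(customer_no);
    while (disk[b].customer_no != 0)        /* linear probing on collision   */
        b = (b + 1) % BUCKETS;
    disk[b].customer_no = customer_no;
    strncpy(disk[b].name, name, sizeof disk[b].name - 1);
}

static struct record *find(int customer_no) {
    int b = bucket_for(customer_no);
    while (disk[b].customer_no != 0) {      /* an empty bucket ends the probe */
        if (disk[b].customer_no == customer_no) return &disk[b];
        b = (b + 1) % BUCKETS;
    }
    return NULL;                            /* not on file                   */
}

int main(void) {
    store(48213, "J. Smith");
    store(90314, "A. Jones");
    struct record *r = find(90314);
    printf("%s\n", r ? r->name : "missing");    /* prints "A. Jones" */
    return 0;
}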

Page 22

The software industry started when companies providing programming services, reusing bits of code from one project to the next, realized that they could produce generalized programs able to handle the needs of multiple companies.

Page 27

In 1969, the firm announced its intention to unbundle its software packages and customer education from its hardware by charging for them separately.

The computer becomes a real-time control system

ENIAC had been built to calculate the trajectories of shells in order to produce printed firing tables.

Page 2

Digital computers processed data encoded as digits stored in their registers and memory units. They were programmed to execute sequences of mathematical and logical operations needed for a particular task—adding, subtracting, multiplying, and dividing the numbers and shuffling them around in memory.

Page 2

SAGE introduced more fundamentally new features to computing than any other project of its era, including networking computers and sensors and the development of interactive computer graphics.

Page 5

DEC not only permitted modification by its customers, it encouraged it. The tiny company could not afford to develop the specialized interfaces, installation hardware, and software that were needed to turn a general-purpose computer into a useful product. Its customers welcomed the opportunity.29 DEC soon began publishing detailed specifications about the inner workings of its products and it distributed them widely. Stan Olsen said he wanted the equivalent of “a Sears Roebuck catalog” for Digital’s products, with plenty of tutorial information on how to hook them up to industrial or laboratory equipment.30 DEC printed these manuals on cheap newsprint, distributing them liberally to potential custom…

Page 10

Its limited memory steered programmers away from high-level programming languages, toward assembly language or even machine code. Yet the simplicity of the PDP-8’s architecture, coupled with DEC’s policy of making information freely available, made it an easy computer to underst…

Page 11

The first steps in miniaturization involved printed circuit boards, which were used to eliminate wires and pack components more closely together by etching a pattern on a plastic board covered with copper or some other electrical conductor.

Page 15

Printed circuits found their first civilian applications in hearing aid production, where miniaturization and portability had long been crucial.

Page 15

For the PDP-8, DEC relied on flip-chip modules: small printed circuit boards on which transistors, resistors, and other components were mounted. These in turn were plugged into a hinged chassis that opened like a book. The result was a system consisting of a processor, a control panel, and core memory in a package small enough to be embedded into other equip…

Page 15

For Noyce the invention of the IC was less the result of a sudden flash of insight than the result of a gradual build-up of engineering knowledge at Fairchild about materials, fabrication, and circuits since the company’s founding in 1957.

Page 16

The relationship between printing, photography, and microelectronics has been a close one.

Page 18

Apollo software was written on mainframes and translated into binary data. The code was wired into read-only core ropes: long strings of wires woven through magnetic cores, which stored a binary one if a wire went through it, or a zero if the wire went around it. Hamilton described the technique of scrubbing the code as the Auge Kugel method: the term was (incorrect) German for eyeball.56 In other words, one looked at the code and decided if it was correct or not. NASA was not sure whether the academic culture of MIT was disciplined enough for the job. It had the Instrumentation Lab print out the code listings on reams of fan-fold paper and ship them to TRW in Los Angeles, where John Norton, a TRW employee, would scrutinize the code and indicate any anomalies he found. Some at MIT resented Norton’s intrusion, but he did manage to find problems with the code.57 Hamilton was known as the “rope mother.” She kept a sense of humor, calling some of the anomalies in the code FLTs or “funny little things,” and the women who wired the ropes at the Raytheon plant in suburban Boston, LOLs—“little old ladies.” Many of them had worked at nearby Waltham Watch, part of a long tradition of female labor in precision manufacturi…

Page 22

In the early 1960s, aerospace needs for powerful, light, and miniaturized electronics drove dramatic improvements in the state of the art. New technologies such as integrated circuits found their first applications in missiles and space rockets. However, once those technologies were adopted by other industries, space applications became more conservat…

Page 24

To support the Shuttle and Apollo programs NASA pioneered fly-by-wire technology. Existing aircraft ran hydraulic lines from controls, such as the throttle, all the way to the engines, rudder, and other controls. NASA flew the first digital fly-by-wire system in 1972. Computer control meant sending digital signals down wires, to circuits controlling electr…

The computer becomes an interactive tool

A prototype for what they called the compatible time-sharing system (CTSS) was working by the end of 1961 on MIT’s IBM 709. With no disk drive, they had to dedicate a tape drive to each user. That limited the system to four simultaneous users, each operating a modified Flexowri…

Page 6

Users each received a private area of the disk, known as a directory, to store files between sessions.

Page 7

CTSS provided the main system for student computing at MIT until 1969. MIT users developed many new applications to exploit the possibilities of timesharing. They built text editors, so that programs could be written and edited interactively on computer terminals instead of punched onto cards, and created a range of programming tools and languages optimized for online u…

Page 7

Timesharing’s spread was helped by the arrival in the mid-1960s of cheap and reliable teletype terminals based on the new ASCII standard for character encoding–a contrast with the expensive, special-purpose terminal equipment built for systems like SAGE and SABRE. Timesharing systems and interactive minicomputers were often used with a new device from the Teletype Corporation, the Model 33 ASR (automatic send-receive), shown in figure 5.4…

Page 8

The Model 33 was cheaper, simpler, and more rugged than the Flexowriter used by earlier small computers. It functioned at up to ten characters a second, working either as a typewriter that sent key codes directly to a computer or in offline mode to punch those codes onto paper tape. It came to symbolize the minicomputer era and the beginning of the personal computer era that followed it. The Control and Escape keys still found on keyboards today owe their ubiquity to the Model 33.

Page 8

Dartmouth initially used a General Electric 235 computer connected to a smaller GE Datanet computer, which controlled Teletype terminals across the campus. Computer assignments were incorporated into the core curriculum, particularly first-year mathematics courses. Dartmouth built up a large library of shared programs, including popular games such as a foo…

Page 10

The most fundamental challenge in the development of any timesharing system was handling the unpredictable demands made by its users. Trying to support more users and capabilities made this dramatically harder.

Page 13

and written into textbooks for the emerging discipline of computer science, such as Denning’s own Operating Systems Theory (with E.

Page 14

…in the 1960s, the processor of a machine like the GE 645 was not a chip or even a board. It filled perhaps a hundred circuit boards spread over many cabinets, stuffed with thousands of electronic components connected by miles of wire. Even the architectural modifications made for timesharing meant rewiring large sections of the machine and adding cabinets holding circuit boards for things like dynamic address translation. Adding a second processor meant wiring the central parts of an entire second computer to a shared bank of core me…

Page 15

The GE 645 and IBM Model 360/67 were not the first computers to support multi-processor configurations. But timesharing was the first compelling application for it.

Page 15

The most popular successor to the single processor Cray 1 was the dual processor Cray X-MP. This was the world’s fastest computer from its launch in 1982 to 1985, when the Cray 2, with four processors, was introduced. Supercomputers could really be justified only for organizations like Los Alamos or the National Center for Atmospheric Research with huge individual jobs. Harnessing that power forced programmers to split these application programs into separate parts, called threads, that could run simultaneously on different processors, communicating with each other to coordinate their work. Like the other architectural innovations pioneered by supercomputers, that approach eventually made its way from supercomputers to minicomputers, workstations, personal computers, and eventu…
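A small POSIX-threads sketch of that approach (invented data, not an example from the book): the job is split into threads that can run on different processors at once, each summing part of an array, and the partial results are combined when the threads finish. Build with the platform’s threads library (for example, cc file.c -pthread).

/* Invented sketch: split one job into threads and combine their results. */
#include <pthread.h>
#include <stdio.h>

#define N       1000000
#define THREADS 4

static double data[N];

struct chunk { int start, end; double partial_sum; };

static void *sum_chunk(void *arg) {
    struct chunk *c = arg;
    c->partial_sum = 0.0;
    for (int i = c->start; i < c->end; i++)
        c->partial_sum += data[i];
    return NULL;
}

int main(void) {
    for (int i = 0; i < N; i++) data[i] = 1.0;    /* so the total should be N */

    pthread_t threads[THREADS];
    struct chunk chunks[THREADS];
    for (int t = 0; t < THREADS; t++) {           /* split the job into parts */
        chunks[t].start = t * (N / THREADS);
        chunks[t].end   = (t + 1) * (N / THREADS);
        pthread_create(&threads[t], NULL, sum_chunk, &chunks[t]);
    }

    double total = 0.0;
    for (int t = 0; t < THREADS; t++) {           /* wait and combine results */
        pthread_join(threads[t], NULL);
        total += chunks[t].partial_sum;
    }
    printf("total = %.0f\n", total);              /* prints 1000000 */
    return 0;
}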

Page 15

The timesharing community was small, and developers and code both moved between installations. For example, all timesharing systems needed an online text editor. Dennis Ritchie later noted that the editor code used at Bell Labs in the late 1970s could be traced back to Lampson and Deutsch’s QED for the SDS-940. QED had also been adapted for CTSS at MIT and for Multics.

Page 16

Using a PDP-10 could not only be fun but addictive: it was no accident that it was the computer on which Adventure—perhaps the longest-lived of all computer games—was written by Will Crowther at BBN.

Pages 18-19

This almost exclusive focus on what would later be called systems software may surprise readers who are used to thinking of software as a synonym for computer program. This was not the case in the 1960s. Software first became part of the computing lexicon in the early 1960s, to complement hardware by describing the other “wares” sold to computer use…

Page 20

The name Unix, pronounced almost identically with “eunuchs,” humorously signaled a stripped-down or “castrated” substitute for Multics.

Page 24

Unix had short commands, quick to type on a slow teletype, a compact kernel to leave lots of memory free for user programs, and a pervasive stress on efficiency. A lot of ideas from Multics were reimplemented in Unix using much simpler mechanisms. These included its hierarchical file system, the idea of a separate program (called the “shell”) to interact with users and interpret their commands, and aspects of its approaches to input and output.
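A stripped-down sketch of what “shell” means here: an ordinary user program that reads a command line, splits it into words, asks the kernel to run the named program, and waits for it to finish. It uses the standard Unix calls fork, execvp, and waitpid; pipes, redirection, and real error handling are omitted.

/* Invented sketch of a command interpreter: read a line, run the program. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    char line[256];
    char *argv[32];

    for (;;) {
        printf("$ ");                                /* the prompt             */
        if (fgets(line, sizeof line, stdin) == NULL) break;   /* end of input  */

        int argc = 0;                                /* split line into words  */
        for (char *word = strtok(line, " \t\n"); word && argc < 31;
             word = strtok(NULL, " \t\n"))
            argv[argc++] = word;
        argv[argc] = NULL;
        if (argc == 0) continue;                     /* empty line             */

        pid_t pid = fork();                          /* one process per command */
        if (pid == 0) {
            execvp(argv[0], argv);                   /* replace child with cmd  */
            perror(argv[0]);                         /* only reached on error   */
            exit(1);
        }
        waitpid(pid, NULL, 0);                       /* wait for it to finish   */
    }
    return 0;
}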

Page 25

Once Unix was rewritten in C it was easier to port it to other computers. Instead of writing a whole operating system, all that was needed was a C compiler able to generate code in the new machine’s language and some work to tweak the Unix kernel and standard libraries to accommodate its q…

Page 26

C was optimized for operating systems programming. C code can do almost anything that assembly language can but is easier to write and structure.

The computer becomes a communications platform

The computer becomes a personal plaything