Nelson and Terras 2012

From Whiki

Nelson, Brent and Melissa Terras, eds. Digitizing Medieval and Early Modern Material Culture. Tempe: Iter, 2012.

Introduction, by Brent Nelson and Melissa Terras (1-20)

"complementary relationship between object- and text-based evidence" (3)

Richard Grassby: etic analysis (attending to the material object itself, attributes) vs emic analysis (studying the significance of the object to the humans who interacted with it, social context) (4-5)

Beyond Remediation: The Role of Textual Studies in Implementing New Knowledge Environments, by Alan Galey, Brent Nelson, Richard Cunningham, Ray Siemens (21-48)

"Culture is not a transmissible thing, to be passed on like old taxidermy whether the next generation wants it or not, but a network of imaginative investments that cannot be contained within material artefacts, yet cannot be understood without them." (22)

Tanselle -- coming out of Greg-Bowers tradition; "denies that the electronic medium can fundamentally alter his field" (28); difference between WORK, DOCUMENT, WITNESS, REPRODUCTION, COPY; "For Tanselle, a change in the medium of the work's reproduction from book to screen makes no difference to his foundational distinction; however, what Tanselle does not allow is that our conception of works as ineluctable entities may depend at least in part on an effect of the still dominant medium for reproducing these works, namely the fixity of print that emerged only a little more than a century ago." (28)

"What distinguished the electronic edition from the bibliographic one may not then be any of the former's single features, but instead its capacity simultaneously to be more than one kind of edition." (30)

reading of George Herbert's "Easter Wings" -- "the process of meaning-making at work here depends not upon a linear progression of one medium (printed text) subsuming another (illustration), but rather upon poetic effects made possible by different orders of information, thought, and experience all co-present within the same print artefact" (33)

Pierre Belon, De aquatilibus (1553), oblong quarto form to fit illustrations of fish

The Materiality of Markup and the TEI, by James Cummings (49-81)

More than was Dreamt of in Our Philosophy: Encoding Hamlet for the Shakespeare Quartos Archive, by Judith Siefring and Pip Willcox (83-111)

Digitizing Non-Linear Texts in TEI P5: The Case of the Early Modern Reversed Manuscript, by Angus Vine and Sebastiaan Verweij (113-136)

reversed manuscripts -- challenge the orientation of the screen in the digital edition

"The question that thus arises is how best to represent the conflicting textual structures that define the manuscript commonplace book and miscellany. Besides observing the integrity either of the artefact or of the text in the order in which it appears in the book, a third option presents itself; namely, a genetic approach, one that follows subsequent scribal stints in the order in which they have been written, and that may not simply be observed by turning the page. In other words, the editor must differentiate between the spatial relations of manuscript items on the one hand (i.e., how they physically appear on the page) and their temporal relations (which came first, which came later) on the other. Ideally, the editor would then record both sets of information simultaneously." (127)
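The spatial/temporal distinction the editors describe can be sketched as a small data model: each manuscript item carries both a physical position in the codex and a place in the sequence of scribal stints, and the edition can sort on either. This is a minimal illustration, not the chapter's encoding; all names and data are invented.

```python
# Record both orderings for items in a reversed manuscript:
# spatial (where the item sits in the book as it now stands) and
# temporal (the order of the scribal stints that produced it).
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    folio: int   # spatial relation: position in the codex
    stint: int   # temporal relation: order of writing

items = [
    Item("verse epistle", folio=3, stint=2),
    Item("account notes", folio=1, stint=1),  # entered from the reversed end
    Item("sermon notes", folio=2, stint=3),
]

spatial_order = sorted(items, key=lambda i: i.folio)
temporal_order = sorted(items, key=lambda i: i.stint)

print([i.title for i in spatial_order])
print([i.title for i in temporal_order])
```

Recording both keys on each item lets a digital edition present the artefact's page order and a genetic, stint-by-stint order from the same encoding.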

Palaeography and the 'Virtual Library' of Manuscripts, by Peter A. Stokes (137-169)

A Probabilistic Analysis of a Middle English Text, by Jacob Thaisen (171-200)

The Digitization of Bookbindings, by Athanasios Velios and Nicholas Pickwoad (201-228)

Digitizing Collection, Composition, and Product: Tracking the Work of Little Gidding, by Paul Dyck and Ryan Rempel, with Stuart Williams (229-256)

"we have found that XML and digital images offer an effective way not only of delivering information about material culture but, more importantly, of doing primary research on cultural objects. Our ideal method is digital from the ground up, aimed at a process rather than a product, or put another way, a product that is always produced by a scholarly process. An important correlative of this process is that, from beginning to end, our goal is to keep central the material object of study. On the back-end, then, we have a set of XML-encoded texts being dynamically combined in imitation of the material object, and on the front-end, we have the image of the object itself. In designing this interface, we are blurring the distinction between editing and presentation: we imagine the presentation interface as a version of the editing interface rather than as a new project." (229-230)
"The Ferrars' method shares with today's electronic compositional culture some important and mutually enlightening similarities. Most notable is the transportability of the text or image fragment and its redeployment in new compositions. While subsequent centuries witnessed the addition of images to printed texts and the assembly of various cut-outs in scrapbook form, the Little Gidding concordances stand out as serious books made entirely of fragments from other books. They are the apotheosis of the commonplace book; the proof that a most important text -- and simultaneously a rare object -- can be compiled out of commonly available materials. Since the textual transportability that they employ relies upon a vigorous system of textual encoding, they supply the model for their own digitization, allowing us in turn to model their material process of composing the page." (230)
"The research of the past few decades has brought to light the particularities of early modern book culture, and in particular, its profound mixture of what we tend to describe as manuscript and print practices. For all their oddity, the Little Gidding concordances can now be seen as strangely typical of their time: they bring together traditional practices of composition with the new experience of relatively cheap print. Rather than copying by hand, the makers used scissors and glue to form a new text out of the material of the source. In this way, the broadly circulated text of the Gospels was made into a private book and circulated as a manuscript might." (232)


"The Ferrars, in building their own multimedia mixes, would have likely understood their practice within the theory of commonplaces, the gathering of materials under common heads, allowing their employment to particular rhetorical ends. The concordances in fact use the word 'head' to describe the 150 chapters: these heads each contain a collection of materials. Unlike other early modern commonplace books, though, these have narrative profluence. They thus serve simultaneously as narrative and as a storehouse of materials, fit for reemployment toward emergent occasions." (236)
"this LG reference system -- what we might generatively think of as Little Gidding markup -- has been layered on top of the canonical markup of the biblical text, that is, the marking of the text into books, chapters, and verses. This is all to point out that the LG community began with one of the most structurally encoded texts available and then restructured it, using the biblical text's canonical markup as an ordering system, but also using typeface as markup." (236)

black letter and roman letter "culturally significant as codes of authority" -- had "previously been used as structural marks" in the King James Bible to indicate additional material, like chapter summaries (236)
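The idea of "typeface as markup" can be sketched by layering a rendition attribute on top of the canonical book/chapter/verse structure. The TEI-style `rend` attribute is used loosely here; element names and text are invented for illustration, not taken from the project's schema.

```python
# Encode typeface as markup: black letter vs roman as @rend values
# layered on the canonical verse structure of the biblical text.
import xml.etree.ElementTree as ET

verse = ET.Element("verse", n="Matt.1.1")
ET.SubElement(verse, "seg", rend="blackletter").text = "The booke of the generation"
ET.SubElement(verse, "seg", rend="roman").text = "[summary material]"

print(ET.tostring(verse, encoding="unicode"))
```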

"The goal of our project is to reveal the creative work of LG as primarily annotation: the material linking of a multitude of fragments that comment upon each other to form the text's multilinear readings. With a digital approach we can encode these annotations as a compositional structure and encode the fragments. This encoding does not produce a static facsimile, but rather scripts a process, allowing the user to virtually unmix the remix and, in seeing how the books were constructed and from what, to discover much about the LG methods, answering questions including what gospel sources were favoured over others, which images from a series were included or left out, and what choices were made in shaping the overall effect of the books." (237)

designing the DTD as they worked on the book, sometimes making several DTD changes a day (238)

"The XML editor served not to arrange previously gathered information within a previously established structure but as an arena in which to record information as that information was gathered and simultaneously to experiment with encoding structures." (238)
"In starting with a skeletal structure and then populating it, we have an opportunity to reconstruct in part the Little Gidding compositional process itself, in a highly computing-assisted way." (240)

gospelgrab -- Python program to pull modern-spelling KJV text

"This automated, high-speed re-enactment of the LG concordance-making process provided a base text (modern in spelling, punctuation, and other textual features) that we could then conform to the early modern text as found in the king's book, using an XML editor." (241)
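A hypothetical re-enactment of this step in Python (the language of gospelgrab): pull verses from a modern-spelling KJV source in the order the concordance arranges them, yielding a base text for later conformance. The verse store, function name, and interface below are assumptions for illustration, not the project's actual code.

```python
# gospelgrab-style assembler (hypothetical): build a base text by
# pulling verses in concordance order from a modern-spelling source.
KJV = {
    ("Matthew", 3, 13): "Then cometh Jesus from Galilee to Jordan ...",
    ("Mark", 1, 9): "And it came to pass in those days, that Jesus came ...",
}

def grab(order):
    """Return a base text built from verses in the given concordance order."""
    lines = []
    for book, chapter, verse in order:
        lines.append(f"{book} {chapter}:{verse} {KJV[(book, chapter, verse)]}")
    return "\n".join(lines)

base_text = grab([("Matthew", 3, 13), ("Mark", 1, 9)])
print(base_text)
```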

cuttings -- like a mosaic; "it does not disguise the pieces but rather demonstrates a kind of marvellous multiplicity. The pages of the concordance present both a singular object and a collection of materials standing dynamically, on the page -- dynamically, that is, in that they have each been placed there to a particular end. In this way, the edges (of each pasted segment) matter; they in fact make all the difference between this and a conventionally produced book. The edges, we argue, act rhetorically to suggest the multiple ways of reading described in the book's preface, indicating not a text that has been finally arranged, but an endlessly rearranging text." (243)

"When does modern scholarship go too far in flattening the text and how might it track the dynamics of the page?" (243)

"microfilm washes out almost all of the edges, effectively presenting not a cut-and-paste page, but rather a poorly printed conventional one." (243)

"No matter how clear, though, the photographs would remain two dimensional, not only missing the tactile sense of the original, but also giving the overall sense of the original as a modern book, a singular object rather than a collection of texts. On a practical level, a full printed photographic facsimile would also be prohibitively expensive." (244)

"The LG concordances provide us with an ideal opportunity to push digital tools and methodology deeper than we otherwise might, for they demand an edition made of many editions, in which one is always viewing at least two books at once. The achievement of the concordances is in their presentation of a single text-object via that collection of materials from many text-objects, a complexity that is always present to the reader through the physical make-up of the page. Rather than model this as a single XML file (albeit one with links to source materials), we are working back to something like our initial gospelgrab document, producing a document that contains very little content, but that instead records what pieces of what sources the Ferrars put where; that is to say, an order" (245)
"Our separation of the many sources that make up any LG concordance, effectively deconstructing it, allows the scholar to consider more thoroughly its construction, making immediately available the sources as much as possible in their own right, and making apparent the compositional choices of the Ferrars, as those choices were enacted on the sources ... they will also be used alongside the concordance, allowing the re-enactment of the Ferrars' snipping and arrangement [246] of them on the page." (245-6)
"The problem in principle with noting all the cuts in the order document is that this separates the action of cutting from the thing being cut. The issue is how to respect and represent the two kinds of information, that is, the design of the LG book including all of its cuts and the uncut primary materials. We want an edition that conserves both. We think that we can do this by marking the Ferrars' cuts in the images of the primary materials, but in non-obtrusive ways." (247)
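The "order document" described above can be sketched as an XML file that stores almost no text of its own: it only records which pieces of which sources went where, with cuts following the grain of the verse divisions. Element and attribute names here are invented, not the project's schema.

```python
# Sketch of an order document: a near-contentless record of what pieces
# of what sources the Ferrars put where; the texts themselves live in
# separately encoded source files.
import xml.etree.ElementTree as ET

page = ET.Element("page", n="12")
head = ET.SubElement(page, "head", n="3")
# Each <piece> points into a source; no transcription is stored here.
ET.SubElement(head, "piece", source="kjv-gospels", loc="Matt.3.13")
ET.SubElement(head, "piece", source="kjv-gospels", loc="Mark.1.9a")  # subverse cut

order_xml = ET.tostring(page, encoding="unicode")
print(order_xml)
```

Keeping the cuts with the (images of the) primary materials, and only the arrangement here, preserves both kinds of information the authors want to conserve.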
"Both the Ferrars' cuts and our subverses follow the grain of the biblical verse divisions, naturally extending them." (248)

"represent the dynamism inherent in the construction of the concordance itself. Perhaps techniques of animation can be applied to represent the integration of elements into the concordance even more dramatically." (251)

"even if we lacked computation altogether, XML would still be a better way philosophically of representing the LG books than any other method known to us." (252)
"The ad hoc XML anchors this structure in the most accurate description of the materials themselves. We also think that this suitability brings out the non-modern quality of any early text: early books are perhaps as much like present-day text and image bases as they are like present-day books. Since any edition, whether print or digital, is a representation, the digital edition has the advantage of making the material strange rather than making it familiar. Significantly, one of the indications to us that this project is worthwhile is the great extent to which it pushes us to rethink both the book and the text- and image-base, perhaps most clearly in the way that the material has pushed us not only to represent the product of this work in a user interface, but also to develop a photograph/XML interface as a working tool, one that allows us both to construct this edition and to model the original material process of the Ferrars." (253)

Vexed Impressions: Towards a Digital Archive of Broadside Ballad Illustrations, by Patricia Fumerton, Kris McAbee, Carl Stahmer, and Megan Palmer Browne (257-285)

A Virtual Museum or E-Research? British Printed Images to 1700 and the Digitization of Early Modern Prints, by Stephen Pigney and Katherine Hunt (287-312)

Rose Tools: A Medieval Manuscript Text-Image Annotation Project, by Christine McWebb and Diane Jakacki (313-334)

Digitization of Maps and Atlases and the Use of Analytical Bibliography, by Wouter Bracke, Benoit Pigeon, and Gerard Bouvin (335-362)

Between Text and Image: Digital Renderings of a Late Medieval City, by Paul Vetch, Keith Lilley, and Catherine Clarke (363-393)

Virtual Reality for Humanities Scholarship, by Lisa M. Snyder (395-428)

Simulating Splendour: Visual Modelling of Historical Jewellery Research, by David Humphrey (429-453)

Coinage, Digitization, and the World Wide Web: Numismatics and the COINS Project, by Jonathan Jarrett et al. (455-485)