

Transcription Guidelines

Posted: Nov 19, 2007 12:11;
Last Modified: May 23, 2012 19:05

---

The following is a list of typographical conventions to use when transcribing medieval manuscripts in my classes.



deletion

Strikethrough indicates the physical deletion of text in a witness. Deletion may be by any method (underlining, punctum delens, erasure, overwriting, etc.). You should indicate the precise method of deletion in a note at the end of your transcription. The deleted text is recorded whenever possible. If deleted text cannot be recovered, it is replaced by colons.

In HTML, you indicate strikethrough as follows: <strike>Text struck through</strike>.


\addition/

Upward sloping brackets indicate that the enclosed text has been added above the manuscript line. If a caret was used, this is indicated with a preceding comma or caret symbol (⁁): ⁁\addition above the line/.

|addition|

Vertical brackets indicate that the enclosed text has been inserted between existing characters within the manuscript line. Insertion is distinguished from overwriting (i.e. the conversion of one character to another or the addition of a new character in the space left by a previously deleted form).

{addition}

Brackets indicate that the enclosed text has been added over some pre-existing form. This addition may involve the conversion of one letter to another (for example, by the addition of an ascender), or the addition of new text in the place of a previous erasure. The overwritten text is treated as a deletion.

/addition\

Downward sloping brackets indicate that the enclosed text has been added below the manuscript line.

addition| or |addition

A single vertical bar indicates that the text has been added at the beginning or end of a manuscript line. Text preceded by a single vertical bar has been added at the end of a manuscript line. Text followed by a single vertical bar has been added at the beginning of a manuscript line. Text between two vertical bars has been added “within the line” (i.e. between pre-existing letters or words).

damage

Underlining indicates that text has been damaged. When damaged text is unclear or illegible, additional symbols are used.

In HTML, you indicate text is underlined as follows: <u>Underlined text</u>.


〈unclear〉

Angle brackets indicate that the enclosed text is unclear for some physical reason (e.g. rubbing, flaking, staining, poorly executed script).

In HTML, there is a distinction between angle brackets (〈 and 〉) and the greater-than and less-than signs (> and <). If you use the greater-than and less-than signs, your text will not display properly, as the browser will interpret it as an HTML tag; if you need those characters themselves, use the character entities &gt; and &lt;.


[supplied] or [emended]

Square brackets indicate that the enclosed text is being supplied or emended. “Supplied text” refers to the hypothetical restoration of original readings from a specific witness that have become lost or illegible due to some physical reason. “Emended text” refers to the replacement of legible text from extant witnesses by a modern editor or transcriber.

::

Colons represent text that is completely effaced or illegible. The number of colons used corresponds roughly to the number of letters the transcriber believes are missing. Note that colons are used for text that was in the manuscript but is now physically missing or illegible due to erasure or other damage. They are not used to indicate text that has not been copied into the manuscript but appears in other versions.
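For transcripts submitted as web pages, some of these conventions can be converted to HTML mechanically. The following Python sketch is illustrative only and is not part of the guidelines: it assumes that, in a plain-text draft, deletions are typed between hyphens (-text-) and damaged text between underscores (_text_), since strikethrough and underlining cannot be typed directly.

```python
import html
import re

# Illustrative sketch only. Assumes plain-text stand-ins for formatting that
# cannot be typed: -text- for deletions (strikethrough) and _text_ for
# damaged (underlined) text. The bracket conventions (\ /, | |, { }, [ ])
# pass through unchanged.
def transcript_to_html(text: str) -> str:
    text = html.escape(text)  # protect any literal < or > from the browser
    text = re.sub(r"-([^-]+)-", r"<strike>\1</strike>", text)  # deletion
    text = re.sub(r"_([^_]+)_", r"<u>\1</u>", text)            # damage
    return text

print(transcript_to_html("heofonrices _weard_ -metodes- mihte"))
# heofonrices <u>weard</u> <strike>metodes</strike> mihte
```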

----  

Back to the future: What digital editors can learn from print editorial practice.

Posted: Feb 09, 2007 18:02;
Last Modified: May 23, 2012 19:05

---

A version of this essay was published in Literary and Linguistic Computing.

Digital Editing and Contemporary Textual Studies

The last decade or so has proven to be a heady time for editors of digital editions. With the maturation of the digital medium and its application to an ever increasing variety of cultural objects, digital scholars have been led to consider their theory and practice in fundamental terms (for a recent collection of essays, see Burnard, O’Keeffe, and Unsworth 2006). The questions they have asked have ranged from the nature of the editorial enterprise to issues of academic economics and politics; from problems of textual theory to questions of mise-en-page and navigation: What is an Edition? What kinds of objects can it contain? How should it be used? Must it be critical? Must it have a reading text? How should it be organised and displayed? Can intellectual responsibility be shared among editors and users? Can it be shared across generations of editors and users? While some of these questions clearly are related to earlier debates in print theory and practice, others involve aspects of the production of editions not relevant to or largely taken for granted by previous generations of print-based editors.

The answers that have developed to these questions at times have involved radical departures from earlier norms1. The flexibility inherent to the electronic medium, for example, has encouraged editors to produce editions that users can manipulate interactively, displaying or suppressing different types of readings, annotation, and editorial approaches, or even navigate in rudimentary three-dimensional virtual reality (e.g. Railton 1998-; Foys 2003; O’Donnell 2005a; Reed Kline 2001; Ó Cróinín nd). The relatively low production, storage, and publication costs associated with digital publication, similarly, have encouraged the development of the archive as the de facto standard of the genre: users of digital editions now expect to have access to all the evidence used by the editors in the construction of their texts (assuming, indeed, that editors actually have provided some kind of mediated text): full text transcriptions, high-quality facsimiles of all known witnesses, and tools for building alternate views of the underlying data (e.g. Kiernan 1999/2003; Robinson 1996). There have been experiments in editing non-textual objects (Foys 2003; Reed Kline 2001), in producing image-based editions of textual objects (Kiernan 1999/2003), and in recreating digitally aspects of the sensual experience users might have had in consulting the original objects (British Library nd). There have been editions that radically decenter the reading text (e.g. Robinson 1996), and editions that force users to consult their material using an editorially imposed conceit (Reed Kline 2001). Even elements carried over from traditional print practice have come in for experimentation and redesign: the representation of annotation, glossaries, or textual variation, for example, is rarely the same in any two electronic editions, even in editions published by the same press (see O’Donnell 2005b, § 5)2.

Much of the impetus behind this theoretical and practical experimentation has come from developments in the wider field of textual and editorial scholarship, particularly the work of the book historians, new philologists, and social textual critics who came into prominence in the decade preceding the publication of the earliest modern digital editorial projects (e.g. McKenzie 1984/1999; McGann 1983/1992; Cerquiglini 1989; Nicols 1990; for a review see Greetham 1994, 339-343). Despite significant differences in emphasis and detail, these approaches are united by two main characteristics: a broad interest in the editorial representation of variance as a fundamental feature of textual production, transmission, and reception; and opposition to earlier, intentionalist, approaches that privileged the reconstruction of a hypothetical, usually single, authorial text over the many actual texts used and developed by historical authors, scribes, publishers, readers, and scholars. Working largely before the revolution in Humanities Computing brought on by the development of structural markup languages and the popularity of the Internet, these scholars nevertheless often expressed themselves in technological terms, calling for changes in the way editions were printed and organised (see, for example, the call for a loose leaf edition of Chaucer in Pearsall 1985) or pointing to the then largely incipient promise of the new digital media for representing texts as multiforms (e.g. McGann 1994; Shillingsburg 1996).

Digital Editing and Print Editorial Tradition

A second, complementary, impetus for this experimentation has been the sense that the digital editorial practice is, or ought to be, fundamentally different from and even opposed to that of print. This view is found to a greater or lesser extent in both early speculative accounts of the coming revolution (e.g. McGann 1994; the essays collected in Finneran 1996 and Landow and Delaney 1993) and subsequent, more sober and experienced discussions of whether digital practice has lived up to its initial promise (e.g. Robinson 2004, 2005, 2006; Karlsson and Malm 2004). It is characterised both by a sense that many intellectual conventions found in print editions are at their root primarily technological in origin, and that the new digital media offer what is in effect a tabula rasa upon which digital editors can develop new and better editorial approaches and conventions to accommodate the problems raised by textual theorists of the 1980s and 1990s.

Of course in some cases, this sense that digital practice is different from print is justified. Organisational models such as the Intellectual Commons or Wiki have no easy equivalent in print publication (O’Donnell Forthcoming). Technological advances in our ability to produce, manipulate, and store images cheaply, likewise, have significantly changed what editors and users expect editions to tell them about the primary sources. The ability to present research interactively has opened up rhetorical possibilities for the representation of textual scholarship difficult or impossible to realise in the printed codex.

But the sense that digital practice is fundamentally different from print has been also at times more reactionary than revolutionary. If digital theorists have been quick to recognise the ways in which some aspects of print editorial theory and practice have been influenced by the technological limitations of the printed page, they have been also at times too quick to see other, more intellectually significant aspects of print practice as technological quirks. Textual criticism in its modern form has a history that is now nearly 450 years old (see Greetham 1994, 313); seen more broadly as a desire to produce “better” texts (however “better” is defined at the moment in question), it has a history stretching back to the end of the sixth century BCE and is “the most ancient of scholarly activities in the West” (Greetham 1994, 297). The development of the critical edition over this period has been as much an intellectual as a technological process. While the limitations of the printed page have undoubtedly dictated the form of many features of the traditional critical edition, centuries of refinement—by trial-and-error as well as outright invention—also have produced conventions that transcend the specific medium for which they were developed. In such cases, digital editors may be able to improve upon these conventions by recognising the (often unexpressed) underlying theory and taking advantage of the superior flexibility and interactivity of the digital medium to improve their representation.

The Critical Text in a Digital Age

Perhaps no area of traditional print editorial practice has come in for more practical and theoretical criticism than the provision of synthetic, stereotypically eclectic, reading texts3. Of course this criticism is not solely the result of developments in the digital medium: suspicion of claims to definitiveness and privilege is, after all, perhaps the most characteristic feature of post-structuralist literary theory. It is the case, however, that digital editors have taken to avoiding the critical text with a gusto that far outstrips that of their print colleagues. It is still not unusual to find a print edition with some kind of critical text; the provision of similarly critical texts in digital editions is far less common. While most digital projects do provide some kind of top-level reading text, few make any strong claims about this text’s definitiveness. More commonly, as in the early ground breaking editions of the Canterbury Tales Project (CTP), the intention of the guide text is, at best, to provide readers with some way of organising the diversity without making any direct claim to authority (Robinson nd):

We began… work [on the CTP] with the intention of trying to recreate a better reading text of the Canterbury Tales. As the work progressed, our aims have changed. Rather than trying to create a better reading text, we now see our aim as helping readers to read these many texts. Thus from what we provide, readers can read the transcripts, examine the manuscripts behind the transcripts, see what different readings are available at any one word, and determine the significance of a particular reading occurring in a particular group of manuscripts. Perhaps this aim is less grand than making a definitive text; but it may also be more useful.

There are some exceptions to this general tendency—both in the form of digital editions that are focussed around the provision of editorially mediated critical texts (e.g. McGillivray 1997; O’Donnell 2005a) and projects, such as the Piers Plowman Electronic Archive (PPEA), that hope ultimately to derive such texts from material collected in their archives. But even here I think it is fair to say that the provision of a synthetic critical text is not what most digital editors consider to be the really interesting thing about their projects. What distinguishes the computer from the codex and makes digital editing such an exciting enterprise is precisely the ability the new medium gives us for collecting, cataloguing, and navigating massive amounts of raw information: transcriptions of every witness, collations of every textual difference, facsimiles of every page of every primary source. Even when the ultimate goal is the production of a critically mediated text, the ability to archive remains distracting4.

In some areas of study, this emphasis on collection over synthesis is perhaps not a bad thing. Texts like Piers Plowman and the Canterbury Tales have such complex textual histories that they rarely have been archived in any form useful to the average scholar; in such cases, indeed, the historical tendency—seen from our post-structuralist perspective—has been towards over-synthesis. In these cases, the most popular previous print editions were put together by editors with strong ideas about the nature of the textual history and/or authorial intentions of the works in question. Their textual histories, too, have tended to be too complex for easy presentation in print format (e.g. Manly and Rickert 1940). Readers with only a passing interest in these texts’ textual history have been encouraged implicitly or explicitly to leave the question in the hands of experts.

The area in which I work, Old English textual studies, has not suffered from this tendency in recent memory, however. Editions of Old English texts historically have tended to be under- rather than over-determined, even in print (Sisam 1953; Lapidge 1994, 1991). In most cases, this is excused by the paucity of surviving witnesses. Most Old English poems (about 97% of the known canon) survive in unique manuscripts (O’Donnell 1996a; Jabbour 1968; Sisam 1953). Even when there is more primary material, Anglo-Saxon editors work in a culture that resists attempts at textual synthesis or interpretation, preferring parallel-text or single-witness manuscript editions whenever feasible and limiting editorial interpretation to the expansion of abbreviations, word-division, and metrical layout, or, in student editions, the occasional normalisation of unusual linguistic and orthographic features (Sisam 1953). One result of this is that print practice in Anglo-Saxon studies over the last century or so has anticipated to a great extent many of the aspects that in other periods distinguish digital editions from their print predecessors.

Cædmon’s Hymn: A Case Study

The scholarly history of Cædmon’s Hymn, a text I have recently edited for the Society for Early English and Norse Electronic Texts series (O’Donnell 2005a), is a perfect example of how this tendency manifests itself in Old English studies. Cædmon’s Hymn is the most textually complicated poem of the Anglo-Saxon period, and, for a variety of historical, literary, and scholarly reasons, among the most important: it is probably the first recorded example of sustained poetry in any Germanic language; it is the only Old English poem for which any detailed account of its contemporary reception survives; and it is found in four recensions and twenty-one medieval manuscripts, a textual history which can be matched in numbers, but not complexity, by only one other vernacular Anglo-Saxon poem (the most recent discussion of these issues is O’Donnell 2005a).

The poem also has been well studied. Semi-diplomatic transcriptions of all known witnesses were published in the 1930s (Dobbie 1937)5. Facsimiles of the earliest manuscripts of the poem (dating from the mid-eighth century) have been available from various sources since the beginning of the twentieth century (e.g. Dobiache-Rojdestvensky 1928) and were supplemented in the early 1990s by a complete collection of high quality black and white photos of all witnesses in Fred C. Robinson and E.G. Stanley’s Old English Verse Texts from Many Sources (1991). Articles and books on the poem’s transmission and textual history have appeared quite regularly for over a hundred years. The poem has been at the centre of most debates about the nature of textual transmission in Anglo-Saxon England since at least the 1950s. Taken together, the result of this activity has been the development of an editorial form and history that resembles contemporary digital practice in everything but its medium of production and dissemination. Indeed, in producing a lightly mediated, witness- and facsimile-based archive, constructed over a number of generations by independent groups of scholars, Cædmon’s Hymn textual criticism even anticipates several recent calls for the development of a new digital model for collective, multi-project and multi-generational editorial work (e.g. Ore 2004; Robinson 2005).

The print scholarly history of the poem anticipates contemporary digital practice in another way as well: until recently, Cædmon’s Hymn had never been the subject of a modern critical textual edition. The last century has seen the publication of a couple of student editions of the poem (e.g. Pope and Fulk 2001; Mitchell and Robinson 2001), and some specialised reconstructions of one of the more corrupt recensions (Cavill 2000, O’Donnell 1996b, Smith 1938/1978, Wuest 1906). But there have been no critical works in the last hundred years that have attempted to encapsulate and transmit in textual form what is actually known about the poem’s transmission and recensional history. The closest thing to a standard edition for most of this time has been a parallel text edition of the Hymn by Elliott Van Kirk Dobbie (1942). Unfortunately, in dividing this text into Northumbrian and West-Saxon dialectal recensions, Dobbie produced an edition that ignored his own previous and never renounced work demonstrating that such dialectal divisions were less important than other distinctions that cut across dialectal lines (Dobbie 1937)6.

The Edition as Repository of Expert Knowledge

The problem with this approach—to Cædmon’s Hymn or any other text—should be clear enough. On the one hand, the poem’s textual history is, by Anglo-Saxon standards, quite complex and the subject of intense debate by professional textual scholars. On the other, the failure until recently to provide any kind of critical text representing the various positions in the debate has all but hidden the significance of this research—and its implications for work on other aspects of the Hymn—from the general reader. Instead of being able to take advantage of the expert knowledge acquired by editors and textual scholars of the poem over the last hundred years, readers of Cædmon’s Hymn have been forced either to go back to the raw materials and construct their own texts over and over again or to rely on a standard edition that misrepresents its own editor’s considered views of the poem’s textual history.

This is not an efficient use of these readers’ time. As Kevin Kiernan has argued, the textual history of Cædmon’s Hymn is not a spectacle for casual observers (Kiernan 1990), and most people who come to study Cædmon’s Hymn are not interested in collating transcriptions, deciphering facsimiles, and weighing options for grouping the surviving witnesses. What they want is to study the poem’s sources and analogues, its composition and reception, its prosody, language, place in the canon, significance in the development of Anglo-Saxon Christianity, or usefulness as an index in discussions of the position of women in Anglo-Saxon society—that is, all the other things we do with texts when we are not studying their transmission. What these readers want—and certainly what I want when I consult an edition of a work I am studying for reasons other than its textual history—is a text that is accurate, readable, and hopefully based on clearly defined and well-explained criteria. They want, in other words, to be able to take advantage of the expert knowledge of those responsible for putting together the text they are consulting. If they don’t like what they see, or if the approach taken is not what they need for their research, then they may try to find an edition that is better suited to their particular needs. But they will not—except in extreme cases I suspect—actually want to duplicate the effort required to put together a top-quality edition.

The Efficiency of Print Editorial Tradition

The failure of the print editors of Cædmon’s Hymn over the last hundred years to provide a critical-editorial account of their actual knowledge of the poem is very much an exception that proves the rule. For in anticipating digital approaches to textual criticism and editorial practice, textual scholars of Cædmon’s Hymn have, ironically, done a much poorer job of supplying readers with information about their text than the majority of their print-based colleagues have done for texts in other periods.

This is because, as we shall see, the dissemination of expert knowledge is something that print-based editors are generally very good at. At a conceptual level, print approaches developed over the last several hundred years to the arrangement of editorial and bibliographic information in the critical edition form an almost textbook example for the parsimonious organisation of information about texts and witnesses. While there are technological and conventional limitations to the way this information can be used and presented in codex form, digital scholars would be hard pressed to come up with a theoretically more sophisticated or efficient organisation for the underlying data.

Normalisation and Relational Database Design

Demonstrating the efficiency of traditional print practice requires us to make a brief excursion into questions of relational database theory and design7. In designing a relational database, the goal is to generate a set of relationship schemas that allow us to store information without unnecessary redundancy but in a form that is easily retrievable (Silberschatz, Korth, and Sudarshan 2006, 263). The relational model organises information into two-dimensional tables, each row of which represents a relationship among associated bits of information. Complex data commonly requires the use of more than one set of relations or tables. The key thing is to avoid complex redundancies: in a well designed relational database, no piece of information that logically follows from any other should appear more than once8.

The process used to eliminate redundancies and dependencies is known as normalisation. When data has been organised so that it is free of all such inefficiencies, it is usually said to be in third normal form. How one goes about doing this can be best seen through an example. The following is an invoice from a hypothetical book store (adapted from Krishna 1992, 32):

Invoice: JJSmith0001
Customer ID: JJS01
Name: Jane J. Smith
Address: 323 Fifteenth Street S., Lethbridge, Alberta T1K 5X3.
| ISBN          | Author           | Title                                                      | Price  | Quantity    | Item Total |
|---------------|------------------|------------------------------------------------------------|--------|-------------|------------|
| 0-670-03151-8 | Pinker, Stephen  | The Blank Slate: The Modern Denial of Human Nature         | $35.00 | 1           | $35.00     |
| 0-8122-3745-5 | Burrus, Virginia | The Sex Lives of Saints: An Erotics of Ancient Hagiography | $25.00 | 2           | $50.00     |
| 0-7136-0389-5 | Dix, Dom Gregory | The Shape of the Liturgy                                   | $55.00 | 1           | $55.00     |
|               |                  |                                                            |        | Grand Total | $140.00    |

Describing the information in this case in relational terms is a three-step process. The first step involves identifying what it is that is to be included in the data model by extracting database field names from the document’s structure. In the following, parentheses are used to indicate information that can occur more than once on a single invoice:

Invoice: invoice_number, customer_id, customer_name, customer_address, (ISBN, author, title, price, quantity, item_total), grand_total

The second step involves extracting fields that contain repeating information and placing them in a separate table. In this case, the repeating information involves bibliographical information about the actual books sold (ISBN, author, title, price, quantity, item_total). The connection between this new table and the invoice table is made explicit through the addition of an invoice_number key that allows each book to be associated with a specific invoice9:

Invoice: invoice_number, customer_id, customer_name, customer_address, grand_total

Invoice_Item: invoice_number, ISBN, author, title, price, quantity, item_total

The final step involves removing functional dependencies within these two tables. In this database, for example, information about a book’s author, title, and price is functionally dependent on its ISBN: for each ISBN, there is only one possible author, title, and price. Likewise customer_id is associated with only one customer_name and customer_address. These dependencies are eliminated by placing the dependent material in two new tables, Customer and Book, which are linked to the rest of the data by the customer_id and ISBN keys respectively.

At this point the data is said to be in third normal form: we have four sets of relations, none of which can be broken down any further:

Invoice: invoice_number, customer_id, grand_total

Invoice_Item: invoice_number, ISBN, quantity, item_total

Customer: customer_id, customer_name, customer_address

Book: ISBN, author, title, price
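The same four relations can be expressed directly as database tables. The following sketch (my illustration, not from Krishna 1992) uses Python’s built-in sqlite3 module; the original invoice is then recoverable by joining the tables:

```python
import sqlite3

# The four normalised relations from the bookstore example (illustration only).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Customer     (customer_id TEXT PRIMARY KEY, customer_name TEXT,
                           customer_address TEXT);
CREATE TABLE Book         (ISBN TEXT PRIMARY KEY, author TEXT, title TEXT,
                           price REAL);
CREATE TABLE Invoice      (invoice_number TEXT PRIMARY KEY,
                           customer_id TEXT REFERENCES Customer,
                           grand_total REAL);
CREATE TABLE Invoice_Item (invoice_number TEXT REFERENCES Invoice,
                           ISBN TEXT REFERENCES Book,
                           quantity INTEGER, item_total REAL);
""")

# One row of the sample invoice, split across the four tables.
conn.execute("INSERT INTO Customer VALUES (?,?,?)",
             ("JJS01", "Jane J. Smith",
              "323 Fifteenth Street S., Lethbridge, Alberta T1K 5X3"))
conn.execute("INSERT INTO Book VALUES (?,?,?,?)",
             ("0-670-03151-8", "Pinker, Stephen", "The Blank Slate", 35.00))
conn.execute("INSERT INTO Invoice VALUES (?,?,?)", ("JJSmith0001", "JJS01", 140.00))
conn.execute("INSERT INTO Invoice_Item VALUES (?,?,?,?)",
             ("JJSmith0001", "0-670-03151-8", 1, 35.00))

# The invoice as the customer saw it is reassembled by a join.
for row in conn.execute("""
    SELECT i.invoice_number, c.customer_name, b.title, b.price,
           ii.quantity, ii.item_total, i.grand_total
    FROM Invoice i
    JOIN Customer c      ON c.customer_id     = i.customer_id
    JOIN Invoice_Item ii ON ii.invoice_number = i.invoice_number
    JOIN Book b          ON b.ISBN            = ii.ISBN"""):
    print(row)
```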

Normalising Editorial Data

The normalisation process becomes interesting when one applies it to the type of information editors commonly collect about textual witnesses. The following, for example, is a simplified version of a sheet I used to record basic information about each manuscript witness to Cædmon’s Hymn:

Shelf-Mark: B1 Cambridge, Corpus Christi College 41
Date: s. xi-1
Scribe: Second scribe of the main Old English text.
Location: Copied as part of the main text of the Old English translation of the Historia ecclesiastica (p. 332 [f. 161v]. line 6)
Recension: West-Saxon eorðan recension
Text: Nuweherigan sculon

heofonrices weard metodes mihte

&hismod ge þanc weorc wuldor godes

[etc]

From the point of view of the database designer, this sheet has what are essentially fields for the manuscript sigil, date, scribe, location, and, of course, the text of the poem in the witness itself, something that can be seen, on analogy with our book store invoice, as itself a repeating set of (largely implicit) information: manuscript forms, normalised readings, grammatical and lexical information, metrical position, relationship to canonical referencing systems, and the like.

As with the invoice from our hypothetical bookstore, it is possible to place this data in normal form. The first step, once again, is to extract the relevant relations from the manuscript sheet and, in this case, the often unstated expert knowledge an editor typically brings to his or her task. This leads at the very least to the following set of relations10:

Manuscript: shelf_mark, date, scribe, location, (ms_instance, canonical_reading, dictionary_form, grammatical_information, translation)

Extracting the repeating information about individual readings leaves us with two tables linked by the key shelf_mark:

Manuscript: shelf_mark, date, scribe, location

Text: shelf_mark, ms_instance, canonical_reading, dictionary_form, grammatical_information, translation

And placing the material in third normal form generates at least one more:

Manuscript: shelf_mark, date, scribe, location

Text: shelf_mark, ms_instance, canonical_reading

Glossary: canonical_reading, dictionary_form, grammatical_information, translation

At this point, we have organised our data in its most efficient format. With the exception of the shelf_mark and canonical_reading keys, no piece of information is repeated in more than one table, and all functional dependencies have been eliminated. Of course in real life, there would be many more tables, and even then it would probably be impossible—and certainly not cost effective—to treat all editorial knowledge about a given text as normalisable data.
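To make the result concrete, here is a brief sketch of the three relations as Python data structures, populated with a fragment of the B1 sheet above. The glossary and grammatical values are my own illustrative additions, not data taken from the edition.

```python
# Illustrative rows only; the glossary values are added for the example.
manuscript = [
    {"shelf_mark": "B1", "date": "s. xi-1",
     "scribe": "Second scribe of the main Old English text",
     "location": "OE Historia ecclesiastica, p. 332 (f. 161v), line 6"},
]
text = [
    {"shelf_mark": "B1", "ms_instance": "herigan", "canonical_reading": "herigean"},
    {"shelf_mark": "B1", "ms_instance": "weard",   "canonical_reading": "weard"},
]
glossary = [
    {"canonical_reading": "weard", "dictionary_form": "weard",
     "grammatical_information": "masculine noun", "translation": "guardian"},
]

# Looking up a manuscript form across the three relations:
reading = next(r for r in text if r["ms_instance"] == "weard")
entry = next(g for g in glossary
             if g["canonical_reading"] == reading["canonical_reading"])
print(reading["shelf_mark"], reading["ms_instance"], "->", entry["translation"])
```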

What is significant about this arrangement, however, is the extent to which our final set of tables reflects the traditional arrangements of information in a stereotypical print edition: a section up front with bibliographic (and other) information about the text and associated witnesses; a section in the middle relating manuscript readings to editorially privileged forms; and a section at the end containing abstract lexical and grammatical information about words in the text. Moreover, although familiarity and the use of narrative can obscure this fact in practice, much of the information contained in these traditional sections of a print edition actually is in implicitly tabular form: in structural terms, a glossary entry is best understood as the functional equivalent of a highly structured list or table row, with information presented in a fixed order from entry to entry. Bibliographical discussions, too, often consist of what are, in effect, highly structured lists that can easily be converted to tabular format: one cell for shelf-mark, another for related bibliography, provenance, contents, and the like11.

Database Views and the Critical Text

This analogy between the traditional arrangement of editorial matter in print editions and normalised data in a relational database seems to break down, however, in one key location: the representation of the abstract text. For while it is possible to see how the other sections of a print critical edition might be rendered in tabular form, the critical text itself—the place where editors present an actual reading as a result of their efforts—is not usually presented in anything resembling the non-hierarchical, tabular form a relational model would lead us to expect. In fact, the essential point of the editorial text—and indeed the reason it comes in for criticism from post-structuralists—is that it eliminates non-hierarchical choice. In constructing a reading text, print editors impose order on the mass of textual evidence by privileging individual readings at each collation point. All other forms—the material that would make up the Text table in a relational database—are either hidden from the reader or relegated, usually only as a sample, to small type at the bottom of the page in the critical apparatus. Although it is the defining feature of the print critical edition, the critical text itself would appear to be the only part that is not directly part of the underlying, and extremely efficient, relational data model developed by print editors through the centuries.

But this does not invalidate my larger argument, because we build databases precisely in order to acquire this ability to select and organise data. If the critical text in a print edition is not actually a database table, it is a database view—that is to say a “window on the database through which data required for a particular user or application can be accessed” (Krishna 1992, 210). In computer database management systems, views are built by querying the underlying data and building new relations that contain one or more answers from the results. In print editorial practice, editors build critical texts by “querying” their knowledge of textual data at each collation point in a way that produces a single editorial reading. In this understanding, a typical student edition of a medieval or classical text might be understood as a database view built on the query “select the manuscript or normalised reading at each collation point that most closely matches paradigmatic forms in standard primers.” A modern-spelling edition of Shakespeare can be understood as the view resulting from a database query that instructs the processor to replace Renaissance spellings for the selected forms with their modern equivalents. And an edition like the Kane-Donaldson Piers Plowman can be understood as a view built on the basis of a far more complex query derived from the editors’ research on metre, textual history, and scribal practice. Even editorial emendations are, in this sense, simply the result of a query that requests forms from an unstated “normalised/emended equivalent” column in the editors’ intellectual understanding of the underlying textual evidence: “select readings from the database according to criteria x; if the resulting form is problematic, substitute the form found in the normalised/emended_equivalent column.”12.
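A sketch of what such a “query” might look like in practice appears below. The witness sigla and readings are invented for the purpose of illustration: each collation point offers several witness readings, and a selection criterion returns a single editorial text, so that different criteria yield different “views” of the same underlying data.

```python
# Invented sigla and readings, for illustration only. Each collation point
# maps a witness siglum to the reading found in that witness.
collation = [
    {"A": "nu", "B": "nu", "C": "nv"},
    {"A": "sculon", "B": "scylun", "C": "sculon"},
    {"A": "herigean", "B": "herga", "C": "herigean"},
]

def critical_text(collation, criterion):
    """Build a reading text by applying a selection criterion (the 'query')
    at every collation point."""
    return " ".join(criterion(point) for point in collation)

def prefer_base(base):
    """Criterion: take the designated base witness where it is present,
    otherwise fall back to the majority reading."""
    def choose(point):
        if base in point:
            return point[base]
        readings = list(point.values())
        return max(set(readings), key=readings.count)
    return choose

print(critical_text(collation, prefer_base("A")))  # one view of the data
print(critical_text(collation, prefer_base("B")))  # another view, same data
```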

How Digital Editors can Improve on Print Practice

If this understanding of the critical text and its relationship to the data model underlying print critical practice is correct, then digital editors can almost certainly improve upon it. One obvious place to start might seem to lie in formalising and automating the process by which print editors process and query the data upon which their editions are based. Such an approach, indeed, would have two main advantages: it would allow us to test others’ editorial approaches by modelling them programmatically; and it would allow us to take advantage of the inherent flexibility of the digital medium by providing users with access to limitless critical texts of the same work. Where, for economic and technological reasons, print editions tend to offer readers only a single critical approach and text, digital editions could now offer readers a series of possible approaches and texts built according to various selection criteria. In this approach, users would read texts either by building their own textual queries, or by selecting pre-made queries that build views by dynamically modelling the decisions of others—a Kane-Donaldson view of Piers Plowman, perhaps, or a Gabler reading text view of Ulysses.

This is an area of research we should pursue, even though, in actual practice, we are still a long way from being able to build anything but the simplest of texts in this manner. Certain processes can, of course, be automated and even improved upon electronically—we can use computers to collate readings from different witnesses, derive manuscript stemmata, automatically normalise punctuation and spelling, and even model scribal performance (see Ciula 2005; O’Donnell 2005c). And it is easy to see how we might be able to build databases and queries so that we could model human editorial decisions in relatively simple cases—reproducing the flawed dialectal texts of Cædmon’s Hymn discussed above, perhaps, or building simple student editions of small poems.
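By way of illustration, the word-by-word collation mentioned here can be sketched in a few lines. The witness texts below are sample data, and a real collation tool must also handle omissions, additions, and transpositions that simple positional alignment cannot:

```python
from itertools import zip_longest

witnesses = {  # sample data only
    "A": "nu sculon herigean heofonrices weard",
    "B": "nu scylun hergan hefaenricaes uard",
}

def collate(witnesses):
    """Naive positional collation: align witnesses word by word and report
    the points at which their readings differ."""
    tokenised = {siglum: text.split() for siglum, text in witnesses.items()}
    for i, readings in enumerate(zip_longest(*tokenised.values(), fillvalue="—"), 1):
        point = dict(zip(tokenised, readings))
        status = "agree" if len(set(readings)) == 1 else "vary"
        print(f"{i}: {point} ({status})")

collate(witnesses)
```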

Unfortunately, such conceptually simple tasks are still at the extreme outer limits of what it is currently possible, let alone economically reasonable, to do. Going beyond this and learning to automate higher-level critical decisions involving cultural, historical, or literary distinctions is beyond the realm of current database design and artificial intelligence, even for people working in fields vastly better funded than textual scholarship. Thus, while it would be a fairly trivial process to generate a reading text based on a single witness from an underlying relational database, building automatically a best text edition—that is to say, an edition in which a single witness is singled out automatically for reproduction on the basis of some higher-level criteria—is still beyond our current capabilities. Automating other distinctions of the type made every day by human editors—distinguishing between good and bad scribes, assessing difficilior vs. facilior readings, or weighing competing evidence of authorial authorisation—belongs as yet to the realm of science fiction13.

This doesn’t let us off the hook, however. For while we are still far away from being able to truly automate our digital textual editions, we do need to find some way of incorporating expert knowledge into digital editions that are becoming ever more complex. The more evidence we cram into our digital editions, the harder it becomes for readers to make anything of them. No two witnesses to any text are equally reliable, authentic, or useful for all purposes at all times. In the absence of a system that can build custom editions in response to naïve queries—“build me a general interest text of Don Juan”, “eliminate unreliable scribes”, or even “build me a student edition”—digital editors still need to provide readers with explicit expert guidance as to how the at times conflicting data in their editions is to be assessed. In some cases, it is possible to use hierarchical and object-oriented data models to encode these human judgements so that they can be generated dynamically (see note 14 above). In other cases, digital editors, like their print predecessors, will simply have to build critical texts of their editions the old-fashioned way, by hand, or run the risk of failing to pass on the expert knowledge they have built up over years of scholarly engagement with the primary sources.

It is here, however, that digital editors can improve theoretically and practically the most on traditional print practice. For if critical reading texts are, conceptually understood, the equivalent of query-derived database views, then there is no reason why readers of critical editions should not be able to entertain multiple views of the underlying data. Critical texts, in other words—as post-structuralist theory has told us all along—really are neither right nor wrong: they are simply views of a textual history constructed according to different, more or less explicit, selection criteria. In the print world, economic necessity and technological rigidity imposed constraints on the number of different views editors could reasonably present to their readers—and encouraged them in pre post-structuralist days to see the production of a single definitive critical text as the primary purpose of their editions. Digital editors, on the other hand, have the advantage of a medium that much more easily allows the inclusion of multiple critical views, a technology in which the relationship between views and data is widely known and accepted, and a theoretical climate that encourages an attention to variance. If we are still far from being at the stage in which we can produce critical views of our data using dynamic searches, we are able even now to hard-code such views into our editions in unobtrusive and user-friendly ways14. By taking advantage of the superior flexibility inherent in our technology and the existence of a formal theory that now explains conceptually what print editors appear to have discovered by experience and tradition, we can improve upon print editorial practice by extending it to the point that it begins to subvert the very claims to definitiveness we now find so suspicious. By being more like our print predecessors, by ensuring that our expert knowledge is carefully and systematically encoded in our texts, we can, ironically, use the digital medium to offer our readers a greater flexibility in how they use our work.

Conclusion

And so in the end, the future of digital editing may lie more in our past than we commonly like to consider. While digital editorial theory has tended to define its project largely in reaction to previous print practice, this approach underestimates both the strength of the foundation we have been given to build upon and the true significance of our new medium. For the exciting thing about digital editing is not that it can do everything differently, but rather that it can do some very important things better. Over the course of the last half millennium, print editorial practice has evolved an extremely efficient intellectual model for the organisation of information about texts and witnesses—even as, in the last fifty years, we have become increasingly suspicious of the claims to definitiveness this organisation was often taken to imply. As digital editors, we can improve upon the work of our predecessors by first of all recognising and formalising the intellectual strength of the traditional editorial model and secondly reconciling it to post-structuralist interest in variation and change by implementing it far more fully and flexibly than print editors themselves could ever imagine. The question we need to answer, then, is not whether we can do things differently but how doing things differently can improve on current practice. But we won’t be able to answer this question until we recognise what current practice already does very very well.

Works Cited

Bart, Patricia R. 2006. Controlled experimental markup in a TEI-conformant setting. Digital Medievalist 2.1 <http://www.digitalmedievalist.org/article.cfm?RecID=10>.

British Library, nd. Turning the Pages. <http://www.bl.uk/onlinegallery/ttp/ttpbooks.html>.

Cavill, Paul. 2000. The manuscripts of Cædmon’s Hymn. Anglia 118: 499-530.

Cerquiglini, Bernard. 1989. Éloge de la variante: Histoire critique de la philologie. Paris: Éditions de Seuil.

Ciula, Arianna. 2005. Digital palaeography: Using the digital representation of medieval script to support palaeographic analysis. Digital Medievalist 1.1 <http://www.digitalmedievalist.org/article.cfm?RecID=2>

Dobbie, Elliott Van Kirk. 1937. The manuscripts of Cædmon’s Hymn and Bede’s Death Song with a critical text of the Epistola Cuthberti de obitu Bedæ. Columbia University Studies in English and Comparative Literature, 128. New York: Columbia University Press.

───, ed. 1942. The Anglo-Saxon minor poems. The Anglo-Saxon Poetic Records, a Collective Edition, 6. New York: Columbia University Press.

Dobiache-Rojdestvensky, O. 1928. Un manuscrit de Bède à Léningrad. Speculum 3: 314-21.

Finneran, Richard J., ed. 1996. The literary text in the digital age. Ann Arbor: University of Michigan Press.

Foys, Martin K., ed. 2003. The Bayeux Tapestry: Digital Edition. Leicester: SDE.

Greetham, D.C. 1994. Textual Scholarship. New York: Garland.

Jabbour, A. A. 1968. The memorial transmission of Old English poetry: a study of the extant parallel texts. Unpublished PhD dissertation, Duke University.

Karlsson, Lina and Linda Malm. 2004. Revolution or remediation? A study of electronic scholarly editions on the web. HumanIT 7: 1-46.

Kiernan, Kevin S. 1990. Reading Cædmon’s Hymn with someone else’s glosses. Representations 32: 157-74.

───, ed. 1999/2003. The electronic Beowulf. Second edition. London: British Library.

Krishna, S. 1992. Introduction to database and knowledge-base systems. Singapore: World Scientific.

Landow, George P. and Paul Delaney, eds. 1993. The digital word: text-based computing in the humanities. Cambridge, MA, MIT Press.

Lapidge, Michael. 1991. Textual criticism and the literature of Anglo-Saxon England. Bulletin of the John Rylands University Library. 73:17-45.

───. 1994. On the emendation of Old English texts. Pp. 53-67 in: D.G. Scragg and Paul Szarmach (ed.), The editing of Old English: Papers from the 1990 Manchester conference.

Manly, John M. and Edith Rickert. 1940. The text of the Canterbury tales. Chicago: University of Chicago Press.

McGann, Jerome J. 1983/1992. A critique of modern textual criticism. Charlottesville: University of Virginia Press.

───. 1994. Rationale of the hypertext. <http://www.iath.virginia.edu/public/jjm2f/rationale.htm>

McGillivray, Murray, ed. 1997. Geoffrey Chaucer’s Book of the Duchess: A hypertext edition. Calgary: University of Calgary Press.

McKenzie, D.F. 1984/1999. Bibliography and the sociology of texts. Cambridge: Cambridge University Press.

Mitchell, Bruce and Fred C. Robinson, eds. 2001. A guide to Old English. 6th ed. Oxford: Blackwell.

Nicols, Stephen G. Jr., ed. 1990. Speculum 65.

Ó Cróinín, Dáibhí. nd. The Foundations of Irish Culture AD 600-850. Website. <http://www.foundationsirishculture.ie/>.

O’Donnell, Daniel Paul. 1996a. Manuscript Variation in Multiple-Recension Old English Poetic Texts: The Technical Problem and Poetical Art. Unpubl. PhD Dissertation. Yale University.

───. 1996b. A Northumbrian version of “Cædmon’s Hymn” (eordu recension) in Brussels, Bibliothèque Royale MS 8245-57 ff. 62r2-v1: Identification, edition and filiation. Beda venerabilis: Historian, monk and Northumbrian, eds. L. A. J. R. Houwen and A. A. MacDonald. Mediaevalia Groningana, 19. 139-65. Groningen: Egbert Forsten.

───. 2005a. Cædmon’s Hymn: A multimedia study, edition, and archive. SEENET A.8. Cambridge: D.S. Brewer.

───. 2005b. O Captain! My Captain! Using Technology to Guide Readers Through an Electronic Edition. Heroic Age 8. <http://www.heroicage.org/issues/8/em.html>

───. 2005c. The ghost in the machine: Revisiting an old model for the dynamic generation of digital editions. HumanIT 8 (2005): 51-71.

───. Forthcoming. If I were “You”: How Academics Can Stop Worrying and Learn to Love “the Encyclopedia that Anyone Can Edit.” Heroic Age 10.

Ore, Espen S. 2004. Monkey Business—or What is an Edition? Literary and Linguistic Computing 19: 35-44.

Pearsall, Derek. 1985. Editing medieval texts. Pp. 92-106 in Textual criticism and literary interpretation. Ed. Jerome J. McGann. Chicago: U Chicago.

Pope, John C. and R. D. Fulk, eds. 2001. Eight Old English poems. 3rd ed. New York: W. W. Norton.

Railton, Stephen, ed. 1998-. Uncle Tom’s Cabin and American Culture. Charlottesville: University of Virginia. Institute for Advanced Technology in the Humanities. <http://www.iath.virginia.edu/utc/>.

Reed Kline, Naomi, ed. 2001. A Wheel of Memory: The Hereford Mappamundi. Ann Arbor: University of Michigan Press

Robinson, Fred C. and E. G. Stanley, eds. 1991. Old English verse texts from many sources: a comprehensive collection. Early English Manuscripts in Facsimile, 23. Copenhagen: Rosenkilde & Bagger.

Robinson, Peter. nd. New Methods of Editing, Exploring, and Reading the Canterbury Tales. <http://www.cta.dmu.ac.uk/projects/ctp/desc2.html>.

───, ed. 1996. The Wife of Bath’s Prologue on CD-ROM. Cambridge, Cambridge University Press.

───. 2004. Where are we with electronic scholarly editions, and where do we want to be? Jahrbuch für Computerphilologie Online at <http://computerphilologie.uni-muenchen.de/ejournal.html>. Also available in print: Jahrbuch für Computerphilologie. 123-143.

───. 2005. Current issues in making digital editions of medieval texts—or, do electronic scholarly editions have a future? Digital Medievalist 1.1 <http://www.digitalmedievalist.org/article.cfm?RecID=6>

───. 2006. The Canterbury Tales and other medieval texts. In Burnard, O’Keeffe, and Unsworth. New York: Modern Language Association of America.

Shillingsburg, Peter L. 1996. Electronic editions. Scholarly editing in the computer age: Theory and practice. Third edition.

Silberschatz, Avi, Hank Korth, and S. Sudarshan. 2006. Database system concepts. New York: McGraw-Hill.

Sisam, Kenneth. 1953. Studies in the history of Old English literature. Oxford: Clarendon Press.

Smith, A.H., ed. 1938/1978. Three Northumbrian poems: Cædmon’s Hymn, Bede’s Death Song, and the Leiden Riddle. With a bibliography compiled by M. J. Swanton. Revised ed. Exeter Medieval English Texts. Exeter: University of Exeter Press.

Wuest, Paul. 1906. Zwei neue Handschriften von Cædmons Hymnus. ZfdA 48: 205-26.

Notes

1 In a report covering most extant, web-based scholarly editions published in or before 2002, Lina Karlsson and Linda Malm suggest that most digital editors up to that point had made relatively little use of the medium’s distinguishing features: “The conclusion of the study is that web editions seem to reproduce features of the printed media and do not fulfil the potential of the Web to any larger extent” (2004 abstract).

2 As this list suggests, my primary experience with actual practice is with digital editions of medieval texts. Recent theoretical and practical discussions, however, suggest that little difference is to be found in electronic texts covering other periods.

3 Synthetic here is not quite synonymous with eclectic as used to describe the approach of the Greg-Bowers school of textual criticism. Traditionally, an eclectic text is a single, hypothetical, textual reconstruction (usually of the presumed authorial text) based on the assumption of divided authority. In this approach, a copy text is used to supply accidental details of spelling and punctuation and (usually) to serve as a default source for substantive readings that affect the meaning of the abstract artistic work. Readings from this copy text are then corrected by emendation or, preferably, from forms found in other historical witnesses. In this essay, synthetic is used to refer to a critical text that attempts to summarise in textual form an editorial position about an abstract work’s development at some point in its textual history. All eclectic texts are therefore synthetic, but not all synthetic texts are eclectic: a best text (single witness) edition is also synthetic if, as the name implies, an editorial claim is being made about the particular reliability, historical importance, or interest of the text as represented in the chosen witness. A diplomatic transcription, however, is not synthetic: the focus there is on reporting the details of a given witness as accurately as possible. For a primer on basic concepts in textual editing, excluding the concept of the synthetic text as discussed here, see Greetham 1994.

4 It is indeed significant that the PPEA —the most ambitious digital critical edition of a medieval text that I am aware of—is at this stage in its development publishing primarily as an archive: the development of critical texts of the A-, B-, and C-text traditions has been deferred until after the publication of individual edition/facsimiles of the known witnesses (Bart 2006).

5 Transcriptions, editions, facsimiles, and studies mentioned in this paragraph in many cases have been superseded by subsequent work; readers interested in the current state of Cædmon’s Hymn should begin with the bibliography in O’Donnell 2005a.

6 While there is reason to doubt the details of Dobbie’s recensional division, his fundamental conclusion that dialect did not play a crucial role in the poem’s textual development remains undisputed. For recent (competing) discussions of the Hymn’s transmission, see O’Donnell 2005a and Cavill 2000.

7 There are other types of databases, some of which are at times more suited to representation of information encoded in structural markup languages such as XML, and to the type of manipulation common in textual critical studies (see below, note 14). None of these other models, however, express information as parsimoniously as does the relational model (see Silberschatz, Korth, and Sudarshan 2006, 362-365).

8 This is a rough rather than a formal definition. Formally, a well-designed relational database normally should be in either third normal form or Boyce-Codd normal form (BCNF). A relation is said to be in third normal form when a) the domains of all attributes are atomic, and b) all non-key attributes are fully dependent on the key attributes (see Krishna 1992, 37). A relation R is said to be in BCNF if, whenever a non-trivial functional dependency X → A holds in R, X is a superkey for R (Krishna 1992, 38). Other normal forms exist for special kinds of dependencies (Silberschatz, Korth, and Sudarshan 2006, 293-298).

9 In actual fact, the model for a real bookstore invoice would be more complex, since the example here does not take into account the possibility that there might be more than one copy of any ISBN in stock. A real bookstore would need additional tables to allow it to keep track of inventory.

10 In actual practice, the model would be far more complex and include multiple levels of repeating information (words within lines and relationships to canonical reference systems, for example). This example also assumes that the word is the basic unit of collation; while this works well for most Old English poetry, it may not for other types of literature.

11 Of course, critical editions typically contain far more than bibliographic, textual, and lexical/grammatical information. This too can be modelled relationally, however, although it would be quixotic to attempt to account for the infinite range of possible material one might include in a critical edition in this essay. Thus cultural information about a given text or witnesses is functionally dependent on the specific text or witness in question. Interestingly, the more complex the argumentation becomes, the less complex the underlying data model appears to be: a biographical essay on a text’s author, for example, might take up but a single cell in one of our hypothetical tables.

12 The critical apparatus in most print and many digital editions is itself also usually a view of an implicit textual database, rather than the database itself. Although it usually is presented in quasi-tabular form, it rarely contains a complete accounting for every form in the text’s witness base.

13 This is not to say that it is impossible to use data modelling to account for these distinctions—simply that we are far from being able to derive them arbitrarily from two dimensional relational databases, however complex. Other data models, such as hierarchical or object-oriented databases can be used to build such distinctions into the data itself, though this by definition involves the application of expert knowledge. In O’Donnell 2005a, for example, the textual apparatus is encoded as a hierarchical database. This allows readers to in effect query the database, searching for relations pre-defined as significant, substantive, or orthographic by the editor. See O’Donnell 2005a, §§ ii.7, ii.19, 7.2-9.

14 In the case of my edition of Cædmon’s Hymn, this takes the form of multiple critical texts and apparatus: several reconstructions of the poem’s archetypal form, and various critical views of the poem’s five main recensions and collations. The criteria used to construct these views are indicated explicitly in the title of each page and explained in detail in the editorial introductions. The individual editions were extracted from an SGML-encoded text using stylesheets—in essence hard-wired database queries reflecting higher-level editorial decisions—but presented to the reader as a series of progressively abstract views. In keeping with the developing standard for digital textual editions, the edition also allows users direct access to the underlying transcriptions and facsimiles upon which it is based. The result is an edition that attempts to combine the best of the digital and print worlds: the archiving function common to most electronic editions (and traditionally the focus of Cædmon’s Hymn textual research in print), with the emphasis on the presentation of expert knowledge characteristic of traditional print editorial practice.

----  

The Ghost in the Machine: Revisiting an Old Model for the Dynamic Generation of Digital Editions

Posted: Dec 16, 2006 00:12;
Last Modified: May 23, 2012 20:05

---

First Published: HumanIT 8.1 (2005): 51-71. http://www.hb.se/bhs/ith/1-8/dpo.pdf

“The Electronic Cædmon’s Hymn Editorial Method” (1998)

In 1998, a few months into the preparation of my electronic edition of the Old English poem Cædmon’s Hymn (O’Donnell forthcoming), I published a brief prospectus on the “editorial method” I intended to follow in my future work (O’Donnell 1998). Less a true editorial method than a proposed workflow and list of specifications, the prospectus called for the development of an interactive edition-processor by which “users will […] be able to generate mediated (‘critical’) texts on the fly by choosing the editorial approach which best suits their individual research or study needs” (O’Donnell 1998, ¶ 1).

The heart of the prospectus was a diagram of the “Editorial Process Schema” I intended to follow (figure 1). The edition was to be based on TEI (P2) SGML-encoded diplomatic transcriptions of all twenty-one known witnesses to the poem. Its output was to consist of dynamically generated “HTML/XML” display texts that would allow users access to different views of the underlying textual data depending on their specific interests: e.g. editions containing reconstructions of archetypal texts, student texts based on witnesses showing the simplest vocabulary and grammar, “best text” editions of individual witnesses or recensions, etc. The production of these display texts was to be handled by a series of SGML “filters” or “virtual editions” that would be populated by the unspecified processor used to format and display the final output. [Begin p. 51]

Figure 1. Editorial Process Schema (O’Donnell 1998)

Goals

The initial impetus for this approach was practical. Although it is quite short, Cædmon’s Hymn has a relatively complex textual history for an Anglo-Saxon poem. Even in print, it has always been edited as a multitext. The standard print edition (Dobbie 1942) reproduces two editorial versions of the poem without commenting on their relative priority. Few other studies have managed to be even this decisive. Dobbie’s text was the last (before my forthcoming edition) to attempt to produce critical texts based on the entire manuscript tradition. Most editions before and since have concentrated on individual recensions or groups of witnesses1. Anticipating great difficulty in proof-reading an electronic edition that might have several editorial texts and multiple textual apparatus,2 I was at this early stage keenly interested in reducing the opportunity for typographical error. A workflow that would allow me to generate a number of [Begin p. 52] different critical texts from a single set of diplomatic transcriptions without retyping was for this reason an early desideratum.

This convenience, however, was not to come at the expense of editorial content: a second important goal of my prospectus was to find an explicit home for the editor in what Murray McGillivray recently had described as a “post-critical” world (McGillivray 1994; see also Ross 1996; McGann 1997). In medieval English textual studies in 1998, indeed, this post-critical world seemed to be fast approaching: the first volume of the Canterbury Tales Project, with its revolutionary approach to electronic collation and stemmatics and a lightly-edited guide text, had been published two years earlier (Robinson 1996). Forthcoming publications from the Piers Plowman Electronic Archive (Adams et al. 2000) and Electronic Beowulf (Kiernan 1999) projects, similarly, promised a much heavier emphasis on the manuscript archive (and less interest in the critical text) than their more traditional predecessors. My initial work with the Cædmon’s Hymn manuscripts (e.g. O’Donnell 1996a; O’Donnell 1996b), however, had convinced me that there was a significant need in the case of this text for both user access to the witness archive and editorial guidance in the interpretation of this primary evidence – or, as Mats Dahlström later would point out, that the two approaches had complementary strengths and weaknesses:

The single editor’s authoritative control in the printed SE [Scholarly Edition], manifested in e.g. the versional prerogative, isn’t necessarily of a tyrannical nature. Conversely, the much spoken-of hypermedia database exhibiting all versions of a work, enabling the user to choose freely between them and to construct his or her “own” version or edition, presupposes a most highly competent user, and puts a rather heavy burden on him or her. Rather, this kind of ultra-eclectic archive can result in the user feeling disoriented and even lost in hyperspace. Where printed SE:s tend to bury rival versions deep down in the variant apparatuses, the document architecture of extreme hypertext SE:s, consequential to the very nature of digitally realised hypertext, threatens to bury the user deep among the mass of potential virtuality. (Dahlström 2000, 17) [Begin p. 53]

Keen as I was to spare myself some unnecessary typing, I did not want this saving to come at the expense of providing access to the “insights and competent judgement” (Dahlström 2000, 17) I hoped to acquire in several years’ close contact with the manuscript evidence. What I needed, in other words, was a system in which the computer would generate, but a human edit, the final display texts presented to the reader.

Theory

In order to accomplish these goals, the prospectus proposed splitting the editorial process into distinct phases: a transcription phase, in which human scholars recorded information about the text as it appeared in the primary sources (the “Witness Archive”); an editorial (“Filtering”) phase, in which a human editor designed a template by which a display text was to be produced from the available textual evidence (“Virtual Editions”); a processing phase, in which a computer applied these filters to the Witness Archive; and a presentation phase, in which the resultant output was presented to the reader. The first and second stages were to be the domains of the human editor; the third and fourth those of the computer. An important element of this approach was the assumption that the human editor, even in traditional print sources, functioned largely as a rules-based interpreter of textual data – or as I (in retrospect unfortunately) phrased it, could be “reduced to a set of programming instructions”3 – in much the same way as a database report extracts and formats specific information from the underlying data table of a database:

In my view, the editor of a critical edition is understood as being functionally equivalent to a filter separating the final reader from the uninterpreted data contained in the raw witnesses. Depending on the nature of the instructions this processor is given, different types of manipulation will occur in producing the final critical edition. An editor interested in producing a student edition of the poem, for example, can be understood to be manipulating the data according to the instructions “choose the easiest (most sensible) readings and ignore those which raise advanced textual problems”; an editor interested in producing the “original” text can be seen as a processor performing the instruction “choose readings from the earliest manuscript(s) when these are available [Begin p. 54] and sensible; emend or normalise readings as required”; and an editor interested in producing an edition of a specific dialectal version of a text is working to the instruction “choose readings from manuscripts belong to dialect x; when these are not available, reconstruct or emend readings from other manuscripts, ensuring that they conform to the spelling rules of the dialect”. (O’Donnell 1998, ¶¶ 4 f.)
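
By way of illustration only (and in the XSLT idiom this essay turns to below, rather than anything available in 1998), the second of these instruction sets might be sketched as a simple filter rule along the following lines. The witness sigla and element names echo the samples printed later in this essay, but the stylesheet is hypothetical and deliberately simplified, not code from the edition.

〈xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"〉
  〈!-- Hypothetical sketch: the "earliest manuscript(s)" instruction as a rule-based filter.
       For each apparatus entry, prefer the reading of a designated earliest witness
       (here T1) unless that reading is marked as damaged; otherwise fall back on the
       next witness in the editor's order of preference (here O). --〉
  〈xsl:template match="app"〉
    〈xsl:choose〉
      〈xsl:when test=".//rdg[@wit = 'T1'] and not(.//rdg[@wit = 'T1']//damage)"〉
        〈xsl:value-of select=".//rdg[@wit = 'T1']"/〉
      〈/xsl:when〉
      〈xsl:otherwise〉
        〈xsl:value-of select=".//rdg[@wit = 'O']"/〉
      〈/xsl:otherwise〉
    〈/xsl:choose〉
    〈!-- separate successive readings with a space in the output text --〉
    〈xsl:text〉 〈/xsl:text〉
  〈/xsl:template〉
〈/xsl:stylesheet〉

Changing the test or the order of preference changes the edition produced without any change to the underlying transcriptions, which is precisely the division of labour the schema proposes.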

Advantages

From a theoretical perspective, the main advantage of this approach was that it provided an explicit location for the encoding of editorial knowledge – as distinct from textual information about primary sources, or formatting information about the final display. By separating the markup used to describe a text’s editorial form from that used to describe its physical manifestation in the witnesses, or its final appearance to the end user, this method made it easier in principle both to describe phenomena at a given level in intrinsically appropriate terms and to modify, reuse, or revise information at each level without necessarily having to alter other aspects of the edition design – in much the same way as the development of structural markup languages themselves had freed text encoders from worrying unduly about final display. Scholars working on a diplomatic transcription of a manuscript in this model would be able to describe its contents without having to ensure that their markup followed the same semantic conventions (or even DTD) as that used at the editorial or display levels.

Just as importantly, the approach was, in theory at least, infinitely extensible. Because it separated transcription from editorial activity, and because it attempted to encode editorial activity as a series of filters, users were, in principle, free to ignore, adapt, add to, or replace the work of the original editor. Scholars interested in statistical or corpus work might choose to work with raw SGML data collected in the witness archive; those interested in alternative editorial interpretations might wish to provide their own filters; those wishing to output the textual data to different media or using different display formats were free to adapt or substitute a different processor. Espen S. Ore recently has discussed how well-made and suitably-detailed transcriptions of source material might be used or adapted profitably by other scholars and projects as the basis [Begin p. 55] for derivative work (Ore 2004); from a theoretical perspective the “Editorial Method” proposed for use in Cædmon’s Hymn offered an early model for how such a process might be built into an edition’s design. Indeed, the method in principle allowed editors of new works to operate in the other direction as well: by building appropriate filters, editors of original electronic editions could attempt to model the editorial decisions of their print-based predecessors, or apply techniques developed for other texts to their own material4.

Implementation (1998)

Despite its theoretical attractiveness, the implementation of this model proved, in 1998, to be technically quite difficult. The main problem was access to technology capable of the type of filtering envisioned at the Virtual Edition level. In the original model, these “editions” were supposed to be able both to extract readings from multiple source documents (the individual witness transcriptions) and to translate their markup from the diplomatic encoding used in the original transcriptions to that required by the new context – as a reading used in the main text of a critical edition, say, or a form cited in an apparatus entry, textual note, or introductory paragraph. This type of transformation was not in and of itself impossible to carry out at the time: some SGML production environments and several computer languages (e.g. DSSSL or, more generally, Perl and other scripting languages) could be used to support most of what I wanted to do; in the days before XSL, however, such solutions were either very much cutting edge, or very expensive in time and/or resources. As a single scholar without a dedicated technical staff or funding to purchase commercial operating systems, I was unable to take full advantage of the relatively few transformation options then available.

The solution I hit upon instead involved dividing the transformation task into two distinct steps (extraction and translation) and adding an extra processing level between the witness and virtual edition levels in my original schema: [Begin p. 56]

Figure 2. Implemented Schema

Instead of acting as the locus of the transformation, the editorial filters in this revised model provided a context for text that had been previously extracted from the witness archive and transformed for use in such circumstances. The text these filters called upon was stored in a textual database as part of the project’s entity extension file (project.ent, see Sperberg-McQueen and Burnard 2004, § 3.3), and hence resident in the project DTD. The database itself was built by extracting marked-up readings from the original witness transcription files (using grep) and converting them (using macros and similar scripts) to entities that could be called by name anywhere in the project. Transformations involving a change in markup syntax or semantics (e.g. from a diplomatic-linguistic definition of a word in witness transcriptions to a syntactic and morphological definition in the edition files) also generally were performed in this DTD extension file. [Begin p. 57]

First two lines of a TEI SGML transcription of Cædmon’s Hymn witness T1:

〈l id="t1.1" n="1"〉
 〈seg type="MSWord" id="t1.1a.1"〉Nu〈space extent="0"〉〈/seg〉
 〈seg type="MSWord" id="t1.1a.2"〉〈damage type="stain" degree="moderate"〉sculon〈/damage〉〈space〉〈/seg〉
 〈note id="t1.1a.3.n" type="transcription" target="t1.1a.2 t1.1a.4 t1.1b.1 t1.2b.3 t1.3a.1 t1.4a.1 t1.4a.2 t1.4b.1 t1.6a.1 t1.6a.2 t1.7b.1 t1.7b.2 t1.9b.2"〉&copyOft1.1a.2;…&copyOft1.9b.2;] Large stain obscures some text down inside (right) margin of p. 195 in facsimile. Most text is readable, however.〈/note〉
 〈seg type="MSWord" id="t1.1a.3"〉〈damage type="stain" degree="moderate"〉herigean〈/damage〉〈space〉〈/seg〉
 〈caesura〉
 〈seg type="MSWord" id="t1.1b.1"〉〈damage type="stain" degree="light"〉he〈/damage〉ofon〈lb〉rices〈space〉〈/seg〉
 〈seg type="MSWord" id="t1.1b.2"〉&wynn;eard〈space〉〈/seg〉
〈/l〉
〈l id="t1.2" n="2"〉
 〈seg type="MSWord" id="t1.2a.1"〉meotodes〈space〉〈/seg〉
 〈seg type="MSWord" id="t1.2a.2"〉me〈corr sic="u" cert="50%"〉〈del rend="overwriting"〉u〈/del〉〈add rend="overwriting" place="intralinear"〉a〈/add〉〈/corr〉hte〈space〉〈/seg〉
 〈note type="transcription" id="t1.2a.2.n" target="t1.2a.2" resp=dpod〉&copyOft1.2a.2;] Corrected from 〈foreign〉meuhte〈/foreign〉?〈/note〉
 〈caesura〉
 〈seg type="MSWord" id="t1.2b.1"〉&tyronianNota;〈space extent="0"〉〈/seg〉
 〈seg type="MSWord" id="t1.2b.2"〉his〈space〉〈/seg〉
 〈seg type="MSWord" id="t1.2b.3"〉〈damage type="stain" degree="severe"〉〈unclear reason="stain in facsimile" cert="90%"〉mod〈/unclear〉〈/damage〉〈damage type="stain" degree="moderate"〉geþanc〈/damage〉〈space〉〈/seg〉
 〈note type="transcription" id="t1.2b.3.n" target="t1.2b.3"〉&copyOft1.2b.3;] 〈c〉mod〈/c〉 obscured by stain in facsimile.〈/note〉
〈/l〉

Same text after conversion to entity format (information from the original l, w, caesura, and note elements is stored separately).

〈!ENTITY t1.1a.1 'Nu〈space type="wordBoundary" extent="0"〉'〉
〈!ENTITY t1.1a.2 'sc〈damage type="stain" rend="beginning"〉ulon〈/damage〉〈space type="wordBoundary" extent="1"〉'〉

[Begin p. 58]

〈!ENTITY t1.1a.3 '〈damage type="stain" rend="middle"〉herıgean〈/damage〉〈space type="wordBoundary" extent="1"〉'〉
〈!ENTITY t1.1b.1 '〈damage type="stain" rend="end"〉heo〈/damage〉fon〈lb〉rıces〈space type="wordBoundary" extent="1"〉'〉
〈!ENTITY t1.1b.2 '&mswynn;eard〈space type="wordBoundary" extent="1"〉'〉
〈!ENTITY t1.2a.1 'meotodes〈space type="wordBoundary" extent="1"〉'〉
〈!ENTITY t1.2a.2 'me〈damage type="stain" rend="complete"〉a〈/damage〉hte〈space type="wordBoundary" extent="1"〉'〉
〈!ENTITY t1.2b.1 '〈abbr type="scribal" expan="ond/and/end"〉&tyronianNota;〈/abbr〉〈expan type="scribal"〉ond〈/expan〉〈space type="wordBoundary" extent="0"〉'〉
〈!ENTITY t1.2b.2 'hıs〈space type="wordBoundary" extent="1"〉'〉
〈!ENTITY t1.2b.3 '〈damage type="stain" rend="beginning"〉〈unclear rend="complete"〉mod〈/unclear〉geþanc〈/damage〉〈space type="wordBoundary" extent="1"〉'〉

Same text after conversion to editorial format for use in editions.

〈!ENTITY ex.1a.1 'Nu'〉
〈!ENTITY ex.1a.2 'sculon'〉
〈!ENTITY ex.1a.3 'herigean'〉
〈!ENTITY ex.1b.1 'heofonrices'〉
〈!ENTITY ex.1b.2 '&edwynn;eard'〉
〈!ENTITY ex.2a.1 'meotodes'〉
〈!ENTITY ex.2a.2 'meahte'〉
〈!ENTITY ex.2b.1 'ond'〉
〈!ENTITY ex.2b.2 'his'〉
〈!ENTITY ex.2b.3 'modgeþanc'〉

Citation from the text of T1 (bold) in an introductory chapter (simplified for demonstration purposes).

〈p id="CH6.420" n="6.42"〉Old English 〈mentioned lang="ANG"〉swe〈/mentioned〉, 〈mentioned lang="ANG"〉swæ〈/mentioned〉, 〈mentioned lang="ANG"〉swa〈/mentioned〉 appears as 〈mentioned rend="postcorrection" lang="ANG"〉&t1.3b.1;〈/mentioned〉 (&carmsx; 〈mentioned rend="postcorrection" lang="ANG"〉&ar.3b.1;〈/mentioned〉) in all West-Saxon witnesses of the poem on its sole occurrence in 3b. The expected West-Saxon development is 〈mentioned lang="ANG"〉swæ〈/mentioned〉, found in early West-Saxon. As in most dialects, however, 〈mentioned lang="ANG"〉swa〈/mentioned〉 develops irregularly in the later period. [Begin p. 59] 〈mentioned lang="ANG"〉Swa〈/mentioned〉 is the usual late West-Saxon reflex (see &hogg1992;, § 3.25, n. 3).〈/p〉

Citation from the text of T1 (bold) in a textual apparatus (simplified for demonstration purposes)

〈app id="EX.1A.1.APP" n="1" from="EX.1A.1"〉
 〈lem id="EX.1A.1.LEM" n="1a"〉&ex.1a.1;〈/lem〉
 〈rdggrp〉
  〈rdggrp〉
   〈rdggrp〉
    〈rdg id="T1.1A.1.RDG" wit="T1"〉&t1.1a.1;〈/rdg〉〈wit〉〈xptr doc="t1" from="T1.1A.1" n="T1" rend="eorthan"〉〈/wit〉
    〈rdg id="O.1A.1.RDG" wit="O (Pre-Correction)"〉〈seg rend="precorrection"〉&o.1a.1;〈/seg〉〈/rdg〉〈wit〉〈xptr doc="o" from="O.1A.1" n="O (Pre-Correction)" rend="eorthan"〉〈/wit〉
   〈/rdggrp〉
  〈/rdggrp〉
  〈rdggrp〉
   〈rdggrp〉
    〈rdg id="N.1A.1.RDG" wit="N"〉&n.1a.1;〈/rdg〉〈wit〉〈xptr doc="n" from="N.1A.1" n="N" rend="eorthan"〉〈/wit〉
   〈/rdggrp〉
  〈/rdggrp〉
 〈/rdggrp〉
 〈rdggrp〉
  〈rdggrp〉
   〈rdggrp〉
    〈rdg id="B1.1A.1.RDG" wit="B1"〉&b1.1a.1;&b1.1a.2;〈/rdg〉〈wit〉〈xptr doc="b1" from="B1.1A.1" n="B1" rend="eorthan"〉〈/wit〉
    〈rdg id="TO.1A.1.RDG" wit="To"〉&to.1a.1;&to.1a.2;〈/rdg〉〈wit〉〈xptr doc="to" from="TO.1A.1" n="To" rend="eorthan"〉〈/wit〉
    〈rdg sameas="O.1A.1.RDG" wit="O (Post-Correction)"〉〈seg rend="postcorrection"〉&o.1a.1;&o.1a.2;〈/seg〉〈/rdg〉〈wit〉〈xptr doc="o" from="O.1A.1" n="O (Post-Correction)" rend="eorthan"〉〈/wit〉
    〈rdg id="CA.1A.1.RDG" wit="Ca"〉&ca.1a.1;&ca.1a.2;〈/rdg〉〈wit〉〈xptr doc="ca" from="CA.1A.1" n="Ca" rend="eorthan"〉〈/wit〉
   〈/rdggrp〉
  〈/rdggrp〉
 〈/rdggrp〉
〈/app〉

[Begin p. 60]

Implementation (2005)

The solutions I developed in 1998 to the problem of SGML transformation are no longer of intrinsic interest to Humanities Computing specialists except, perhaps, from a historical perspective. With the publication of the first XSL draft in November 1999, and, especially, the subsequent rapid integration of XSL and XML into commercial and academic digital practice, editors soon had far more powerful languages and tools available to accomplish the same ends.

Where my solutions are valuable, however, is as proof-of-concept. By dividing the editorial process into distinct phases, I was able to achieve, albeit imperfectly, both of my original goals: no Old English text from the primary witnesses was input more than once in my edition, and I did, to a certain extent, find in the “Virtual Editions” an appropriate and explicit locus for the encoding of editorial information.

With the use of XSLT, however, it is possible to improve upon this approach in both practice and theory. In practical terms, XSLT functions and instructions such as document() and xsl:result-document eliminate the need for a pre-compiled textual database: scholars using XSLT today can work, as I originally had hoped to, directly with the original witness transcriptions, extracting readings, processing them, and outputting them to different display texts using a single language and processor – and indeed perhaps even a single set of stylesheets.
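
What this looks like in practice can be sketched as follows. The stylesheet below is illustrative only, not code from the edition: the witness file name, output file name, and reading identifier are assumed for the example (the identifier follows the T1 transcription sample above). document() opens the witness transcription directly at transformation time, and xsl:result-document (an XSLT 2.0 instruction) writes the generated view to its own output file, so no pre-compiled entity database is required.

〈xsl:stylesheet version="2.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"〉

  〈!-- hypothetical witness transcription file; identifiers follow the T1 sample above --〉
  〈xsl:param name="witness-file" select="'t1.xml'"/〉

  〈xsl:template match="/"〉
    〈!-- read the transcription directly at transformation time --〉
    〈xsl:variable name="witness" select="document($witness-file)"/〉

    〈!-- write one display text; further xsl:result-document instructions
         could generate other views from the same source in the same run --〉
    〈xsl:result-document href="critical-text.html"〉
      〈html〉
        〈body〉
          〈!-- pull a single reading by its identifier, discarding the diplomatic
               markup by taking the element's string value --〉
          〈p〉〈xsl:value-of select="$witness//seg[@id = 't1.1a.2']"/〉〈/p〉
        〈/body〉
      〈/html〉
    〈/xsl:result-document〉
  〈/xsl:template〉

〈/xsl:stylesheet〉

A set of such stylesheets, one per “virtual edition”, can thus play the role the filter level played in the original schema, pulling readings straight from the transcriptions and writing each critical view to its own display text.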

In theoretical terms, moreover, the adoption of XSLT helps clarify an ambiguity in my original proposal. Because, in 1998, I saw the process of generating an edition largely as a question of translation from diplomatic to editorial encoding, my original model distinguished between the first two levels on largely semantic grounds. The Witness Archive was the level that was used to store primary readings from the poem’s manuscripts; the filter or Virtual Edition level was used to store everything else, from transformations necessary to translate witness readings into editorial forms to secondary textual content such as introductory chapters, glossary entries, and bibliography.

In XSLT terms, however, there is no significant reason for maintaining such a distinction: to the stylesheet, both types of content are simply raw material for the transformation. What this raw material is, where it came from, or who its author is, are irrelevant to the stylesheet’s task of [Begin p. 61] organisation, adaptation, interpretation, and re-presentation. While poor quality or poorly constructed data will affect the ultimate quality of its output, data composition and encoding remain, in the XSLT world, distinct operations from transformation.

This is significant because it helps us refine our theoretical model of the editorial process and further isolate the place where editorial intelligence is encoded in a digital edition. For organisation, adaptation, interpretation, and re-presentation are the defining tasks of the scholarly editor as much as they are those of the XSLT stylesheet. Change the way a standard set of textual data is interpreted, organised, adapted, or presented, and you change the nature of the final “edition”. Editions of literary works are usually based on very similar sets of primary data – there is only one Beowulf manuscript, after all, and even better attested works usually have a relatively small set of textually significant witnesses, editions, or recensions. What differences arise between modern editions of literary texts tend for the most part to hinge on the reinterpretation of existing evidence, rather than any real change in the available data5. In traditional editions, the evidence for this observation can be obscured by the fact that the “editor” also usually is responsible for much of the secondary textual content. That the observation is true, however, is demonstrated by emerging forms of digital editions in which the editorial function is largely distinct from that of content creation: multigenerational and derivative editions such as those discussed by Ore (2004), as well as interactive models such as that proposed by the Virtual Humanities Lab (e.g. Armstrong & Zafrin 2005), or examples in which users reinterpret data in already existing corpora or databases (e.g. Green 2005).

Taken together, this suggests that my 1998 model was correct in its division of the editorial process into distinct tasks, but imprecise in its understanding of the editorial function. [Begin p. 62]

Figure 3. Revised Schema

In the revised version, the original “Witness Archive” is now reconceived more generally as a collection of textual data used in the edition, regardless of source or type. This data is then organised, interpreted, adapted, and prepared for presentation using stylesheets (and perhaps other organisational tools) provided by an “editor” – regardless of whether this “editor” is the person responsible for assembling and/or authoring the original content, an invited collaborator, or even an end user. As in the original model, this reorganisation is then presented using an appropriate display medium.

Conclusion

Technical advances of the last eight years have greatly improved our ability to extract and manipulate textual data – and our ability to build editions in ways simply impossible in print. The model for the editorial [Begin p. 63] process proposed in O’Donnell (1998) represented an early attempt to understand how the new technology might affect the way editors work, and, more importantly, how this technology might be harnessed more efficiently. With suitable modifications to reflect our field’s growing sophistication, the model appears to have stood the test of time, and proven itself easily adapted to include approaches developed since its original publication. From my perspective, however, a real sign of strength is that it continues to satisfy my original two goals: it suggests a method for avoiding reinputting primary source documents, and it provides a description of the locus of editorial activity; in an increasingly collaborative and interactive scholarly world, it appears that the ghost in the machine may reside in the stylesheet.

Daniel Paul O’Donnell is an Associate Professor of English at the University of Lethbridge, Alberta, Canada. He is also director of the Digital Medievalist Project 〈http://www.digitalmedievalist.org/〉 and editor of Cædmon’s Hymn: A Multimedia Study, Edition, and Archive (D.S. Brewer, forthcoming 2005). His research interests include Old English poetry, Textual and Editorial Studies, Humanities Computing, and the History of the Book. E-mail: daniel.odonnell@uleth.ca Web page: http://people.uleth.ca/~daniel.odonnell/ [Begin p. 64]

Notes

1 A bibliography of studies and editions of Cædmon’s Hymn can be found in O’Donnell (forthcoming).

2 In the event, the final text of O’Donnell (forthcoming) has eight critical editions, all of which have several apparatus, and “semi-normalised” editions of all twenty-one witnesses.

3 This choice was unfortunate, as it seems that it led to my model being understood far more radically than I intended (e.g. in Dahlström 2000, 17, cited above). A perhaps better formulation would be that editors (print and digital) function in a manner analogous to (and perhaps reproducible in) programming instructions.

4 In practice, of course, this type of modelling would work best in the case of simple, linguistically oriented exemplars. It becomes increasingly difficult – though still theoretically possible – with more complex or highly eclectic editorial approaches. A rule-based replica of Kane and Donaldson (1988), for example, is probably possible only in theory.

5 While this obviously does not apply in those few cases in which editions are made after the discovery of significant new textual evidence, such discoveries are few and far between. Most editorial differences are the result of a reinterpretation of essentially similar sets of textual data.

[Begin p. 65]

References

[Begin p. 66]

[Begin p. 67]

Appendix: O’Donnell (1998)

The following is a reprint of O’Donnell (1998). It has been reformatted for publication, but is otherwise unchanged from the original text with the exception of closing brackets that were missing from some of the code examples in the original and that have been added here. The Editorial Schema diagram has been redrawn without any deliberate substantive alteration. The original low resolution version can be found at 〈http://people.uleth.ca/~daniel.odonnell/research/caedmon-job.html〉.

The Electronic Cædmon’s Hymn: Editorial Method

Daniel Paul O’Donnell

The Electronic Cædmon’s Hymn will be an archive based, virtual critical edition. This means users will:

The following is a rough schema describing how the edition will work:

[Begin p. 68]

Figure 1.

This schema reflects my developing view of the editing process. The terms (Witness Level, Processor Level, etc.) are defined further below.

In my view, the editor of a critical edition is understood as being functionally equivalent to a filter separating the final reader from the uninterpreted data contained in the raw witnesses. Depending on the nature of the instructions this processor is given, different types of manipulation will occur in producing the final critical edition. An editor interested in producing a student edition of the poem, for example, can be understood to be manipulating the data according to the instructions choose the easiest (most sensible) readings and ignore those which raise advanced textual problems; an editor interested in producing the ‘original’ text can be seen as a processor performing the instruction choose readings from the earliest manuscript(s) when these are available and sensible; emend or normalise readings as required; and an editor interested in producing an edition of a specific dialectal version of a text is working to the instruction [Begin p. 69] choose readings from manuscripts belong to dialect x; when these are not available, reconstruct or emend readings from other manuscripts, ensuring that they conform to the spelling rules of the dialect. If editors can be reduced to a set of programming instructions, then it ought to be possible, in an electronic edition, to automate the manipulations necessary to produce various kinds of critical texts. In the above schema, I have attempted to do so. Instead of producing a final interpretation of ‘the text’, I instead divide the editorial process into a series of discrete steps:

Because the critical edition is not seen as an actual text but rather as a simple view of the raw data, different textual approaches are understood as being complementary rather than competing. It is possible to have multiple ‘views’ coexisting within a single edition. Readers will be expected to choose the view most appropriate to the type of work they wish to do. For research requiring a reconstruction of the hypothetical ‘author’s original’, a ‘reconstruction filter’ might be applied; a student can apply the ‘student edition filter’ and get a readable simplified text. And the oral-formulaicist can apply the ‘single manuscript x filter’ and get a formatted edition of the readings of a single manuscript. Because different things are expected of the different levels, each layer has its own format and protocol. Because all layers are essential to the development of the text, all would be included on the CDRom containing the edition. Users could program their own filters at the filter level, or change the processing instructions to use other layouts or formats; they could also conduct statistical experiments and the like on the raw SGML texts in the witness archive or filter level as needed.

[Begin p. 70]

Witness Archive

The witness archive consists of facsimiles and diplomatic transcriptions of all relevant witnesses marked up in SGML (TEI) format. TEI is better for this initial stage of the mark-up because it is so verbose. Information completely unnecessary to formatting – linguistic, historical, metrical, etc. – can be included for use by search programs and manipulation by other scholars.

The following is a sample from a marked-up transcription at the witness archive level:

〈l id="ld.1" n="1"〉
 〈w〉Nu〈/w〉
 〈w〉&wynn;e〈/w〉〈space extent=0〉
 〈w〉sceolan〈/w〉
 〈w〉herian〈/w〉
 〈w〉〈del type="underlined"〉herian〈/del〉〈/w〉
 〈caesura〉
 〈w〉heo〈lb〉〈add hand="editorial" cert="90"〉f〈/add〉on〈space extent=1〉rices〈/w〉
 〈w〉&wynn;eard〈/w〉.〈space extent=0〉
〈/l〉

Virtual Editions

Virtual Editions are the filters that contain the editorial processing instructions. They are not so much texts in themselves as records of the intellectual processes by which a critical text interprets the underlying data contained in the witness archive. They are SGML (TEI) encoding documents which provide a map of which witness readings are to be used in which critical texts. For most readings in most types of editions, these instructions will consist of empty elements using the ‘sameAs’ and ‘copyOf’ attributes to indicate which witness is to provide a specific reading: e.g. 〈w copyOf=CaW2〉〈/w〉 where CaW2 is the identifier for the reading of a specific word from manuscript Ca. One of the advantages of this method is that it eliminates one potential source of error (cutting and pasting from the diplomatic transcriptions into the critical editions); it also allows for the near instantaneous integration of new manuscript readings into the finished editions – changes in the witness transcriptions are automatically incorporated in the final texts via the filter.

[Begin p. 71]

In some cases, the elements will contain emendations or normalisation instructions: e.g. 〈w sameAs=CaW2〉þa〈/w〉. The sample is from a virtual edition. It specifies that line 1 of this critical text is to be taken verbatim from manuscript ld (i.e. the text reproduced above):

〈l id="Early.1" n="1" copyOf="ld.1"〉〈/l〉

Processing Level and Display Texts

The ‘Virtual Editions’ are a record of the decisions made by an editor in producing his or her text rather than a record of the text itself. Because they consist for the most part of references to specific readings in other files, the virtual editions will be next-to-unreadable to the human eye. Turning these instructions into readable, formatted text is the function of the next layer – in which the processing instructions implied by the virtual layer are applied and in which final formatting is applied. This processing is carried out using a transformation type processor – like Jade – in which the virtual text is filled in with actual readings from the witness archive, and these readings then formatted with punctuation and capitalisation etc. as required. The final display text is HTML or XML. While this will involve a necessary loss of information – most TEI tags have nothing to do with formatting, few HTML tags have much to do with content – it is more than compensated for by the ability to include the bells and whistles which make a text useful to human readers: HTML browsers are as a rule better and more user friendly than SGML browsers. Users who need to do computer analysis of the texts can always use the TEI encoded witness transcriptions or virtual editions.

Here is my guess as to how HTML would display the same line in the final edition (a critical apparatus would normally also be attached at this layer containing variant readings from other manuscripts [built up from the manuscript archive using the ‘copyOf’ attribute rather than by cutting and pasting]; notes would discuss the various corrections etc. ignored in the reading text of this view):

〈P〉Nu we sceolan herian heofonrices weard〈/P〉

----  

O Captain! My Captain! Using Technology to Guide Readers Through an Electronic Edition

Posted: Dec 15, 2006 16:12;
Last Modified: May 23, 2012 20:05

---

Original Publication Information: Heroic Age 8 (2005). http://www.heroicage.org/issues/8/em.html.

O CAPTAIN! my Captain! our fearful trip is done;
The ship has weather’d every rack, the prize we sought is won;
The port is near, the bells I hear, the people all exulting,
While follow eyes the steady keel, the vessel grim and daring:
  But O heart! heart! heart!
    O the bleeding drops of red,
      Where on the deck my Captain lies,
        Fallen cold and dead.

Walt Whitman, Leaves of Grass

Digital vs. Print editions

§1. Most theoretical discussions of electronic editing attribute two main advantages to the digital medium over print: interactivity and the ability to transcend the physical limitations of the page1. From a production standpoint, printed books are static, linearly organised, and physically limited. With a few expensive or unwieldy exceptions, their content is bound in a fixed, unchangeable order, and required to fit on standard-sized, two dimensional pages. Readers cannot customise the physical order in which information is presented to them, and authors are restricted in the type of material they can reproduce to that which can be presented within the physical confines of the printed page2.

§2. Electronic editions, in contrast, offer readers and authors far greater flexibility. Content can be reorganised on demand in response to changing user needs through the use of links, search programs, and other utilities. The physical limitations of the screen can be overcome in part through the intelligent use of scrolling, dynamically generated content, frames, and other conventions of the electronic medium. The ability to organise and present non-textual material, indeed, has expanded the scope of the edition itself: it is becoming increasingly possible to edit physical objects and intellectual concepts as easily as literary or historical texts.

§3. Not surprisingly, this greater flexibility has encouraged electronic editors to experiment with the conventions of their genre. As McGann has argued, the traditional print-based critical edition is a machine of knowledge (McGann 1995). Its conventions developed over several centuries in response to a complex interplay of intellectual pressures imposed by the requirements of its subject and technical pressures imposed by requirements of its form:

Scholarly editions comprise the most fundamental tools in literary studies. Their development came in response to the complexity of literary works, especially those that had evolved through a long historical process (as one sees in the bible, Homer, the plays of Shakespeare). To deal with these works, scholars invented an array of ingenious machines: facsimile editions, critical editions, editions with elaborate notes and contextual materials for clarifying a work’s meaning. The limits of the book determined the development of the structural forms of these different mechanisms; those limits also necessitated the periodic recreation of new editions as relevant materials appeared or disappeared, or as new interests arose.

With the elimination of (many) traditional constraints faced by their print predecessors, electronic editors have been free to reconceive the intellectual organisation of their work. The ability to construct electronic documents dynamically and interactively has allowed editors to reflect contemporary doubts about the validity of the definitive critical text. Cheap digital photography and the ability to include sound and video clips have encouraged them to provide far more contextual information than was ever possible in print. With the judicious use of animation, virtual reality, and other digital effects, electronic editions are now able to recreate the experience of medieval textuality in ways impossible to imagine in traditional print editions.

Print Convention vs. Electronic Innovation

§4. The increased freedom enjoyed by electronic editors has brought with it increased responsibility. Because they work in a well established and highly standardised tradition, print-based editors are able to take most organisational aspects of their editions for granted. With some minor variation, print-based textual editions are highly predictable in the elements they contain, the order in which these elements are arranged, and the way in which they are laid out on the page (for examples and facsimiles of the major types, see Greetham 1994). In print editions, the textual introduction always appears before the critical text; the apparatus criticus always appears at the bottom of the page or after the main editorial text; glossaries, when they appear, are part of the back matter; contextual information about witnesses or the literary background to the text appears in the introduction. Publishers commonly require these elements to be laid out in a house style; beginning editors can look up the required elements in one of several standard studies (e.g. Greetham 1994, West 1973, Willis 1972).

§5. No such standardisation exists for the electronic editor (Robinson 2005)3. Few if any publishing houses have a strong house style for electronic texts, and, apart from a sense that electronic editions should include high quality colour images of all known witnesses, there are, as yet, few required elements. Electronic editions have been published over the last several years without textual introductions (Kiernan 1999), without critical texts (Solopova 2000), without a traditional textual apparatus (De Smedt and Vanhoutte 2000), and without glossaries (Adams et al. 2000)4. There are few standards for mise en page: some editions attempt to fit as much as possible into a single frameset (Slade 2002); others require users to navigate between different documents or browser tabs (Stolz 2003). Facsimiles can appear within the browser window or in specialised imaging software (cf. McGillivray 1997 vs. Adams et al. 2000): there are as yet few universally observed standards for image resolution, post-processing, or file formats. User interfaces differ, almost invariably, from edition to edition, even among texts issued by the same project or press (cf. Bordalejo 2003 vs. Solopova 2000). Where readers of print editions can expect different texts to operate in an approximately similar fashion, readers approaching new electronic texts for the first time cannot expect their text’s operation to agree with that of other editions they have consulted5.

Technology for Technology’s Sake?

§6. The danger this freedom brings is the temptation towards novelty for novelty’s sake. Freed largely from the constraints of pre-existing convention, electronic editors can be tempted towards technological innovations that detract from the scholarly usefulness of their projects.

Turning the Page (British Library)

§7. Some innovations can be more annoying than harmful. The British Library Turning the Pages series, for example, allows readers to mimic the action of turning pages in a manuscript facsimile (http://www.bl.uk/collections/treasures/digitisation4.html). When users click on the top or bottom corner of the manuscript page and drag the cursor to the opposite side of the book, they are presented with an animation showing the page being turned over. If they release the mouse button before the page has been pulled approximately 40% of the way across the visible page spread, virtual gravity takes over and the page falls back into its original position.

§8. This is an amusing animation, and well suited to its intended purpose as an interactive program that allows museums and libraries to give members of the public access to precious books while keeping the originals safely under glass (http://www.armadillosystems.com/ttp_commercial/home.htm). Scholars interested in the texts as research objects, however, are likely to find the system less attractive. The page-turning system uses an immense amount of memory—the British Library estimates up to 1 GB of RAM for high quality images (http://www.armadillosystems.com/ttp_commercial/techspec.htm)—and the requirement that users drag pages across the screen makes paging through an edition a time- and attention-consuming activity: having performed an action that indicates that they wish an event to occur (clicking on the page in question), users are then required to perform additional complex actions (holding the mouse button down while dragging the page across the screen) in order to effect the desired result. What was initially amusing rapidly becomes a major and unnecessary irritation.

A Wheel of Memory: The Hereford Mappamundi (Reed Kline 2001)

§9. Other innovations can be more harmful to the intellectual usefulness of a given project. A Wheel of Memory: The Hereford Mappamundi uses the Mappamundi as “a conceit for the exploration of the medieval collective memory… using our own collective rota of knowledge, the CD-ROM” (Reed Kline 2001, I audio). The edition has extremely high production values. It contains original music and professional narration. Images from the map6 and associated documents are displayed in a custom-designed viewing area that is itself in part a rota. Editorial material is arranged as a series of chapters and thematically organised explorations of different medieval Worlds: World of the Animals, World of the Strange Races, World of Alexander the Great, etc. With the exception of four numbered chapters, the edition makes heavy use of the possibilities for non-linear browsing inherent in the digital medium to organise its more than 1000 text and image files.

§10. In this case, however, the project’s innovative organisation and high production values are ultimately self-defeating. [Image: the Hereford map in full, http://www.heroicage.org/issues/8/images/herefordwholemap.png] Despite its heavy reliance on a non-linear structural conceit, the edition itself is next to impossible to use or navigate in ways not anticipated by the project designers. Text and narration are keyed to specific elements of the map and edition and vanish if the user strays from the relevant hotspot: because of this close integration of text and image, it is impossible to compare text written about one area of the map with a facsimile of another. The facsimile of the map itself is also very difficult to study. The customised viewing area is of a fixed size (I estimate approximately 615×460 pixels) with more than half this surface given over to background and navigation: when the user chooses to view the whole map on screen, the 4 foot wide original is reproduced with a diameter of less than 350 pixels (approximately 1/10 actual size). Even then, it remains impossible to display the map in its entirety: in keeping with the project’s rota conceit, the facsimile viewing area is circular, even though the Hereford map itself is pentagonal: try as I might, I was unable ever to get a clear view of the border and image in the facsimile’s top corner.

Using Technology to Transcend Print

§11. The problem with the British Library and Hereford editions is not that they use innovative technology to produce unconventional editions. Rather, it is that they use this innovative technology primarily for effect rather than as a means of contributing something essential to the presentation of the underlying artifact. In both cases this results in editions that are superficially attractive, but unsuited to repeated use or serious study7. The British Library facsimiles lose nothing if the user turns off the “Turning the Page” technology (indeed, in the on-line version, an accessibility option allows users precisely this possibility); leaving the technology on comes at the cost of usability and memory. In the case of the Hereford Mappamundi, the emphasis on the rota navigational conceit and the project’s high production values get in the way of the map itself: the use of the round viewing area and fixed-width browser actually prevents the user from exploring the entire map, while the close integration of text, narration, and images ironically binds readers more closely to the editor’s view of her material than would be possible in a print edition.

Bayeux Tapestry (Foys 2003)

§12. Appropriately used, innovative technology can create editions that transcend the possibilities of print, however. This can be seen in the third edition discussed in this paper, The Bayeux Tapestry: Digital Edition8.

§13. On the one hand, the Bayeux Tapestry edition uses technology in ways that, at first glance, seem very similar to the Hereford Mappamundi and British Library facsimiles. Like the Mappamundi project, the Bayeux edition has very high production values and is presented using a custom-designed user interface (indeed, the Hereford and Bayeux projects both use the same Macromedia presentation software). Like the British Library facsimiles, the Bayeux project uses technology to imitate the physical act of consulting the medieval artifact: users of the Bayeux Tapestry edition, like visitors to the Bayeux Tapestry itself, move along what appears to be a seamless presentation of the entire 68 metre long object.

§14. The difference between The Bayeux Tapestry: Digital Edition and the other two projects, however, is that in the Bayeux edition this technology plays an essential role in the representation of the underlying object. I am aware of no medieval manuscript that incorporates the act of turning the page into its artistic design; the Bayeux tapestry, however, was designed to be viewed as a single continuous document. By integrating hundreds of digital images into what behaves like a single facsimile, the Bayeux project allows users to consult the tapestry as its makers originally intended: moving fluidly from scene to scene and pausing every so often to examine individual panels or figures in greater detail.

§15. The organisation of the Bayeux edition is similarly well thought out. In contrast to the Hereford Mappamundi project, the Bayeux project is constructed around the object it reproduces. The opening screen shows a section from the facsimile (few screens would be able to accommodate the entire facsimile in reasonable detail) above a plot-line that provides an overview of the Tapestry’s entire contents in a single screen. Users can navigate the Tapestry scene-by-scene using arrow buttons at the bottom left of the browser window, centimetre by centimetre using a slider on the plot-line, or by jumping directly to an arbitrary point on the tapestry by clicking on the plot-line at the desired location. Tools, background information, other facsimiles of the tapestry, scene synopses, and notes are accessed through buttons at the bottom left corner of the browser. The first three types of material are presented in a separate window when chosen; the last two appear under the edition’s plot-line. Where the organisational conceit of the rota prevented users from accessing the entire Hereford map, the structure of the Bayeux edition encourages users to explore the entire length of the Tapestry.

§16. The Bayeux project also does its best to avoid imposing a particular structure on its editorial content. Where the Hereford project proved extremely difficult to navigate in ways not anticipated by its editor, The Bayeux Tapestry contains a slideshow utility that allows users to reorder elements of the edition to suit their own needs. While few readers perhaps will need to use this in their own study, the utility will prove of the greatest benefit to teachers and lecturers who wish to use the edition to illustrate their own work.

Conclusion

§17. The interactivity, flexibility, and sheer novelty of digital media bring with them great challenges for the electronic editor. Where scholars working in print can rely on centuries of precedent in designing their editions, those working in digital media still operate for the most part in the absence of any clear consensus as to even the most basic expectations of the genre. This technological freedom can, on the one hand, be extremely liberating: electronic editors can now produce editions of a much wider range of texts, artifacts, and concepts than was ever possible in print. At the same time, however, this freedom can also lead to the temptation of using technology for its own sake.

§18. The three projects discussed in this column have been produced by careful and technologically sophisticated researchers. The differences among them lie for the most part in the way they match their technological innovation to the needs of the objects they reproduce. The British Library and Hereford Mappamundi projects both suffer from an emphasis on the use of advanced technology for largely decorative purposes; both would be easier to use without much of their most superficially attractive technological features. The Bayeux Tapestry project, on the other hand, succeeds as an electronic text because it uses advanced technology that is well suited to its underlying object and allows it to be presented in a fashion difficult, if not impossible, in any other medium. Users of the British Library and Hereford facsimiles may find themselves wishing for a simpler presentation; few users of the Bayeux tapestry would wish that this edition had been published in book form.

Notes

1 This is a commonplace. For an influential discussion, see McGann 1995. Strictly speaking, print and digital/electronic in this discussion refer to accidentals of display rather than essential features of composition and storage. Texts composed and stored digitally can be displayed in print format, in which case they are subject to the same limitations as texts composed and stored entirely on paper. The importance of this distinction between composition and display is commonly missed in early theoretical discussions, which tend to concentrate exclusively on possibilities for on-screen display. In fact, as recent commercial and scholarly applications of XML are demonstrating, the real advantage of electronic composition and storage is reusability. Properly designed electronic texts can be published simultaneously in a number of different formats, allowing users to take advantage of the peculiar strengths and weaknesses of each. In my view, the most profound difference between electronic and print texts lies in the separation of content and presentation which makes this reuse of electronic texts possible.

2 It is easy to overemphasise the limitations of print and the flexibility of digital display. While books are for the most part physically static and two-dimensional (the main exceptions are books published as loose pages intended for storage in binders or picture books with three-dimensional figures), they are intellectually flexible: readers are free to reorganise them virtually by paging back and forth or using a table of contents or index to find and extract relevant information. In certain cases, e.g. dictionaries and encyclopedias, this intellectual flexibility is an essential feature of the genre. Not surprisingly, these genres were also among the first and most successful titles to be published in electronic format. Screens, for all their flexibility and interactivity, remain two-dimensional display devices subject to many of the same limitations of the printed page.

3 Robinson’s important article came to my attention after this column was in proof.

4 The observation that these examples are missing one or more traditional elements of a print edition is not intended as a criticism. Not all editions need all traditional parts, and, in several cases, the editorial approach used explicitly precludes the inclusion of the missing element. What the observation does demonstrate, however, is that no strong consensus exists as to what must appear in an electronic critical edition. The only thing common to all is the presence of facsimiles.

5 One possible objection to the above list of examples is that I am mixing editions produced using very different technologies over the greater part of a decade (a long time in humanities computing). This technical fluidity is one of the reasons for the lack of consensus among electronic editors, however. Since in most cases, moreover, the technology has aged before the editorial content (eight years is a relatively short time in medieval textual studies), the comparison is also valid from a user’s perspective: as a medievalist, I am as likely to want to consult the first disc in the Canterbury Tales Project as I am the most recent.

6 As is noted in the introduction to the edition, the facsimile reproduces a nineteenth-century copy of the Hereford map rather than the medieval Mappamundi itself. The images in the Bayeux disc discussed below are similarly based on facsimiles—albeit in this case photographs of the original tapestry.

7 This is less of a problem in the case of the British Library series, which presents itself primarily as an aid for the exhibition of manuscripts to the general public rather than a serious tool for professional scholars. The intended audience of the Mappamundi project is less certain: it is sold by a university press and seems to address itself to both scholars and students; much of its content, however, seems aimed at the high school level. The design flaws in both texts seem likely to discourage repeated use by scholars and members of the general public alike.

8 In the interests of full disclosure, readers should be aware that I am currently associated with Foys in several on-going projects. These projects began after the publication of Foys 2003, with which I am not associated in any way.

Works Cited

----  

The Doomsday Machine, or, "If you build it, will they still come ten years from now?": What Medievalists working in digital media can do to ensure the longevity of their research

Posted: Dec 15, 2006 13:12;
Last Modified: May 23, 2012 20:05

---

Original Publication Information: Heroic Age 7 (2004). http://www.heroicage.org/issues/7/ecolumn.html.

Yes, but the… whole point of the doomsday machine… is lost… if you keep it a secret!

Dr. Strangelove

It is, perhaps, the first urban myth of humanities computing: the Case of the Unreadable Doomsday Machine. In 1986, in celebration of the 900th anniversary of William the Conqueror’s original survey of his British territories, the British Broadcasting Corporation (BBC) commissioned a mammoth £2.5 million electronic successor to the Domesday Book. Stored on two 12 inch video laser discs and containing thousands of photographs, maps, texts, and moving images, the Domesday Project was intended to provide a high-tech picture of life in late 20th century Great Britain. The project’s content was reproduced in an innovative early virtual reality environment and engineered using some of the most advanced technology of its day, including specially designed computers, software, and laser disc readers (Finney 1986).

Despite its technical sophistication, however, the Domesday Project was a flop by almost any practical measure. The discs and specialized readers required for accessing the project’s content turned out to be too expensive for the state-funded schools and public libraries that comprised its intended market. The technology used in its production and presentation also never caught on outside the British government and school system: few other groups attempted to emulate the Domesday Project’s approach to collecting and preserving digital material, and no significant market emerged for the specialized computers and hardware necessary for its display (Finney 1986, McKie and Thorpe 2003). In the end, few of the more than one million people who contributed to the project were ever able to see the results of their effort.

The final indignity, however, came in March 2003 when, in a widely circulated story, the British newspaper The Observer reported that the discs had finally become “unreadable”:

16 years after it was created, the £2.5 million BBC Domesday Project has achieved an unexpected and unwelcome status: it is now unreadable.

The special computers developed to play the 12in video discs of text, photographs, maps and archive footage of British life are — quite simply — obsolete.

As a result, no one can access the reams of project information — equivalent to several sets of encyclopedias — that were assembled about the state of the nation in 1986. By contrast, the original Domesday Book — an inventory of eleventh-century England compiled in 1086 by Norman monks — is in fine condition in the Public Record Office, Kew, and can be accessed by anyone who can read and has the right credentials. ‘It is ironic, but the 15-year-old version is unreadable, while the ancient one is still perfectly usable,’ said computer expert Paul Wheatley. ‘We’re lucky Shakespeare didn’t write on an old PC.’ (McKie and Thorpe 2003)

In fact, the situation was not as dire as McKie and Thorpe suggest. For one thing, the project was never actually “unreadable,” only difficult to access: relatively clean copies of the original laser discs still survive, as do a few working examples of the original computer system and disc reader (Garfinkel 2003). For another, the project appears not to depend, ultimately, on the survival of its obsolete hardware. Less than ten months after the publication of the original story in The Observer, indeed, engineers at Camileon, a joint project of the Universities of Leeds and Michigan, were able to reproduce most if not quite all the material preserved on the original 12 inch discs using contemporary computer hardware and software (Camileon 2003a; Garfinkel 2003).

The Domesday Project’s recent history has some valuable, if still contested, lessons for librarians, archivists, and computer scientists (see for example the discussion thread to Garfinkel 2003; also Camileon 2003b). On the one hand, the fact that engineers seem to be on the verge of designing software that will allow for the complete recovery of the project’s original content and environment is encouraging. While it may not yet have proven itself to be as robust as King William’s original survey, the electronic Domesday Project now at least does appear to have been saved for the foreseeable future, even if “foreseeable” in this case may mean simply until the hardware and software supporting the current emulator itself becomes obsolete.

On the other hand, however, it cannot be comforting to realise that the Domesday Project required the adoption of such extensive and expensive restoration measures in the first place less than two decades after its original composition: the discs that the engineers at Camileon have devoted the last ten months to recovering have turned out to have less than 2% of the readable lifespan enjoyed by their eleventh-century predecessor. Even pulp novels and newspapers published on acidic paper at the beginning of the last century have proved more durable under similarly controlled conditions.1 Viewed in the short term, digital formats do appear to offer a cheap method of preserving, cataloguing, and especially distributing copies of texts and other cultural material; their effectiveness and economic value as a means of long-term preservation, however, have yet to be demonstrated completely.

These are, for the most part, issues for librarians, archivists, curators, computer scientists, and their associations: their solution will almost certainly demand resources, a level of technical knowledge, and perhaps most importantly, a degree of international cooperation far beyond that available to most individual humanities scholars (Keene 2003). In as much as they are responsible for the production of an increasing number of electronic texts and resources, however, humanities scholars do have an interest in ensuring that the physical record of their intellectual labour will outlast their careers. Fortunately there are also some specific lessons to be learned from the Domesday Project that are of immediate use to individual scholars in their day-to-day research and publication.

1. Do not write for specific hardware or software.

Many of the preservation problems facing the Domesday Project stem from its heavy reliance on specific proprietary (and often customized) hardware and software. This reliance came about for largely historical reasons. The Domesday Project team was working on a multimedia project of unprecedented scope, before the Internet developed as a significant medium for the dissemination of data.2 In the absence of suitable commercial software and any real industry emphasis on inter-platform compatibility or international standards, they were forced to custom-build or commission most of their own hardware and software. The project was designed to be played on a specially designed Philips video-disc player and displayed using custom-built software that functioned best on a single operating platform: the BBC Master, a now obsolete computer system which, with the related BBC Model B, was at the time far more popular in schools and libraries in the United Kingdom than the competing Macintosh, IBM PC, or long-forgotten Acorn systems.3

With the rise of the internet and the development of well-defined international standard languages such as Standard General Markup Language (SGML), HyperText Markup Language (HTML), eXtensible Markup Language (XML), and Hypermedia/Time-based Structuring Language (HyTime), few contemporary or future digital projects are likely to be as completely committed to a single specific hardware or software system as the Domesday Project. This does not mean, however, that the temptation to write for specific hardware or software has vanished entirely. Different operating systems allow designers to use different, often incompatible, shortcuts for processes such as referring to colour, assigning fonts, or referencing foreign characters (even something as simple as the Old English character thorn can be referred to in incompatible ways on Windows and Macintosh computers). The major internet browsers also all have proprietary extensions and idiosyncratic ways of understanding supposedly standard features of the major internet languages. It is very easy to fall into the trap of adapting one’s encoding to fit the possibilities offered by non-standard extensions, languages, and features of a specific piece of hardware or software.
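To take the character example: the sketch below (illustrative only; the sample phrase is the well-known half-line from Beowulf) shows two portable ways of encoding Old English characters in HTML. Named and numeric character references, unlike platform code pages, are interpreted identically by any conforming browser on any operating system.

  <!-- Illustrative only: platform-independent encodings of Old English characters -->
  <p lang="ang">&thorn;&aelig;t w&aelig;s g&#333;d cyning</p>
  <!-- &thorn; and &aelig; are standard named references for thorn and ash;
       &#333; is the numeric reference for o with macron (U+014D) -->

An equally standard alternative is simply to declare a Unicode encoding for the whole document and type the characters directly.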

The very real dangers of obsolescence this carries with it can be demonstrated by the history of the Netscape 〈layer〉 and 〈ilayer〉 tags. Introduced with the Netscape 4.0 browser in early 1997, the 〈layer〉 and 〈ilayer〉 tags were proprietary extensions of HTML that allowed internet designers to position different parts of their documents independently of one another on the screen: to superimpose one piece of a text over another, to place text over (or under) images, or to remove one section of a line from the main textual flow and place it elsewhere (Netscape Communications Corporation 1997). The possibilities this extension opened up were exciting. In addition to enlivening otherwise boring pages with fancy typographic effects, the 〈layer〉 and 〈ilayer〉 elements also allowed web designers to create implicit intellectual associations among otherwise disparate elements in a single document. For example, one could use these tags to create type facsimiles of manuscript abbreviations by superimposing their component parts or create annotated facsimile editions by placing textual notes or transcriptions over relevant manuscript images.

As with the Domesday Project, however, projects that relied on these proprietary extensions for anything other than the most incidental effects were doomed to early obsolescence: the 〈layer〉 and 〈ilayer〉 tags were never adopted by the other major browsers and, indeed, were dropped by Netscape itself in subsequent editions of its Navigator browser. Thus an annotated manuscript facsimile coded in mid 1997 to take advantage of the new Netscape 4.0 〈layer〉 and 〈ilayer〉 tags would, with the release of Netscape 5.0 at the end of 1999, already be obsolete. Users who wished to maintain the presumably intellectually significant implicit association between the designer’s notes and images in this hypothetical case would need either to maintain (or recreate) a working older version of the Netscape browser on their system (an increasingly difficult task as operating systems themselves go through subsequent alterations and improvements) or to convert the underlying files to a standard encoding.
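By way of illustration, the sort of conversion involved need not be elaborate. The following sketch (the file name and pixel offsets are invented) reproduces the superimposition effect once achieved with 〈layer〉 using standard CSS positioning, which current browsers interpret without any proprietary software:

  <!-- Sketch: a note superimposed on a manuscript image with standard CSS
       positioning rather than proprietary layering tags; "folio1r.jpg" and
       the offsets are hypothetical -->
  <div style="position: relative">
    <img src="folio1r.jpg" alt="Facsimile of folio 1r">
    <p style="position: absolute; top: 40px; left: 60px">
      Transcription or annotation placed over the relevant detail
    </p>
  </div>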

2. Maintain a distinction between content and presentation

A second factor promoting the early obsolescence of the Domesday Project was its emphasis on the close integration of content and presentation. The project was conceived of as a multimedia experience, and its various components (text, video, maps, statistical information) often acquired meaning from their interaction, juxtaposition, sequencing, and superimposition (Finney 1986, “Using Domesday”; see also Camileon 2003b). In order to preserve the project as a coherent whole, indeed, engineers at Camileon have had to reproduce not only the project’s content but also the look and feel of the specific software environment in which it was intended to be searched and navigated (Camileon 2003b).

Here too the Domesday Project designers were largely victims of history. Their project was a pioneering experiment in multimedia organisation and presentation and put together in the virtual absence of now standard international languages for the design and dissemination of electronic documents and multimedia projects — many of which, indeed, were in their initial stages of development at the time the BBC project went to press.4

More importantly, however, these nascent international standards involved a break with the model of electronic document design and dissemination employed by the Domesday Project designers. Where the Domesday Project might be described as an information machine — a work in which content and presentation are so closely intertwined as to become a single entity — the new standards concentrated on establishing a theoretical separation between content and presentation (see Connolly 1994 for a useful discussion of the distinction between “programmable” and “static” document formats and their implications for document conversion and exchange). This allows both aspects of an electronic document to be described separately and, for the most part, in quite abstract terms which are then left open to interpretation by users in response to their specific needs and resources. It is this flexibility which helped in the initial popularization of the World Wide Web: document designers could present their material in a single standard format and, in contrast to the designers of the Domesday Project, be relatively certain that their work would remain accessible to users accessing it with various software and hardware systems — whether this was the latest version of the new Mosaic browser or some other, slightly older and non-graphical interface like Lynx (see Berners-Lee 1989-1990 for an early summary of the advantages of multi-platform support and a comparison with early multi-media models such as that adopted by the Domesday Project). In recent years, this same flexibility has allowed project designers to accommodate the increasingly large demand for access to internet documents from users of (often very advanced) non-traditional devices: web-activated mobile phones, palm-sized digital assistants, and of course aural screen readers and Braille printers.

In theory, this flexibility also means that where engineers responsible for restoring the Domesday Project have been forced to emulate the original software in order to recreate the BBC designer’s work, future archivists will be able to restore current, standards-based, electronic projects by interpreting the accompanying description of their presentation in a way appropriate to their own contemporary technology. In some cases, indeed, this restoration may not even require the development of any actual computer software: a simple HTML document, properly encoded according to the strictest international standards, should in most cases be understandable to the naked eye even when read from a paper printout or text-only display.

In practice, however, it is still easy to fall into the trap of integrating content and presentation. One common example involves the use of table elements for positioning unrelated or sequential text in parallel “columns” on browser screens (see Chisholm, Vanderheiden, et al. 2000, § 5). From a structural point of view, tables are a device for indicating relations among disparate pieces of information (mileage between various cities, postage prices for different sizes and classes of mail, etc.). Using tables to position columns, document designers imply in formal terms the existence of a logical association between bits of text found in the same row or column — even if the actual rationale for this association is primarily aesthetic. While the layout technique, which depends on the fact that all current graphic-enabled browsers display tables by default in approximately the same fashion, works well on desktop computers, the same trick can produce nonsensical text when rendered on the small screen of a mobile phone, printed by a Braille output device, or read aloud by an aural browser or screen-reader. Just as importantly, this technique too can lead to early obsolescence or other significant problems for future users. Designers of a linguistic corpus based on specific types of pre-existing electronic documents, for example, might be required to devote considerable manual effort to recognising and repairing content arbitrarily and improperly arranged in tabular format for aesthetic reasons.
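The distinction is easier to see in markup. In the sketch below (the content is generic and purely illustrative), the first fragment uses a table because its cells genuinely relate to one another; the second produces purely visual columns in the stylesheet, leaving the text itself in logical order for screen readers, small devices, and future corpus builders:

  <!-- A table used structurally: each row relates a witness to a date -->
  <table>
    <tr><th>Witness</th><th>Date</th></tr>
    <tr><td>A</td><td>s. x</td></tr>
    <tr><td>B</td><td>s. xi</td></tr>
  </table>

  <!-- Purely visual columns belong in the stylesheet, not in a table -->
  <div style="float: left; width: 48%">First block of unrelated text</div>
  <div style="float: right; width: 48%">Second block of unrelated text</div>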

3. Avoid unnecessary technical innovation

A final lesson to be learned from the early obsolescence of the Domesday Project involves the hidden costs of technical innovation. As a pioneering electronic document, the Domesday Project was in many ways an experiment in multimedia production, publication, and preservation. In the absence of obvious predecessors, its designers were forced to develop their own technology, organisational outlines, navigation techniques, and distribution plans (see Finney 1986 and Camileon 2003a for detailed descriptions). The fact that relatively few other projects adopted their proposed solutions to these problems — and that subsequent developments in the field led to a different focus in electronic document design and dissemination — only increased the speed of the project’s obsolescence and the cost and difficulty of its restoration and recovery.

Given the experimental status of this specific project, these were acceptable costs. The Domesday Project was never really intended as a true reference work in any usual sense of the word.5 Although it is full of information about mid-1980s Great Britain, for example, the project has never proved to be an indispensable resource for study of the period. While it was inspired by William the Conqueror’s great inventory of post-conquest Britain, the Domesday Project was, in the end, more an experiment in new media design than an attempt at collecting useful information for the operation of Mrs. Thatcher’s government.

We are now long past the day in which electronic projects can be considered interesting simply because they are electronic. Whether they are accessing a Z39.50 compliant library catalogue, consulting an electronic journal on JSTOR, or accessing an electronic text edition or manuscript facsimile published by an academic press, users of contemporary electronic projects by-and-large are now more likely to be interested in the quality and range of an electronic text’s intellectual content than the novelty of its display, organisation or technological features (Nielsen 2000). The tools, techniques, and languages available to producers of electronic projects, likewise, are now far more standardised and helpful than those available to those responsible for electronic incunabula such as the Domesday Project.

Unfortunately this does not mean that contemporary designers are entirely free of the dangers posed by technological experimentation. The exponential growth of the internet, the increasing emphasis on compliance with international standards, and the simple pace of technological change over the last decade all pose significant challenges to the small budgets and staff of many humanities computing projects. While large projects and well-funded universities can sometimes afford to hire specialized personnel to follow developments in computing design and implementation, freeing other specialists to work on content development, scholars working on digital projects in smaller groups, at less well-funded universities, or on their own often find themselves responsible for both the technological and intellectual components of their work. Anecdotal evidence suggests that such researchers find keeping up with the pace of technological change relatively difficult — particularly when it comes to discovering and implementing standard solutions to common technological problems (Baker, Foys, et al. 2003). If the designers of the Domesday Project courted early obsolescence because their pioneering status forced them to design unique technological solutions to previously unresolved problems, many contemporary humanities projects appear to run the same risk of obsolescence and incompatibility because their inability to easily discover and implement best practice encourages them to continuously invent new solutions to already solved problems (HATII and NINCH 2002, NINCH 2002-2003, Healey 2003, Baker, Foys, et al. 2003 and O’Donnell 2003).

This area of humanities computing has been perhaps the least well served by the developments of the last two decades. While technological changes and the development of well-designed international standards have increased opportunities for contemporary designers to avoid the problems which led to the Domesday Project’s early obsolescence, the absence of a robust system for sharing technological know-how among members of the relevant community has remained a significant impediment to the production of durable, standards-based projects. Fortunately, however, improvements are being made in this area as well. While mailing lists such as humanist-l and tei-l have long facilitated the exchange of information on aspects of electronic project design and implementation, several new initiatives have appeared over the last few years which are more directly aimed at encouraging humanities computing specialists to share their expertise and marshal their common interests. The Text Encoding Initiative (TEI) has recently established a number of Special Interest Groups (SIGs) aimed at establishing community practice in response to specific types of textual encoding problems. Since 1993, the National Initiative for a Networked Cultural Heritage (NINCH) has provided a forum for collaboration and development of best practice among directors and officers of major humanities computing projects. The recently established TAPoR project in Canada and the Arts and Humanities Data Service (AHDS) in the United Kingdom likewise seek to serve as national clearing houses for humanities computing education and tools. Finally, and aimed more specifically at medievalists, the Digital Medievalist Project (of which I am currently director) is seeking funding to establish a “Community of Practice” for medievalists engaged in the production of digital resources, through which individual scholars and projects will be able to pool skills and practice acquired in the course of their research (see Baker, Foys, et al. 2003). Although we are still in the beginning stages, there is increasing evidence that humanities computing specialists are beginning to recognise the extent to which the discovery of standardised implementations and solutions to common technological problems is likely to provide as significant a boost to the durability of electronic resources as the development of standardised languages and client-side user agents in the late 1980s and early 1990s. We can only benefit from increased cooperation.

The Case of the Unreadable Doomsday Machine makes for good newspaper copy: it pits new technology against old in an information-age version of nineteenth-century races between the horse and the locomotive. Moreover, there is an undeniable irony to be found in the fact that King William’s eleventh-century parchment survey has thus far proven itself to be more durable than the BBC’s 1980s computer program.

But the difficulties faced by the Domesday Project and its conservators are neither necessarily intrinsic to the electronic medium nor necessarily evidence that scholars at work on digital humanities projects have backed the wrong horse in the information race. Many of the problems which led to the Domesday Project’s early obsolescence and expensive restoration can be traced to its experimental nature and the innovative position it occupies in the history of humanities computing. By paying close attention to its example, by learning from its mistakes, and by recognising the often fundamental ways in which contemporary humanities computing projects differ from such digital incunabula, scholars can contribute greatly to the likelihood that their current projects will remain accessible long after their authors reach retirement age.

Notes

1 See the controversy between Baker 2002 and [Association of Research Libraries] 2001, both of whom agree that even very acidic newsprint can survive “several decades” in carefully controlled environments.

2 The first internet browser, “WorldWideWeb,” was finished by Tim Berners-Lee at CERN (Conseil Européen pour la Recherche Nucléaire) on Christmas Day 1990. The first popular consumer browser able to operate on personal computer systems was the National Center for Supercomputing Applications (NCSA) Mosaic (a precursor to Netscape), which appeared in 1993. See [livinginternet.com] 2003 and Cailliau 1995 for brief histories of the early browser systems. The first internet application, e-mail, was developed in the early 1970s ([www.almark.net] 2003); until the 1990s, its use was restricted largely to university researchers and the U.S. military.

3 Camileon 2003; see McMordie 2003 for a history of the Acorn platform.

4 SGML, the language from which HTML is derived, was developed in the late 1970s and early 1980s but not widely used until the mid-to-late 1980s ([SGML Users’ Group] 1990). HyTime, a multimedia standard, was approved in 1991 ([SGML SIGhyper] 1994).

5 This is the implication of Finney 1986, who stresses the project’s technically innovative nature, rather than its practical usefulness, throughout.

Reference List

----  

Disciplinary impact and technological obsolescence in digital medieval studies

Posted: Dec 15, 2006 13:12;
Last Modified: May 23, 2012 20:05

---

Forthcoming in The Blackwell Companion to the Digital Humanities, ed. Susan Schreibman and Ray Siemens. 2007.

Final Draft. Do not quote without permission of the author.

In May 2004, I attended a lecture by Elizabeth Solopova at a workshop at the University of Calgary on the past and present of digital editions of medieval works1. The lecture looked at various approaches to the digitisation of medieval literary texts and discussed a representative sample of the most significant digital editions of English medieval works then available: the Wife of Bath’s Prologue from the Canterbury Tales Project (Robinson and Blake 1996), Murray McGillivray’s Book of the Duchess (McGillivray 1997), Kevin Kiernan’s Electronic Beowulf (Kiernan 1999), and the first volume of the Piers Plowman Electronic Archive (Adams et al. 2000). Solopova herself is an experienced digital scholar and the editions she was discussing had been produced by several of the most prominent digital editors then active. The result was a master class in humanities computing: an in-depth look at mark-up, imaging, navigation and interface design, and editorial practice in four exemplary editions.

From my perspective in the audience, however, I was struck by two unintended lessons. The first was how easily digital editions can age: all of the CD-ROMs Solopova showed looked quite old-fashioned to my 2004 eyes in the details of their presentation and organisation, and only two, Kiernan’s Beowulf and McGillivray’s Book of the Duchess, loaded and displayed on the overhead screen with no difficulties or disabled features.

For the purposes of Solopova’s lecture these failures were not very serious: a few missing characters and a slightly gimpy display did not affect her discussion of the editions’ inner workings and indeed partially illustrated her point concerning the need to plan for inevitable technological obsolescence and change at all stages of edition design. For end-users consulting these editions in their studies or at a library, however, the problems might prove more significant: while well-designed and standards-based editions such as these can be updated in order to accommodate technological change, doing so requires skills that are beyond the technological capabilities of most humanities scholars; making the necessary changes almost certainly requires some post-publication investment on the part of the publisher and/or the original editors. Until such effort is made, the thought and care devoted by the original team to editorial organisation and the representation of textual detail presumably is being lost to subsequent generations of end users.

The second lesson I learned was that durability was not necessarily a function of age or technological sophistication. The editions that worked more-or-less as intended were from the middle of the group chronologically and employed less sophisticated technology than the two that had aged less well: they were encoded in relatively straightforward HTML (although Kiernan’s edition makes sophisticated use of Java and SGML for searching) and rendered using common commercial web browsers. The projects that functioned less successfully were encoded in SGML and were packaged with sophisticated custom fonts and specialised rendering technology: the Multidoc SGML browser in the case of the Piers Plowman Electronic Archive and the Dynatext display environment in the case of the Canterbury Tales Project. Both environments were extremely advanced for their day and allowed users to manipulate text in ways otherwise largely impossible before the development and widespread adoption of XML and XSL-enabled browsers.

Neither of these lessons seems very encouraging at first glance to medievalists engaged in or investigating the possibilities of using digital media for new projects. Like researchers in many humanities disciplines, medievalists tend to measure scholarly currency in terms of decades, not years or months. The standard study of the Old English poem Cædmon’s Hymn before my recent edition of the poem (O’Donnell 2005a) was published nearly 70 years ago. Reference works like Cappelli’s Dizionario di abbreviature latine ed italiane (first edition, 1899) or Ker’s Catalogue of manuscripts containing Anglo-Saxon (first edition, 1959) also commonly have venerable histories. In the case of the digital editions discussed above—especially those already showing evidence of technological obsolescence—it is an open question whether the scholarship they contain will be able to exert nearly the same long-term influence on their primary disciplines. Indeed, there is already some evidence that technological or rhetorical problems may be hindering the dissemination of at least some of these otherwise exemplary projects’ more important findings. Robinson, for example, reports that significant manuscript work by Daniel Mosser appearing in various editions of the Canterbury Tales Project is cited far less often than the importance of its findings warrant (Robinson 2005: § 11).

The lesson one should not draw from these and other pioneering digital editions, however, is that digital projects are inevitably doomed to early irrelevance and undeserved lack of disciplinary impact. The history of digital medieval scholarship extends back almost six decades to the beginnings of the Index Thomisticus by Roberto Busa in the mid 1940s (see Fraser 1998 for a brief history). Despite fundamental changes in focus, tools, and methods, projects completed during this time show enough variety to allow us to draw positive as well as negative lessons for future work. Some digital projects, such as the now more than thirty-year-old Dictionary of Old English (DOE), have proven themselves able to adapt to changing technology and have had an impact on their disciplines—and longevity—as great as the best scholarship developed and disseminated in print. Projects which have proven less able to avoid technological obsolescence have nevertheless also often had a great effect on our understanding of our disciplines, and, in the problems they have encountered, can also offer us some cautionary lessons (see Keene n.d. for a useful primer in conservation issues and digital texts).

Premature obsolescence: The failure of the Information Machine

Before discussing the positive lessons to be learned from digital medieval projects that have succeeded in avoiding technological obsolescence or looking ahead to examine trends that future digital projects will need to keep in mind, it is worthwhile considering the nature of the problems faced by digital medieval projects that have achieved more limited impact or aged more quickly than the intrinsic quality of their scholarship or relevance might otherwise warrant—although in discussing projects this way, it is important to realise that the authors of these often self-consciously experimental projects have not always aimed at achieving the standard we are using to judge their success: longevity and impact equal to that of major works of print-originated and disseminated scholarship in the principal medieval discipline.

In order to do so, however, we first need to distinguish among different types of obsolescence. One kind of obsolescence occurs when changes in computing hardware, software, or approach render a project’s content unusable without heroic efforts at recovery. The most famous example of this type is the Electronic Domesday Book, a project initiated by the BBC in celebration of the nine hundredth anniversary of King William’s original inventory of post-conquest Britain (Finney 1986-2006; see O’Donnell 2004 for a discussion). The shortcomings of this project have been widely reported: it was published on video disks that could only be read using a customised disk player; its software was designed to function on the BBC Master personal computer—a computer that at the time was more popular in schools and libraries in the United Kingdom than any competing system but is now hopelessly obsolete. Costing over £2.5 million, the project was designed to showcase technology that it was thought might prove useful to schools, governments, and museums interested in producing smaller projects using the same innovative virtual reality environment. Unfortunately, the hardware proved too expensive for most members of its intended market and very few people ended up seeing the final product. For sixteen years, the only way of accessing the project was via one of a dwindling number of the original computers and disk readers. More recently, after nearly a year of work by an international team of engineers, large parts of the project’s content have finally been converted for use on contemporary computer systems.

The Domesday project is a spectacular example of the most serious kind of technological obsolescence, but it is hardly unique. Most scholars now in their forties and fifties probably have disks lying around their studies containing information that is for all intents and purposes lost due to technological obsolescence—content written using word processors or personal computer database programmes that are no longer maintained, recorded on difficult to read media, or produced using computers or operating systems that ultimately lost out to more popular competitors. But the Domesday project did not become obsolete solely because it gambled on the wrong technology: many other digital projects of the time, some written for main-frame computers using languages and operating systems that are still widely understood, have suffered a similar obsolescence even though their content theoretically could be recovered more easily.

In fact the Domesday Book project also suffered from an obsolescence of approach—the result of a fundamental and still ongoing change in how medievalists and others working with digital media approach digitisation. Before the second half of the 1980s, digital projects were generally conceived of as information machines—programs in which content was understood to have little value outside of its immediate processing context. In such cases, the goal was understood to be the sharing of results rather than content. Sometimes, as in the case of the Domesday Book, the goal was the arrangement of underlying data in a specific (and closed) display environment; more commonly, the intended result was statistical information about language usage and authorship or the development of indices and concordances (see for example, the table of contents in Patton and Holoien 1981, which consists entirely of database, concordance, and statistical projects). Regardless of the specific processing goal, this approach tended to see data as raw material rather than an end result2. Collection and digitisation were done with an eye to the immediate needs of the processor, rather than the representation of intrinsic form and content. Information not required for the task at hand was ignored. Texts encoded for use with concordance or corpus software, for example, commonly ignored capitalisation, punctuation, or mise-en-page. Texts encoded for interactive display were structured in ways suited to the planned output (see for example the description of database organisation and video collection in Finney 1986-2006). What information was recorded was often indicated using ad hoc and poorly documented tokens and codes whose meaning now can be difficult or impossible to recover (see Cummings 2006).
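A hypothetical illustration may help (the legacy codes in the comment below are invented for the purpose and drawn from no actual project): the first line packs its observations into tokens only one concordance program could interpret; the second records the same line of verse in markup that describes itself.

  <!-- An invented legacy-style record, meaningful only to its processor:
         001A HWAET+WE+GARDENA IN$GEARDAGUM @v
       The same line described in its own terms with structural markup: -->
  <l n="1"><w>Hwæt</w> <w>we</w> <w>Gardena</w> <w>in</w> <w>geardagum</w></l>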

The problem with this approach is that technology ages faster than information: data that require a specific processing context in order to be understood will become unintelligible far more rapidly than information that has been described as much as possible in its own terms without reference to a specific processing outcome. By organising and encoding their content so directly to suit the needs of a specific processor, information machines like the Domesday Project condemned themselves to relatively rapid technological obsolescence.

Content as end-product: Browser-based projects

The age of the information machine began to close with the development and popular acceptance of the first Internet browsers in the early 1990s. In an information machine, developers have great control over both their processor and how their data is encoded. They can alter their encoding to suit the needs of their processors and develop or customise processors to work with specific instances of data. Developers working with browsers, however, have far less control over either element: users interact with projects using their own software and require content prepared in ways compatible with their processor. This both makes it much more difficult for designers to produce predictable results of any sophistication and requires them to adhere to standard ways of describing common phenomena. It also changes the focus of project design: where once developers focussed on producing results, they now tend to concentrate instead on providing content.

This change in approach explains in large part the relative technological longevity of the projects by McGillivray and Kiernan. Both were developed during the initial wave of popular excitement at the commercialisation of the Internet. Both were designed to be used without modification by standard Internet web browsers operating on the end-users’ computer and written in standard languages using a standard character set recognised by all Internet browsers to this day. For this reason—and despite the fact that browsers available in the late 1990s were quite primitive by today’s standards—it seems very unlikely that either project in the foreseeable future will need anything like the same kind of intensive recovery effort required by the Domesday Project: modern browsers are still able to read early HTML-encoded pages and Java routines, and are likely to continue to do so, regardless of changes in operating system or hardware, as long as the Internet exists in its current form. Even in the unlikely event that technological changes render HTML-encoded documents unusable in our lifetime, conversion will not be difficult. HTML is a text-based language that can easily be transformed by any number of scripting languages. Since HTML-encoded files are in no way operating system or software dependent, future generations—in contrast to the engineers responsible for converting the Electronic Domesday Book—will be able to convert the projects by Kiernan and McGillivray to new formats without any need to reconstruct the original processing environment.

SGML-based editions

The separation of content from processor did not begin with the rise of Internet browsers. HTML, the language which made the development of such browsers possible, is itself derived from work on standardised structural mark-up languages in the 1960s through the 1980s. These languages, the most developed and widely used at the time being Standard General Mark-up Language (SGML), required developers to make a rigid distinction between a document’s content and its appearance. Content and structure were encoded according to the intrinsic nature of the information and interests of the encoder using a suitable standard mark-up language. How this mark-up was to be used and understood was left up to the processor: in a web browser, the mark-up could be used to determine the text’s appearance on the screen; in a database program it might serve to delimit it into distinct fields. For documents encoded in early HTML (which used a small number of standard elements), the most common processor was the web browser, which formatted content for display largely without specific instructions from the content developer: having described a section of text using an appropriate HTML tag such as 〈i〉 (italic) or 〈b〉 (bold), developers were supposed for the most part to leave decisions about specific details of size, font, and position up to the relatively predictable internal stylesheets of the user’s browser (though of course many early webpages misused structural elements like 〈table〉 to encode appearance).

SGML was more sophisticated than HTML in that it described how mark-up systems were to be built rather than their specific content. This allowed developers to create custom sets of structural elements that more accurately reflected the qualities they wished to describe in the content they were encoding. SGML languages like DocBook were developed for the needs of technical and other publishers; the Text Encoding Initiative (TEI) produced a comprehensive set of structural elements suitable for the encoding of texts for use in scholarly environments. Unfortunately, however, this flexibility also made it difficult to share content with others. Having designed their own sets of structural elements, developers could not be certain their users would have access to software that knew how to process them.
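A small TEI-style fragment may make the point concrete (an illustrative sketch only, not a transcription of any particular manuscript): the elements record editorial observations about the text itself, while nothing in the encoding dictates fonts, colours, or screen layout.

  <!-- Illustrative TEI-style transcription fragment: structure and editorial
       judgements are recorded; presentation is left entirely to the processor -->
  <p>
    her <damage agent="rubbing"><unclear>cuom</unclear></damage>
    <del rend="strikethrough">se</del>
    <add place="above">se cyning</add> to his rice
  </p>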

The result was a partial return to the model of the information machine: in order to ensure their work could be used, developers of SGML projects intended for wide distribution tended to package their projects with specific (usually proprietary) software, fonts, and processing instructions. While the theoretical separation of content and processor represented an improvement over the approach taken by previous generations of digital projects in that it treated content as having intrinsic value outside the immediate processing context, the practical need to supply users with special software capable of rendering or otherwise processing this content tended nevertheless to tie the projects’ immediate usefulness to the lifespan and weaknesses of the associated software. This is a less serious type of obsolescence, since rescuing information from projects that suffer from it involves nothing like the technological CPR required to recover the Domesday Project. But the fact that it must occur at all almost certainly limits these projects’ longevity and disciplinary impact. Users who must convert a project from one format to another or work with incomplete or partially broken rendering almost certainly are going to prefer texts and scholarship in more convenient formats.

XML, XSLT, Unicode, and related technologies

Developments of the last half-decade have largely eliminated the problems these pioneering SGML-based projects faced in distributing their work to a general audience. The widespread adoption of XML, XSLT, Unicode, and similarly robust international standards on the Internet means that scholars developing new digital projects now can produce content using mark-up as flexible and sophisticated as anything possible in SGML without worrying that their users will lack the necessary software to display and otherwise process it. Just as the projects by Kiernan and McGillivray were able to avoid premature technological obsolescence by assuming users would make use of widely available Internet browsers, so too can designers of XML-based projects now increase their odds of avoiding early obsolescence by taking advantage of the ubiquitousness of the new generation of XML-, XSLT-, and Unicode-aware Internet clients3.
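A minimal sketch of how this works in practice (the rendering decision is arbitrary, and the namespace shown is that of TEI P5): a single XSLT rule tells any standards-aware processor how to display one feature of the encoding, and can be rewritten years later without touching the encoded text itself.

  <?xml version="1.0" encoding="UTF-8"?>
  <!-- One presentational decision, kept outside the content: additions made
       above the line are rendered as superscript. A future processor could
       interpret the same markup quite differently. -->
  <xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
      xmlns:tei="http://www.tei-c.org/ns/1.0">
    <xsl:template match="tei:add[@place='above']">
      <sup><xsl:apply-templates/></sup>
    </xsl:template>
  </xsl:stylesheet>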

Tools and community support

The fact that these technologies have been so widely accepted in both industry and the scholarly world has other implications beyond making digital projects easier to distribute, however. The establishment of robust and stable standards for structural mark-up has also encouraged the development of a wide range of tools and organisations that also make such projects easier to develop.

Tools

Perhaps the most striking change lies in the development of tools. When I began my SGML-based edition of Cædmon’s Hymn in 1997, the only SGML-aware and TEI-compatible tools I had at my disposal were GNU-Emacs, an open source text editor, and the Panorama and later Multidoc SGML browsers (the commercial tools and environments that were available lay far beyond the budget of my one-scholar project). None of these was very user-friendly. GNU-Emacs, though extremely powerful, was far more difficult to set up and operate than the word processors, spreadsheets, and other programs I had been accustomed to using up to that point. The Panorama and Multidoc browsers used proprietary languages to interpret SGML that had relatively few experienced users and a very limited basis of support. There were other, often quite sophisticated, tools and kinds of software available, including some—such as TACT, Collate, TUSTEP, and various specialised fonts like Peter Baker’s original Times Old English—that were aimed primarily at medievalists or developers of scholarly digital projects. Almost all of these, however, required users to encode their data in specific and almost invariably incompatible ways. Often, moreover, the tool itself also was intended for distribution to the end user—once again causing developers to run the risk of premature technological obsolescence.

Today, developers of new scholarly digital projects have access to a far wider range of general and specialised XML-aware tools. In addition to GNU-Emacs—which remains a powerful editor and has become considerably easier to set up on most operating systems—there are a number of full-featured, easy-to-use, open source or relatively inexpensive commercial XML-aware editing environments available, including Oxygen, Serna, and Screem. There are also quite a number of well-designed tools aimed at solving more specialised problems in the production of scholarly projects. Several of these, such as Anastasia and Edition Production and Presentation Technology (EPPT), have been designed by medievalists. Others, such as the University of Victoria’s Image Markup Tool and other tools under development by the TAPoR project, have been developed by scholars in related disciplines.

More significantly, these tools avoid most of the problems associated with those of previous decades. All the tools mentioned in the previous paragraph (including the commercial tools) are XML-based and have built-in support for TEI XML, the standard structural markup language for scholarly projects (this is also true of TUSTEP, which has been updated continuously). This means both that they can often be used on the same underlying content and that developers can encode their text to reflect their interests or the nature of the primary source rather than to suit the requirements of a specific tool. In addition, almost all are aimed at the developer rather than the end user. With the exception of Anastasia and EPPT, which both involve display environments, none of the tools mentioned above is intended for distribution with the final project. Although these tools—many of which are currently in the beta stage of development—ultimately will become obsolete, the fact that almost all are now standards compliant means that the content they produce almost certainly will survive far longer.

Community support

A second area in which the existence of stable and widely recognised standards has helped medievalists working with digital projects has been in the establishment of community-based support and development groups. Although Humanities Computing, like most other scholarly disciplines, has long had scholarly associations to represent the interests of its members and foster exchanges of information (e.g. Association for Literary and Linguistic Computing [ALLC]; Society for Digital Humanities / Société pour l’étude des médias interactifs [SDH-SEMI]), the last half-decade has also seen the rise of a number of smaller formal and informal Communities of Practice aimed at establishing standards and providing technological assistance to scholars working in more narrowly defined disciplinary areas. Among the oldest of these are Humanist-l and the TEI—both of which pre-date the development of XML by a considerable period of time. Other community groups, usually more narrow in focus and generally formed after the development of XML, Unicode, and related technologies, include MENOTA (MEdieval and NOrse Text Archive), publishers of the Menota handbook: Guidelines for the encoding of medieval Nordic primary sources; MUFI (Medieval Unicode Font Initiative), an organisation dedicated to the development of solutions to character encoding issues in the representation of characters in medieval Latin manuscripts; and the Digital Medievalist, a community of practice aimed at helping scholars meet the increasingly sophisticated demands faced by designers of contemporary digital projects, which organises a journal, wiki, and mailing list devoted to the establishment and publication of best practice in the production of digital medieval resources.

These tools and organisations have helped reduce considerably the technological burden placed on contemporary designers of digital resources. As Peter Robinson has argued, digital projects will not come completely into their own until “the tools and distribution… [are] such that any scholar with the disciplinary skills to make an edition in print can be assured he or she will have access to the tools and distribution necessary to make it in the electronic medium” (Robinson 2005: abstract). We are still a considerable way away from this ideal and in my view unlikely to reach it before a basic competence in Humanities computing technologies is seen as an essential research skill for our graduate and advanced undergraduate students. But we are also much farther along than we were even a half-decade ago. Developers considering a new digital project can begin now confident that they will be able to devote a far larger proportion of their time to working on disciplinary content—their scholarship and editorial work—than was possible even five years ago. They have access to tools that automate many jobs that used to require special technical know-how or support. The technology they are using is extremely popular and well-supported in the commercial and academic worlds. And, through communities of practice like the Text Encoding Initiative, Menota, and the Digital Medievalist Project, they have access to support from colleagues working on similar problems around the globe.

Future Trends: Editing non-textual objects

With the development and widespread adoption of XML, XSLT, Unicode, and related technologies, text-based digital medieval projects can be said to have emerged from the incunabula stage of their technological development. Although there remain one or two ongoing projects that have resisted incorporating these standards, there is no longer any serious question as to the basic technological underpinnings of new text-based digital projects. We are also beginning to see a practical consensus as to the basic generic expectations for the “Electronic edition”: such editions almost invariably include access to transcriptions and full colour facsimiles of all known primary sources, methods of comparing the texts of individual sources interactively, and, in most cases, some kind of guide, reading, or editorial text. There is still considerable difference in the details of interface (Rosselli Del Turco 2006), mise en moniteur, and approach to collation and recension. But on the whole, most developers and presumably a large number of users seem to have an increasingly strong sense of what a text-based digital edition should look like.

Image, Sound, and Animation: Return of the information machine?

Things are less clear when digital projects turn to non-textual material. While basic and widely accepted standards exist for the encoding of sounds and 2D and 3D graphics, there is far less agreement as to the standards that are to be used in presenting such material to the end user. As a result, editions of non-textual material often have more in common with the information machines of the 1980s than contemporary XML-based textual editions. Currently, most such projects appear to be built using Adobe’s proprietary Flash and Shockwave formats (e.g. Foys 2003; Reed Kline 2001). Gaming applications, 3D applications, and immersive environments use proprietary environments such as Flash and Unreal Engine or custom-designed software. In each case, the long-term durability and cross-platform operability of projects produced in these environments is tied to that of the software for which they are written. All of these formats require proprietary viewers, none of which are shipped as a standard part of most operating systems. As with the BBC Domesday Project, restoring content published in many of these formats ultimately may require restoration of the original hard- and software environment.

Using technology to guide the reader: Three examples4

Current editions of non-textual material resemble information machines in another way, as well: they tend to be over-designed. Because developers of such projects write for specific processors, they—like developers of information machines of the 1980s—are able to control the end-user’s experience with great precision. They can place objects in precise locations on the user’s screen, allow or prevent certain types of navigation, and animate common user tasks.

When handled well, such control can enhance contemporary users’ experience of the project. Martin Foys’s 2003 edition of the Bayeux Tapestry, for example, uses Flash animation to create a custom-designed browsing environment that allows the user to consult the Bayeux Tapestry as a medieval audience might—by moving back and forth apparently seamlessly along its 68-metre length. The opening screen shows a section from the facsimile above a plot-line that provides an overview of the Tapestry’s entire contents in a single screen. Users can navigate the Tapestry scene-by-scene using arrow buttons at the bottom left of the browser window, centimetre by centimetre using a slider on the plot-line, or by jumping directly to an arbitrary point on the tapestry by clicking on the plot-line at the desired location. Tools, background information, other facsimiles of the tapestry, scene synopses, and notes are accessed through buttons at the bottom left corner of the browser. The first three types of material are presented in a separate window when chosen; the last two appear under the edition’s plot-line. Additional utilities include a tool for making slideshows that allows users to reorder panels to suit their own needs.

If such control can enhance a project’s appearance, it can also get in the way—encouraging developers to include effects for their own sake, or to control end-users’ access to the underlying information unnecessarily. The British Library Turning the Pages series, for example, allows readers to mimic the action of turning pages in an otherwise straightforward photographic manuscript facsimile. When users click on the top or bottom corner of the manuscript page and drag the cursor to the opposite side of the book, they are presented with an animation showing the page being turned over. If they release the mouse button before the page has been pulled approximately 40% of the way across the visible page spread, virtual “gravity” takes over and the page falls back into its original position.

This is an amusing toy and well suited to its intended purpose as an “interactive program that allows museums and libraries to give members of the public access to precious books while keeping the originals safely under glass” (British Library n.d.). It comes, however, at a steep cost: the page-turning system uses an immense amount of memory and processing power—the British Library estimates up to 1 GB of RAM for high-quality images on a stand-alone machine—and the underlying software used for the Internet presentation, Adobe Shockwave, is not licensed for use on all computer operating systems (oddly, the non-Shockwave Internet version uses Windows Media Player, another proprietary system that shares the same gaps in licensing). The requirement that users drag pages across the screen, moreover, makes paging through an edition unnecessarily time- and attention-consuming: having performed an action that indicates that they wish an event to occur (clicking on the page in question), users are then required to perform additional complex actions (holding the mouse button down while dragging the page across the screen) in order to effect the desired result. What was initially an amusing diversion rapidly becomes a major and unnecessary irritation.

More intellectually serious problems can arise as well. In A Wheel of Memory: The Hereford Mappamundi (Reed Kline 2001), Flash animation is used to control how the user experiences the edition’s content—allowing certain approaches and preventing others. Seeing the Mappamundi “as a conceit for the exploration of the medieval collective memory… using our own collective rota of knowledge, the CD-ROM” (§ I [audio]), the edition displays images from the map and associated documents in a custom-designed viewing area that is itself in part a rota. Editorial material is arranged as a series of chapters and thematically organised explorations of different medieval Worlds: World of the Animals, World of the Strange Races, World of Alexander the Great, etc. With the exception of four numbered chapters, the edition makes heavy use of the possibilities for non-linear browsing inherent in the digital medium to organise its more than 1000 text and image files.

Unfortunately, and despite its high production values and heavy reliance on a non-linear structural conceit, the edition itself is next to impossible to use or navigate in ways not anticipated by the project designers. Text and narration are keyed to specific elements of the map and edition and vanish if the user strays from the relevant hotspot: because of this close integration of text and image, it is impossible to compare text written about one area of the map with a facsimile of another. The facsimile itself is also very difficult to study. The customised viewing area is of a fixed size (I estimate approximately 615×460 pixels), with more than half this surface given over to background and navigation: when the user chooses to view the whole map on screen, the 4-foot-wide original is reproduced with a diameter of less than 350 pixels (approximately 1/10 actual size). Even then, it remains impossible to display the map in its entirety: in keeping with the project’s rota conceit, the facsimile viewing area is circular even though the Hereford map itself is pentagonal. Try as I might, I have never been able to get a clear view of the border and image in the facsimile’s top corner.
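The scale estimate can be checked with simple arithmetic. Assuming a screen resolution of roughly 72 pixels per inch, a figure typical of monitors of the period and my assumption rather than the edition’s, a 350-pixel diameter corresponds to just under five inches, or about one tenth of the 48-inch original.

```typescript
// Rough check of the ~1/10 scale estimate. The 72 px-per-inch figure is an
// assumption about period monitors, not a value given by the edition.
const SCREEN_PPI = 72;
const facsimileDiameterPx = 350;   // on-screen diameter reported above
const originalWidthInches = 48;    // the "4-foot-wide original"

const onScreenInches = facsimileDiameterPx / SCREEN_PPI;   // ≈ 4.9 inches
const scale = onScreenInches / originalWidthInches;        // ≈ 0.10, i.e. about 1/10
console.log(onScreenInches.toFixed(1), scale.toFixed(2));  // "4.9 0.10"
```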

Future standards for non-textual editions?

It is difficult to see at this point how scholarly editions involving non-textual material ultimately will evolve. Projects that work most impressively right now use proprietary software and viewers (and face an obvious danger of premature obsolescence as a result); projects that adhere to today’s non-proprietary standards for the display and manipulation of images, animation, and sound are currently in a situation analogous to that of the early SGML-based editions. On the one hand, their adherence to open standards presumably will help ensure that their data can easily be converted to more popular and better-supported standards once these develop; on the other hand, the lack of current popular support means that such projects must supply their own processing software—which means tying their short-term fate to the success and flexibility of a specific processor. Projects in this field will have emerged from their technological infancy only when designers can concentrate on their content, safe in the assumption that users will have easy access to appropriate standards-based processing software on their own computers.

Collaborative content development

Structural markup languages like HTML were crucial to the success of the Internet because they allowed for unnegotiated interaction between developers and users. Developers produce content assuming users will be able to process it; users access content assuming it will be suitable for use with their processors. Except when questions of copyright, confidentiality, or commerce intervene, contact between developers and users can be limited to little more than the purchase of a CD-ROM or the transfer of files from server to browser.

The last few years have seen a movement towards applying this model to content development as well. Inspired by the availability of well-described and universally recognised encoding standards, and encouraged no doubt by the success of the Wikipedia and the open-source software movement, many projects are now looking for ways to provide for the addition and publication of user-contributed content or the incorporation of work by other scholars. Such contributions might take the form of notes and annotations, additional texts and essays, links to external resources, and corrections or revisions of incorrect or outdated material.

An early, pre-wiki model of this approach is the Online Reference Book for Medieval Studies (ORB). Founded in 1995 and run by a board of section editors, ORB provides a forum for the development and exchange of digital content by and for medievalists. Contributors range from senior scholars to graduate students and interested amateurs, and their contributions belong to a wide variety of genres: encyclopaedia-like articles, electronic primary texts, on-line textbooks and monographs, sample syllabi, research guides, and resources for the non-specialist. Despite this, the project itself is administered much like a traditional print-based encyclopaedia: it is run by an editorial board that is responsible for soliciting, vetting, and editing contributions before they are published.

More recently, scholars have been exploring the possibilities of a different, unnegotiated approach to collaboration. One model is the Wikipedia—an on-line reference source that allows users to contribute and edit articles with little editorial oversight. This approach is frequently used on a smaller scale for the construction of more specialised reference works: the Digital Medievalist, for example, is using wiki software to build a community resource for medievalists who use digital media in their research, study, or teaching. Currently, the wiki contains descriptions of projects and publications, conference programmes, calls for papers, and advice on best practice in various technological areas.

Other groups, such as a number of projects at the Brown Virtual Humanities Lab, are working on the development of mechanisms by which members of the community can make more substantial contributions to the development of primary and secondary sources. In this case, users may apply for permission to contribute annotations to the textual database, discussing differences of opinion or evidence in an associated discussion forum (Armstrong and Zafrin 2005; Riva 2006).

A recent proposal by Espen Ore suggests an even more radical approach: the design of unnegotiated collaborative editions—i.e. projects built on the assumption that others will add to, edit, and revise the core editorial material: texts, introductory material, glossaries, and apparatus (Ore 2004). In a similar vein, the Visionary Rood Project has proposed building its multi-object edition using an extensible architecture that will allow users to associate their own projects with others to form a matrix of interrelated objects, texts, and commentary (Karkov, O’Donnell, Rosselli Del Turco, et al. 2006). Peter Robinson has recently proposed the development of tools that would allow this type of editorial collaboration to take place (Robinson 2005).

These approaches to collaboration are still very much in their earliest stages of development. While the technology already exists to enable such community participation in the development of intellectual content, questions of quality control, intellectual responsibility, and especially incentives for participation remain very much unsettled. Professional scholars traditionally achieve success—both institutionally and in terms of reputation—through the quality and quantity of their research publications. Community-based collaborative projects do not fit easily into this model. Project directors cannot easily claim intellectual responsibility for the contributions of others to “their” projects—reducing such projects’ value in a profession in which monographs are still seen as a standard measure of influence and achievement. And the types of contributions open to most participants—annotations, brief commentary, and editorial work—are difficult to use in building a scholarly reputation: the time when a carefully researched entry on the Wikipedia or an annotation to an on-line collaborative edition will help scholars who are beginning or building their careers is still a long way off (see O’Donnell 2006, which discusses a number of the economic issues involved in collaborative digital models).

Conclusion

Digital scholarship in Medieval Studies has long involved finding an accommodation between the new and the durable. On the one hand, technology has allowed scholars to do far more than was ever possible in print. It has allowed them to build bigger concordances and more comprehensive dictionaries, to compile detailed statistics about usage and dialectal spread, and to publish far more detailed collations, archives, and facsimiles. At the same time, however, the rapidly changing nature of this technology and its associated methods has brought with it the potential cost of premature obsolescence. While few projects, perhaps, have suffered quite so spectacularly as the BBC’s Domesday Book, many have suffered from an undeserved lack of attention or disciplinary impact due to technological problems. The emphasis on information as a raw material in the days before the development of structural markup languages often produced results of relatively narrow and short-term interest—frequently in the form of information machines that could not survive the obsolescence of their underlying technology without heroic and costly efforts at reconstruction. Even the development of early structural markup languages like SGML did not entirely solve this problem: while theoretically platform-independent and focussed on the development of content, SGML-based projects commonly required users to acquire specific and usually very specialised software for even the most basic processing and rendition.

Of the projects published in the initial years of the Internet revolution, those that relied on the most widely supported technology and standards—HTML and the ubiquitous desktop Internet browsers—have survived best. The editions by Kiernan and McGillivray showcased by Solopova in her lecture that summer still function well—even if their user interfaces now look even more old-fashioned two years on.

Inasmuch as the new XML- and Unicode-based technologies combine the flexibility and sophistication of SGML with the broad support enjoyed by early HTML, text-based medieval digital scholarship is now leaving its most experimental period. There remain economic and rhetorical issues surrounding the best ways of delivering different types of scholarly content to professional and popular audiences; but on the whole the question of the core technologies required has been settled definitively.

The new areas of experimentation in medieval digital studies involve editions of non-textual material and the development of new collaborative models of publication and project development. Here technology has even more to offer the digital scholar, and carries with it even greater risks. On the one hand, the great strides made in computer-based animation, gaming, and 3-D imaging in the commercial world offer projects the chance to deal with material never before subject to the kind of thorough presentation now possible. We already have marvellous editions of objects—maps, tapestries, two-dimensional images—that allow the user to explore their subjects in ways impossible in print. In the near future we can expect to see greater use of 3-D and gaming technology in the treatment of sculpture, archaeological digs, and even entire cities. With the use of wikis and similar collaborative technologies, such projects may also be able to capture much more of the knowledge of the disciplinary experts who make up their audiences.

For projects dealing with non-textual objects, the risk is that the current necessity of relying on proprietary software intended for the much shorter-term needs of professional game designers and computer animators will lead to the same kind of premature and catastrophic obsolescence brought on by the equally advanced-for-its-day Domesday Project. Sixteen years from now, animation-design suites like Director (the authoring suite used to produce Shockwave files) and gaming engines like the Unreal Engine (used to produce the current generation of video games) are likely to be different from, and perhaps incompatible with, current versions in a way that XML authoring technologies and processors will not be. While we can hope that reconstruction will not be as difficult as it proved to be in the case of the Domesday Project, it seems likely that few of today’s non-textual editions will still be working without problems at an equivalent point in their histories, two decades from now.

In the case of experimentation with collaborative software, the challenge is more economic and social than technological. In my experience, most professional scholars are initially extremely impressed by the possibilities offered by collaborative software such as wikis and other annotation engines—before almost immediately bumping up against the problems of prestige and quality control that currently make them infeasible as channels of high-level scholarly communication. Indeed, at one recent conference session I attended (on the future of collaborative software, no less!), the biggest laugh of the morning came when one of the speakers confessed to having devoted most of the previous month to researching and writing a long article for the Wikipedia on his particular specialism in Medieval Studies.

That current text-based digital editions seem likely to outlive the technology that produced them can be attributed to the pioneering efforts of the many scholars responsible for editions like those by Adams, Kiernan, McGillivray, and Robinson discussed by Solopova in her lecture. The current generation of scholars producing editions of non-textual objects and experimenting with collaborative forms of scholarship and publication are now filling a similar role. The solutions they are developing may or may not provide the final answers; but they will certainly provide the core of experimental practice upon which those answers ultimately will be built.

Notes

1 The focus of this chapter is on theoretical and historical problems that have affected digital scholarship in Medieval Studies in the past and are likely to continue to do so for the foreseeable future. Scholars seeking more specific advice on technological problems or best practice have access to numerous excellent Humanities Computing societies, mailing lists, and Internet sites. For some specific suggestions, see the section “Community Support,” pp. 000-000, below. I thank Roberto Rosselli Del Turco for his help with this article.

2 Exceptions to this generalisation prove the rule: pre-Internet-age projects such as the Dictionary of Old English (DOE) and Project Gutenberg, which concentrated more on content than on processing, have aged much better than those that concentrated on processing rather than content. Both the DOE and Project Gutenberg, for example, have successfully migrated to HTML and now XML. The first volume of the DOE was published on microfiche in 1986—the same year as the BBC’s Domesday Book; on-line and CD-ROM versions were subsequently produced with relatively little effort. Project Gutenberg began with ASCII text in 1971.

3 Not all developers of XML-encoded medieval projects have taken this approach. Some continue to write for specific browsers and operating systems (e.g. Muir 2004a); others have developed or are in the process of developing their own display environments (e.g. Anastasia, Elwood [see Duggan and Lyman 2005: Appendix]). The advantage of this approach, of course, is that—as with information machines like the BBC Domesday Book—developers acquire great control over the end user’s experience (see for example McGillivray 2006 on Muir 2004b); the trade-off, however, is likely to be unnecessarily rapid technological obsolescence or increased maintenance costs in the future.

4 The discussion in this section has been adapted with permission from a much longer version in O’Donnell 2005b.

References and Further Reading

Organisations and Support

Further reading
