Saturday 8 February 2014

Hey, This is Your Journal - [The Editor's Cut]
Editorial - LLC. The Journal of Digital Scholarship in the Humanities 29/1

Even in academia, where much is done on a voluntary basis, it is generally acknowledged that good work deserves fair remuneration. A lot of labour is involved in publishing a peer-reviewed scholarly journal and, whether access to the publication is open or subscription-based, there is always a cost involved. In the real world, no one expects anything to be free, except for a smile and the sun, maybe.

Over the last couple of years I have been contacted by a handful of scholars who announced that they no longer want to contribute to or review for LLC because they object to 'giving away their research and peer reviewing for free to a publisher who charges readers and makes a profit'. I regret that this point of view demonizes LLC by romanticizing the ideal of Open Access publication.

In the first part of this editorial, I'd like to take the opportunity to explain why this perspective on LLC is mistaken on at least three points and why a decision not to devote any time or effort to LLC directly affects Open Access publications. In the second part of this editorial I present a report on the past, record-breaking year of publishing LLC. The Journal of Digital Scholarship in the Humanities.

1. Three points to consider before turning your back

1.1. Ownership & Copyrights

Let me start by emphasizing that LLC is not owned by the Press but by the European Association for Digital Humanities (EADH, formerly known as ALLC). Every five years, EADH re-negotiates the contract or puts the publication of the Journal – not the ownership – up for tender. A very important and substantial part of the negotiations concerns author and readership services, terms and conditions. For example, as an author publishing in LLC, you do not 'give away your research' but you retain all copyrights. All you do is sign a licence which gives EADH the right to have your work published on their behalf by the Press. This is clearly stated in the footer of the first page of each published paper. This means that as an author, you retain the right to make a preprint version of the article available on your own personal website and/or that of your employer and/or in free public servers of preprints and/or articles in your subject area, provided that where possible you acknowledge that the article has been accepted for publication in LLC.

It is true that for the moment, you cannot make the accepted postprint manuscript available in the same way during the first 24 months after publication, but this term is currently under discussion. It is also true that, for the moment, you can never make the PDF of the final typeset and published version of your article available for free, e.g. in institutional repositories, for the simple reason that there is copyright involved in the typesetting by the Press. Just as the Press respects the copyrights of the authors and the licences to the EADH, we should respect the copyrights of the Press. There are legal systems to obey, after all. However, both the EADH and the LLC Editorial Team are in constant discussion with the Press to see what could be done about these restrictions. One of the outcomes of this ongoing and open discussion has been the freely accessible online publication of the DH2012 conference issue (LLC 28/4) for a period of three months after publication. This will certainly be a recurring initiative which could hopefully be extended to other issues as well.

So what do you get out of it, apart from retaining your copyrights? Well, your publication is included in a highly esteemed scholarly Journal with a long tradition, a wide distribution, and a high reputation. Further, the peer review process helps to improve your paper, and the copy-editing and typesetting help to improve its readability. Your paper is published both in print and online, where reference links to cited work are included and related data files can be linked to the article. Your published paper can be accessed by ca. 600 personal subscribers, and scholars and students in over 3,500 institutions worldwide. Your publication is indexed by the most important indexing/abstracting services in the humanities, including MLA, ISI Web of Science indexes, INSPEC, ABELL, ABES, LLBA, etc. And as LLC receives a yearly Impact Factor, publishing in the Journal certainly does no harm to your academic record.

1.2. Subscription Fees & Membership Fees

Subscription fees are collected by the Press on behalf of ADHO and all of its constituent organizations, namely the EADH, ACH, CSDH/SCHN, aaDH, JADH and CenterNet. The subscription rate is agreed upon yearly. Up to 2013, membership of an association of your choice or joint membership of ADHO was by subscription to the Journal only. In 2013, this changed, and it is now also possible to become a (joint) society member and pay the corresponding fee without paying for a subscription to the Journal. Currently, this proves to be popular with student members who have access to the Journal through their institution. It is, however, a misconception that opting for the membership-only fee would only affect the Press's profit. Under the current contract, only 30% of the net profit remains with the Press. The remaining 70% of the net profit goes directly to ADHO and its constituent organisations. However, there is always a fixed cost involved in making a Journal, whichever publication model is chosen or whatever the circulation of the Journal – we did agree at the beginning of this editorial that good work deserves fair remuneration. The profit which is at stake here is your very own associations' income.

1.3. LLC funds DH (and Open Access)

LLC generates a substantial revenue for all Digital Humanities Organisations represented in ADHO. The total income from LLC received by ADHO in 2013, i.e. 70% of the net profit of 2012, was over 80,000 GBP (ca. 97,000 EUR; ca. 132,000 USD; ca. 146,000 CAD; ca. 151,000 AUD; ca. 13,613,000 JPY). ADHO distributes this income among the constituent organisations using a disbursement system which takes into account the individual and joint memberships and the geographical location of institutional and consortia subscriptions. A huge share of this income is used by ADHO and its constituent organisations to jointly fund, for instance, the publication subventions of DHQ, Digital Studies/Le Champ Numérique and DHCommons, the DH web infrastructure including DHQ hosting costs, conference support, prizes, awards and bursaries, etc. The rest of the income is then passed on to the constituent organisations on the basis of the disbursement scheme. Each constituent organisation uses this income to realize their own programmes and actions to promote, support and further the Digital Humanities.

So, if anyone makes a huge profit from the Journal, it is your very own association. Its income through the Journal allows your association to invest in the Digital Humanities in the way you, as a voting member, decide.

To put it simply: by supporting LLC as a subscriber, author or reviewer, you support ADHO as well as your own association and you facilitate the funding of many initiatives in DH, including the publication of Open Access publications. To put it differently, a subscription to LLC covers not only the cost involved in the publication of LLC, but also the cost involved in the publication of the other Open Access publications which are offered by ADHO for free.

2. A Year's Work in Publishing LLC

2013 has by all measures been a record-breaking year for LLC. The Journal has never had more subscribers, never received more submissions, never published more papers on more pages, never received a higher Impact Factor, and never generated more income for ADHO and its constituent organisations.

2.1. Figures

In 2013, LLC managed to raise its individual subscriptions again by a healthy 12%. The Journal also received 34.58% more submissions than in 2012. With 144 manuscripts submitted from 34 different countries, LLC confirms the upward trend which started in 2011. The breakdown of the submitted papers per country in 2013 shows that most submissions still come from Europe (74) with the UK (17), Germany (11), the Netherlands (6), Belgium (5), France (5), Italy (5), Switzerland (5), Spain (4) and Greece (4) as the main providers of copy. Other European submissions came from Norway (1), Portugal (2), the Russian Federation (2), Cyprus (1), Ireland (2), Poland (2), Sweden (2) and Turkey (1). The second highest number of submissions came from the US (32). Asian authors produced 20 submissions: China (6), India (4), Japan (2), Malaysia (2), Taiwan (2) and 1 each from Iran, Israel, Hong Kong and Korea. The rest of the submissions were sent in from Canada (8), Australia (5), Africa (3 – Egypt, Morocco & Nigeria), Mexico (1) and New Zealand (1).

This is a very pleasing result which reflects the geographical distribution of the constituent organisations of ADHO which adopted LLC as their official Journal. These figures partially fulfil the objectives I outlined in my editorial in LLC 26/1 with regard to outreach to scholars in Asia, Africa, Latin America, Australia, and the Middle East. However, there is still room for improvement here, and I'd like to call upon your assistance to promote LLC as a publishing venue for all digital scholarship in the Humanities worldwide.

LLC also continues to perform well with respect to speed and acceptance rate. The average time taken between submission and first decision for manuscripts submitted in 2012 was 133.42 days for full papers and 159.76 days for short papers. The average time for first decision for manuscripts submitted in 2013 was 80.20 days for full papers and 66.38 days for short papers. 36.46% of the full papers and 26.7% of the short papers submitted in 2013 received a decision in the same year. The average time taken between submission and final decision for revised papers submitted in 2013 was 32.47 days for full papers and 25 days for short papers. The accepted papers were published in advance access in under six weeks from the final decision, and we are heading towards a five-week mark.

Although these average times are very good considering the growth, size, and scope of the Journal, I am very sensitive to comments from authors who express their wish for a faster review track. A scholarly Journal like LLC is hugely dependent on the availability of peer reviewers, and although the demand to publish in LLC is growing, I observe a declining willingness to review for the Journal. Also, because of the broadening scope of the Journal and the geographical diversity of its authors, many manuscripts are submitted on subjects outside the traditional nucleus of Anglo-American centred literary and linguistic computing. This is no doubt a positive trend for the Digital Humanities, but on a managerial level, it can create delays in the peer review process when a sufficient number of appropriate reviewers cannot be invited. One author even decided to withdraw his submission because he was not able to suggest any subject specialist to the Journal besides himself. In order to improve the peer review process, I'd like to invite anyone who has not already done so to come forward and register as a reviewer by creating or updating their account in the Journal's online system and flagging their areas of expertise.

The overall acceptance rate of all submissions has come down from 40.21% in 2012 to 34.95% in 2013. Of the original submissions, 59.09% were sent back to the authors for revision in 2013 (49.06% in 2012). The rejection rate of original submissions after the first round of peer review has decreased from 31.13% in 2012 to 26.36% in 2013.

Production continues to run smoothly and all four issues appeared ahead of schedule in 2013. Volume 28 published a record number of 59 papers, which is an increase of 119% compared to 2012. Thanks to ADHO and OUP, subscribers to LLC were treated to 58% more pages compared to 2012 at no extra cost. The 2013 volume contained 753 pages compared to 476 pages in the 2012 volume.

The Journal more than doubled its Impact Factor, which has increased from 0.431 (June 2011) and 0.333 (June 2012) to 0.717 (June 2013). Although LLC is not purely a linguistics Journal, it ranks 60th of the 161 journals in SSCI Linguistics (101st in 2011).

2.2. Contents

The contents of volume 28 were as diverse as the DH community itself and mainly consisted of thematic collections of papers. In my editorial to 27/1, I already identified the formation of thematic clusters of interdisciplinary research as the second of four evolutions in the Digital Humanities. The first issue published a thematic collection on dialectometry edited by John Nerbonne and William A. Kretzschmar Jr. This exciting collection demonstrated that the traditional use of computational and quantitative techniques in dialectology co-exists alongside novel developments in the field, such as the application of dialectometric techniques to sociolinguistic and diachronic research questions and experiments with techniques from spatial statistics, geographic information systems, and image analysis. The second issue published the long-awaited collection of conference papers from DH2011 and was edited by Katherine Walter, Matt Jockers, and Glen Worthey. This conference issue reflects the conference theme of 'Big Tent Digital Humanities' and promotes an inclusive view of the Digital Humanities which is at the heart of the Journal. Apart from four unsolicited contributions and five book reviews, the third issue contained a thematic section of six papers coming out of the Interface 2011 Symposium, which highlights the wealth and breadth of early-career research. This thematic section was edited by seven young scholars as a hands-on exercise in journal management: Alberto Campagnolo, Andreia Martins Carvalho, Alejandro Giacometti, Richard Lewis, Matteo Romanello, Claire Ross, and Raffaele Viglianti. The fourth issue presented a collection of papers presented at the DH2012 conference and was edited by Paul Spence, Susan Brown, and Jan Christoph Meister. The thematic and methodological wealth demonstrated in this conference issue is in line with the overall theme of the conference: ‘Digital Diversity: Cultures, languages and methods’. This impressive conference issue was made accessible for free to everyone during a period of three months after publication. This will surely be a recurring initiative, because there is no better publicity for the Journal and the community it represents than increasing the accessibility of its contents.

Volume 29 of LLC promises to be at least as exciting as the previous one, albeit less voluminous. The first two issues will publish a good amount of regular, unsolicited copy which is already available in advance access. The last two issues are reserved for the DH2013 issue and a thematic issue on Computational Models of Narratives.
With a growing number of submissions published in advance access, a good part of the 30th Jubilee volume of LLC is filled up nicely, but there are still some slots available. The Journal is already accepting proposals for thematic issues to be published in 2016.

3. A Bright Future

It has been a wonderful year for LLC and the future looks bright, thanks to the many people involved in editing, producing and publishing the Journal. First of all, I should like to thank our authors, book reviewers, anonymous peer reviewers, and guest editors for their important contribution to the Journal and their service to the community. A special word of thanks goes to the production and marketing people at OUP who have done a terrific job in producing and publicizing the Journal. Thanks to Sarah Scutts and Victoria Smith and their publishing team, special thanks to Sarah Beattie, who served as LLC's Production Editor for a year, and a very warm welcome to Deborah Hutchinson, who has taken over from Sarah since July 2013. Thanks also to Jane Wiejak and her marketing team.

At the end of 2013 we say goodbye to Ron Van den Branden who has served the Journal over the last three years as its Book Reviews Editor. Ron has increased the number and raised the importance of book reviews in the Journal and has done a wonderful job in prospecting, commissioning, and editing reviews. I'd like to thank Ron for all his work and for his ongoing support to the Journal as a member of the Editorial Team. My personal gratitude also goes to our Associate Editors Wendy Anderson and Isabel Galina for their hard work, much of which remains hidden from the readership.

Last but not least, I should explicitly thank the readership for their support and feedback. I'd be delighted to hear back from you and receive any feedback on the Journal or suggestions for improvement. You can do this by including the hashtag #LLCjournal in your tweets or by contacting the Journal via email and you can stay informed by following @LLCjournal on Twitter, find us on Facebook, visit the Journal's website regularly or sign up to be notified automatically whenever a new issue becomes available online.

Thank you once again for subscribing to the Journal and supporting the Digital Humanities organisations.

Edward Vanhoutte
Editor-in-Chief

Disclaimer: all figures and facts presented in this editorial are quoted from public ADHO & EADH reports.

Thursday 3 January 2013

The Gates of Hell - Guest Lecture Würzburg, 13 December 2012

Recently, I stumbled across a documentary about Auguste Rodin's monumental sculpture La porte de l'Enfer, and decided I could use The Gates of Hell as a metaphor in telling the history of the use of computing in the Humanities and the transition from Humanities Computing to Digital Humanities as a name for the field. This coincided with my attempts to find an angle for a guest lecture I was invited to give at the Lehrstuhl für Computerphilologie und Neuere Deutsche Literaturgeschichte of the Julius-Maximilians Universität Würzburg (Germany) on 13 December. My last visit to Würzburg as a keynote speaker to the 2011 Annual Conference and Members' Meeting of the TEI Consortium (12 October 2012) had been very enjoyable indeed, but bringing in chocolates again would be pushing it a bit. Moreover, I was not lecturing in the prestigious Würzburg Residenz, but at the equally prestigious University of Würzburg, where I met a very attentive audience of students of Digital Humanities.

As it goes with guest lectures, at least in my case, the eventual contents of the lecture hardly ever reflect the title which was communicated well before the lecture was put together. When I sent my title to Armin Volkmann, who invited me to teach in his course, 'Text and Image based Digital Humanities: providing access to textual heritage in Flanders' seemed a good title. However, when I discovered the story behind Rodin's Gates of Hell I changed my mind and elaborated on the metaphor to talk about the history and definition of literary and linguistic computing, Humanities Computing, and Digital Humanities. In order to relate to the previously communicated title, I divided the lecture into two parts:

1. History of the use of computing in the Humanities

Slides

Video

The video recording of the lecture was published in four parts.

2. Demonstration of DH Projects from Flanders

In the second part, I demonstrated some of the projects we realized at the Centre for Scholarly Editing and Document Studies (CTB) of the Royal Academy of Dutch Language and Literature (KANTL) in Flanders:

And one project of the Computational Linguistics & Psycholinguistics Research Centre (CLiPS) from the University of Antwerp:

Further work

Since I taught at Würzburg, I have elaborated on the metaphor and the themes addressed in the lecture for a forthcoming book chapter entitled: The Gates of Hell: Digital@Humanities@Computing

Acknowledgements

I would like to thank Armin Volkmann for the invitation to lecture in his course, and especially Mareike Laue who took care of all the practical arrangements and who did a wonderful job filming and editing the lecture.

Tuesday 24 July 2012

Ruling the Screen: compromising decisions and decisive compromises - DRH 99

I was lucky that my first paper at an international conference got published right away and in two different places. I presented 'Where is the editor? Resistance in the creation of an electronic critical edition' at the DRH Conference (Digital Resources for the Humanities) in Glasgow in 1998. The original paper got published in Human IT (1/1999: 197-214) where my name was vikingized as 'Edvard'. A revised version appeared a year later in DRH 98. Selected papers from Digital resources for the Humanities 1998 (Marilyn Deegan, Jean Anderson & Harold Short (eds.), London: OHC, 2000, p. 171-183). My second international paper, however, didn't make it into publication, partly because it was too sketchy and reported on research in progress. The main aim of 'Ruling the Screen: compromising decisions and decisive compromises', which I presented at DRH 99 in London, was to introduce the Electronic Streuvels Project and to report on the work so far. I focused especially on six decisive compromises I had to make because of the financial and infrastructural context of the project. Two of these compromises concerned the encoding architecture for the textual variation and the design of a project-specific DTD for the encoding of letters.

Because this paper was never published, my use of nested <NOTE>-tags instead of the TEI parallel segmentation method to generate an inclusive view of all variant versions of the text in the edition was misunderstood by the encoding community when the electronic critical edition of De teleurgang van den Waterhoek was published in 2000 by Amsterdam University Press. The venture was not so much about documenting the textual variation among the different versions of the novel, but about creating a model and an interface by which parts of the text could be optically compared to one another independently. Criticism was voiced by Dan O'Donnell, for instance, in his review of the electronic edition in Literary and Linguistic Computing (17/4 (2002): 491-496). O'Donnell pointed out that my solution was a poor one because it strayed significantly from the TEI definition of <NOTE> and because it ignored several features of the TEI standard intended for precisely the type of functionality that I was suggesting. O'Donnell suggested that a combination of <APP>, <RDG>, and optionally <LEM> elements could have been chosen within the TEI Guidelines and that <LINKGRP> could have been used to link the variant versions to the orientation text. These choices, however, were the result of one of the compromising decisions outlined in the current paper, namely that I had to use TEI-Lite for reasons of time and financial constraints. O'Donnell rightly pointed out that adding them to the TEI-Lite DTD wouldn't have been too difficult, but I've always been against modifying a digested DTD like TEI Lite in order to lift it up to the level of full TEI. The choice was also one of ease of formatting. With only a basic knowledge of SGML and TEI, I was unable to do much in the way of transformations, and the low-cost, low-tech SGML publication suite MultiDoc was perfect for my purpose of getting out an electronic edition in 21 months' time. The nested <NOTE>-construction also served my model of the linkeme on which I elaborated in my article 'A Linkemic Approach to Textual Variation. Theory and Practice of the Electronic-Critical Edition of Stijn Streuvels' De teleurgang van den Waterhoek.' (Human IT, 1/2000: 103-138). In the current paper I also presented the DTD I had produced for the encoding of modern correspondence materials. This DTD was the very first attempt at what later became the DALF scheme. It is important to know, when reading this paper, that we were then still living in the SGML world.
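For readers who want to picture O'Donnell's suggested alternative, here is a minimal, purely illustrative sketch of the parallel segmentation method, written in SGML-era uppercase tagging to match the example further below; the witness sigla and attribute values are mine, not the edition's or O'Donnell's, and the readings are borrowed from the example in the paper:

<P ID="ed.td1.1.003" N="1.003">
   <APP>
      <LEM WIT="D1">— Aho! Aho!</LEM>
      <RDG WIT="DG">"Aho! Aho!"</RDG>
      <RDG WIT="D2">— Aho! Aho!</RDG>
   </APP>
</P>

In such a construction the variation is documented in place, whereas the interface-driven <NOTE> construction discussed further below makes the paragraph correspondences explicit for optical comparison.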

Abstract

At the end of every discussion on textual criticism and scholarly editing, there is this question about lay-out: 'How will the editor present his or her theories, findings and editorial decisions to the interested public?' The scholarly editions already published on paper seem to have opted, not for maximum legibility and usability of the edition, but for a form that shows its well-researched content in such a way that it can expect all the academic recognition it certainly deserves, leaving the interested user amidst a labyrinth of diacritics and codes and, most of all, without enough fingers to use as bookmarks in order to read the edition as it was designed.

This paradoxical situation, in which the explanation of the choice for and the constitution of a particular version of a text is laid down in some sort of merely illegible apparatus, culminates in the production and publication of static and paper-based historic-critical editions in which the genesis and the transmission of a particular work is articulated and presented against a base text conforming to well-defined guidelines. The synoptic, lemmatized and inclusive organisations of historic-critical editions are all meant to clarify the results of the research, but by their condensed form they problematize their accessibility and usability (McGann, 1996; Lavagnino, 1995; De Tienne, 1996 & Vanhoutte, 1999). On top of that, their physical record impedes their longevity and interchangeability. And these are exactly the parameters by which academic work is being assessed.

The theory and practice of genetic and textual criticism shows that this is not a simple question to answer. There are in fact as many answers to this question about lay-out as there are potential editors thinking of possible editions. With the creation and publication of the electronic-critical edition of Stijn Streuvels’ De teleurgang van den Waterhoek (The decline of the Waterhoek) (De Smedt & Vanhoutte, 2000), the Electronic Streuvels Project (ESP) hopes to suggest yet another answer to this basic question, not by producing a complete historic-critical edition, but by aiming at a compromise which incorporates intellectual integrity, usability and utility.

Paper

Enfin, 't is er - of het nu inslaat,
of weer in stilte begraven wordt,
kan me minder schelen1

[Anyway, there it is - whether it catches on now or is quietly buried again, I couldn't care less.]

Stijn Streuvels

In 1996 Marcel De Smedt published a genetic article on Stijn Streuvels’ novel De teleurgang van den Waterhoek (De Smedt, 1996) from 1927 on the basis of a close reading of the author’s correspondence with friends and publishers and of thorough research into the extant primary sources, which can all be found in the Archive and Museum for Flemish Cultural Life (AMVC, Archief en Museum voor het Vlaamse Cultuurleven) in Antwerp and which are described in Streuvels (1999) and Streuvels (2000). There is

  • a defective draft manuscript from 1926 (S935/H15),
  • a complete neat manuscript from 1927 (S935/H18),
  • a corrected typescript (1927) (S935/H16),
  • a corrected and annotated copy of the pre-publication of the novel in the literary journal De Gids which functioned as manuscript for the first print edition of 1927 (S935/H17),
  • a defective corrected proof (S935/H17),
  • an elaborately edited version of the first print which functioned as manuscript for the second revised edition of 1939 (S935/H24).

This drastically revised edition of 1939, which retained only 73.4% of the original text of the first edition, was probably the author’s response both to the publisher's request to produce a shorter and hence more marketable book,2 and to the Catholic critics who had fulminated against the elaborate depiction of the erotic relationship between two of the main characters. It goes without saying that this revision resulted in a different text, telling a different story with different conclusions for literary criticism. Up to 1987, this revised text had been the basis for 13 reprints of the book.

De Smedt concluded his genetic study with a plea for a new scholarly edition of the novel based on the restored text of the first print edition.

We believe that in this case, the first print edition prevails over the journal publication. It wasn’t till the redaction of this first edition that Streuvels had the complete text of his work at hand, and that he could overlook and edit his book as a whole. (De Smedt, 1996: 326; my translation)

and further on

It is obvious that manifest mistakes in this first edition have to be corrected with the use of the proofs and the manuscript. (Ibidem)

In what form and when this edition should become reality remained an open question to him.

In January 1998, the Royal Academy of Dutch Language and Literature (Belgium), charmed by De Smedt’s proposal and on the lookout for a new and challenging profile for this learned society, employed one full-time research fellow to design and realize this project, which was called the Electronic Streuvels Project (ESP). From the very start of the project it was clear that it had to include an electronic component which would make the exclusive choice for a specific, well-defined form or for one kind of edition (e.g. a documentary, historic-critical, diplomatic, study or reading edition) obsolete. The project would have to include elements of all of these, but be none of them. Very soon, the choice was made for an electronic edition project which aims:

  • To deliver an electronic edition of Stijn Streuvels’ De Teleurgang van den Waterhoek in 21 months' time.
  • To obtain expertise in using SGML/TEI in creating electronic editions.
  • To function as a pilot-project in electronic scholarly editing in the field of modern Dutch literature.
  • To deliver a project report which will be helpful as a set of guidelines for further electronic editing projects in Flanders and the Netherlands.

With the inefficiency of conventional paper-based editions, coupled with the illegibility of their apparatus variorum, as an omnipresent demon, the project wants to explore new ways of producing editions for a diverse audience and suggest alternative solutions for the presentation of variants in an electronic environment to the interested user. I believe we succeeded in doing so with the publication of both an electronic-critical edition on CD and a text-critical edition in book form as a first spin-off product.

The hard-copy spin-off product (paperback and hardback editions) by all means qualifies as a scholarly reading edition in that it answers the central criterion as defined by Bowers in his essay Notes on theory and practice in editing texts:

Perhaps the central criterion for such a reading edition is that its text is intended to serve two audiences - the scholarly and the generally informed non-professional public, in each case without essential compromise. (Bowers, 1992: 245)

Whereas the hard-copy version will present the constituted reading text of the first edition, accompanied by a glossary list, scholarly articles on the text-constitution and the transmission of the work together with an exemplary article on the genetic variants, the electronic edition will include the fully searchable texts of the pre-publication published in De Gids, the first edition from 1927 and the second revised edition from 1939, the digital facsimiles of three primary sources (i.e. the complete manuscript, the corrected version of De Gids and the corrected version of the first print), a glossary list, a (genetic) chapter on the production and the transmission of the work, including relevant correspondence between the author and his publishers in full text (ca. 70 letters), and a study on the reception of the work. Taking the (edited) text of the first edition as the orientation text, this hypertext edition will link the different versions of the text on the paragraph level in order to show the variant readings. With this choice of the paragraph instead of the variant as linkeme, we believe we have found a gentle compromise for the aforementioned dichotomy between intellectual integrity and legibility.

The scope of this enterprise (and hence the form and formality of its spin-off products) was largely defined by a set of reality-driven compromising decisions at the level of project administration, which had their repercussions on the methodology of the project, i.e. a set of six decisive compromises had to be made.

1.

The Royal Academy funds the ESP with private money from their patrimony. Therefore, only the employment of one full-time research fellow could be financed, leaving a small budget for hard- and software and the production of the CD. As a consequence, all the electronic work, such as OCR'ing, imaging, text encoding, system architecture, etc., had to be done in-house by one person with very basic hard- and software and a lot of creativity. On top of that, the funds only allowed the project to run for 21 months. Therefore, a choice had to be made as to what to include in the edition.

2.

From the very start of the project, the steering committee was focused on the lay-out of the result, i.e. the CD-ROM, without knowing anything about text encoding, markup, imaging or Humanities Computing in general. The aim of the project was something along the lines of: 'Creating a CD-ROM by which users can compare different versions of the text by showing at the same time all variants on the paragraph level as well as the corresponding digital facsimiles of the document witnesses.' Hyperlinking and digital facsimiles were the keywords in this description. This 'narrowing down' of the text-critical description of variants and the limited time, which didn’t allow for a long learning curve, made me skim the option of the full-cream TEI DTD down to the reality of the TEILite DTD: a compromise which was in itself a compromising decision.

3.

In looking for a valid way to encode the relation between the corresponding paragraphs of the different versions, I first wanted to document the relationship using the CORRESP attribute on <P>, but although this seemed to work conceptually, it didn’t work in the practical world of the browsers. Most popular browsers (including Panorama Pro and MultiDoc Pro) have difficulties in expressing it in a visibly useful way for editions. The same is true of the CopyOf and the SameAs attributes in the full TEI.
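As a sketch only – the ID values mirror those of the example below, while the CORRESP targets are illustrative and not taken from the actual files – the abandoned approach would have looked something like this:

<P ID="ed.td1.1.003" N="1.003" CORRESP="ed.tg.1.003 ed.td2.1.003">
   <Q TYPE="speech" DIRECT="Y" WHO="reiziger">— Aho! Aho!</Q></P>

Conceptually the paragraph agreement is documented, but the browsers offered no useful way of rendering it.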

The compromise was found in a combined construction of nested <NOTE> tags to show the paragraph agreement and <XREF>’s for references to the digital facsimiles. By giving each paragraph of each full-text document source a unique ID and drawing correspondence tables from it, it was relatively easy to generate one mammoth SGML instance by running a suite of AWK and PERL scripts on the three basic SGML instances which contained the encoded full-text versions of the pre-publication, the first and the second print. The XREF’ing then had to be done manually, because no machine-readable transliteration of the facsimiles had been made.

This resulted in a typical construction of the following type. (NB: this formatted structure is not valid SGML because of mixed content. The whitespaces between <NOTE> and <P> should be removed.)

<P ID="ed.td1.1.003" N="1.003">
   <NOTE>
      <P>MS</XREF></P>
         <P>DG
            <NOTE>
               <P><SEG>DG</SEG></P>
               <P ID="ed.tg.1.003" N="1.003">"Aho! Aho!"</P>
            </NOTE>
         </P>
         <P>DGcor<XREF DOC="g1.064065"></XREF></P>
         <P>D1cor<XREF DOC="d1.004005"></XREF></P>
         <P>D2
             <NOTE>
               <P><SEG>D2</SEG></P>
               <P ID="ed.td2.1.003" N="1.003">
               <Q TYPE="speech" DIRECT="Y" WHO="reiziger">
               — Aho! Aho!</Q></P>
             </NOTE>
         </P>
   </NOTE>
<Q TYPE="speech" DIRECT="Y" WHO="reiziger">— Aho! Aho!</Q></P>

This explicit articulation of the virtual paragraph correspondences is by far the clearest case of the ruling of the screen in the project.

To overcome the fact that this mammoth-instance is theoretically and methodologically speaking “unsound” for analytic research operations limited to one version of the text, the three basic SGML instances are supplied with this edition-instance on the CD.

4.

De teleurgang van den Waterhoek, like many modern novels, mixes poetry and prose. Several paragraphs in the novel contain songs, which can be identified as poetry. Because of an error in the content model of <P> in P3, which hasn't been corrected in the revised P3 (sometimes called P4 beta) as I have it,3 <LG> and <L> are not allowed inside <P>. To solve this problem, two options are open:

  1. Modify the TEILite DTD by changing the content model for <P> so that it allows both <LG> and <L> as its children.
  2. Embrace <LG> or <L> with <Q>-tags (see the sketch below). This compromise creates yet another pitfall, known as the mixed content problem: every element whose content model in the TEI DTD is %specialPara; only allows #PCDATA or a sequence of child elements.
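A minimal sketch of the second option; the prose and the song lines are invented for illustration and do not quote the novel:

<P>The prose of the paragraph continues here,
   <Q>
      <LG TYPE="song">
         <L>first line of the song</L>
         <L>second line of the song</L>
      </LG>
   </Q>
and resumes after the song.</P>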

5.

The edition contains a corpus of 70-odd letters whose encoding with TEILite was problematic. The theory of letter editing imposes strict rules on its practice and on the visualisation of the edition, something which could not be neglected even in our electronic edition.

First of all, in editing letters it is common practice to transcribe the physical appearance of the writing, e.g.

  • what’s underlined in the letter is put in italics in the edition
  • what’s double-underlined in the letter is put in italics and underlined in the edition
  • what’s been added is put between /slashes/
  • what’s been deleted between <-lower than and greater than marks with a minus sign>
  • what’s been altered between <lower than and double greater than signs>>

This calls for generic markup to describe such procedural information.
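Purely as an illustration of how these conventions might map onto tags and renderings – the actual element names and rendition values used in the project may well have differed:

<ADD>added phrase</ADD>                      rendered as /added phrase/
<DEL>deleted phrase</DEL>                    rendered as <-deleted phrase>
<HI REND="u">underlined phrase</HI>          rendered in italics
<HI REND="uu">double-underlined phrase</HI>  rendered in italics and underlined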

Further, there is the need to mark up specific elements of a letter, such as:

  • catalogue number
  • envelope
  • postmark
  • sender
  • receiver
  • sender address
  • receiver address
  • initials
  • subject statement
  • editorial commentary
  • words which are unclear
  • the distinction between a correction by the author and an editorial correction

Instead of modifying the TEI DTD I found it more useful and quicker to write my own STREULET DTD which defines, amongst other things, all of these elements and allows for the use of both procedural and descriptive markup.
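By way of illustration only – the declarations below are hypothetical and are not quoted from the actual STREULET (or later DALF) DTD – such letter-specific elements might be declared along these lines:

<!ELEMENT letter    - - (catNum?, envelope?, text)>
<!ELEMENT catNum    - - (#PCDATA)>
<!ELEMENT envelope  - - (address*, postmark*)>
<!ELEMENT postmark  - - (#PCDATA)>
<!ELEMENT address   - - (#PCDATA)>
<!ATTLIST letter
          sender    CDATA  #REQUIRED
          receiver  CDATA  #REQUIRED>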

6.

Due to lack of time, the digitised facsimiles are supplied as JPEGs, derived from 24-bit 300 dpi TIFF files, without any supplementary tagging or documentation.

Notes

1. Letter of Stijn Streuvels to Joris Eeckhout, 27.11.1927. Archive UB KULeuven, archive Joris Eeckhout (P23/93).

2. Cf. Letter of R. Van der Velde to Stijn Streuvels of 02/06/1938. AMVC (S 935 / B) and included in (Streuvels, 2000).

3. LG and L should be removed from m.chunk and added to m.inter which would result in the following entity declarations:

<!ENTITY % m.inter '%x.inter %m.bibl; | %m.hqinter; |
      %m.lists; | %m.notes; | %m.stageDirection; | castList |
      figure | l | lg | stage | table | text'>

<!ENTITY % m.chunk '%x.chunk ab | eTree | graph | p |
      sp | tree | witList'>

References

  • Bowers, Fredson (1992). 'Notes on theory and practice in editing texts.' In Peter Davidson (ed.), The Book Encompassed. Studies in Twentieth-Century Bibliography. Cambridge: Cambridge University Press, 244-257.
  • De Smedt, Marcel & Edward Vanhoutte (2000). Stijn Streuvels, De teleurgang van den Waterhoek. Elektronisch-kritische editie/electronic-critical edition. Amsterdam/Gent: Amsterdam University Press/KANTL.
  • De Smedt, Marcel (1996). 'Uit de ontstaansgeschiedenis van De teleurgang van den Waterhoek.' In Rik Van Daele & Piet Thomas, De vos en het Lijsternest. Jaarboek 2 van het Stijn Streuvelsgenootschap, Tielt, Lannoo, 309-326.
  • De Tienne, André. (1996). 'Selecting Alterations for the Apparatus of a Critical Edition.' TEXT, 9 (1996): 33-62.
  • Lavagnino, John. (1995). 'Reading, Scholarship, and Hypertext Editions.' TEXT, 8 (1995): 109-124.
  • McGann, Jerome. (1996). 'The Rationale of HyperText.' TEXT, 9 (1996): 11-32.
  • Streuvels, Stijn (1999). De teleurgang van den Waterhoek. Tekstkritische editie door Marcel de Smedt en Edward Vanhoutte. Antwerpen: Manteau.

Wednesday 18 July 2012

First steps in Digital Humanities

Back in 1995, at Lancaster University where I undertook an MA in Mediaeval Studies (with 'ae'!), I met Professor Meg Twycross, who turned out to become one of the most influential women in my life. At a time when it was still possible to read through the complete internet (which we attempted in the computer labs at night) and when we were trying out different nicknames on #IRC chat channels, Meg Twycross not only caught my attention with her tremendously well-taught courses on Medieval literature and culture and Paleography, but especially with the pilot for the York Doomsday Project which she was building at that time. In one of my nightlong Internet sessions, I came across Stuart Lee's Break of Day in the Trenches Hypermedia Edition which also exploited hypertext as a didactic means in the teaching of literature and culture. This appealed so much to me that I started to build similar editions of poems by Hugo Claus when I was a research assistant at the University of Antwerp in 1996. This was picked up by people from the Department of Didactics at the University of Antwerp who invited me to present at a conference on teaching Dutch in secondary education. I presented my first conference paper on 15 November 1996 under the title "Retourtje Hypertekst. Een reis naar het hoe en waarom van hypertekst in het literatuuronderwijs" and a revised version was published as:

However, this wasn't my first publication. Before this article came out, I had already published two pieces about the same matter:

  • 'Oorlogspoëzie en HyperTekst: Gruwel of Hype?' WvT, Werkmap voor Taal- en Literatuuronderwijs, 20/79 (najaar 1996): 153-160.
  • 'Met een Doughnut de bib in. Over de rol van HyperTekst in het literatuuronderwijs' Vonk, 25/5 (mei-juni 1996): 51-56.

In the following years, I wrote some more on this subject:

  • 'De geheugenstunt van hypertekst.' Leesidee, 3/10 (december 1997): 777-779.
  • 'De soap 'Middeleeuwen.' Leesidee, 3/9 (november 1997): 697-698.
  • 'Het web van Marsua.' Tsjip/Letteren, 7/3 (oktober 1997): 9-12.

Meg Twycross not only stimulated my interest in the use of hypertext for literary studies and through this for models of electronic editions, she also charged me with an important mission which changed my life forever. One day, when I was awarded the County College Major Award which came with a cheque for £250, I asked her what I should do with that money and she told me to go away, learn everything I could about SGML and come back and tell her. The first book I bought with the prize money was Charles Goldfarb's The SGML Handbook.

Both the interest in the hypertextual modelling of scholarly editions and the interest in the markup of texts using SGML have shaped what I have been doing since. And the only person to kindly blame for it is Meg Twycross.

What if your paper doesn't make it into print?

Recently a graduate student at UCL emailed me with a request to have access to a number of conference papers I presented at the beginning of my academic career, none of which made it into a publication. Apart from the abstracts which were published in the conference book, nothing of the research or argument presented in these papers survives.

Is this a bad thing? Not necessarily. One of the reasons why they were never turned into a publication is probably because they were simply not good enough. Another reason may have been that the 20 minute presentation didn't have enough body to write out as a full paper submission to a Journal or a chapter in a book. A third, and more plausible, reason is that I was too busy doing other stuff to revisit the conference paper and rewrite it as an academic paper.

Nevertheless, I do think they have some value to some people, including myself. I myself am interested in the history of the field and in the evolution of ideas - I can't just read an article by my colleagues without looking up the previous publications on which the argument builds - and I sometimes find it difficult to reconstruct a history of thoughts because the documentation is lacking. Therefore, I decided to dig up my old conference presentations and make them publicly available on this blog over the coming weeks. For me personally, it'll probably be a confronting revisit of my first steps in academia, but it will hopefully generate a better understanding of the provenance of my current ideas.

For my own documentation and for the sake of contextualisation I will provide each paper with a short introduction explaining the circumstances of the research and the occasion of the presentation. I'll also try to reconstruct which conference papers were the inspiration to published papers.

Thursday 9 February 2012

Being Practical. Electronic editions of Flemish literary texts in an international perspective

This is the text of my lecture at the International Workshop on Electronic Editing (9-11 February 2012) in the School of Cultural Texts and Records at Jadavpur University, Kolkata, India.

The slides of this lecture have been published on Slideshare.





Keep it cool: the electronic edition & the fridge

Over the last couple of years, I have been observing my children's continuous development of skills with growing amusement. And, as those amongst you who are parents or grandparents will agree, kids sometimes really amaze you. From the age of two, my boys have known, for instance, how to operate a fridge. As far as I can recall, neither their mother nor their father taught them how it worked and I'm pretty sure the grandparents didn't tutor them privately either. Nevertheless, they have since been very successful in opening the door of the fridge, exploring (if not rummaging through) the contents, finding what they are craving, picking up one or two incidentally found extras on their way out, and running off with their treasures after having closed the door again. They also noticed quite early on that the light is operated automatically on opening and closing the door and that they don't need to use a switch for that.

While I was witnessing one of their recent scavenger hunts, it occurred to me that the fridge was the perfect model for what we have been looking for for almost two decades now in the design of electronic textual editions. A fridge is an intuitively designed repository of a diverse range of foods from which anyone may quarry what they need. It offers an ideal storage space for a selection of fresh meat, fish, vegetables, fruit, dairy products as well as for semi-prepared foods, finished dishes, and leftovers. It is also the most economic and safest option to defrost foods. Although there is a generally acknowledged plan by which a fridge should be filled – bottles go on the inside of the door, vegetables and herbs go in the boxes at the bottom, meat goes on the bottom shelf and dairy products go on the top shelf – the internal organisation of the foods on the shelves is decided on by whoever fills it up, and can be changed according to the insights and preferences of any user. The products can, for instance, be grouped according to food group, meal, frequency of use, size and so on. The fridge can be refilled, products can be replaced by fresher ones and new products can be introduced. Another feature is that one only needs some pieces of paper and a couple of magnets or post-it notes to annotate the contents of the fridge, put up shopping lists, or leave instructions about the next meal. By the same technique the appearance of the fridge is altered on a daily basis by moving around the notes, introducing new ones, taking old ones off, embellishing the outside with various collections of fridge magnets or with your children's artistic creations. The fridge's main function is to preserve foods over a certain period of time and to offer easy access to a wide range of products from within people's homes. And fridges are available in many models with various features like freezing compartments, ice makers and water dispensers which extend the fridge's central function. Unfortunately, it must be admitted, a fridge can't cook you a meal.

Electronic editions, by comparison, or at least the electronic editions we want to be making, should be intuitively designed 'repositories of information, from which skilled scholars might quarry what they need' as Peter Robinson stipulated once (Robinson, 2003b). Michael Sperberg-McQueen reminded us that 'any edition records a selection from the observable and the recoverable portions' of an 'infinite set of facts to the work being edited.' (Sperberg-McQueen, 2002) He mentions the apparatus of variants, glosses for some words, historical or literary annotations and the like as possible selections and 'visual effects, atmospheric sound, music, film clips of readings or performances' as possible elements of inclusion. Scholars who have written on the preferable contents of electronic editions, like Susan Hockey (Hockey, 1996, p. 13-14), Marilyn Deegan and Peter Robinson (Deegan and Robinson, 1994 [1990], p. 36), Peter Shillingsburg (1996b) and Thomas Tanselle (Tanselle, 1995b, p. 592), agreed already in the mid-nineteen nineties that full accurate transcriptions and full digital images of each witness were essential parts of the edition, and both Tanselle (Tanselle, 1995b, p. 591) and Shillingsburg (1996a, p. 95) added to this the requirement of critically reconstructed texts. Whereas the 1997 CSE Guidelines for Editors of Scholarly Editions were fairly prescriptive on the contents of an electronic edition,1 Dan O'Donnell observes in 2005 that no 'standardization exists for the electronic editor' (O'Donnell, 2005b) while he points to electronic editions without textual introductions, without critical texts, without traditional textual apparatus, and without glossaries. The latest version of the Guidelines, published in 2006 (CSE, 2006), however, does not prescribe the contents of an electronic edition anymore, but provides some generalizations of the methodological orientation by which specific materials are edited. These Guidelines state that reliability is a defining quality of the scholarly edition and that this can be established by accuracy, adequacy, appropriateness, consistency, and explicitness. About how this is achieved in the edition, the Guidelines only observe that 'most scholarly editions' include a general introduction and explanatory annotations; they 'generally' include some sort of editorial statement; and 'commonly' include documentation of alterations or variant readings in appropriate textual apparatus or notes. In the list of Guiding Questions for Vetters of Scholarly Editions attached to these Guidelines, one can find all elements of the contents of an electronic edition as prescribed in the 1997 version of the Guidelines, but no minimal or defining score for scholarly editions of any nature is given. Robinson (2007a, p. 8) summarizes: 'for a digital edition to be all it can and should be, then it will let the editors include all that should be included, and say all that needs to be said.'

The use of a general markup language like the TEI for the encoding of the electronic edition's contents and its exploitation by publication suites that take advantage of this encoding allow the organisation and reorganisation, that is the grouping and selection, of the edition's data according to a variety of principles. One of the earliest noted advantages of the electronic edition is, as long as it is not published on a fixed medium like a CD-ROM, as was the case in its early days, its openness to revision and change. But how we should keep track of the subsequent versions of such an edition is another matter which we may discuss in the session on preservation on Friday. The issue of third-party annotation creation and display in a digital edition is another much debated central component of the electronic edition we want to be making (e.g. Robinson, 2003a; Boot, 2007a; 2007b). As a matter of fact, user-driven annotation tools were often integrated in the SGML publishing software with which early electronic editions were published (e.g. De Smedt & Vanhoutte, 2000) but disappeared when interfaces to electronic editions were built on the basis of open source engines and suites of XML-related formatting, stylesheet and query languages. Possible models to empower the user have recently been proposed by Shillingsburg's knowledge sites (Shillingsburg, 2006) and Ray Siemens' social edition (Siemens et al., forthcoming). Also on Friday, we can discuss this further in the session led by Anna Gerber.

The main function of an edition, whether it is conceived of and published electronically or in print, is to mediate, as Paul Eggert has reminded us, 'according to defined or undefined standards or conventions, between the text of a document made by another and the audience of that anticipated publication.' (Eggert, 2002, p. 17) Thereby the editor is involved in taking attitudes towards the preservation, presentation, and transmission of an existing text (Eggert, 2002, p. 17-18). Consequently, the electronic edition must contain the data to present a text and ways to explicate the editor's attitudes. On top of that, the electronic edition may contain analytical tools by which the user can replicate the editor's methodology and data processing. Another essential function of the edition is what I am calling the communicative function, namely to make sure that it reaches as wide an audience as possible.

This is where the fridge-model no longer represents the reality of electronic editions. The relative failing of the electronic edition has been lamented by its creators on many occasions (Robinson, 2005; 2010; Vanhoutte, 2009). Overall, the existing electronic editions have failed to find their audience and thus failed in their communicative function. As an undergraduate student taking my philosophy exam I had to answer the question whether a chair on Mars was useful. The correct answer, which I happened to produce, was that the usefulness of a chair on Mars was dependent on the presence of subjects to whom the chair could be useful. If we take for granted that Mars is not populated by subjects who could appreciate the functional qualities of the chair, that chair is useless and functionally non-existent. Peter Robinson once claimed that 'an edition is an act of communication.' Consequently '[i]f it does not communicate', he says, 'it is useless.' (Robinson, 2009 [1997-2002]) By contrast, the fridge appeals to everyone, from the food addict, the really hungry, and the professional chef, to the keen amateur cook, the incidental snacker and the complete novice, to the food hater. From the omnivore, to the health guru, to the vegetarian and the vegan. But if fridges and electronic editions have so much in common, why is it then that not every household has at least one of each? The fridge's success can be explained by its consistent offer of the same functions and opportunities to all human beings and through them even to animals – dog and cat food are also preserved in the fridge. On top of that, the basic interface and functions are fixed and independent of what colour, size, type, or design the fridge itself takes. The fridge's success is thanks to its design for one culture. The electronic edition's failure is due to its design for two cultures.

The problem of two audiences and two natures

John Lavagnino identifies the communicative function of the edition as problematic and points at an anomaly in the context of academic publishing. Whereas scholars across all disciplines mainly publish within the circle of their peers and address a larger community in popularized writings, 'a scholarly editor', according to Lavagnino, 'is still always expected to serve a larger community that may not – and, at present, usually does not – take any great interest in the discipline of editing.' (Lavagnino, 2009 [1997-2002]) In this greater community, which Lavagnino also names 'the popular audience' or 'the common reader', he includes many scholars who haven't had any involvement or interest in editing, and thus don't understand the codes of the scholarly edition. The tension between serving both the common reader and the editor's peers with the same product is what he calls 'the problem of two audiences'.

The literary critic is in the first place a reader, possibly an academic, and exceptionally a textual critic or a scholarly editor. As Dirk Van Hulle has reiterated, '[L]iterary critics tend to take the text for granted by assuming that the words on which they base their interpretations are an unproblematic starting point.' (Van Hulle, 2004, p. 2) Scholarly editing as a product generating activity can react to this observation in two extreme ways. The first possibility is not to contest the literary critics' or common readers' assumptions about the definite singularity of the text and provide them only with the result of scholarly editing, namely an established text, preferably accompanied by annotations of some sort. The second option is to confront them with their wrong assumption and draw their attention to the multiplicity of the fluid text caused by its genetic and transmissional history. This can be done by introducing them to the data of textual scholarship.

The first option is a function of the reading edition, which I am calling the minimal edition (Vanhoutte, 2010); the second option is a function of the historical-critical or variorum edition, which I am calling the maximal edition. The minimal edition is a cultural product, produced by the scholarly editor who acts as a curator or guardian of the text, whereas the maximal edition is an academic product, produced by the scholarly editor who demonstrates their scholarly accuracy and scrutiny. The minimal edition is targeted towards well but negatively defined audiences – that is, readers who are not interested in scholarly editing – and presents only the conclusions of the full critical and historical research on the genetic and transmissional history of the text. Besides an editorial statement and some sort of commentary, it most importantly presents a citeable text which can be enjoyed as an aesthetic reading object. The maximal edition, then, presents the critical and historical research itself in an attempt to engender understanding amongst the editor's peers.

The two audiences are neatly but separately served by the minimal and the maximal edition, which are essentially different in nature. Therefore, the commercial reality of scholarly editions of the minimal and the maximal type should be taken into consideration when theorizing about their essential function and audiences.

But who make up the other audience – the editor's peers? I suspect there exists a small but growing group of literary critics who do take some interest in scholarly editing. Besides this group, there are other scholarly editors who may be interested in a scholarly edition for a variety of reasons. One group may be interested because they are editing or have edited the same text. Given the reality of scholarly editing, however, this is very unlikely, except, perhaps, for a couple of important and much-debated texts. Another group may consist of editors who work or have worked on editions of texts by the same author. Often, these editors work in close collaboration with each other and hence do not really form the most critical group. A third group consists of editors who work or have worked on texts from the same period or the same literary tradition; texts with a similar document architecture or complexity; or texts with a similar transmissional or genetic history. This group is interested in the edition's theoretical solutions to the variety of problems posed by the text. A last group consists of editors who are interested in another editor's methodology, in the edition's technology, and in the editorial models suggested, explored, and demonstrated by the edition.

The first two groups are interested in all of the presented texts across the edition in its fullest form, including the record of variants, the commentary, and all other parts of the edition that are focused on gaining an understanding of the text. It has to be added that their interest may be comparative in nature. The third group is only interested in those parts of the texts which are problematic to scholarly editing and useful to their own purposes; their interest is often in methodology, visualisation procedures, and techniques. The last group is not interested in the text or its meaning, but only in the technology applied to the text and the edition.

The members of all of these groups change according to the edition. Since I have mainly edited works by Flemish authors, for instance, I belong to the first two groups whenever an edition of works by one of these authors appears, or in the very unlikely situation that a text I have edited is edited again. I belong to the third group, however, when consulting editions of works by Flemish authors I have not worked on or by a modern foreign author. And I most often belong to the fourth group when looking at editions of classical, mediaeval, or renaissance texts, or at any electronic edition I can lay my hands on. As a member of the first two groups, I will study the integral edition in the hope that it will add to my understanding of the text. As a member of the third group, I will try to isolate samples in the edition which demonstrate a problem akin to the problems I am working on, in the hope that the edition will add to my understanding of how to deal with these kinds of problems. My attention will go in particular to the editorial statement and the textual essay explaining the applied methodology. As a member of the last group, I will explore the edition's functionalities and architecture, study the edition's markup and rendering, including its design, and try to understand the edition's technological issues. My attention will go in particular to the technical documentation, the encoding strategies, and the source files I can rip from the edition.

Being practical

At the Centre for Scholarly Editing and Document Studies of the Royal Academy of Dutch Language and Literature, we create editions for both of these audiences and exploit either the print or the digital medium. For the common reader, we are editing a series of complete works by modern Flemish poets which are published in print. Since 2004, four volumes have appeared with the collected poetry of Jos De Haes (1920-1974), Hugues C. Pernath (1931-1975), Paul Snoek (1933-1981), and Eddy Van Vliet (1942-2002). These poets are roughly comparable in name and fame to British poets like Philip Hobsbaum, Philip Larkin, Stevie Smith, and John Silkin. For this kind of edition with an explicit cultural function, we developed the text-critical edition, which presents itself explicitly as a reading edition but contains elements traditionally found in a study edition, for instance concise annotations and a textual essay containing chapters on the genetic history of the text, on the transmission of the text and the bibliographic description of the extant witnesses, and on the editorial principles. The textual essay is written from the perspective of the reader who wants to be informed about the reading text rather than from the perspective of the textual scholar who wants to demonstrate the results of their research. It has to be noted that this textual essay can easily be ignored, which is why we print it after the reading texts. The books easily run to between 500 and 900 pages, are published in paperback, and are sold for no more than € 30. The edition of the complete poems of De Haes sold 585 copies in three years' time, Pernath sold 663 copies in two years' time, the edition of Snoek – the most voluminous one – sold 1,126 copies in only three months' time, and Van Vliet sold 756 copies in three years. And it is not only poetry that sells. An annotated reading edition of the selected letters of Herman De Coninck (1944-1997), another Flemish poet and critic, sold over 3,800 copies in two years' time. I am convinced that the successful sales figures of these editions are thanks to the unambiguous focus on the common reader who wants to read texts as aesthetic and historical objects.

There is no use in taking advantage of digital technology for the publication of electronic editions of these collected poems, because the audience is simply not there, and the common reader who is buying the print edition does not want the electronic edition. In 1999-2000 I edited a novel by Stijn Streuvels. The text-critical reading edition was published in 1999 by a literary publisher and sold all 375 printed copies in less than three months' time. Although the demand was there for a second printing, the publisher was not interested and refused. A year later, the electronic-critical edition of the same novel, targeted at a more academic public, was published with an academic publisher. Some 200 CD-ROMs were sold, and I still consider this a success. It seemed, however, that there was a clear distinction between the audiences of the two products. The reading edition was sold entirely to the group of common readers. The electronic edition, by contrast, was sold to a more diverse group of people, namely the four groups of peers introduced above, accompanied by common readers with an interest in scholarly editing, collectors of Streuvelsabilia, policy makers, and the technologically curious. I estimate that the anticipated audience which was really interested in the genetic and transmissional history of the text as explained by the electronic edition consisted of about thirty people. At least, I have the impression I know each one of them by name. But the momentum was there for the electronic edition. The publisher had just returned from a visit to Jerome McGann's Rossetti Project in Virginia when I contacted her. She had heard about SGML, and I offered her an SGML-encoded edition at that very moment. After a 15-minute chat and a demonstration, the contract was signed for the production and distribution of 500 CD-ROMs. I know that I will never be able to close such a deal again in my life. In 2007 I managed the production of probably the most successful electronic edition on CD-ROM ever. A total of 2,350 copies of the electronic edition of Willem Elsschot's Achter de Schermen were produced, but none of them was sold individually: 650 copies were inserted into Dirk Van Hulle's book on genetic criticism, 750 copies were sold to the Elsschot Society, which gave them out as part of its membership package, 960 copies were presented as a Christmas gift to the contacts of the Dutch Huygens Institute, and 90 were retained for marketing and demonstration purposes.

These figures are interesting because they suggest that these electronic editions have found their audience, and they argue against the failure of the electronic edition. However, their audiences came to these electronic editions by accident. These electronic editions were not successful because they were state-of-the-art products of textual scholarship, but because of the immense popularity of the original author, because of the inclusion of digital images of the manuscripts, which always appeals to collectors, because of their novelty, or because of the big give-away campaign. These accidental audiences were never taken into consideration when the editions were produced. But if this accidental audience is to become the target audience of electronic editions – the audience that is instrumental in the fulfilment of their communicative function – then the edition must also provide this audience with access to a text and access to understanding by means of the same product. If this is the case, we are in the business of creating electronic editions of two cultures.

Editions of two cultures

The electronic edition distorts the efficiency of this system by ignoring the problems of two audiences and two natures in trying to combine two cultures in one product. The technical possibilities of the electronic edition brought to scholarly editing the option of all-inclusiveness, which led early anticipators like Shillingsburg to visions of blurred distinctive lines among electronic archives, scholarly editions, and tutorials (Shillingsburg, 1996b, p. 25). Three central qualities of the electronic edition answered the call in conventional scholarly editing for the discipline's movement towards a true science: storage capacity, text encoding, and visualization technology.

The cornerstone of true science is the principle of external replication: scientific results or data obtained under conditions which are the same each time should be reproducible by peers in order to be valid. Further, the report on the research should contain sufficient information to enable peers to assess observations and to evaluate intellectual processes (Council of Biology Editors, 1994). This is exactly what maximal editions do through the presentation of their formalized and formulized apparatuses – apart from providing the data for a more or less correct assessment of the genetic and transmissional history of the text. The scientific reflex in editorial theory could hence be interpreted as the recognition that the function of the maximal edition is not to inform the reader but to protect the editor. This is why I call these maximal editions ambiguous and ambidextrous. Ambiguous, because the presentation of the genetic and transmissional variants subverts the stability of the reliable textual basis the literary critic is looking for, while at the same time the presentation of an established reading text may be too speculative for geneticists and scholars interested in the variant stages of the work. Ambidextrous, because a maximal edition logically contains a minimal edition and presents the textual archive alongside it. The key feature of the electronic edition, then, if it is to appeal to many audiences, would be a differentiation of the supply by user-controlled selection mechanisms which can turn the all-inclusive edition into a minimal version presenting one citeable text accompanied by selected categories of commentary. Only, as I argue elsewhere (Vanhoutte, 2009), the electronic edition, despite its dynamic architecture, fails through its medium as a cultural product and cannot compete with any printed version of the text which is easily available to the reader. We should have learned by now that common readers seldom turn to the screen for aesthetic experiences other than those offered by the exposition of full-colour digital facsimiles of exceptional manuscript material.

This was cleverly exploited a couple of years ago by the Dutch Royal Library, which provided on-line access to the full-colour facsimiles of the famous Flemish Gruuthuse manuscript shortly after its acquisition had caused a political scandal in Flanders. The Flemish common reader on the one hand railed against their own government, which had let the manuscript leave the country, but on the other hand praised the newly gained access to the facsimile edition, which was just right: an introduction, the digital facsimiles, and a transcription offered in two interfaces, namely a Flash version which allows the user to browse through the manuscript, and an HTML version which presents the digital facsimiles next to the transcriptions. Also very clever from a marketing point of view is the direct entrance to the most famous song in the manuscript offered from the welcome page of the on-line exposition. This on-line edition provides access to data rather than understanding and is a huge success with the public thanks to its singular focus on one culture.

The central assumption of the electronic edition – that the reader's understanding of a text is better catered for by a capacious edition representing a multitude of versions and states of the text under study, together with databases of critical analyses and commentaries submitted by the (co-)editors and critical users – is based on the utopian concept of the professional student of the text, not on the concept of an educated and interested reader with other professional occupations. Electronic textual editions are highly specialized tools that are only understood by scholars who are familiar with the principles and functions of textual editing and have read the user manual. Editions for everyone are therefore a utopian concept.

Editions for everyone

In From Gutenberg to Google, Peter Shillingsburg introduces the concept of the Knowledge Site as an elaboration of his early vision of the blurring of the distinctive lines among electronic archives, scholarly editions, and tutorials (Shillingsburg, 1996b, p. 25).

The space and shape I will try to describe is one where textual archives serve as a base for scholarly editions which serve in tandem with every other sort of literary scholarship to create knowledge sites of current and developing scholarship that can also serve as pedagogical tools in an environment where each user can choose an entry way, select a congenial set of enabling contextual materials, and emerge with a personalized interactive form of the work (serving the place of the well-marked and dog-eared book), always able to plug back in for more information or different perspectives. (Shillingsburg, 2006, p. 88)

The knowledge site would provide the information needed to understand the meaning of textual variation rather than the information needed to prefer one text over another or to separate right from wrong readings. Peter Robinson's concept of 'fluid, co-operative and distributed editions' (Robinson, 2003a, p. 125) that are truly actively interactive through their instinctive interface design (Robinson, 2007a; forthcoming a; b; c) realizes Shillingsburg's concept of knowledge sites through the formation of active on-line communities. This is a loud echo of Lavagnino's suggestion of a model for electronic editions based on interactive, collaborative work on texts: 'In this model, you no longer have the sharp division between producers and consumers of information [...] an interactive and collaborative edition would instead be open to incorporating work from everyone who's interested in contributing'. (Lavagnino, 2009 [1997-2002]) Robinson's idea of 'electronic editions for everyone' (Robinson, forthcoming b) likewise corresponds with Shillingsburg's concepts of the convenient and the practical edition (Shillingsburg, 2005), which must bridge both the theoretical and practical differences between textual and literary critics and which goes back to Fredson Bowers' concept of the 'practical edition'.

These new models of distributed and collaborative editions that Shillingsburg and Robinson develop will not, however, provide the general model for electronic editions, nor will they propose a generally applicable and stable interface for electronic editions that would approximate the fridge model. The distributed and open model for electronic editions may well be suited to the specific texts from classical, medieval, and Victorian Anglo-American textual traditions with which Robinson and Shillingsburg are involved, and it may well respond to the needs of the broad communities interested in them, but it may prove less useful for editors of texts from smaller and language-specific traditions. Editors of modern Dutch and Flemish texts, for instance, work for a mostly receptive audience of only a few interested academics and a reading public of a couple of hundred who mainly want a practical reading edition in print. The idea of the active involvement of a computer-literate and critical community with a knowledge site built around a modern Dutch or Flemish text is but an idle fantasy.

Another characteristic of the average scholar, already observed by C.P. Snow in his seminal 1959 Rede Lecture The two cultures and the scientific revolution, namely that intellectuals are Luddites, further complicates the case of the distributed, collaborative, knowledge-site-like edition. The theoretical model of the electronic edition for everyone, as envisioned by Robinson, will in practice be the most specialized edition thinkable for the smallest group thinkable, consisting of editors of the same work or text and the same author, and of those literary critics interested in the scholarly edition of this specific work.

Being practical, again

Therefore, at the Centre for Scholarly Editing and Document Studies, we have developed a model that considers the electronic edition as a maximal edition that logically contains a minimal edition. An essential function of this maximal edition is that it fulfils the user's need for a reliable textual basis by the inclusion of a critically established reading text. Rather than providing a valuable supplement to a print edition, as is often the reduced function of an electronic edition in an editorial project, this model empowers the user to check the choices made in the critical establishment of the text by way of access to the textual archive. At the same time, the model allows the user to ignore the editors' suggestions and to develop their own perspective on the maximal edition, or to generate a minimal edition of their choice. The reproducibility of the minimal edition thus generated is guaranteed by a record of the choices that informed it. This documentary feature of the electronic edition facilitates scholarly debate on any one of the many texts and provides any reader with a clear statement on the status of the minimal edition generated and printed for distribution or reading. Because of the scholarly basis of the electronic edition as a whole, even the plainest reading text generated by the user, with no additional information, qualifies as a scholarly edition. By emphasizing the on-the-fly generation of user-defined printable editions, together with the documentation of their definition, as a central feature of our system, we strive towards the re-evaluation of scholarly editions as cultural products. So we see the electronic edition – or the maximal edition – as the medium par excellence for the promotion of the scholarly reading edition – or the minimal edition – and for the recentering of the printed edition.
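
The model does not prescribe what such a record of choices looks like in practice. Purely as an illustration – the field names, witness sigla, and JSON serialization below are my own assumptions, not features of our system – a minimal sketch of such a record might document the orientation text, the selected witnesses, the selected categories of commentary, and the output format, so that the same minimal edition can be regenerated and cited later:

    import json
    from datetime import datetime, timezone

    # Hypothetical record of the choices behind one user-generated minimal edition.
    # All field names and sigla are illustrative, not taken from the actual edition.
    edition_definition = {
        "work": "De trein der traagheid",
        "orientation_text": "reading-text",            # the critically established text
        "apparatus_witnesses": ["v03", "v07", "v19"],   # witnesses shown in the apparatus
        "commentary_categories": ["editorial"],         # categories of annotation included
        "output_format": "PDF",
        "generated_on": datetime.now(timezone.utc).isoformat(),
    }

    # Stored alongside the generated edition, this record makes the minimal edition
    # citeable and reproducible: replaying the same choices against the maximal
    # edition regenerates the same text.
    with open("edition-definition.json", "w", encoding="utf-8") as fh:
        json.dump(edition_definition, fh, indent=2)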

I will demonstrate this with the electronic edition of De trein der traagheid, which will be published online next May, after having served for many years as the tinkertoy for our experimental modeling approach. The edition currently presents a critically established reading text and nineteen versions of the novella from its print history. The result of the collation of all versions is documented according to the TEI parallel segmentation method inside a master XML file that also contains all editorial annotations. This guarantees the completely equal treatment of each version of the text in the generating processes invoked by the user. Through the interface of the edition, the user can exploit the underlying TEI encoding by selecting any version and generating three possible views of the text: XML for analysis, XHTML for consultation on the screen, and PDF for printing out as a reading edition. Any version can also be combined with any combination of any number of witnesses, whereby the initial version functions as orientation text and the other selected versions are displayed in a lemmatized apparatus variorum. From within this apparatus, the generated edition can be reoriented from the point of view of any included witness. The model applied to this specific textual history allows the user to generate 10,485,760 possible editions of the complete text of the novella, and when one takes into account that editions of each separate chapter can be generated as well, this figure is multiplied by 35, which gives a total of 367,001,600 possible editions. Any one of these editions can again be exported to XML, XHTML, or PDF. Any number of versions, depending on the dimensions and resolution of the user's screen, can also be displayed in parallel, and the respective lists of variants can be generated on the fly. The minimal and the maximal editions are fully searchable, and the search results can be displayed in a KWIC concordance format.
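
On my reading, these figures follow from simple combinatorics: with twenty texts in play (the reading text plus the nineteen versions), each generated edition pairs one orientation text with any subset of the remaining nineteen witnesses, giving 20 × 2^19 (= 20 × 524,288) = 10,485,760 combinations; multiplying by the 35 textual units for which editions can be generated (presumably the complete text and its separate chapters) yields 367,001,600. For readers unfamiliar with the parallel segmentation method, the sketch below shows in outline how the text of a single witness could be pulled out of such a parallel-segmented TEI file; the file name, the witness siglum, and the use of Python with lxml are my own assumptions for illustration, not a description of the edition's actual tool suite:

    from lxml import etree

    TEI = "{http://www.tei-c.org/ns/1.0}"  # TEI namespace in Clark notation

    def _collect(elem, siglum, parts):
        """Gather the running text of one witness, resolving <app> choices."""
        if not isinstance(elem.tag, str):   # skip comments and processing instructions
            return
        if elem.tag == TEI + "app":
            # inside an apparatus entry, keep only the reading attributed to the witness
            for rdg in elem:
                if rdg.tag in (TEI + "lem", TEI + "rdg"):
                    wits = (rdg.get("wit") or "").split()
                    if siglum in wits or "#" + siglum in wits:
                        _collect_children(rdg, siglum, parts)
                        break
        else:
            _collect_children(elem, siglum, parts)

    def _collect_children(elem, siglum, parts):
        if elem.text:
            parts.append(elem.text)
        for child in elem:
            _collect(child, siglum, parts)
            if child.tail:
                parts.append(child.tail)

    def text_of_witness(master_file, siglum):
        """Return the plain text of one witness from a parallel-segmented TEI file."""
        root = etree.parse(master_file).getroot()
        body = root.find(TEI + "text")      # ignore the teiHeader
        parts = []
        _collect(body if body is not None else root, siglum, parts)
        return "".join(parts)

    # Hypothetical usage: the file name and siglum are illustrative only.
    print(text_of_witness("master-tei.xml", "v07")[:200])

In the edition itself this kind of selection is of course performed by the XSLT and XQuery scripts mentioned below rather than by Python.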

The edition is powered by a dedicated suite of open-source XML-aware parsers, processors, and engines, combined with appropriate XSLT, XQuery, and XSL-FO scripts.
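
As a rough indication of what one step of such a scripted pipeline can look like – the file names, the stylesheet, and the 'witness' parameter below are my own placeholders, not the edition's actual scripts – an XSLT transformation that renders the master file as XHTML for one witness could be driven as follows:

    from lxml import etree

    # Hypothetical pipeline step: render the master TEI file as XHTML for one witness.
    master = etree.parse("master-tei.xml")
    to_xhtml = etree.XSLT(etree.parse("tei-to-xhtml.xsl"))

    # Stylesheet parameters must be passed as XPath string literals.
    result = to_xhtml(master, witness=etree.XSLT.strparam("#v07"))
    result.write_output("v07.xhtml")

Note that lxml wraps libxslt and therefore only supports XSLT 1.0; a processor such as Saxon would be needed for later versions of XSLT or for XQuery, and an XSL-FO formatter such as Apache FOP would handle the PDF output.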

Concluding remarks: Teach the audience how to swim

At the 1997 Toronto Conference on Editorial Problems, Michael Sperberg-McQueen and Peter Robinson wrapped the theses of their papers in a swimming metaphor. Sperberg-McQueen advised the audience not to teach their editions how to swim but, in order for them to survive, to concentrate on their content rather than on their behaviour. His paper urged its audience to invest in the data and to use encoding standards like the TEI for that purpose. In a later revision of that paper, Sperberg-McQueen retained the metaphor but reversed its polarity, explaining now how to teach your edition how to swim. In that revision he refined his earlier focus on the content by adding that editions should also be given the capabilities 'of doing things interactively with the reader.' (Sperberg-McQueen, 2009 [1997-2002]) Peter Robinson, in his paper, replied to Sperberg-McQueen by contending: '[T]he great promise of electronic editions [...] is not that we will find new ways of storing vast amounts of information. It is that we will find new ways of presenting this to readers, so that they may be better readers. To do this,' he added, 'we will have to teach our editions to swim to the readers.' (Robinson, 2009 [1997-2002]) Discussions on text encoding he called 'dry-land swimming', and in order to 'make some real editions for real readers' he reminded us why editors have to learn to swim.

Robinson's concepts of editions have been based on the anticipated reader, who forms the essential basis for his understanding of text and meaning ('Text does not exist outside the meanings we create: and these meanings are all the text we will ever know.' (Robinson, 2009 [1997-2002])); for the purpose of encoding ('We do not ask: what is the right encoding of this word. We ask: who is to use the text we make? What use do they want to make of it? What do we think this text is saying? How can we, as editors, help the text speak to its readers?'); and for the edition ('A transcription, an edition, is "right" only in that it might serve these purposes'). Or, in a more direct formulation which speaks to Sperberg-McQueen's point: 'Editions do not survive because they are preserved in elegant encoding and in government-maintained electronic archives. They survive because they are read. They survive because people find them useful, they survive because scholars, students, school children find they help them read.' (Robinson, 2009 [1997-2002])

In my lecture today, I have tried to illustrate why I think this last statement is problematic when tested against the reality of electronic editions. Instead of teaching how (not) to teach your edition how to swim, or teaching the editors to swim, I suggest that we start teaching the audience to swim. When I argued, just minutes ago, that electronic textual editions are highly specialized tools that are only understood by scholars who are familiar with the principles and functions of textual editing and have read the user manual, I meant what I said. We need to accompany our electronic editions with detailed manuals that outline the functionalities of the edition; that explain to the anticipated audience how they can make use of the knowledge which has been put into the edition; how they can operate the included tools; how they can replicate the editor's research; how they can interact with the edition; and how they can contribute to the edition. We have to publish, in papers and essays, examples of the research questions generated by the edition, as a trigger for scholars to dive into the edition and come up with suggestions and hypotheses which might solve them. We have to continue the communication about the discipline so that working with electronic editions becomes more academically acceptable.

But above all, as editors, we have to make sure that each audience is allowed to swim in the pool best suited to its skills and purposes. I think we all know how annoying it is to try to float in an attempt to reach a state of weightlessness in a pool full of lane swimmers, and to try to swim lanes in a pool full of floaters. If the fridge was a utopian model for the electronic edition, a well-organised swimming pool is a realistic one.

References

  • Boot, Peter (2005). Advancing Digital Scholarship using EDITOR. In Humanities, Computers and Cultural Heritage. Proceedings of the XVI international conference of the Association for History and Computing 14-17 September 2005. Amsterdam: Royal Netherlands Academy of Arts and Sciences, p. 43-48.
  • Boot, Peter (2007a). A SANE approach to annotation in the digital edition. In Braungart, Georg, Gendolla, Peter and Jannidis, Fotis (eds), Jahrbuch für Computerphilologie, 8: 7-28. Also published in Jahrbuch für Computerphilologie - online.
  • Boot, Peter (2007b). Mesotext. Framing and exploring annotations. In Stronks, Els and Boot, Peter (eds.), Learned Love. Proceedings of the Emblem Project Utrecht Conference on Dutch Love Emblems and the Internet (November 2006). The Hague: DANS, p. 211-225.
  • Bowers, Fredson (1969). Practical Texts and Definitive Editions. In Hinman, Charlton and Bowers, Fredson, Two Lectures in Editing: Shakespeare and Hawthorne. s.l.: Ohio State University Press, p. 21-70.
  • Council of Biology Editors (1994). Scientific style and format: the CBE manual for authors, editors, and publishers. 6th ed. Cambridge: Cambridge University Press.
  • CSE (1997). Guidelines for Electronic Scholarly Editions. (1 December 1997, revised June 2002).
  • CSE (2006). Guidelines for Editors of Scholarly Editions. (Last revised 7 April 2006) Also published in Burnard, Lou, O'Brien O'Keeffe, Katherine, and Unsworth, John (eds.) (2006). Electronic Textual Editing. New York: Modern Language Association of America, p. 23-46.
  • Deegan, Marilyn and Robinson, Peter (1994). The Electronic Edition. In Scragg, D.G. and Szarmach, Paul E. (eds.), The Editing of Old English. Papers from the 1990 Manchester Conference. Cambridge: D.S. Brewer.
  • De Smedt, Marcel & Vanhoutte, Edward (2000). Stijn Streuvels, De Teleurgang van den Waterhoek. Elektronisch-kritische editie/electronic-critical edition. Amsterdam: Amsterdam University Press/KANTL.
  • Eggert, Paul (2002). The Importance of Scholarly Editing, and the Question of Standards. In Plachta, Bodo and Van Vliet, H.T.M. (eds), Perspectives of Scholarly Editing / Perspektiven der Textedition. Berlin: Wiedler Buchverlag.
  • Lavagnino, John (2009 [1997-2002]). Access. In Julia Flanders, Peter Shillingsburg & Fred Unwalla (eds.) Computing the edition. Thematic Issue of LLC. The Journal of Digital Scholarship in the Humanities, 24/1: 63-76.
  • O'Donnell, Daniel Paul (2005b). O Captain! My Captain! Using Technology to Guide Readers Through an Electronic Edition. The Heroic Age. A Journal of Early Medieval Northwestern Europe, 8.
  • O'Reilly, Tim (2005). What is Web 2.0. Design Patterns and Business Models for the Next Generation of Software. O'Reilly. http://www.oreilly.com.
  • Price, Kenneth M. (2007). Electronic Scholarly Editions. In Siemens, Ray and Schreibman, Susan (eds.), A Companion to Digital Literary Studies. Malden, MA / Oxford / Carlton: Blackwell Publishing, p. 434-450.
  • Robinson, Peter (2003a). Where We Are with Electronic Scholarly Editions, and Where We Want to Be. In Braungart, Georg, Eibl, Karl and Jannidis, Fotis (eds.), Jahrbuch für Computerphilologie, 5: 125-146. Also published in Jahrbuch für Computerphilologie - online.
  • Robinson, Peter (2003b). The History, Discoveries and Aims of the Canterbury Tales Project. The Chaucer Review, 38/2 (2003): 126-139.
  • Robinson, Peter (2005). Current issues in making digital editions of medieval texts – or, do electronic scholarly editions have a future? Digital Medievalist, 1.1 (Spring 2005).
  • Robinson, Peter (2009 [1997-2002]). What text really is not, and why editors have to learn to swim. In Julia Flanders, Peter Shillingsburg & Fred Unwalla (eds.) Computing the edition. Thematic Issue of LLC. The Journal of Digital Scholarship in the Humanities, 24/1: 41-52.
  • Robinson, Peter (2010). Electronic Editions for Everyone. In McCarty, Willard (ed.), Text and Genre in Reconstruction. Effects of Digitalization on Ideas, Behaviours, Products and Institutions. Cambridge: Open Book Publishers, p. 145-
  • Shillingsburg, Peter L. (1996a). Scholarly Editing in the Computer Age. Theory and Practice. Third Edition. Ann Arbor: The University of Michigan Press.
  • Shillingsburg, Peter (1996b). Principles for Electronic Archives, Scholarly Editions, and Tutorials. In Finneran, Richard J. (ed.), The Literary Text in the Digital Age. Ann Arbor: The University of Michigan Press, p. 23-35.
  • Shillingsburg, Peter L. (2006). From Gutenberg to Google. Electronic Representations of Literary Texts. Cambridge: Cambridge University Press.
  • Siemens, Raymond, Timney, Meagan, Leitch, Cara, Koolen, Corina & Garnett, Alex (forthcoming 2012). Toward Modeling the Social Edition: An Approach to Understanding the Electronic Scholarly Edition in the Context of New and Emerging Social Media. Forthcoming in LLC. The Journal of Digital Scholarship in the Humanities.
  • Snow, C.P. (1961). The two cultures and the scientific revolution. London: Cambridge University Press. Originally delivered as the 1959 Rede Lecture.
  • Sperberg-McQueen, C.M. (2009 [1997-2002]). How to Teach Your Edition How to Swim. In Julia Flanders, Peter Shillingsburg & Fred Unwalla (eds.) Computing the edition. Thematic Issue of LLC. The Journal of Digital Scholarship in the Humanities, 24/1: 27-39.
  • Tanselle, G. Thomas (1995b). Critical Editions, Hypertexts, and Genetic Criticism. The Romanic Review, 86/3: 581-593.
  • Vanhoutte, Edward (2009). Every Reader his own Bibliographer: an Absurdity? In Deegan, Marilyn and Sutherland, Kathryn (eds.), Text Editing, Print and the Digital World. Aldershot: Ashgate, p. 99-110.
  • Van Hulle, Dirk (2004). Textual Awareness. A Genetic Study of Late Manuscripts by Joyce, Proust, and Mann. Ann Arbor: The University of Michigan Press.