This is the text of my keynote address at the 2011 Annual Conference and Members' Meeting of the TEI Consortium (Wednesday, 12 October 2011) in the Toscana Saal of the prestigious Würzburg Residenz. A revised article version of the text will be published in the Journal of the Text Encoding Initiative. Both this blog post and the journal article come without the chocolates.
The slides of this keynote have been published on Slideshare
Read Toma Tasovac's comments on So meta
First, let me say how honoured I am to have been asked to be the opening keynote speaker at this 2011 Annual Conference and Members' Meeting of the TEI Consortium. I have to confess, however, that I was surprised at first, then I became seriously nervous, and in the end a feeling of relief came over me. I was surprised because my academic work on the topic of Philology in the Digital Age has only produced four published articles over the last five years. Moreover, of the seven scholarly editions I've published so far, only one is a digital edition, and by digital standards it was published in the dark middle ages of SGML. Even worse, both scholarly editions I'm working on at the moment will be published as books. I know that many of you have been more actively involved with the theme of this conference, which makes me quite nervous. I also realised that many of you will probably tweet comments about what I'm saying, if you haven't already done so. If you do, be sure to include my Twitter name @evanhoutte and the hashtag #tei2011 in your comments, so I can prolong my nervousness until after I've read it all tonight. But then I thought, what the hell. If the programme committee of this conference wants me to present the opening keynote, they must have had at least one good reason to think I'm fit for the job. Unless of course I was the only one left who hadn't declined their invitation. I was charmed by their implied conviction that what I could tell you would be interesting enough, even if it was said from a spectator's point of view. Because I feel that's the position I'm gradually moving into, due to my huge involvement with the administrative side of my job. Whatever their motive was, the fact is that you're stuck with me for the following hour or so. So sit back and let me entertain you.
For those who don't know me yet, I'm the director of research and publications of the Royal Academy of Dutch Language and Literature in Belgium, or should I say Flanders. The length of my job title is inversely proportional to its importance. Long gone are the days when the Royal Academy, which celebrates its 125th anniversary this year, was of any political importance. The Royal Academy of Dutch Language and Literature was founded in 1886 with the explicit task to design a uniform spelling for Dutch and to promote and facilitate the use of Dutch as a language for literature, science, scholarship, and higher education. Since all of these initial goals were reached ages ago, and since we have language laws in Belgium protecting the status of all three of the country's official languages, the Royal Academy lost its influence and prestige and only recently discovered new opportunities to act as a moderate player in the cultural and academic field in Flanders. As part of the action to set new challenges for the Academy, I was asked to set up the Royal Academy's research department in the year 2000 and to concentrate its activities around two main topics: the scholarly edition of important literary works and cultural documents from Flanders and the building of linguistic historical corpora. From its start, I have pushed humanities computing as the centre's methodology and TEI as the expressive language for its research. In the first two years, the Centre for Scholarly Editing and Document Studies, or CTB as it is called, saw an explosive start and employed a staff of 15 researchers. Nowadays we have been rationalized down to 2.8 FTE. In the past eleven years we managed to publish 20 scholarly editions of Flemish prose, poetry, and correspondence collections, about the same number of essay collections, monographs, and theoretical studies, and close to 200 articles, papers, and book chapters.
Since we're zooming in on the 19th and 20th centuries, only two online editions have been published so far, due to copyright restrictions. In the coming year or so, we plan to publish two online editions of 20th-century novels and two online editions of correspondence collections containing two and a half thousand letters.
My outfit gives away that I'm also involved in the food business, as Malte has pointed out, both as a food writer for different media and by running cookery courses which welcome about 500 people a year. I thought it would be fun to add another 130 to that number tonight, but the organisers couldn't afford a full-blown cookery studio without at least quadrupling the registration fee. So I had to come up with something else instead. I asked my friend and top shock-o-latier Dominique Persoone of The Chocolate Line to create five exciting and unusual chocolates that would add an extra dimension and experience to the second half of my speech. If you follow the instructions on the screen and eat the right one when I ask you to do so, you'll not only hear and see, but also feel, smell, and taste what I'm talking about.
I beg you, however, not to hold them too close to your body, because chocolate is the only food product that melts at body temperature, and the fluid text is, after all, not the topic of my talk. And all this is of course one big fat excuse for me to wear my chef's jacket in front of an academic audience, something I always wanted to do.
1.4. Introduction proper
So what will I be talking about? The title of my lecture refers to two hugely popular television formats which have been adapted and broadcast worldwide. The first programme, So You Think You Can Dance, is a hit dance competition and reality show which inspires and amazes viewers as dancers showcase their unique and eclectic style and talents. This programme not only boosted registration numbers in dance schools, it also generated a whole series of spin-off dance reality shows and line extensions. Through the programme, dance became a phenomenon in present-day society and commerce. In a nutshell: dance became an industry. Macade Brandl, the executive director of Dance New Jersey, even spoke of the 'So You Think You Can Dance Effect' (Brandl, 2011).
The other programme, Masterchef, is a cooking competition for amateur cooks who are challenged to perform at gourmet level. The programme format has been adapted and broadcast in over twenty-five countries and it has generated huge public interest in food-related activities. In a column in the Sydney Morning Herald, Thomas Hunter quotes a report from analysis group IBISWorld which claims that the programme has a huge impact on Australia's food industry (Hunter, 2010). The intense focus on fine food, and the unique ingredients used to create it, has developed a taste for specialist, gourmet foods among the audience. The report predicts economic growth of more than 60% in the restaurant and catering business in the coming four years, and Coles, a major MasterChef partner which has advertised heavily within the series, has reported a 1,400% rise in the sales of what it terms 'unusual' ingredients after they feature in a MasterChef recipe. The programme's cookery books and magazine sell out instantly, and the programme revives cookery courses and the sales of kitchenware. In his column, Hunter calls this 'the MasterChef effect'.
Looking at the spectacular impact these programmes have on society, I wondered whether it isn't about time we create a 'Digital Edition Effect'. Imagine that programmes like So You Think You Can Edit and The Masterchef Edition are broadcast worldwide and are adapted to local literatures: programmes in which editorial talents compete against each other for the best edition, in which they are challenged to incorporate the latest insights and technologies and to demonstrate their skills and knowledge of scholarly editing, judged by a jury of international specialists. These programmes would introduce digital editions into society and have a huge impact on the way we treat texts from the past. The public would develop a passionate interest in edition-related activities, summer schools and workshops would sell out, enrollments in our university courses would go through the roof, readers would read our editions and publishers would beg us to publish with them, the TEI Guidelines would be consulted more often than the Bible, public libraries would stop selling off previous editions of books and advertise the number of different versions they hold in their collection, Facebook groups would like us, the twittersphere would be vibrant with comments on editorial decisions, and we would get So You Think You Can Edit slippers for Christmas or a Masterchef Edition watch on our retirement. Digital editing would become an industry and generate enough money to build the infrastructure we need and fund the research groups and projects we have been dreaming of. Digital editing would become rock 'n' roll and we would be stars!
OK, the chance that this will happen is rather small, because the audience is simply not there or doesn't understand what digital editing is all about. Today I'm going to talk about two moves towards that audience: one from the perspective of text encoding and one from the perspective of the social edition. I am starting off with a bit of reality talk on the problematic perception of the role of text encoding in digital editing. Next I'm going to talk about the emerging social edition. In the meantime I will let you eat your chocolates and rattle on about television programmes, culinary history and modern-day cuisine, gastronomic technology, and their effects on products and the food experience. And finally, I'm going to leave you craving for more, and I even promise you fireworks.
2. The Common Perception of Digital Editing
2.1. Two editions
Let me introduce to you two digital editions which have been submitted as MA theses at a Flemish university. For privacy reasons I am going to be deliberately vague about the titles and creators of the editions.
The first one is a digital edition of a travel story from the twelfth century. The edition allows the user to compare any two of five elements, that is, the transcriptions and the images of the two primary sources and a modern translation. The texts can also be searched by approximate string matching, which generates a static result. A magnifying glass facilitates the reading of the images, and clicking on an image presents the user with larger versions. The student didn't do any editorial work herself: she copied the transcriptions from the latest diplomatic editions available and she added the modern translation of the text from another edition. She managed to get hold of existing scans of one original manuscript and acquired images of the other manuscript by scanning the facsimiles of the diplomatic edition. Nor did she do any work on the digital presentation of the edition. The edition lacks a real introduction and says nothing about the technology used. The accompanying essay presents a historical overview of the previous editions of the work and discusses the advantages of the digital edition for New Philology. In her chapter on the digital edition, she mentions TEI twice: once when she claims that encoding the texts with TEI was not necessary for this edition and a second time when she mentions that TEI encoding can always be added later. Instead she proposes a client-side model based on HTML and Ajax and points out that its main advantage is that full page reloads are avoided.
The second edition is a digital edition of a corpus of 63 letters between two Flemish poets from the twentieth century. The edition is generated from the DALF encoding the student used for her transcriptions of the letters, the articulation of her critical and editorial decisions, and her annotations and commentaries on the texts. DALF is short for Digital Archive of Letters in Flanders and is a TEI extension for the description and encoding of modern correspondence materials developed by Ron Van den Branden and myself at the CTB, the Royal Academy's research department. By making detailed structural and semantic information explicit in the encoded letters, a powerful digital edition could be generated as a Cocoon (+ eXist) web application running on a Tomcat Java server. The user can browse through the collection and refine selections by faceted searching, which generates interactive results on the fly. The user can also search the complete collection by combining search terms and address the underlying XML encoding. The result thus generated links directly to the letter, which can be consulted in a variety of formats: as generated HTML on the screen, as plain DALF-XML, or as generated PDF. The annotations can be read and rearranged on the screen, and a zoomable digital facsimile accompanies the transcription. Both the reading text and the diplomatic transcription can be visualized, and the encoding can always be checked. The user can cruise through the corpus along trails defined by the underlying semantic encoding. Individual letters or subcorpora can be stored and exported in various formats for distribution or further research.
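To give a flavour of what such explicit semantic encoding buys you, here is a minimal, hypothetical sketch in Python. The element names (letter, sender, addressee, date) are simplified TEI-style illustrations, not the actual DALF schema, and the faceted selection is reduced to a single filter; the point is only that once senders, addressees, and dates are marked up explicitly, a selection like "all letters by poet A" falls straight out of the encoding:

```python
# Illustrative sketch: faceted selection over explicit semantic markup.
# Element names are simplified TEI-style examples, NOT the exact DALF schema.
import xml.etree.ElementTree as ET

letters_xml = """
<collection>
  <letter id="L1">
    <sender>Poet A</sender><addressee>Poet B</addressee>
    <date when="1923-05-04"/>
    <body>Dear friend, ...</body>
  </letter>
  <letter id="L2">
    <sender>Poet B</sender><addressee>Poet A</addressee>
    <date when="1923-05-12"/>
    <body>Many thanks for your last letter ...</body>
  </letter>
</collection>
"""

def letters_by_sender(xml_text, sender):
    """Return the ids of all letters written by the given sender."""
    root = ET.fromstring(xml_text)
    return [letter.get("id") for letter in root.iter("letter")
            if letter.findtext("sender") == sender]

print(letters_by_sender(letters_xml, "Poet A"))  # ['L1']
```

The real edition does this server-side over the full DALF markup, but the mechanism is the same: the query addresses explicit structure instead of guessing at plain text.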
Both the encoding and the rationale of the edition were developed at my research centre and documented in the DALF Guidelines for the description and encoding of modern correspondence material (Vanhoutte & Van den Branden, 2003) and in Bert Van Raemdonck's recent PhD dissertation on the digital edition of letters (Van Raemdonck, 2011). The student didn't design the system and interface herself; Ron Van den Branden designed them on the basis of her detailed encoding. This digital edition is all about knowledge representation, and the knowledge it represents is the student's, not someone else's.
The student who submitted the HTML-based edition got a distinction for her work, the student who submitted the XML/DALF/TEI based-edition failed. And this, ladies and gentlemen, is our fault. We owe her a big apology.
2.2. Text Encoding
Since the first TEI chapters on the transcription of primary sources and critical apparatus appeared in the TEI P1 Guidelines, now 21 years ago, we have been very good at persistently minimizing the importance of text encoding as an academic activity. Instead, when talking about digital editions to the outside community of humanities scholars, academic administrators, and funding bodies, we still take a 1990s attitude and emphasize the advantages of the internet as a distribution medium and the possibility to include and present all the material that printed editions have to leave out. We do that because we want to avoid having to explain the what, why, and how of text encoding to people outside our community. The technical complexity of an encoding standard like the TEI and the apparent incomprehensibility of the TEI Guidelines are partly to blame for this reaction. By focusing on the end product we shift the emphasis to what is familiar and we try to make the digital edition a sexy, or at least an acceptable, investment of talent, time, and money. At the same time we reduce the digital edition to its interface. Think about how you explain what you do to your family and friends who are not your colleagues. Consequently, the evaluation of our work is based on the presentational features and qualities of the digital edition instead of on the theories of the text and textual criticism which are expressed in the encoding.
Apparently, text encoding is something we should hide from people because it scares the hell out of them. They can watch airplanes crash into buildings, people being shot in riots, car crashes, medical operations, and children dying from starvation, but text encoding? Oh no.
I taught text-encoding as a university course for many years and ran workshops attended by a wide variety of people, some without any humanities or computing background, and some in their seventies. Within less than six hours, all of them could deal with XML encoding both as a human-readable language and as an activity. We don't serve ourselves well by investing time, energy, and funds in trying to get more people on board by hiding the encoding behind WYSIWYG interfaces which give them the illusion that everyone can do it without any training or practice. The fear of the angle bracket is a psychological condition which can be overcome by instruction and training. I have argued for years now that simplifying our profession for students, aspiring editors, and unwilling colleagues is the worst strategy ever if we want full academic credit for what we do. By trivializing and hiding the intellectual and physical blood, sweat, and tears we invest in the encoding of our texts, and by failing to explain what it is that we do, we create an atmosphere in which text-encoding does not count towards academic credits.
According to her supervisor, the second student also failed because she didn't add an essay on the poetics of both poets involved in the correspondence. Apart from the question of when such an essay became an essential part of a digital edition of letters, a more important question pops up: why wasn't the encoding taken into full consideration by the examiners? Because we as a community failed to explain and show the importance and academic value of the activity of text encoding for scholarly editing. So we may ask ourselves whether it is a good thing to try and shield humanities scholars outside the text encoding community from the technical particulars.
As you may have guessed by now, I am not a huge fan of the WYSIWYG editors and edition machines we design in order to attract the interest and collaboration of more scholars. They basically reduce the intellectual activity of knowledge modeling through text-encoding to laying out a document in a word processor. They can no doubt be useful in crowd-sourcing projects where the focus is on mass transcription or the identification of simple features of the text, but using them in order to gain a wider understanding and support for textual criticism and digital scholarly editing is hopeless. Let me remind you that text encoding already provides a human-readable interface to text modeling and that SGML was issued as a much-needed, highly functional alternative to less functional word processors. If we want the editing machine of the future to be as powerful and expressive as its underlying encoding standard, it must cater for all options within this standard. Understanding how its graphical interface maps onto the linguistic interface of text-encoding is as complicated as understanding the linguistic interface proper. It seems that by promoting a word-processing interface for text-encoding we're back to square one. Or as Charles Goldfarb once put it: '[I]f you are going to mess around with something powerful that you do not fully understand – even something benign – you had better do it with your eyes open.' (Goldfarb, 1990, p. xiii)
2.3. TEI by Example
Apart from their huge social and economic impact, reality shows like So You Think You Can Dance and Masterchef have contributed to an understanding and appreciation of dancing and cooking as a profession. These programmes never claim that dancing or cooking is easy, and they zoom in on the hardships of mastering advanced techniques. This, however, doesn't put the audience off taking dancing or cookery classes, where they are taught step by step how to achieve a certain level of expertise.
With the freely available online tutorials TEI by Example, which also contain modules on primary sources and critical editing, Melissa Terras, Ron Van den Branden, and myself have tried to offer a comprehensible alternative to the mammoth TEI Guidelines. The tutorials are designed for self-directed learning but can also be used by TEI instructors in classroom and workshop situations. We introduce the students to text encoding by taking them through the process of marking up real documents. Our didactic approach is explicitness and learning by trial and error. What you see is what you get. Each module is accompanied by real examples from real projects, quizzes, and exercises. We built feedback into the quizzes, and the assignments can be performed interactively using the TEI by Example Validator, an application that parses any XML you enter in real time and produces a report telling you whether it qualifies as valid TEI or where the errors are. We could have included footage of us encoding texts in real life, but we're confident that even without these clips, TBE is as close as you can get to a reality-show version of Encoding with the Stars.
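As an aside, the first thing any such validator has to do is check well-formedness before it can even ask whether the markup is valid TEI. Here is a minimal sketch of that first step, using only Python's standard library; the actual TEI by Example Validator of course goes much further and checks validity against the TEI schema, which this sketch does not attempt:

```python
# Minimal sketch of a validator's first step: a well-formedness check.
# (Schema validation against the TEI would be a separate, further step.)
import xml.etree.ElementTree as ET

def well_formedness_report(xml_text):
    """Parse the XML and report success or the parser's error message."""
    try:
        ET.fromstring(xml_text)
        return "well-formed"
    except ET.ParseError as err:
        return f"error: {err}"

print(well_formedness_report("<p>A <hi rend='italic'>valid</hi> line.</p>"))
# prints 'well-formed'
print(well_formedness_report("<p>An unclosed <hi>element.</p>"))
# reports a mismatched-tag error with line and column
```

Even this trivial check already gives the learner the trial-and-error feedback loop the tutorials are built on: type, parse, read the error, fix, repeat.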
Fifteen months after the launch of the tutorials, the site has attracted close to 30,000 unique page views with 1,900 unique views for the modules on primary sources and critical editing together. The statistics and logs show that users are finding their way to the tutorials directly, via Digital Humanities courses or via the TEI website and we see that there is high activity from the US, Germany, the UK, France, and Canada: not surprisingly countries with a high digital humanities and digital editing profile. And we're particularly proud of our single visit from Vatican City. 18% of the visitors stay for more than 15 minutes on the site, which suggests that they really do some work. We also see a decent amount of returning visitors.
So in order to get full academic credit for the encoding work we're doing, we should be less social when we create encoding tools, and instead offer training which shows the reality of the activity of text-encoding and emphasizes its function as a modeling language for knowledge about texts.
The other day I was reviewing a cookery book with portraits and recipes from the 30 most influential contemporary Belgian chefs, and it struck me that, although culinary technology has never been more advanced than now, and the finest ingredients from all over the world have never been more accessible, the 500-page book is one long argument in favour of traditional cooking techniques and local produce. Contemporary dishes are built around the pure flavour of one product. The essence of its flavour is supported and lifted by only a few other ingredients which complete the total food experience. The subtle balancing of the four basic tastes – sweet, sour, bitter, and salty – together with perfect control over texture and temperature in innovative food creations owes more to traditional cooking than newcomers in the food trade generally acknowledge. Certainly, technological research and development affects the clientele's food experience because it alters the way chefs cook and dress up their dishes, but it hardly replaces the achievements and insights of traditional cooking. An inclusive approach towards tradition and innovation is therefore key.
One such exemplary dish is the Black pudding red beet oyster by the Flemish chef Kobe Desramaults, who cooks in his Michelin-starred restaurant In De Wulf in Dranouter. This chef shocked the restaurant scene years ago with his avant-garde cuisine full of molecular cooking techniques, but has now returned to a pure kitchen which tries to bring out the essence of locally produced food. He admits: 'When I started cooking, I mainly wanted to impress. Technique took priority over the rest.' (Asaert & Declercq, 2011, p. 204) In this dish, Kobe highlights the four basic flavours by pairing the black pudding, which already combines all four of them, with the briny saltiness of the oyster and the sweet and sour of the beetroot and elderberry. This modern and contemporary-looking dish has many emotional references to the past. As the chef explains: 'I like to listen to local farmers and the older generation. They tell me stories from the past and how they cooked with their own produce. This inspires me to create new dishes.' (Asaert & Declercq, 2011, p. 205) Yet this seemingly simple dish has a quite complex palate.
I couldn't help seeing in this a metaphor for the future of scholarly editing and its practitioners. Like the young chef I was just talking about, every young and ambitious scholarly editor is enthusiastic about new technology and feels the urge to generate paradigm shifts. This drive to impress, however, only finds its true balance when it is enriched by knowledge about the past.
One element which seems to survive any innovation in scholarly editing is the established reading text. In Scholarly Editing in the Computer Age, published in 1996, Peter Shillingsburg observed that '[i]n spite of the fact that in the 1980s editorial circles witnessed a paradigm shift in which the concept of a definitive end product was widely replaced by the concept of process in which multiple texts represent the work, nevertheless, the physical limitations of print editions and the linear reading habits of most readers have continued to force the predominance of clear-reading texts as a primary feature of new scholarly editions.' (Shillingsburg, 1996, p. 77) A bit further, when he's talking of the qualities of the hypertext edition, he predicts: 'So, even with hypertexts, the question of “a best text for some purpose” will remain very much with us.' However, he adds, 'the most important point arising from recent theoretical discussions and computer capabilities may be the inescapable recognition by the general reader that any reading text is merely a representative of a work, not “the work itself”; for there are other representations of it crowding in demanding attention as well.' (Shillingsburg, 1996, p. 77-78).
Since Peter Robinson admitted that he was mistaken to abandon the single (edited) text in the edition of The Wife of Bath’s Prologue (Robinson, 1996c) in favour of a set of different views of the text, he has moved from advocating the reader’s freedom of choice among many texts, to recognizing the function of the one text, to looking for the ideal model of an electronic edition and its functions. Currently he advocates ‘fluid, co-operative and distributed editions’ (Robinson, 2003a, p. 125) that are truly actively interactive through their instinctive interface design and incorporation of social media. This concept is indebted to Peter Shillingsburg’s ‘knowledge sites’ which are built by and around active on-line communities. At the same time, Robinson’s ideas of ‘electronic editions for everyone’ (Robinson, 2007b; 2009) correspond with Shillingsburg’s concepts of the convenient and the practical edition (Shillingsburg, 2005) that must bridge both the theoretical and practical differences between textual and literary critics. This concept recalls Fredson Bowers’ idea of the ‘practical edition’ from 1969 (Bowers, 1969). Fredson Bowers used the term ‘practical edition’ as opposed to ‘scholarly definitive edition’ to name commercially inspired products that ‘present to a broad audience as sound a text (usually modernized and at a minimum price) as is consistent with information that may be procurable through normal scholarly channels and thus without more special research than is economically feasible.’ (Bowers, 1969, p. 26) In a way, this was Bowers's attempt at socializing the scholarly edition by making it possible for literary and textual scholars as well as for the common reader to profit from the results of textual scholarship.
Just as traditional cooking techniques, terroir cuisine, and local produce have survived many gastronomic innovations in the past five decades, different elements of the traditional scholarly edition, such as the reading text, have survived five decades of technological and theoretical innovation in textual scholarship. Therefore, our thinking about the digital scholarly edition should take an inclusive approach towards the accomplishments of the past. Whichever new technology is applied to the scholarly edition, every new model it generates is indebted to the tradition of textual scholarship.
In the time left, I am going to zoom in on the social edition and comment on its flavours. In passing, I am going to reiterate the importance of the 'four traditional basic tastes' of a scholarly edition – the constituted reading text, the apparatus variorum, the genetic and transmissional history, and the commentary – and add social technologies as savoriness or umami to the editorial dish. I'll do that through a brief discussion of the sweet promise of the social edition, the sour reality of sustainability, the bitter destiny of the record of variants, and the salty need for referentiality. Time to dig up your chocolates.
4. The Flavours of The Social Edition
4.1. The Sweet Promise of Social Media
What do you need to know about the social edition?
- The social edition is a proposal to remodel the scholarly edition with the use of social media and extend digital editorial traditions well into the age of Web 2.0.
- The social edition is a proposal for modeling professional reading.
- The social edition wants to provide a timely alternative to the current types of digital editions which were mostly conceptualized before the ubiquity of the web.
- The model for the social edition is built on the achievements of theories such as New Historicism, which blurs the distinction between text and context, and the sociology of texts, which considers the text as the result of a social process rather than an authorial product.
- The social edition is not a static end product but a continuously changing knowledge space that generates meaning through collaboration.
A forthcoming paper in LLC written by a research team around Ray Siemens summarizes it nicely: 'with the tools of social media at its centre, the social edition is process-driven, privileging interpretative changes based on the input of many readers; text is fluid, agency is collective, and many readers/editors, rather than single editor, shape what is important and, thus, broaden the editorial lens as well as the breadth, depth, and scope of any edition produced in this way.' (Siemens et al., forthcoming)
The sweet promise of social media is thus the redefinition and acknowledgement of digital editing as a digital humanities' activity where collaboration is key. Social media empowers the critical reader, destabilizes traditional scholarly editing both as a theory and as an activity, and questions the scholarly edition as a product. Social media also suggests the extension of community membership beyond academics and into the interested and general public.
But how social is the social edition, really? One of the main advantages of traditional textual scholarship is that its research produces a maximal edition which includes a minimal edition. The maximal edition is an academic product in which scholarly editors present their research data, demonstrate their scholarly accuracy and scrutiny, and articulate their attitude towards problems and theories of the text. The maximal edition is a knowledge space where the documents of the text's history are examined 'for all their clues in order to get a solid view of how they were created, deployed, manipulated, and appropriated so that we can better understand the history and significances of books.' (Shillingsburg, 2006, p. 77 n. 27) The target user is clearly the expert reader.
The minimal edition, on the other hand, is a cultural product that is produced by the scholarly editor acting as a curator or guardian of the text. In other words, it is a reading edition which presents the established text in a no-frills format. The minimal edition is aimed at the common reader who just wants to have access to a text and read it for fun. This inclusive, or social, approach of traditional textual scholarship is central to its primary aim, which should be the transmission, preservation, and presentation of texts to future and present-day readers. All readers. That's why, at the Centre for Scholarly Editing and Document Studies, we built the minimal edition into our model of the maximal digital edition by providing the user with the option to build their own representations of the material in the edition and generate distributable PDF files which are formatted as print editions. This way, the user of our digital editions can choose to read the established reading text with or without annotations, print it out, or have it printed on demand as a practical paperback. With a share of 0.3% of the book market, the e-book is no real alternative to print books in Flanders.
The social edition, however, is a maximal edition which does not include a minimal edition and does not respond to the communicative function of textual scholarship. It is targeted at expert readers who participate in collaborative activities of various kinds and form a knowledge-building community. The incorporation of social media would make it possible to extend this community to the interested and engaged general public who practice so-called citizen scholarship (Greenberg, 2010). But even the most successful crowd-sourcing projects, such as the one calling on the general public to transcribe the Bentham papers, have to admit that over half of the transcriptions are produced by one and the same person, as Melissa Terras recently pointed out (Terras, 2011).
Despite the inclusion of social media, the social edition leaves out the common reader as well as the general public and thus narrows down the main function of textual scholarship to its academic focus. In this respect, the social edition is asocial.
4.2. The Sour Reality of Sustainability
Perhaps the first social edition, both in its collaborative approach and in its socialization of text, is Jerome McGann's iconic Rossetti Archive. This archive 'comprises some 70,000 digital files and 42,000 hyperlinks organizing a critical space for the study of Rossetti’s complete poetry, prose, pictures, and designs in their immediate historical context.' (McGann, 2010) The Archive holds high-resolution digital images of every known manuscript, proof, and print publication of Rossetti's textual works, and of every known or accessible painting, drawing, or art object he designed. It also contains a substantial body of contextual materials that are related in important ways to Rossetti’s work. All of this is embedded in a robust environment of editorial and critical commentary and encoded in TEI-based XML. It took McGann and his team about 18 years to finish the project, and it involved some 40 graduate students plus a dozen or more skilled technical experts, not to mention the cooperation of funding agencies and scores of people around the world in many libraries, museums, and other depositories. Apart from the digital edition itself, the project generated a series of lectures, essays, papers, and books by McGann himself, and perhaps thousands of comments and references in print and online by others. Since the mid-1990s, the project has also transformed editorial theory and confronted it with the digital paradigm.
The Rossetti Archive has functioned for years as a research environment bringing together scholars from a variety of disciplines around Rossetti's works. In this respect, though of course lacking today's Web 2.0 technology, the Rossetti Archive is a social edition.
In a recent paper on sustainability, Jerome McGann concludes the following about the Rossetti Archive, and here comes the sour reality: 'In order to preserve what I have come to see as the permanent core of its scholarly materials, I shall have to print it out.' (McGann, 2010)
As no collaborative effort lasts forever, the social edition as a research environment will gradually transform from an engine of scholarship into an object of scholarly interest. 'They will not be sustained', McGann predicts. 'They will be – we hope their most significant parts will be – preserved'. (McGann, 2010) At that moment, if the collaborative research environment is to make its findings accessible to future scholarship, which is the academic function of textual scholarship, the social edition will have to transform itself into the self-contained editorial object it criticizes.
4.3. The Bitter Destiny of the Record of Variants
At the DH2011 conference in Stanford, Meagan Timney, Cara Leitch and Ray Siemens presented the social edition as a new model for edition production in a time of collaboration (Timney et al., 2011). Their thought-provoking paper was one of the inspirations for my speech today because they sketched an exciting future for the digital edition but were quite negative about the achievements of the past. Their claims about the 'anonymous' apparatus variorum, the failure of 'self-contained' editions, and the role of the editor as 'progenitor of knowledge creation' in print and digital editions so far, for instance, signal some misunderstandings of traditional bibliography from the perspective of the social web. Historically, the record of variants serves four purposes. First, it documents the variation between all of the extant versions of a text, which allows for the reconstruction of these versions. In pre-digital times this was the only affordable way to represent the genetic and transmissional history of texts. Second, it provides the account of the emendation of the base text and the constitution of the reading text. Third, it provides the user with control data which allow for the repeatability of the criticism performed on the text. Fourth, it functions as a research database. It is in the record of variants that scholarly editors expose themselves and are explicit about their choices. The apparatus variorum is the place to prove editors wrong and to falsify their textual criticism. Saying that the editor of a self-contained edition is hiding in the anonymous record of variants, as the presenters of the paper did, is a total misconception which grew out of the supposed explicitness of the social edition.
It's in the social edition that the record of variants meets its bitter destiny. Their model of the social edition proposes the inclusion of collation software which could generate a record of variants which, funnily enough, anonymizes the apparatus. Because the social edition does not contain a fixed and established reading text – it does not accept the authority of the editor –, because the inclusion of all of the versions of the text in the edition makes a reconstructive representation redundant, and because the social edition includes software for the analysis of the data, the generated record of variants is nothing but an anonymous database with no real function.
4.4. The Salty Need for Referentiality
In the discussion following the paper I just mentioned, Michael Sperberg-McQueen and I drew attention to what is possibly the greatest challenge of the social edition. If the social edition is a 'living edition', as Greg Crane (2010) put it, that is constantly evolving and being improved by its knowledge-building community, its 'current state is only a single datapoint'. (Crane, 2010) This single datapoint, however, is the basis for scholarly debate and forms the foundation of knowledge creation about the text. To guarantee the scholarly integrity of this debate, we need mechanisms to register these datapoints, archive them as snapshots, and refer to them when needed.
As Emerson Marks reminds us, 'No cognition, whether scientific or “aesthetic,” is conceivable without referentiality.' (Marks, 2001) One only has to read Roman Jakobson's model of the functions of language (1960) to understand the importance of referentiality for scholarship. If the context of our argument changes after the argument has been made, our argument loses its scholarly value. Moreover, the truth or falsity of our argument can never again be affirmed or questioned, and the argument cannot generate any new arguments. In other words, if the data on which we perform scholarship are in constant flux, we cannot know what we know, which leads to an epistemological crisis. Just as salt is a basic mineral the body needs to function properly, referentiality is a basic quality of scholarship.
In the academic debate, referentiality is commonly facilitated by footnotes and references pointing to retrievable 'datapoints' such as essays, articles, books, blogs, websites, tweets, etc. If the social edition wants to function properly and live up to its mission, we need agreed referencing schemes to facilitate academic debate on the primary materials as well as on the collaborative annotation, tagging, and analysis the social edition promotes through the inclusion of social software.
I don't know how we can solve this problem in a manageable way, but I do hope that the TEI steps in and proposes ways to do so, because it deeply involves the text-encoding community, which finds in the social edition a laboratory for finding out what we know and don't know about the modelling of knowledge.
5. Wrapping It Up - Umami
To conclude this keynote, let me tell you one last story taken from the history of food. It is the story of the fifth basic taste, which completely changed the way chefs design their dishes, the way we experience food, and the way the industry generates huge profits. And it all started in 1908, when the Japanese scientist Kikunae Ikeda identified a taste which was distinct from the four basic tastes of sweet, sour, bitter, and salty, and named it umami. Umami has no translation but means something like 'pleasant, savory taste'. Although umami has no taste of its own, Ikeda found that combining glutamates with any of the four basic tastes resulted in an intensity higher than the sum of the ingredients. Umami has the ability to balance the taste and round off the total flavour of a dish. The effect of umami is difficult to describe, but it induces salivation and fills the mouth with a sensation that makes you crave more (Yamaguchi, 1998). It took an international symposium in 1985, however, before umami was recognized as the fifth basic taste.
Maybe the social edition is the fifth basic taste of textual scholarship. Although it is distinct from the kinds of editions we already know, it is hugely dependent on the accomplishments of the past and can only be valued in the context of traditional textual scholarship. If we recognize this, the social edition has the ability to balance the theories and practices of textual scholarship and generate an impact which is higher than the sum of its ingredients. It has the potential to change the ways we think about scholarly editing, the ways we create scholarly editions, and the ways we use scholarly editions in a collaborative environment. The social promise of the social edition makes us crave more. Maybe the social edition will generate the Digital Edition Effect I addressed at the beginning of my talk. The future will show whether it does.
I have tried to entertain you with what I think are the challenges textual scholarship is facing, and I somewhat regret that I haven't been able to talk more about the specific challenges of the TEI in this project. You may agree or disagree with what I've been saying, and you can do so because I have put this speech up on my blog and a revised version will be published in the Journal of the TEI.
But before you comment on my thoughts, remember that I gave you gastronomic designer chocolates.
And there's still one left. Since the organisers couldn't arrange any fireworks at the end of my lecture in this room (I did enquire about it), Dominique Persoone put your very personal fireworks in this umami chocolate. Thank you for listening and enjoy!
- Asaert, Willem & Marc Declercq (2011). De chefs van België. Tielt: Lannoo.
- Bowers, Fredson (1969). Practical Texts and Definitive Editions. In Hinman, Charlton and Bowers, Fredson, Two Lectures in Editing: Shakespeare and Hawthorne. s.l.: Ohio State University Press, p. 21-70.
- Brandl, Macade (2011). Blogpost. The So You Think You Can Dance Effect. Culture Vultures, 20 September 2011. (accessed 7 October 2011).
- Crane, Greg (2010). Give us editors! Re-inventing the edition and re-thinking the humanities. The Shape of Things to Come. Charlottesville, Virginia, March 2010. (accessed 9 October 2011).
- Goldfarb, Charles (1990). The SGML Handbook. Oxford: Clarendon Press.
- Greenberg, Josh (2010). The Institution and the Crowd. Presentation.
- Hunter, Thomas (2010). The MasterChef effect. The Sydney Morning Herald, 22 July, 2010. (accessed 7 October 2011).
- Jakobson, Roman (1960). Linguistics and Poetics. In Sebeok, T. (ed.), Style in Language. Cambridge, MA: M.I.T. Press, p. 350-377.
- Marks, Emerson R. (2001). Referentiality and modern poetics. Philological Quarterly, Fall 2001.
- McGann, Jerome (ed.). The Complete Writings and Pictures of Dante Gabriel Rossetti http://www.rossettiarchive.org/ (Accessed 8 October 2011).
- McGann, Jerome (2010). Sustainability: The Elephant in the Room. Shape of Things to Come. Charlottesville, Virginia, March 2010. (accessed 9 October 2011).
- Robinson, P.M.W. (ed.) (1996c). The Wife of Bath’s Prologue on CD-ROM. Cambridge: Cambridge University Press.
- Robinson, Peter (2003a). Where We Are with Electronic Scholarly Editions, and Where We Want to Be. In Braungart, Georg, Eibl, Karl and Jannidis, Fotis (eds.), Jahrbuch für Computerphilologie, 5: 125-146. Also published in Jahrbuch für Computerphilologie - online: http://computerphilologie.uni-muenchen.de/jg03/robinson.html
- Robinson, Peter (2010). Electronic Editions for Everyone. In McCarty, Willard (ed.), Text and Genre in Reconstruction. Effects of Digitalization on Ideas, Behaviour, Products and Institutions. Cambridge: OpenBook Publishers, p. 145-163.
- Shillingsburg, Peter L. (2005). Practical Editions of Literary Texts. Variants. The Journal of the European Society for Textual Scholarship, 4: 29-55.
- Shillingsburg, Peter L. (2006). From Gutenberg to Google. Electronic Representations of Literary Texts. Cambridge: Cambridge University Press.
- Terras, Melissa (2011). Crowd sourcing: beyond the traditional boundaries of academic history. Paper. Hidden Histories: Symposium on Methodologies for the History of Computing in the Humanities c.1949-1980. London: UCL, 17 September 2011.
- Timney, Meagan, Leitch, Cara & Siemens, Ray (2011). Paper. Opening the Gates: A New Model for Edition Production in a Time of Collaboration. DH2011. Stanford: Stanford University, 19-22 June 2011.
- Van den Branden, Ron, Terras, Melissa & Vanhoutte, Edward. TEI by Example. (Accessed 8 October 2011).
- Vanhoutte, Edward & Van den Branden, Ron (eds.) (2003). DALF guidelines for the description and encoding of modern correspondence material Version 1.0. Gent: CTB-KANTL.
- Van Raemdonck, Bert (2011). 'Voor ons en voor ons tijdschrift'. Context en codering van een digitaal correspondentiecorpus rond Van Nu en Straks. PhD Dissertation, University of Ghent, Faculteit Letteren & Wijsbegeerte.
- Yamaguchi S. (1998). Basic properties of umami and its effects on food flavor. Food Reviews International, 14:2&3: 139–176. doi:10.1080/87559129809541156.