Digital Literary Stylistics in Utrecht: A Report on the DH2019 Conference

Most of us are back from the post-conference summer break: time to revisit! Seven weeks have passed since the DH2019 conference in Utrecht: presented as the biggest DH conference so far (with about 1,100 participants and more than 400 presented papers), it also distinguished itself through the number and quality of contributions in digital literary stylistics. With this post, through highlights and summaries, we try to provide an overview of the most recent outcomes of stylistics research in DH, well aware that (as our colleague José Calvo Tello already highlighted when reporting on DHd2018) these notes can only be a preliminary, incomplete, and subjective report on a much more complex and stratified event. As usual, more information can be found in the conference programme and book of abstracts, both available on the conference website. For further documentation, the #dh2019 and #sig_dls hashtags were remarkably active on Twitter, and participants like Geoffrey Rockwell even shared their conference notes online.

But well, if you would like to know the specific point of view of the SIG-DLS (and, more specifically, of the member who’s writing this report), here are a few notes.

The first step, inevitably, is the pre-conference workshop organized by the SIG-DLS steering committee. Held on the morning of 9 July, “DLS Tool Criticism. An Anatomy of Use Cases” was conceived as a moment of self-reflection (and of self-criticism) on the tools and methodologies most frequently used in digital literary stylistics. During the workshop, Clémence Jacquot presented her use of the TXM software in the study of Apollinaire’s poetry (stimulating a discussion on the limits and potential of pre-compiled, user-friendly tools for the study of style); Geoffrey Rockwell presented a case study and a theoretical reflection on the concept of replication (thus rekindling the still lively discussion sparked by Nan Da’s criticism, but going beyond it and interrogating ‘replication’ as a more flexible concept); and Steffen Pielström resisted the temptation to advertise his efficient topic modeling tool, inciting instead an acute discussion of the pros, cons, and uncertainties of the technique. One of the major achievements of the final discussion was the shared impression that the quantitative/qualitative dichotomy is simply misleading, and that digital literary stylistics will have to move beyond it, looking for a more organic integration of approaches.

The SIG_DLS Workshop (photo credit: J. Berenike Herrmann)

Moving on to the bulk of the conference, the impressions we can record here become inevitably more random and scattered. A general trend was evident, however, with digital stylistics contributions divided roughly into two main categories: methodological analyses, providing very advanced and stimulating discussions of (more or less) shared approaches; and applications to complex and engaging case studies. This can be seen as a reflection of the state of things in stylometry: methodological research is needed to place the approaches on an ever firmer scientific footing (at the risk of excessive technicality), and new applications even more so, if our research field is to gain recognition beyond the boundaries of DH.

Among the methodological papers, some titles deserve specific mention. In “Comparing Assonance and Consonance for Authorship Attribution”, Lubomir Ivanov proposed prosodic elements as possible new features for stylometric classification, given the highly relevant role of prosody in 18th-century prose (and beyond). His experiments showed that combining assonance/consonance with traditional stylometric methods yields significant improvements in classification. Further research on the subject is definitely advised! In “Feature Selection in Authorship Attribution: Ordering the Wordlist”, Maciej Eder and Joanna Byszuk examined the effects of re-ordering the (traditional) list of features. Main finding: a combination of term frequency (the widely used MFW) and a coefficient of variation slightly improves the results. Finally, in “Identifying Similarities in Text Analysis: Hierarchical Clustering (Linkage) versus Network Clustering (Community Detection)”, a team of four researchers led by Jeremi Ochab tried to answer a still under-studied, but ever-present, question in stylometric classification: how should texts be clustered to visualize similarity and dissimilarity? Results showed that network clustering tends to underestimate the actual number of clusters, while hierarchical clustering overestimates it (and, of course, which is preferable depends on the research question!). Traditional algorithms (Ward, Louvain) were confirmed as the most effective.
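To give a concrete sense of the contrast Ochab and colleagues examined, here is a minimal, purely illustrative Python sketch (my own toy example, not the authors’ code): it clusters the same made-up matrix of pairwise stylistic distances once with Ward linkage and once with Louvain community detection, so the two groupings can be compared side by side.

```python
# Illustrative only: compare hierarchical (Ward) and network (Louvain) clustering
# on a toy matrix of pairwise stylistic distances between ten texts.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from networkx.algorithms.community import louvain_communities
import networkx as nx

rng = np.random.default_rng(42)
n_texts = 10
d = rng.random((n_texts, n_texts))
d = (d + d.T) / 2                 # make the toy distance matrix symmetric
np.fill_diagonal(d, 0)

# Hierarchical clustering: Ward linkage on the condensed distances, cut into 3 clusters
dendrogram = linkage(squareform(d), method="ward")
hier_labels = fcluster(dendrogram, t=3, criterion="maxclust")

# Network clustering: build a complete similarity graph and detect communities
G = nx.Graph()
for i in range(n_texts):
    for j in range(i + 1, n_texts):
        G.add_edge(i, j, weight=1.0 - d[i, j])
communities = louvain_communities(G, weight="weight", seed=42)

print("Ward clusters:      ", list(hier_labels))
print("Louvain communities:", [sorted(c) for c in communities])
```

On real data the two approaches will often return a different number of groups, which is exactly the effect the paper quantifies.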

Placing itself between method and application, David L. Hoover’s “Invisible Translator Revisited” addressed the (supposed) invisibility of the translator in stylometry: Hoover proposed a method for making her visible again by filtering out the words preferred by the author. The approach proved surprisingly effective, but will it work for all author-translator pairs? As it happens, the idea found indirect confirmation in “Attributions Of Early German Shakespeare Translations”, where a team of three researchers led by David Lassner was equally successful in revealing the translator (here by combining advanced classification methods based on sub-word-level features). Shifting the focus towards themes and genres, Fotis Jannidis and his colleagues (in “Thematic complexity”) used two approaches, LDA and Zeta, to model genre distribution and to measure thematic complexity (via the “Gini index”) for dime novels and high-brow genres. Their results support the hypothesis that high literature comprises a broader spectrum of subject areas than pulp fiction genres. On a similar line of enquiry, a team of seven researchers led by J.D. Porter used stylometric approaches to identify passages in novels that adopt stylistic traits from other genres (in the paper “Microgenres”).
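As a side note on the “Gini index” mentioned above, here is a tiny illustrative computation (my own toy numbers, not the authors’ data or pipeline): applied to a set of topic proportions, the coefficient is low when themes are spread evenly and high when a few themes dominate, which is one way to operationalize thematic breadth.

```python
# Illustrative only: Gini coefficient over toy topic proportions.
import numpy as np

def gini(x):
    """Gini coefficient of a 1-D array of non-negative values (0 = perfectly even)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

# Hypothetical topic proportions for two genres (e.g. from an LDA model with 5 topics)
broad_genre = [0.22, 0.20, 0.20, 0.19, 0.19]    # themes spread evenly
narrow_genre = [0.70, 0.15, 0.08, 0.04, 0.03]   # a few dominant themes

print(round(gini(broad_genre), 3))    # low value: thematically broad
print(round(gini(narrow_genre), 3))   # high value: thematically concentrated
```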

Miffy’s Traffic Light (photo credit: J. Berenike Herrmann)

Moving on to the applications, it is worth mentioning “Challenging Stylometry: The Authorship of the Baroque Play La Segunda Celestina”, by Laura Hernández Lorenzo and Joanna Byszuk (who wins our award for most prolific stylometrician, with three co-authored papers, and who has also contributed a SIG-DLS blog post!). La Segunda Celestina is a real “challenge” for authorship attribution: indeed, the analysis did not merely “solve” the problem, but suggested possible new lines of enquiry. In “Stylometric Analyses of Character Speeches in French Plays”, Ioana Galleron provided an effective example of “scalable reading”, combining multiple approaches to distinguish the style of male and female characters. Finally, Massimo Salgaro and Simone Rebora (i.e. myself, in the panel “Digital Humanities for the Study of Social Reading”) showed how stylometric approaches can also be used to distinguish professional from non-professional book reviews.

A concluding note is due to “Stylometry for Noisy Medieval Data: Evaluating Paul Meyer’s Hagiographic Hypothesis” by Ariane Pinche, Jean-Baptiste Camps, and Thibault Clérice. The work stands out as a perfect realization of a processing pipeline (from handwritten text recognition to stylometric classification, with advanced techniques to reduce noise and manage spelling variants) that establishes a fruitful dialogue with traditional literary research, confirming (and in part disputing) Paul Meyer’s hypotheses about the composition of French saints’ Lives. The paper was awarded the Paul Fortier Prize for the best young-scholar paper of the conference: a further success that confirms the increasingly central position of digital literary stylistics in DH research.

The SIG-DLS steering committee reiterates their congratulations to Ariane, Jean-Baptiste, and Thibault, but also to the Programme Committee chairs Elena and Fabio, and to the local organizers Joris and Franciska, for making such a spectacular conference possible. Looking forward to meeting you all again in Ottawa (hopefully, with even more stylometry to discuss)!

Computer Assisted Text Markup and Analysis (CATMA) – An undogmatic approach to corpus analysis and Germany’s literary super heroes

Do you know CATMA? Not yet? Then read the following blog post. Do you think you know CATMA? It’s still a good idea to read the following blog post…

CATMA is a web application for text markup and analysis. Its central function is annotating, analyzing, and visualizing one or multiple texts. CATMA has been around since 2008.

When the implementation of the web application CATMA started in 2008, the focus was on creating a tool for digital close reading. Although it has always been possible to run some standard queries automatically (e.g. word frequencies), the focus of the application was entirely on the annotation of single texts. Ten years later, CATMA makes it possible to store, manage, and analyse not only single texts but also larger corpora, using both close and distant reading methods. Although CATMA can still be used for the manual annotation of single texts, in this article we want to show you how the web application can be used on corpora.

Goethe and Schiller, Germany’s literary super heroes

Let’s say you would like to explore the authorial styles of Germany’s literary super heroes, Goethe and Schiller. As you want to make the two author corpora comparable, you choose 12 dramatic and 5 prose texts by each author. Just upload those texts to CATMA and organise them into different corpora, one with texts by Goethe and one with texts by Schiller. Once your corpora have been created in CATMA, you can easily export them or, more importantly, share them with your team.

Without any further preparation you can now start analysing your corpora. Using the Analyze module you can generate a word list with frequencies. This list will show you that there are 800,837 words in your selection of Goethe’s texts, with 46,840 types. Schiller’s texts contain 35,351 word types and a total of 498,790 words. These numbers are not very meaningful yet because, obviously, the two corpora do not have the same length. Looking at the word frequency list from a narrative perspective, the first-person pronoun “ich” (I) is probably most interesting, as it points to a predominant first-person perspective. Or is it just the dramatic characters speaking? Other highly frequent (but narratologically possibly less relevant) words in the Goethe corpus are “und” (and), “die” (the/who/which, feminine), “zu” (to), and “der” (the/who/which, masculine), much as in the Schiller corpus (albeit in the order “der”, “und”, and “die”). At this top tier of frequency, there seems to be a similarity.

Word frequencies in Goethe’s and Schiller’s texts

But how about the other end of the lexical spectrum? Goethe is known to have had a comparatively large vocabulary, so we may quickly check the least frequent words in the corpus, those which are used only once. Are there many of these hapax legomena?

In the Goethe corpus they add up to 28,792 words altogether. Schiller uses 21,025 words only once in the entire corpus. This means that Goethe uses 61.4% of all the types in his corpus only once, while Schiller uses 59.4% of all the types in his corpus only once. Maybe this similarity suggests that Schiller was, after all, close to Goethe in the variability of his word use. However, nobody will ever know how large the vocabulary of Schiller’s works might have become had he reached the same old age as Goethe…
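If you ever wanted to reproduce such counts outside CATMA, a few lines of Python would do. The sketch below is purely illustrative: the folder names are hypothetical, and its simple regex tokenizer will not match CATMA’s own tokenization (or its exact figures).

```python
# Illustrative sketch: count tokens, types, and hapax legomena for two corpora.
# Folder names are hypothetical; the regex tokenizer only approximates CATMA's.
import re
from collections import Counter
from pathlib import Path

def corpus_stats(folder):
    counts = Counter()
    for path in Path(folder).glob("*.txt"):
        text = path.read_text(encoding="utf-8").lower()
        counts.update(re.findall(r"\w+", text))
    tokens = sum(counts.values())
    types = len(counts)
    hapaxes = sum(1 for freq in counts.values() if freq == 1)
    return tokens, types, hapaxes

for author in ("goethe", "schiller"):
    tokens, types, hapaxes = corpus_stats(f"corpora/{author}")
    print(f"{author}: {tokens} tokens, {types} types, "
          f"{hapaxes} hapaxes ({hapaxes / types:.1%} of types)")
```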

If we turn away from the low frequencies now and have another look at the high ones, we come across another interesting phenomenon. Among the most frequent tokens in the Schiller corpus are question and exclamation marks. They appear at ranks 4 (exclamation mark) and 8 (question mark); in the Goethe corpus they only make it to ranks 11 (exclamation mark) and 27 (question mark). A look at the distribution of exclamation and question marks in our corpora shows that, in the Goethe corpus, exclamation marks are always more frequent than question marks. Schiller, on the other hand, wrote five texts in which question marks outnumber exclamation marks. Could it be that Schiller’s authorial signature is characterized by an unusual number of questions? His play “Don Carlos. Infant von Spanien” in particular is marked by question marks, as you can see in the CATMA small multiples view below:

Small multiples view of distributions of question- and exclamation marks in the works of Schiller
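For readers who want to replicate this kind of check outside CATMA, here is a small illustrative Python sketch (hypothetical file names, and only a rough stand-in for CATMA’s distribution view): it normalizes question and exclamation mark counts by text length and flags which of the two prevails in each text.

```python
# Illustrative sketch: compare question and exclamation marks per text,
# normalized by text length. File names are hypothetical.
from pathlib import Path

for path in sorted(Path("corpora/schiller").glob("*.txt")):
    text = path.read_text(encoding="utf-8")
    n_chars = len(text)
    q = text.count("?") / n_chars
    e = text.count("!") / n_chars
    marker = "more ? than !" if q > e else "more ! than ?"
    print(f"{path.stem}: ?={q:.4%}  !={e:.4%}  ({marker})")
```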

Our case study so far has applied some handy distant reading functions, which are often used in one way or another in stylistics. But CATMA also offers a method for what we call scalable reading. Starting off with distant reading, as we did, you may now scale down a bit: simply double-clicking on a point in the distribution graph, or on a keyword in the keyword-in-context table, will take you to the specific position in the text where you find either the keyword or the accumulation of a word or tag you spotted in your small multiples distribution graphs. So you can go from the corpus to multiple texts in one visualization, on to the single text view, and even to the very position at which one word occurs in one text, changing dynamically from distant to close reading and back just as you wish. This scalability also allows you to develop different kinds of interpretation, from data analysis to more context-oriented interpretation of certain passages in single texts.

I leave you here to start your own case study now, be it on the style of Goethe and Schiller or on other literary phenomena. But just before you do that, do know that there are more functions for working on corpora in CATMA, among them automatic annotation of parts of speech (POS) as well as of verbal tense and temporal signals (in German texts).

And of course you can also use the central CATMA function, which is annotating one text, or a whole corpus, with your very own categories, and analyzing and visualizing them. CATMA is currently running as version 5.0, but CATMA 6.0 will be launched in 2019. Users can look forward to a new design of the graphical user interface, optimized workflows, and some added features.

As we focused on the corpus-specific functions in CATMA in this article, you might want to have a look at our tutorials for a complete overview of the application’s functionalities: http://catma.de/documentation/tutorials/

Have fun annotating, exploring, and scaling!

Harry Potter, computational fun and sexy gains

About the workshop on the Recreation of Harry Potter, endorsed by the SIG-DLS and held on 25 June 2018 as part of the DH2018 conference in México City, led by Mike Kestemont and Enrique Manjavacas and developed in cooperation with Greta Franzini and Marco Büchler

By Corina Koolen

Harry Potter novels: not one of the first topics one might think of when doing stylometry. But, as Mike Kestemont argues, popular literature is one of the unjustly overlooked areas of digital literary studies, which still often focuses on the classics. With the workshop he ran with Enrique Manjavacas (many thanks for all your help!) during the international Digital Humanities Conference 2018 in México City, they show us exactly why this neglect is undeserved. I am impressed by the setup they chose, which combines computational analyses with thoughtful reflection on a number of humanistic issues and interests.

One of them is the legal aspect of researching contemporary novels. After a discussion of the differences per country, the basic conclusion of which is that it is very hard to determine what exactly is legal where, Kestemont remarks: “We always talk about the author’s rights. I believe I have rights, too, as a researcher.” I could not agree more; this is going to be ammunition for legal discussions in the future.

Then the stylometry. Generally, stylometrists are known for performing authorship attribution. J.K. Rowling herself found out about the discipline when she was unmasked by Patrick Juola as the real author behind Robert Galbraith. There are other cool things to do with stylometry, however. And this is where it becomes interesting for me as a researcher of popular literature: we are going to look at stylistic similarity between the novels and HP fan fiction. There are two cases that we will test, both of which follow the original structure of the novels. The first case is Aidan Chase. This fan fiction author, of whom little biographical information is available, reframed the originals to create a story world where Harry’s parents never died, but stayed true to the main story line. The second is Norman G. Lippert. He created new novels, based on the original characters, because his children were so disappointed that the series had ended. And indeed, the tools we apply show that Lippert deviates more from the originals than Chase. By using text-matcher by Jonathan Reeve, a not-yet-completely-polished but nonetheless impressive tool, in combination with the visualization library Bokeh, it is possible to visualize the overlap (see image), including the option of a line-by-line comparison of the sentences with similar word usage. Chase turns out to copy large parts of the originals literally, showing how we can also apply stylometric methods to examine intertextuality, including the reuse of materials.


Bokeh visualizes the overlap between two texts – size and location in the text. Credit: Christof Schöch
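For a rough sense of what such overlap detection involves, here is a minimal illustrative sketch using Python’s standard-library difflib (this is not Jonathan Reeve’s text-matcher, and the file names are hypothetical): it reports long runs of words that two texts share verbatim.

```python
# Illustrative only (not text-matcher): find long verbatim stretches shared by
# two texts, comparing them word by word. File names are hypothetical.
from difflib import SequenceMatcher
from pathlib import Path

original = Path("hp_original.txt").read_text(encoding="utf-8").split()
fanfic = Path("fanfic_chase.txt").read_text(encoding="utf-8").split()

matcher = SequenceMatcher(a=original, b=fanfic, autojunk=False)
for block in matcher.get_matching_blocks():
    if block.size >= 8:  # only report shared runs of at least eight words
        shared = " ".join(original[block.a:block.a + block.size])
        print(f"words {block.a}-{block.a + block.size} of the original reused: {shared[:80]}…")
```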

It gets even more interesting when we start to examine a larger body of Harry Potter fan fiction. Kestemont uses the database Archive of Our Own to mine metadata and texts of fan fiction novels. This gives the researcher information that is rarely as easily accessible as it is here: who the main characters are, who is in a relationship with whom, and which fandoms the author chooses to include. Fan fiction authors provide this metadata so that potential readers can more easily select fiction featuring the relationships and fandoms they prefer. The creativity is astounding: Lord of the Rings combined with Harry Potter is not as rare as I had expected, and there are TV shows being crossed with Harry Potter that no one in the room has ever heard of and that do not even appear to be decently Google-able.

When we dive into the contents of these fan fiction stories, of course, there is sex. Lots of sex. (I for one never thought I would hear the term ‘elf porn’ in an academic context.) A computational topic model of fan fiction versus the original novels shows that topics specific to fan fiction are pornography, transportation, and modern technology. That last one, interestingly, is also a topic The Riddle project team found to be more typical (pdf) of ‘popular fiction’ as opposed to ‘literary’ fiction. But apart from the giggles the porn topic generates, it also shows something about how readers engage with characters. ‘Slash’ is, as fanfic researchers know, an important genre within fan fiction: central characters, especially male ones, are paired in a romantic and often pornographic relationship. One of the first pairings was Kirk/Spock; the ‘/’ sign gives slash its name. Kestemont focuses on how attention to certain characters deviates from the original novels. Draco Malfoy and Severus Snape, for instance, are much more present in fan fiction than in the Harry Potter novels, whereas Ron Weasley shows the opposite pattern. As Kestemont states, this gives us the possibility to research reception in a different way, to see how readers/writers engage with the original materials and characters.
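The character-attention comparison is easy to approximate on your own corpora; the sketch below is a purely illustrative stand-in (hypothetical file names, a toy list of names, and no attempt to resolve pronouns or nicknames), not the workshop’s actual code.

```python
# Illustrative sketch: compare how much attention fan fiction and the originals
# give to individual characters, via the relative frequency of their names.
# File names are hypothetical and the name list is a toy selection.
import re
from pathlib import Path

characters = ["Draco", "Snape", "Ron", "Hermione", "Harry"]

def relative_mentions(path):
    text = Path(path).read_text(encoding="utf-8")
    n_tokens = len(re.findall(r"\w+", text))
    return {name: len(re.findall(rf"\b{name}\b", text)) / n_tokens
            for name in characters}

originals = relative_mentions("hp_original_corpus.txt")
fanfic = relative_mentions("ao3_fanfic_corpus.txt")

for name in characters:
    ratio = fanfic[name] / originals[name] if originals[name] else float("inf")
    print(f"{name}: fan fiction / originals = {ratio:.2f}")
```

A ratio above 1 would indicate a character who gets proportionally more attention in fan fiction than in the originals.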

That, I would stress, is an important academic outcome of this workshop, but I would like to end on another: fun. What always strikes me about Mike Kestemont’s work is the joy he appears to get from his materials, the humour he brings to it. As he stresses, this is enhanced by working with bright and motivated colleagues such as co-tutor Enrique Manjavacas. But it is also partially explained by the type of material. As popular fiction is as much a part of our cultural heritage as Great Literature is, this serves a dual purpose. First, we get a better view of fiction in general. Second, with that fun and humour comes creativity, something we could use a little bit more of every now and then. Because that, I think, is where the magic happens. Bombarda Maxima!