DH2022 SIG Workshop – Not FAIR yet? Infrastructures for Digital Literary Stylistics under the global looking glass

Lone autodidacts, local labs, university centres, and national and transnational structures – there are many ways in which scholars pursue DLS projects. The global, diverse, and dynamic developments of the past few years alone have demonstrated that “DH infrastructures for Digital Literary Stylistics” is a timely topic.

Taking this development seriously, the SIG-DLS pre-conference workshop at the international conference DH2022 in Tokyo will ask about the current forms and functions of digital infrastructures for DLS at a diverse, global level.

  • What types of infrastructure are used, by whom, and to what ends? Which particular functions are used?
  • Where and why are scholars missing infrastructure?
  • And also: can there be too much emphasis on infrastructure in certain contexts?

In particular, we will try to apply the notion of FAIR (findable, accessible, interoperable, reusable) and, more broadly, of Open Data best practices, including data citation, to the DLS domain, with a critical perspective.

Research infrastructures, in particular in Europe, have provided tools and methods for the application of the FAIR principles. ERICs such as CLARIN and DARIAH offer, in particular, data deposit (with persistent identifiers for citing collections), guidance on licences, standardised metadata, support for open standards, interoperable tools and platforms, access via single sign-on, and training to develop new competences for scholars. More recently, the Computational Literary Studies (CLS) infrastructure project has been launched, as well as the German national research infrastructure Text+, both of which aim to build a shared resource of high-quality data, tools, and knowledge to support new approaches to studying literature in the digital age.

At the same time, several problems still prevent data sharing, citation, and reuse in DLS. These are of a legal and technical, but also disciplinary, nature. While European (or national) infrastructures may offer a partial solution to these problems, can we say that this is enough? And is it an exportable model?

In this workshop on 25 July 2022, we want to tackle the issues of making DLS research data FAIR, taking into account what RIs can offer, but also their intrinsic limitations. We do so by inviting three short inputs representing the perspectives of infrastructures and of researchers, followed by a discussion. Here, prompts from a fourth speaker on the Japanese and East Asian research context will address FAIR DLS from a diverse, global perspective.

We will also collect feedback from researchers prior to and during the meeting, and we envisage the possibility of a position paper on the issue after the conference.

Program

10:00 – 10:30 UTC  Short provoker keynotes (confirmed speakers: Maciej Eder, Suzanne Mpouli, Andrew Piper)
10:30 – 11:15 UTC  Discussion with attendees (confirmed short provoker keynote by Chifumi Nishioka: Asian Diversity in DH)
11:15 – 11:30 UTC  Wrap-up
11:30 – 12:00 UTC  SIG members’ meeting

Info and registration: https://dh2022.adho.org/workshops-and-tutorials

Organizers: Berenike Herrmann, Simone Rebora, Francesca Frontini, Anne-Sophie Bories

“Tool Criticism 3.0” Workshop Moving Online!

With DH2020 Ottawa converted into a virtual conference, the ADHO Special Interest Group “Digital Literary Stylistics” (SIG-DLS) has decided to also convert its new Tool Criticism workshop into an online, hybrid format.

First phase (asynchronous)

From Tuesday 14 July, we will publish the contributions by invited speakers on the “DH2020” Humanities Commons Group (where the main conference will take place).

  • Quinn Dombrowski – ‘It doesn’t work’: failure and DH tools
  • Thierry Poibeau – It’s complicated! On Natural Language Processing Tools and Digital Humanities
  • Joanna Byszuk – Direct speech for multilingual corpora – some problems and one possible solution
  • Avery Blankenship – Coding the Bechdel Test for Nineteenth-Century Novels: Gender Classifiers and Scikit-Learn

These contributions will provide food for thought for an asynchronous discussion on Humanities Commons. The discussion thread will be open for the entire week before the official opening of the Conference.

Second phase (synchronous)

On Monday 20 July, at 3pm CEST, an online live event will take place on Zoom (at a link provided on the “DH2020” Humanities Commons Group), where invited respondents will discuss the contributions and open the discussion among participants. Here is a provisional schedule:

(all times CEST)

15:00 – 15:15  Welcome and overview of the SIG’s activities
15:15 – 15:45  Paper 1 (Quinn Dombrowski) – respondents: Ted Underwood and Frank Fischer
15:45 – 16:15  Paper 2 (Thierry Poibeau) – respondents: Christof Schöch and Francesca Frontini
16:15 – 16:30  Virtual coffee break
16:30 – 17:00  Paper 3 (Joanna Byszuk) – respondents: Suzanne Mpouli and Simone Rebora
17:00 – 17:30  Paper 4 (Avery Blankenship) – respondents: J. Berenike Herrmann and Simone Rebora
17:30 – 18:00  SIG members’ meeting

Of course, participation is free (you just have to register for the Conference and for Humanities Commons), so we are just waiting for you to join us!

CfP – Tool Criticism 3.0: Present, past, and future methods in Digital Literary Stylistics (DLS)

After the success of the second edition in Utrecht, the ADHO Special Interest Group “Digital Literary Stylistics” (SIG-DLS) is organizing a new version of its Tool Criticism workshop for DH2020 Ottawa. This year, it will be open to contributions from all interested researchers and will also place a focus on historical tool criticism.
The main goal of the workshop is to foster critical awareness of the use of tools in DLS. This type of tool criticism can be stimulated in two ways: (a) through direct confrontation with user feedback and reports, which usually remain “in the background” of research papers; and (b) via reconstructions of the historical evolution of tools and methodologies (which are never a given, but are built on top of complex – and sometimes unpredictable – research paths). The workshop is a natural extension of the DLS Tool Inventory (DLS-TI), which gathers information on the practices of the various traditions present in DLS.
The workshop is scheduled for a pre-conference slot (date pending confirmation). It will be structured as a mini-conference, in which participants will be asked to present short critical reports (20 minutes each) on their use of tools in DLS. We encourage submissions on the following topics:

  • user interface criticism;
  • criticism of implied/hidden epistemologies (anatomy of tools); 
  • theoretical tool criticism (e.g. issues in modeling); 
  • historical tool criticism;
  • biases in DLS studies; 
  • replication studies.

Submission details

Proposals should be sent by 15 April 2020 to simone.rebora@univr.it. They should contain a title and a brief abstract (max 300 words).
Also feel free to enter contributions to the DLS Tool Inventory (DLS-TI), if relevant.
Acceptances will be notified by 15 May 2020.

Organization

Organizers: Simone Rebora (University of Verona & University of Basel), J. Berenike Herrmann (University of Basel), Francesca Frontini (UPVM & Praxiling), Geoffrey Rockwell (University of Alberta), and Thierry Poibeau (CNRS & LATTICE)

Scientific committee: Anne-Sophie Bories (University of Basel), Julia Flanders (Northeastern University), Francesca Frontini (UPVM & Praxiling), J. Berenike Herrmann (University of Basel), Thomas C. Messerli (University of Basel), Thierry Poibeau (CNRS & LATTICE), Simone Rebora (University of Verona & University of Basel), Geoffrey Rockwell (University of Alberta), Jan Rybicki (Jagiellonian University), and Peter Verhaar (Leiden University).

Registration

Please note that all participants and attendees will need to be registered for DH2020. Full-day workshop registration for participants costs $70 CAD and includes lunch.
Conference registration information can be found here: https://dh2020.adho.org/registration/

Digital Literary Stylistics in Utrecht: A Report on the DH2019 Conference

Most of us are back from the post-conference summer break: time to revisit! Seven weeks have passed since the DH2019 conference in Utrecht: presented as the biggest DH conference so far (with about 1,100 participants and more than 400 presented papers), it also distinguished itself through the number and quality of contributions in digital literary stylistics. With this post, through highlights and summaries, we try to provide an overview of the most recent outcomes of stylistics research in DH – well aware that (as our colleague José Calvo Tello already highlighted when reporting on DHd2018) these notes can be considered only a preliminary, incomplete, and subjective report on a much more complex and stratified event. As usual, more information can be found in the conference programme and book of abstracts, both available on the conference website. For further documentation, the #dh2019 and #sig_dls hashtags were remarkably active on Twitter, while participants like Geoffrey Rockwell even shared their conference notes online.

But well, if you would like to know the specific point of view of the SIG-DLS (and, more specifically, of the member who’s writing this report), here are a few notes.

The first step, inevitably, is the pre-conference workshop organized by the SIG-DLS steering committee. Held on the morning of 9 July, “DLS Tool Criticism. An Anatomy of Use Cases” was conceived as a moment of self-reflection (and of self-criticism) on the tools and methodologies most frequently used in digital literary stylistics. During the workshop, Clémence Jacquot presented her use of the TXM software in the study of Apollinaire’s poetry (stimulating a discussion on the limits and potentials of pre-compiled, user-friendly tools for the study of style); Geoffrey Rockwell presented a case study and a theoretical reflection on the concept of replication (thus rekindling the still lively discussion sparked by Nan Da’s criticism, but going beyond it and querying ‘replication’ as a more flexible concept); and Steffen Pielström resisted the temptation to advertise his efficient topic modeling tool, instead inciting an acute discussion of the pros and cons (and the uncertainties) of such a technique. One of the major outcomes of the final discussion was the shared impression that the quantitative/qualitative dichotomy is simply misleading, and that digital literary stylistics will have to move beyond it, looking for a more organic integration of approaches.

The SIG-DLS workshop (photo credit: J. Berenike Herrmann)

Moving on to the bulk of the conference, the impressions we can record here inevitably become more random and scattered. A general trend was evident, however, with digital stylistics contributions divided roughly into two main categories: methodological analyses, providing very advanced and stimulating discussions of (more or less) shared approaches; and applications to complex and engaging case studies. This can be considered a reflection of the state of things in stylometry, where methodological research is needed to ground the scientific rigor of the approaches ever more firmly (with the risk of excessive technicism), and new applications even more so, if our research field is to gain recognition beyond the boundaries of DH.

Among the methodological papers, some titles call for a specific mention. In “Comparing Assonance and Consonance for Authorship Attribution”, Lubomir Ivanov proposed prosodic elements as possible new features in stylometric classification, considering the extremely relevant role of prosody in 18th-century prose (and beyond). His experiments showed how the combination of assonance/consonance with traditional stylometric methods generates significant improvements in classification. Further research on the subject is definitely advised! In “Feature Selection in Authorship Attribution: Ordering the Wordlist”, Maciej Eder and Joanna Byszuk worked on the effects of re-ordering the (traditional) list of features. Main discovery: a combination of term frequency (the widely used MFW) and a coefficient of variation slightly improves the results. Finally, in “Identifying Similarities in Text Analysis: Hierarchical Clustering (Linkage) versus Network Clustering (Community Detection)”, a team of four researchers led by Jeremi Ochab tried to answer a still under-studied – but ever-present – question in stylometric classification: how do I cluster the texts to visualize similarity/dissimilarity? Results showed that network clustering tends to underestimate the actual number of clusters, while hierarchical clustering overestimates it (and, of course, which one is better will depend on the research question!). Traditional algorithms (Ward, Louvain) were confirmed as the most efficient.

Placing itself between method and application, David L. Hoover proposed his “Invisible Translator Revisited”. Given the (supposed) invisibility of the translator in stylometry, Hoover proposed a method for making her visible again by filtering out the words preferred by the author. The approach proved surprisingly effective, but will it work for all author-translator pairs? As it happens, the idea found indirect confirmation in “Attributions Of Early German Shakespeare Translations”, where a team of three researchers led by David Lassner was equally successful in revealing the translator (here, by combining advanced classification methods based on sub-word-level features). Shifting the focus towards themes and genres, Fotis Jannidis and his colleagues (in “Thematic complexity”) used two approaches, LDA and Zeta, to model genre distribution and to measure thematic complexity (via the Gini index) for dime novels and high-brow genres. Their results support the hypothesis that high literature comprises a broader spectrum of subject areas than pulp fiction genres. On a similar line of enquiry, a team of seven researchers led by J.D. Porter used stylometric approaches to identify passages in novels that adopt stylistic traits from other genres (in the paper “Microgenres”).

Miffy’s Traffic Light (photo credit: J. Berenike Herrmann)

Moving on to the applications, it is worth mentioning “Challenging Stylometry: The Authorship of the Baroque Play La Segunda Celestina” by Laura Hernández Lorenzo and Joanna Byszuk (who wins our award for most prolific stylometrician, with three co-authored papers, and who on top of that has also contributed a SIG-DLS blog post!). La Segunda Celestina poses a real “challenge” for authorship attribution: indeed, the results of the analysis did not merely “solve” the problem, but suggested possible new lines of enquiry. In “Stylometric Analyses of Character Speeches in French Plays”, Ioana Galleron then provided an effective example of “scalable reading”, combining multiple approaches to distinguish the style of male and female characters. Finally, Massimo Salgaro and Simone Rebora (i.e. myself, in the panel “Digital Humanities for the Study of Social Reading”) showed how stylometric approaches can also be used to distinguish professional from non-professional book reviews.

A concluding note should be dedicated to “Stylometry for Noisy Medieval Data: Evaluating Paul Meyer’s Hagiographic Hypothesis” by Ariane Pinche, Jean-Baptiste Camps, and Thibault Clérice. The work stands out as a perfect realization of a processing pipeline (from handwritten text recognition to stylometric classification, with advanced techniques to reduce noise and manage spelling variants) that establishes a fruitful dialogue with traditional literary research by confirming (and in part disputing) Paul Meyer’s hypotheses about the composition of the French saints’ Lives. The paper was awarded the Paul Fortier Prize for the best young scholar paper of the conference. This further success confirms the increasingly central position of digital literary stylistics in DH research.

The SIG-DLS steering committee reiterates its congratulations to Ariane, Jean-Baptiste, and Thibault, but also to the Programme Committee chairs Elena and Fabio, and to the local organizers Joris and Franciska, for making such a spectacular conference possible. We look forward to meeting you all again in Ottawa (hopefully with even more stylometry to discuss)!

The SIG-DLS Tool Inventory

Let’s create our inventory of Digital Literary Stylistics methods and tools!

This summer at DH2018, the members of SIG-DLS who were in situ came up with some new activities for the new season. Among these was a DLS Tool Inventory of our own, reflecting our practice, with tried-and-tested methods, software, and tools.

Just a spreadsheet… Will it become magic?


Well, here we are: happy to share with you all a spreadsheet that will hopefully become truly awesome. Let’s create the magic together!

Please go ahead and enter a tool/method that you know well as a one-line report. Mind you, these should be tools/methods that are real and recent: tried and tested within the last five years. Please don’t enter reports on tools you created yourself. Where possible, your one-line report should also briefly and constructively comment on the strengths and weaknesses of the tool/version you used, and on its usability with regard to your research question.


Computer Assisted Text Markup and Analysis (CATMA) – An undogmatic approach to corpus analysis and Germany’s literary superheroes

Do you know CATMA? Not yet? Then read the following blog post. Do you think you know CATMA? It’s still a good idea to read the following blog post…

CATMA is a web application for text markup and analysis. Its central functions are annotating, analyzing, and visualizing one or more texts. CATMA has been around since 2008.

When the implementation of the CATMA web application started in 2008, the focus was on creating a tool for digital close reading. Although it has always been possible to run some standard queries (e.g. word frequencies) automatically, the focus of the application was entirely on the annotation of single texts. Ten years later, CATMA makes it possible to store, manage, and analyse not only single texts but also bigger corpora – using both close and distant reading methods. Although CATMA can still be used for the manual annotation of single texts, in this article we want to show you how the web application can be used on corpora.

Goethe and Schiller, Germany’s literary superheroes

Let’s say you would like to explore the authorial styles of Germany’s literary superheroes, Goethe and Schiller. As you want to make the two author corpora comparable, you choose 12 dramatic and 5 prose texts by each author. Just upload those texts to CATMA and organise them into different corpora, one with texts by Goethe and one with texts by Schiller. Once your corpora have been created in CATMA, you can easily export them or, more importantly, share them with your team.

Without any further preparation you can now start analysing your corpora. Using the Analyze module, you can generate a word list with frequencies. This list will show you that there are 800,837 words (tokens) in your selection of Goethe’s texts, with 46,840 types. Schiller’s texts contain 35,351 word types and a total of 498,790 words. These numbers are not very meaningful yet because, obviously, the corpora do not have the same length. Looking at the word frequency list from a narrative perspective, the first-person pronoun “ich” (I) is probably most interesting, as it points to a predominant first-person perspective. Or is it just the dramatic characters speaking? Other highly frequent (but narratologically possibly less relevant) words in the Goethe corpus are “und” (and), “die” (the/who/which – female), “zu” (to), and “der” (the/who/which – male), much as in the Schiller corpus (albeit in the order “der”, “und”, “die”). At this top tier of frequency, there seems to be a similarity.

Word frequencies in Goethe’s and Schiller’s texts
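
For readers who want to reproduce this kind of count outside CATMA, here is a minimal Python sketch of what such a frequency list boils down to. It is only an illustration, not CATMA’s implementation, and the file names are hypothetical:

```python
from collections import Counter
import re

def frequency_list(paths):
    """Tokenize plain-text files and count word frequencies."""
    counts = Counter()
    for path in paths:
        with open(path, encoding="utf-8") as f:
            # lowercase and keep runs of (German) letters: a crude tokenizer
            counts.update(re.findall(r"[a-zäöüß]+", f.read().lower()))
    return counts

# hypothetical file names standing in for the 17 texts per author
goethe = frequency_list(["goethe_faust.txt", "goethe_werther.txt"])
print("tokens:", sum(goethe.values()), "types:", len(goethe))
print(goethe.most_common(10))  # expect "und", "die", "ich", ... near the top
```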

But how about the other end of the lexical spectrum? Goethe is known to have had a comparatively large vocabulary, so we may quickly check the least frequent words in the corpus – those that are used only once. Are there many of these hapax legomena?

In the Goethe corpus, they amount to 28,792 words altogether. Schiller uses 21,025 words only once in the entire corpus. This means that Goethe uses 61.4% of all types in his corpus only once, while Schiller uses 59.4%. Maybe this similarity suggests that Schiller, after all, closely followed Goethe in variability of word use. However, nobody will ever know how large the vocabulary of Schiller’s works might have become had he reached the same old age as Goethe…
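
These hapax counts and percentages follow directly from such a frequency list. A minimal sketch (with a toy Counter standing in for the one built in the previous snippet, so it runs on its own):

```python
from collections import Counter

# `goethe` would be the frequency Counter from the previous sketch;
# a toy stand-in keeps this snippet runnable in isolation
goethe = Counter({"und": 25000, "die": 20000, "einhorn": 1, "abendrot": 1})

hapaxes = [word for word, n in goethe.items() if n == 1]
print(len(hapaxes), "hapax legomena")
print(f"{len(hapaxes) / len(goethe):.1%} of all types occur only once")
```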

If we now turn away from the low frequencies and have another look at the high ones, we get hold of another interesting phenomenon. Among the most frequently used characters in the Schiller corpus are question and exclamation marks. They appear at ranks 4 (exclamation mark) and 8 (question mark). In the Goethe corpus they only make it to ranks 11 (exclamation mark) and 27 (question mark). A look at the distribution of exclamation and question marks in our corpora shows that the frequency of exclamation marks is always higher than that of question marks in the Goethe corpus. Schiller, on the other hand, has written five texts containing more question marks than exclamation marks. Could it be that Schiller’s authorial signature is characterized by an unusual number of questions? Especially his play “Don Carlos. Infant von Spanien” is characterized by question marks, as you can see in the CATMA small multiples view below:

Small multiples view of the distributions of question and exclamation marks in the works of Schiller
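
If you want to sanity-check such a punctuation profile outside CATMA, a few lines of Python suffice. Again, this is a sketch with hypothetical file names, not CATMA’s query mechanism:

```python
def punctuation_profile(path):
    """Count question and exclamation marks in one plain-text file."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    return {"?": text.count("?"), "!": text.count("!")}

# hypothetical file names for two of Schiller's plays
for path in ["schiller_don_carlos.txt", "schiller_die_raeuber.txt"]:
    profile = punctuation_profile(path)
    tilt = "questions" if profile["?"] > profile["!"] else "exclamations"
    print(path, profile, "-> leans towards", tilt)
```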

Our case study so far has applied some handy distant reading functions, which are often used in one way or another in stylistics. But CATMA also offers a method for what we call scalable reading. Starting off with distant reading as we did, you may now scale down a bit: simply double-clicking on a point in the distribution graph, or on a keyword in the keyword-in-context table, will take you to the specific position in the text where you find either the keyword or the accumulation of a word or tag you spotted in your small multiples distribution graphs. So you can go from the corpus, to multiple texts in one visualization, on to the single-text view, and even to the very position of one word in one text – changing dynamically from distant to close reading and back, just as you wish. This scalability also allows you to develop different kinds of interpretation, from data analysis to more context-oriented interpretation of certain passages in single texts.

I leave you here to start your own case study now, be it on the style of Goethe and Schiller or on other literary phenomena. But just before you do that, do know that there are more functions for working on corpora in CATMA, among them the automatic annotation of parts of speech (POS), as well as of verbal tense and temporal signals (in German texts).

And of course you can also use the central CATMA function, which is annotating one text, or a whole corpus, with your very own categories, and analyzing and visualizing them. CATMA is currently running as version 5.0, but CATMA 6.0 will be launched in 2019. Users can look forward to a new design of the graphical user interface, optimized workflows, and some added features.

As we focused on the corpus-specific functions of CATMA in this article, you might want to have a look at our tutorials for a complete overview of the application’s functionalities: http://catma.de/documentation/tutorials/

Have fun annotating, exploring, and scaling!