DH2022 SIG Workshop – Not FAIR yet? Infrastructures for Digital Literary Stylistics under the global looking glass

Lone autodidacts, local labs, university centres, national and transnational structures – there are many settings in which scholars pursue DLS projects. The global, diverse, and dynamic developments of the past few years alone have demonstrated that “DH infrastructures for Digital Literary Stylistics” is a timely topic.

Taking this seriously, the SIG-DLS pre-conference workshop at the international conference DH2022 in Tokyo will ask about the current forms and functions of digital infrastructures for DLS from a diverse, global perspective.

  • What types of infrastructure are used by whom to what ends? What particular functions are used? 
  • Where and why are scholars missing infrastructure?
  • And also: can there be too much emphasis on infrastructure in certain contexts?

In particular, we will try to apply the notion of FAIR (findable, accessible, interoperable, reusable) and, more broadly, of Open Data best practices – including data citation – to the DLS domain, with a critical perspective.

Research infrastructures, in particular in Europe, have provided tools and methods for the application of the FAIR principles. ERICs such as CLARIN and DARIAH offer, among other things, data deposit (with persistent identifiers for citing collections), guidance on licences, standardised metadata, support for open standards, interoperable tools and platforms, access via single sign-on, and training to help scholars develop new competences. More recently, the Computational Literary Studies (CLS) infrastructure project has been launched, as has the German national research infrastructure Text+; both aim to build a shared resource of high-quality data, tools, and knowledge to aid new approaches to studying literature in the digital age.

At the same time, several problems still prevent data sharing, citation, and reuse in DLS. These are of a legal, technical, but also disciplinary nature. While European (or national) infrastructures may offer a partial solution to these problems, can we say that this is enough? And is the model exportable?

In this workshop on 25 July 2022 we want to tackle the issues of making DLS research data FAIR, taking into account what research infrastructures (RIs) can offer, but also their intrinsic limitations. We do so by inviting three short inputs representing the perspectives of infrastructures and of researchers, followed by a discussion. Here, prompts from a fourth speaker on the Japanese and East Asian research context will address FAIR DLS from a diverse and global perspective.

We will also collect feedback from researchers prior to and during the meeting, and we envisage the possibility of a position paper on the issue after the conference.

Program


10:00 – 10:30 UTC Short provoker keynotes (confirmed speakers: Maciej Eder, Suzanne Mpouli, Andrew Piper)


10:30 – 11:15 UTC Discussion with attendees (confirmed short provoker keynote by Chifumi Nishioka: Asian Diversity in DH)


11:15 – 11:30 UTC Wrap up


11:30 – 12:00 UTC SIG’s members’ meeting


Info and registration: https://dh2022.adho.org/workshops-and-tutorials

Organizers: Berenike Herrmann, Simone Rebora, Francesca Frontini, Anne-Sophie Bories

“Tool Criticism 3.0” Workshop Moving Online!

With DH2020 Ottawa converted into a virtual conference, the ADHO Special Interest Group “Digital Literary Stylistics” (SIG-DLS) has decided to move its new Tool Criticism workshop to an online, hybrid format as well.

First phase (asynchronous)

From Tuesday 14 July, we will publish the contributions by invited speakers on the “DH2020” Humanities Commons Group (where the main conference will take place).

Speaker | Title
Quinn Dombrowski | ‘It doesn’t work’: failure and DH tools
Thierry Poibeau | It’s complicated! On Natural Language Processing Tools and Digital Humanities
Joanna Byszuk | Direct speech for multilingual corpora – some problems and one possible solution
Avery Blankenship | Coding the Bechdel Test for Nineteenth-Century Novels: Gender Classifiers and Scikit-Learn

These contributions will provide food for thought for an asynchronous discussion on Humanities Commons. The discussion thread will be open for the entire week before the official opening of the Conference.

Second phase (synchronous)

On Monday 20 July, at 3pm CEST, an online live event will take place on Zoom (at a link provided on the “DH2020” Humanities Commons Group), where invited respondents will discuss the contributions and open the discussion among participants. Here is a provisional schedule:

Time (CEST) | Activity | Respondents
15:00 – 15:15 | Welcome. SIG’s activities overview | –
15:15 – 15:45 | Paper 1 (Quinn Dombrowski) | Ted Underwood and Frank Fischer
15:45 – 16:15 | Paper 2 (Thierry Poibeau) | Christof Schöch and Francesca Frontini
16:15 – 16:30 | Virtual coffee break | –
16:30 – 17:00 | Paper 3 (Joanna Byszuk) | Suzanne Mpouli and Simone Rebora
17:00 – 17:30 | Paper 4 (Avery Blankenship) | J. Berenike Herrmann and Simone Rebora
17:30 – 18:00 | SIG members’ meeting | –

Of course, participation is free (you just have to register for the Conference and for Humanities Commons), so we are simply waiting for you to join us!

The SIG-DLS Tool Inventory

Let’s create our inventory of Digital Literary Stylistics methods and tools!

This summer at DH2018, the members of SIG-DLS present came up with some new activities for the new season. Among these was a DLS Tool Inventory of our own, reflecting our practice, with tested and tried methods, software, and tools.

Just a spreadsheet… Will it become magic?


Well, here we are: happy to share with you all a spreadsheet that will hopefully become truly awesome. Let’s create the magic together!

Please go ahead and enter a tool/method that you know well in a one-line report. Note that these should be tools/methods that are real and recent – tested and tried within the last five years. Please don’t enter reports on tools you created yourself. Where possible, your one-line report should also include a brief and constructive assessment of the strengths and weaknesses of the tool/version you used, and of its usability with regard to your research question.


Computer Assisted Text Markup and Analysis (CATMA) – An undogmatic approach to corpus analysis and Germany’s literary super heroes

Do you know CATMA? Not yet? Then read the following blog post. Do you think you know CATMA? It’s still a good idea to read the following blog post…

CATMA is a web application for text markup and analysis. Its central functions are the annotation, analysis, and visualization of one or more texts. CATMA has been around since 2008.

When implementation of the web application CATMA started in 2008, the focus was on creating a tool for digital close reading. Although it has always been possible to run some standard queries automatically (e.g. word frequencies), the application focused entirely on the annotation of single texts. Ten years later, CATMA makes it possible to store, manage, and analyse not only single texts but also larger corpora – using both close and distant reading methods. Although CATMA can still be used for the manual annotation of single texts, in this article we want to show you how the web application can be used on corpora.

Goethe and Schiller, Germany’s literary super heroes

Let’s say you would like to explore the authorial styles of Germany’s literary super heroes, Goethe and Schiller. To make the two author corpora comparable, you choose 12 dramatic and 5 prose texts by each author. Just upload those texts to CATMA and organise them into different corpora, one with texts by Goethe and one with texts by Schiller. Once your corpora have been created in CATMA, you can easily export them or, more importantly, share them with your team.

Without any further preparation you can now start analysing your corpora. Using the Analyze module you can generate a word list with frequencies. This list will show you that there are around 800,837 words in your selection of Goethe’s texts, with around 46,840 types. Schiller’s texts contain 35,351 word types and a total of 498,790 words. These numbers are not very meaningful yet, because, obviously, the two corpora do not have the same length. Looking at the word frequency list from a narrative perspective, the first-person pronoun “ich” (I) is probably most interesting, as it points to a predominant first-person perspective. Or is it just the dramatic characters speaking? Other highly frequent (but narratologically probably less relevant) words in the Goethe corpus are “und” (and), “die” (the/who/which – feminine), “zu” (to), and “der” (the/who/which – masculine), much as in the Schiller corpus (albeit in the order “der”, “und”, “die”). At this top tier of frequency, the two authors look similar.

Word frequencies in Goethe’s and Schiller’s texts

But what about the other end of the lexical spectrum? Goethe is known to have had a comparatively large vocabulary, so we may quickly check the least frequent words in the corpus – those that are used only once. Are there many of these hapax legomena?

In the Goethe corpus they add up to 28,792 words altogether. Schiller uses 21,025 words only once in the entire corpus. This means that Goethe uses 61.4% of all types in his corpus only once, while Schiller uses 59.4% of all types in his. Maybe this similarity suggests that Schiller, after all, closely followed Goethe in variability of word use. However, nobody will ever know how large the vocabulary of Schiller’s works might have become had he reached the same old age as Goethe…
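CATMA computes these figures for you, but the underlying arithmetic is easy to reproduce outside the application. Here is a minimal Python sketch – not CATMA code; the folder names and the crude tokenizer are our own simplifying assumptions, so the exact counts will differ from CATMA’s:

from collections import Counter
from pathlib import Path
import re

def corpus_stats(folder):
    # Read all .txt files in the folder as one corpus.
    text = " ".join(p.read_text(encoding="utf-8") for p in Path(folder).glob("*.txt"))
    tokens = re.findall(r"\w+", text.lower())  # crude word tokenizer
    freqs = Counter(tokens)
    hapaxes = sum(1 for count in freqs.values() if count == 1)
    return len(tokens), len(freqs), hapaxes

for author in ("goethe", "schiller"):  # hypothetical folder names
    n_tokens, n_types, n_hapaxes = corpus_stats(author)
    print(f"{author}: {n_tokens} tokens, {n_types} types, "
          f"{n_hapaxes} hapaxes ({n_hapaxes / n_types:.1%} of all types)")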

If we turn away from the low frequencies now and have another look at the high ones, we come across another interesting phenomenon. Among the signs Schiller uses most frequently are question and exclamation marks: they appear at ranks 4 (exclamation mark) and 8 (question mark). In the Goethe corpus they only make it to ranks 11 (exclamation mark) and 27 (question mark). A look at the distribution of exclamation and question marks in our corpora shows that in the Goethe corpus the frequency of exclamation marks is always higher than that of question marks. Schiller, on the other hand, wrote five texts that contain more question marks than exclamation marks. Could it be that Schiller’s authorial signature is characterized by an unusual number of questions? Especially his play “Don Carlos. Infant von Spanien” is characterized by question marks, as you can see in the CATMA small multiples view below:

Small multiples view of distributions of question- and exclamation marks in the works of Schiller
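To give a rough idea of what such a distribution view computes, here is a hedged Python sketch – again not CATMA’s implementation, and the filename is hypothetical – that reports the relative frequency of question and exclamation marks in ten equal-sized segments of a text:

def punct_profile(text, n_segments=10):
    # Relative frequency of '?' and '!' per equal-sized segment.
    size = max(1, len(text) // n_segments)
    segments = [text[i * size:(i + 1) * size] for i in range(n_segments)]
    return [(seg.count("?") / len(seg), seg.count("!") / len(seg))
            for seg in segments if seg]

with open("don_carlos.txt", encoding="utf-8") as f:  # hypothetical file
    for i, (q, e) in enumerate(punct_profile(f.read()), start=1):
        print(f"segment {i:2}: ? {q:.4%}  ! {e:.4%}")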

Our case study so far has applied some handy distant reading functions, which are often used in one way or another in stylistics. But CATMA also offers a method for what we call scalable reading. Starting off with distant reading as we did, you may now scale down a bit: simply double-clicking on a point in the distribution graph, or on a keyword in the keyword-in-context table, takes you to the specific position in the text where you find the keyword or the accumulation of a word or tag you spotted in your small multiples distribution graphs. So you can go from corpus to multiple texts in one visualization, on to single text view, and even to the very position of one word in one text – changing dynamically from distant to close reading and back, just as you wish. This scalability also allows you to develop different kinds of interpretation, from data analysis to more context-oriented interpretation of particular passages in single texts.

I leave you here to start your own case study now, be it on the style of Goethe and Schiller or on other literary phenomena. But before you do, know that there are more functions for working on corpora in CATMA, among them automatic annotation of parts of speech (POS tagging), as well as of verbal tense and temporal signals (in German texts).

And of course you can also use the central CATMA function, which is annotating one text, or a whole corpus, with your very own categories, and analyzing and visualizing them. CATMA is currently running as version 5.0, but CATMA 6.0 will be launched in 2019. Users can look forward to a new design of the graphical user interface, optimized workflows, and the addition of some features.

As we focused on the corpus-specific functions in CATMA in this article, you might want to have a look at our tutorials for a complete overview of the application’s functionalities: http://catma.de/documentation/tutorials/

Have fun annotating, exploring, and scaling!

Harry Potter, computational fun and sexy gains

About the workshop on the Recreation of Harry Potter, led by Mike Kestemont and Enrique Manjavacas, developed in cooperation with Greta Franzini and Marco Büchler, and endorsed by the SIG-DLS; held on 25 June 2018 as part of the Digital Humanities 2018 conference in México City.

By Corina Koolen

Harry Potter novels: not the first topic one might think of when performing stylometry. But, as Mike Kestemont argues, popular literature is one of the unjustly overlooked areas of digital literary studies, which still often focuses on the classics. With the workshop he and Enrique Manjavacas (many thanks for all your help!) ran during the international Digital Humanities Conference 2018 in México City, they show us exactly why this neglect is undeserved. I am impressed by the setup they chose, which combines computational analyses with thoughtful reflection on a number of humanistic issues and interests.

One of them is the legal aspect of researching contemporary novels. After a discussion of the differences per country, where the basic conclusion is that it is very hard to determine what exactly is legal in which country, Kestemont remarks: “We always talk about the author’s rights. I believe I have rights, too, as a researcher.” I could not agree more; this is going to be ammunition for legal discussions in the future.

Then the stylometry. Stylometrists are generally known for performing authorship attribution. J.K. Rowling herself found out about the discipline when she was unmasked by Patrick Juola as the real author behind Robert Galbraith. There are other cool things to do with stylometry, however. And this is where it becomes interesting for me as a researcher of popular literature: we are going to look at stylistic similarity between the novels and HP fan fiction. There are two cases that we will test, both of which follow the original structure of the novels. The first case is Aidan Chase. This fan fiction author, about whom little biographical information is available, reframed the originals to create a story world in which Harry’s parents never died, but stayed true to the main story line. The second is Norman G. Lippert. He created new novels based on the original characters, because his children were so disappointed that the series had ended. And indeed, the tools we apply show that Lippert deviates more from the originals than Chase. Using text-matcher by Jonathan Reeve, a not-yet-completely-polished but nonetheless impressive tool, in combination with the visualisation library Bokeh, it is possible to visualize the overlap (see image), including the opportunity to make a line-by-line comparison of sentences with similar word usage. Chase turns out to copy large parts of the originals literally, showing how we can also apply stylometric methods to examine intertextuality, including the reuse of materials.


Bokeh visualizes the overlap between two texts – size and location in the text. Credit: Christof Schöch
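text-matcher’s own interface is still evolving, so rather than reproduce its API, here is a toy Python sketch of the underlying idea – sliding word n-grams over two texts and treating shared n-grams as candidate reuse (both filenames are hypothetical):

import re

def shared_ngrams(text_a, text_b, n=5):
    # Word n-grams occurring in both texts: a crude proxy for literal reuse.
    def ngrams(text):
        words = re.findall(r"\w+", text.lower())
        return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(text_a) & ngrams(text_b)

original = open("philosophers_stone.txt", encoding="utf-8").read()
fanfic = open("chase_fanfic.txt", encoding="utf-8").read()
matches = shared_ngrams(original, fanfic)
print(f"{len(matches)} shared 5-grams, e.g.", sorted(matches)[:3])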

It gets even more interesting when we start to examine a larger body of Harry Potter fan fiction. Kestemont uses the database Archive of Our Own to mine metadata and texts of fan fiction novels. This gives the researcher information that is rarely as easily accessible as it is here: who are the main characters, who is in a relationship with whom, which fandoms does the author choose to include — fan fiction authors provide this metadata because potential readers use it to select fiction by the relationships and fandoms they prefer. The creativity is astounding: Lord of the Rings combined with Harry Potter is not as rare as I had expected, and there are TV shows being crossed with Harry Potter that no one in the room had ever heard of, and that appear not even to be decently Google-able.

When we dive into the contents of these fan fiction stories, of course, there is sex. Lots of sex. (I for one never thought I would hear the term ‘elf porn’ in an academic context.) A computational topic model of fan fiction versus the original novels shows that topics specific to fan fiction are pornography, transportation, and modern technology — that last one, interestingly, is also a topic the Riddle project team found to be more typical (pdf) of ‘popular fiction’ as opposed to ‘literary’ fiction. But apart from the giggles the porn topic generates, it also shows something about how readers engage with characters. ‘Slash’ is, as fanfic researchers know, an important genre within fan fiction. Central characters, especially male ones, are paired in a romantic and often pornographic relationship. One of the first pairings was Kirk/Spock; the ‘/’ sign gives slash its name. Kestemont focuses on how attention to certain characters deviates from the original novels. Draco Malfoy and Severus Snape, for instance, are much more present in fan fiction than in the Harry Potter novels, whereas Ron Weasley shows the opposite pattern. As Kestemont states: it gives us the possibility to research reception in a different way, to see how readers/writers engage with the original materials and characters.
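Kestemont’s exact pipeline was not part of the hand-out, but the kind of comparison described here can be sketched with scikit-learn’s standard topic-modelling components (the folder name and parameter choices are our own hypothetical assumptions):

from pathlib import Path
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [p.read_text(encoding="utf-8") for p in Path("fanfic").glob("*.txt")]
vectorizer = CountVectorizer(max_features=5000, stop_words="english")
counts = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=20, random_state=0).fit(counts)
vocab = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top_words = [vocab[i] for i in topic.argsort()[-8:][::-1]]
    print(f"topic {k:2}:", ", ".join(top_words))

Fitting such a model jointly over fan fiction and the originals, and then comparing document-topic proportions between the two corpora, is one way to see which topics are specific to which corpus.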

That possibility of researching reception differently is, I would stress, an important academic outcome of this workshop, but I would like to end on another: fun. What always strikes me about Mike Kestemont’s work is the joy he appears to get from his materials, the humour he brings to it. As he stresses, this is enhanced by working with bright and motivated colleagues such as co-tutor Enrique Manjavacas. But it is also partially explained by the type of material. As popular fiction is as much part of our cultural heritage as Great Literature is, this serves a dual purpose. First, we get a better view of fiction in general. Second, with that fun and humour comes creativity — something we could use a little more of every now and then. Because that, I think, is where the magic happens. Bombarda Maxima!

A missing president?

Analysis of The President is Missing, by James O’Sullivan of University College Cork.

The answer is quite conclusive: ‘The accompanying graph represents the novel on the x-axis, broken into segments: the thicker the bottom line, the more certain the proximity to the relevant author’s style. Considering Patterson’s fingerprint is represented by green, it is plain to see that, contrary to our previous study, this is a co-authored novel in which he was the scribe.’

Of course, as O’Sullivan himself points out, we need to stay wary of where the genre signal comes in to complicate matters (no fictional writing of Clinton’s could be used for the study, simply because there isn’t any). However, there is good reason to assume that the authorial signal here trumps the genre signal.
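For readers unfamiliar with this kind of rolling attribution: the novel is cut into consecutive segments, and each segment is compared to authorial ‘fingerprints’ built from most-frequent-word frequencies. The following is a deliberately simplified Python sketch of that idea – not O’Sullivan’s actual setup, which used established stylometric tooling – with hypothetical filenames and a plain Manhattan distance standing in for proper Delta-style measures:

import re
from collections import Counter

def mfw_profile(text, vocab):
    # Relative frequencies of the chosen most-frequent words.
    words = re.findall(r"\w+", text.lower())
    freqs, total = Counter(words), max(len(words), 1)
    return [freqs[w] / total for w in vocab]

def distance(p, q):
    # Manhattan distance; Delta-style measures additionally z-score the frequencies.
    return sum(abs(a - b) for a, b in zip(p, q))

patterson = open("patterson_solo.txt", encoding="utf-8").read()
clinton = open("clinton_nonfiction.txt", encoding="utf-8").read()
novel = open("president_is_missing.txt", encoding="utf-8").read()

vocab = [w for w, _ in Counter(re.findall(r"\w+", (patterson + clinton).lower())).most_common(100)]
prof_p, prof_c = mfw_profile(patterson, vocab), mfw_profile(clinton, vocab)

size = len(novel) // 20
for i in range(20):  # twenty consecutive segments of the novel
    seg = mfw_profile(novel[i * size:(i + 1) * size], vocab)
    closer = "Patterson" if distance(seg, prof_p) < distance(seg, prof_c) else "Clinton"
    print(f"segment {i + 1:2}: closer to {closer}")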

Finally, O’Sullivan points out another important dimension of that particular collaboration: that of the market. “What better way to sell a book, than to have a mogul of commuter fiction combine with a former US president?”

Interestingly, according to both Lane and O’Sullivan, it is the former US president who raises his voice towards the end of the novel, contributing a finishing touch or, rather, a finishing strike to the tale:

“…there’s the chutzpah with which Clinton (Patterson, I would suggest, may have stepped aside at this stage) waits until the twilight of the novel and then, like Tolstoy, squares his shoulders and expounds, in fiction-free form, his politico-historical thoughts.” (Lane)

… and this is what it sounds like:

“I want the United States to be free and prosperous, peaceful and secure, and constantly improving for all generations to come.”

I’d like to conclude that the current state of world history is clearly up for political debate, while the novel is naturally the subject of an aesthetic one. Meanwhile, O’Sullivan’s article is a great stylometry story – and we look forward to more of those.

Distant Reading for European Literary History

A few months ago, our COST Action Distant Reading for European Literary History (CA16204) was approved and started running. Even more recently, our Working Group ‘Scholarly Resources’ – the Action’s corpus backbone – met in Prague for the first time! We are very excited about getting started and would like to briefly report on our work in this blog.

The COST Action Distant Reading aims to “develop the resources and methods necessary to change the way European literary history is written” (Memorandum of Understanding). To approach this goal, the Action’s Working Group ‘Scholarly Resources’ aims to create a big, open-source benchmark corpus of novels from 1850 to 1920, called ELTeC (European Literary Text Collection). During the course of the Action, we will examine ELTeC with different computational distant reading methods such as topic modelling, authorship attribution, network analysis, stylistic analysis, and different types of character analysis.

The Action’s Working Group ‘Scholarly Resources’ will coordinate the task of creating ELTeC. It consists of members from all over Europe, from literary studies, computational and corpus linguistics, and information science. The COST Action is a great opportunity to work collaboratively with researchers and to bring together expertise for different languages as well as computational methods.

A challenging task has been defining the corpus selection criteria, a common annotation model, and potential workflows for creating an ELTeC that can accommodate European novels from different languages and various publication contexts.

During our first Working Group meeting, we developed corpus selection and balancing criteria that follow a simple but consistent corpus design approach and represent the variety of novels in this period. Among other things, we do not want to rely solely on normative, canon-based selection criteria and thereby bias our understanding of the novel. The focus of the ELTeC encoding scheme is to uniformly represent historical novels from different languages with basic TEI markup. This standard markup is geared towards applying different types of distant reading methods rather than representing the original text structure. ELTeC and the encoding scheme will be freely available at our GitHub organization.

In this way, we would like to make a substantial contribution to the digital basis for cross-national and cross-language analysis of European literature!

The CLiC web app – a corpus tool for studying literary texts

There are already loads of digital humanities tools out there, but they don’t necessarily focus on the literary linguistic analysis of texts. In this post we introduce the CLiC web app, a corpus tool that has been specifically designed to address research questions in stylistics.

In addition to standard corpus tool functionalities, CLiC (Corpus Linguistics in Context) allows the user to restrict searches to text within or outside of quotation marks. As of March 2018, CLiC contains over 130 books, available via four corpora: the corpus of Dickens’s novels (DNov), a 19th-century reference corpus (19C), a corpus of 19th-century children’s literature (ChiLit) – compiled for the GLARE project, which explores gender in children’s literature over time – and Additional Requested Texts (ArTs). We created the ArTs corpus in response to requests by users, particularly teachers who want to use CLiC in their teaching. With ArTs, we added more books that are set texts for the British exam specifications (GCSE and A-Levels).

The poster gives you a quick overview of the main functions in CLiC. One feature specific to CLiC is the ‘KWICGrouping’ of results: the KWICGrouper makes it possible to semi-automatically group concordance lines by searching their context. The lines with the most matches (i.e. shared patterns) are highlighted and moved to the top of the concordance. (For a demonstration of the KWICGrouper in action you can watch a video tutorial; although this video is based on an older version of CLiC, 1.5, the current KWICGrouper still works on the same principle; also see O’Donnell 2008.) For a more detailed description of the interface, please refer to the User Guide in the CLiC Help menu.

CLiC Poster SIG-DLS blog

Here we focus on a concordance of the word chin, a body part noun, to illustrate the value of concordancing when looking for patterns in a text. Body part nouns are useful starting points for finding descriptions of body language in novels. On the one hand, patterns of body language can relate to wider cultural behaviours (for an example, see our blog post about the ‘fireplace pose’). On the other hand, body language can also contribute to characterisation. Overall, there are 317 instances of the word chin in Dickens’s novels. In order to look for character-specific patterns, we might want to search for a character name. For example, the Major in Dombey and Son co-occurs three times with chin. The other option is to explore patterns of body language and check which characters they occur with.

Using the KWICGrouper we can go through the list of words in the context of chin and select another body part word, such as eyes, in order to identify patterns of co-occurring body parts. In this case, just by adding the possessive her to the KWICGrouped terms we retrieve a pattern that is unique to one character. The resulting concordance on the poster shows that all instances of chin with her and eyes come from the novel Our Mutual Friend, and indeed all describe Miss Jenny Wren, who is also called “the person of the house”:

‘And she always has the use of this room for visitors,’ said the person of the house, screwing up one of her little bony fists, like an opera-glass, and looking through it, with her eyes and her chin in that quaint accordance. ‘Always this room for visitors; haven’t you, Lizzie dear?’

(Our Mutual Friend, Charles Dickens, Chapter 18)
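To make explicit what the KWICGrouper is doing, here is a minimal Python sketch – not CLiC’s implementation, and without its handling of quoted versus non-quoted text; the filename is hypothetical – that builds concordance lines for chin and ranks them by how many of the chosen context terms they contain:

import re

def kwic(text, node, window=10):
    # Concordance lines: `window` words of context on either side of the node word.
    words = re.findall(r"\w+", text.lower())
    return [words[max(0, i - window):i + window + 1]
            for i, w in enumerate(words) if w == node]

def kwic_group(lines, terms):
    # Sort concordance lines by the number of matching context terms, most first.
    scored = [(sum(term in line for term in terms), line) for line in lines]
    return sorted(scored, key=lambda pair: -pair[0])

text = open("our_mutual_friend.txt", encoding="utf-8").read()
for score, line in kwic_group(kwic(text, "chin"), {"her", "eyes"})[:5]:
    print(score, " ".join(line))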

Our example shows how the KWICGrouper can be used to highlight patterns across novels and provide examples for themes such as body language. A more detailed description of this “chin” case study can be found in the KWICGrouper section (I.4) of the CLiC Activity Book. The Activity Book contains a basic introduction to corpus linguistics and a collection of activities for teaching at secondary school and undergraduate level. Further news related to the project, case studies using CLiC and teaching materials are regularly posted to our CLiC Dickens Blog. We would be interested to hear about how you use CLiC (or what we could improve) and are always happy to consider related guest posts for the blog. Please get in touch via email or Twitter.

Written by Viola Wiegand and Michaela Mahlberg

The CLiC web app was initially developed as part of the AHRC-funded CLiC Dickens project at the Universities of Birmingham and Nottingham (grant reference AH/P504634/1). Please cite Mahlberg et al. (2016) when you use CLiC in your work.


Macroetym: a Command-Line Tool for Macro-Etymological Textual Analysis

Consider the set of words “kingly,” “royal,” and “regal.” The words are near-synonyms, but there are important distinctions between them, owing to their etymology: “kingly” is of Anglo-Saxon origin, while “royal” is descended from French, and “regal” from Latin. Macro-etymological text analysis quantifies the etymologies of every word in a text, and uses this as a proxy measurement of the tone, voice, genre, or level of discourse of the text. English texts with high proportions of Latinate words, for instance, are frequently non-fiction works, while those with high proportions of Germanic words are often works of fiction. Correlations like these also exist within a single literary work: Bildungsromans like James Joyce’s A Portrait of the Artist as a Young Man often begin with low proportions of Latinate words, and end with high proportions.

Macroetym is a command-line tool, written in Python, that provides verbose macro-etymological analyses of single texts, is capable of comparative analysis of multiple texts, and works for texts in any language that is widely represented in Wiktionary. It adheres to the Unix Philosophy, meaning that it produces machine-readable output, and is therefore interoperable with other command-line tools. When used in conjunction with a tool like chapterize, which breaks a novel into chapters, it can be used to identify changing language patterns over the narrative time of a novel or similar work.

To use it, simply type macroetym followed by the text file you want to analyze. Wildcard patterns (*.txt) can also be used to comparatively analyze a group of texts. To analyze Virginia Woolf’s Mrs Dalloway, use the command:

$ macroetym woolf-mrs-dalloway.txt --allstats

Which provides the output:

                                woolf-mrs-dalloway.txt
Anglo-Norman                                      7.23
Angloromani                                       0.03
Arabic                                            0.06
Aragonese                                         0.06
Dutch                                             0.29
Dutch, Middle (ca. 1050-1350)                     0.27
English, Old (ca. 450-1100)                      36.56
French                                            7.81
French, Middle (ca. 1400-1600)                    3.96
French, Old (842-ca. 1400)                       21.58
German                                            0.06
German, Middle Low                                0.12
... etc.

To compare the proportions of Latinate words among books in John Milton’s poem Paradise Lost, use the command:

$ macroetym paradise-lost-books/* --showfamilies Latinate        

Which provides the output:

          bk/book01.txt  bk/book02.txt  bk/book03.txt  bk/book04.txt
Latinate      52.622816      56.005313      52.644493      50.522588   

          bk/book05.txt  bk/book06.txt  bk/book07.txt  bk/book08.txt
Latinate      55.929858      56.608863       51.46886      54.492665   

          bk/book09.txt  bk/book10.txt  bk/book11.txt  bk/book12.txt  
Latinate      53.625632      54.745275      50.982633      52.195609  

Many more functions are available. See macroetym --help for more.
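Returning to the chapterize combination mentioned above: to trace Latinate proportions across a novel chapter by chapter, a pipeline along these lines should work (the exact name of the directory that chapterize writes its chapter files to may differ):

$ chapterize woolf-mrs-dalloway.txt
$ macroetym woolf-mrs-dalloway-chapters/* --showfamilies Latinate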

Pilot studies that use macroetym include:

Macroetym is free and open-source software. To download the program, report a bug, or contribute code to the project, visit the project’s GitHub repository.

DLS Resources: Journal of Cultural Analytics

Today, we’re beginning our series on DLS resources with a very brief introduction to the Journal of Cultural Analytics (CA).

CA is an open-access journal dedicated to the computational study of culture. The journal currently features three sections: Articles, reporting peer-reviewed new scholarship; Data Sets, which facilitate cultural studies and are accompanied by discussions; and Debates, offering interventions into current discussions surrounding the computational analysis of culture. The Clusters section provides an overview of the various themes and special issues addressed by the journal.

CA’s latest special issue is organized by the NovelTM research team. Its theme is “Identity.” Articles for this issue are posted successively, starting with the Introduction by Susan Brown and Laura Mandell and an article on The Transformation of Gender in English-Language Fiction by Ted Underwood, David Bamman, and Sabrina Lee.

 

Doing Digital Literary Stylistics!

SIG-DLS Resources and Events

On this blog, SIG-DLS members publish posts on resources and events relevant to our SIG. The posts introduce best practices, data and tools, but also journals, research groups, blogs, initiatives (including pedagogical ones), as well as reports of events.

So far there are two types of posts:
– Resources: Posts on resources such as tools, corpora, or coding manuals give a short narrative account of what a particular resource is and how it relates to DLS, possibly including a use case, in 200-500 words.
– Event Reports: Posts that report on conferences, workshops, courses, and so on, addressing issues that are important, interesting, or difficult for our group, as well as its values and priorities.

Would you like to introduce your resource in the SIG-DLS blog or report on a digital literary stylistics event? Please let us know: berenike[dot]herrmann[at]unibas[dot]ch