Midst: Poetry in Process
The heart of Midst is a poetry journal, to be launched in 2019 before the conference, that displays poems as interactive timelapses, so that readers can see exactly how each poem was written, from the blank page through the entire revision process to the final draft. To accomplish this, we’ve built the Midst app, a word processor that privately and securely tracks the writing process, capturing everything that’s typed, deleted, and copied or pasted, as well as formatting and font changes. Finished poems will be displayed on the Midst website with “timelines.” Readers will see the finished poem by default, as in any literary journal, but will then be able to click and drag the timeline’s playhead to see every step of the writing process. Play/pause controls will also allow the poem to be “played” as a stop-motion video, with optional audio narration from the poet explaining their thought process (like a “director’s cut” with commentary). The website will offer additional features as well: short essays on process from each poet; timestamps showing the pace of editing, making the labor of writing transparent to the reader for the first time; and, eventually, a community forum where anyone will be able to upload and share their own poems and timelines.
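To illustrate the underlying idea (a sketch only; the field names and event format below are hypothetical, not Midst’s actual data model), a writing session can be stored as a log of timestamped edit events, and the draft at any playhead position can be reconstructed by replaying the log up to that moment:

```r
# A minimal sketch of timelapse playback, assuming edits are stored as a
# log of timestamped events. The format here is illustrative, not Midst's.
events <- data.frame(
  time = c(0, 4, 9, 15),              # seconds since the session started
  pos  = c(1, 9, 9, 9),               # character position of the edit
  del  = c(0, 0, 5, 0),               # number of characters deleted
  ins  = c("The sea ", "waits", "", "sings"),
  stringsAsFactors = FALSE
)

# Reconstruct the draft as it looked at playhead position t
draft_at <- function(t) {
  text <- ""
  for (i in which(events$time <= t)) {
    e <- events[i, ]
    text <- paste0(substr(text, 1, e$pos - 1),                 # before edit
                   e$ins,                                      # insertion
                   substr(text, e$pos + e$del, nchar(text)))   # after edit
  }
  text
}

draft_at(5)   # "The sea waits"
draft_at(10)  # "The sea "      (after "waits" is deleted)
draft_at(20)  # "The sea sings" (the final draft)
```

Storing edits rather than snapshots keeps the log compact while still letting the playhead scrub to any intermediate state of the poem.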
Our goal is to account for the impact of digital technologies on the writing process, demonstrating the diverse strategies contemporary poets use to write and edit digitally. Over time, Midst will constitute a digital archive offering unprecedented insight not just into the products of writers who use computers (finished poems) but into their writing process: the digital equivalent of a notebook filled with drafts. In addition to this digital humanities archive and its obvious function as a literary journal, Midst will serve as an educational resource, illuminating the writing process for scholars and students of literature and, crucially, dispelling harmful myths about writing poetry. We’ll show how writing and editing, particularly in digital media, happen in parallel; how these processes are nonlinear; and how form and content emerge simultaneously and influence each other during the writing of a poem. Midst “tears down the walls” between scholarship and art, between readers and writers, between poetry and the public. Our primary goal is simply to make poetry (what it is and how it’s made) more accessible to everyone.
Midst is a collaboration between poet Annelyse Gelman, currently an MFA fellow at the Michener Center for Writers at UT Austin, and programmer Jason Grier.
Annelyse Gelman (University of Texas at Austin)
Jason Grier
The Importance of Reflection: A Call for Slow Digital Humanities
Most of the discourse around slow digital humanities has focused on the process of building projects, or on the methods of the practitioner. While this attention to deliberate choices and values is an important part of our work as scholars, it does not take into account the reader’s relationship to a project. Reflection is a key aspect of humanistic thought and learning, so why has it been ignored in digital humanities? In order to reflect, the reader needs time, as well as space, to slow down. By examining different ways of designing digital humanities projects to be slower, such as their uses of time, physical space, and interaction, we will look at how we can encourage reflection from readers.
A slower experience allows time to play a role in knowledge production. Readers will have a different relationship with a project they spend six hours with than with one they spend only five minutes with. By slowing down the interaction, we allow layers of thought to build up and meaning-making to happen “at a human pace” (Fullerton, 2019). How we think about and build our projects affects how our readers interact with them. When we think about how to convey ideas to a reader, we must consider how to use time to our advantage to convey our themes. We should know from the start how we are designing our projects to best make use of reflection and immerse our readers in our ideas.
Claudia Berger (Pratt Institute)
Sentiment Analysis Methods in Translation
Sentiment analysis, or opinion mining, a method traditionally applied to product reviews and marketing, has recently been adopted for the computational analysis of literary texts (Jockers). In principle, this methodology consists of assigning a positive or negative valence, derived from a “bag of words,” to the words or sentences of a text in order to study the progression of sentiment throughout it. This progression represents the passage of time and, in novels, the narrative plot.
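As a minimal sketch of this kind of lexicon-based scoring (with a toy lexicon standing in for a real dictionary such as NRC), each sentence’s valence is simply the sum of the valences of its words:

```r
# A toy "bag of words" valence lexicon; real studies use dictionaries
# such as the NRC lexicon, with thousands of entries.
lexicon <- c(love = 1, joy = 1, dark = -1, grief = -1)

# Score a sentence as the sum of the valences of its known words
score_sentence <- function(sentence) {
  words <- tolower(unlist(strsplit(sentence, "[^[:alpha:]]+")))
  sum(lexicon[words], na.rm = TRUE)  # words not in the lexicon contribute 0
}

sentences <- c("Grief filled the dark house.", "Then love and joy returned.")
valences  <- vapply(sentences, score_sentence, numeric(1))  # -2, then 2
plot(valences, type = "l", xlab = "narrative time", ylab = "valence")
```

Plotting these per-sentence scores in order yields the sentiment trajectory that, smoothed, is read as the shape of the plot.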
As with most digital analysis methodologies and experiments of recent years, the sentiment dictionaries, workflows, and test corpora have been developed in and for English. On a few occasions, the research even includes works translated into English (Underwood 2019). In most cases, the use of these tools in other languages requires adaptation.
In this talk, I will show the results of a three-dimensional mid-distance reading of literary texts in Spanish using the Syuzhet package in R. First, I will present the analysis of the original text with the available version of the NRC sentiment dictionary. Then, I will run the original English dictionary on the same work in its published translation, as well as on an (unreviewed) machine translation. As a point of contrast, I will run the same test on a text in English with its human and machine translations into Spanish. Preliminary results on *La gaviota* (1849) by Böhl de Faber, *Pepita Jiménez* (1874) by J. Valera, *The Swan of Villamorta* (1885) by E. Pardo Bazán, *Frankenstein* (1818) by M. Shelley, and *David Copperfield* (1850) by Dickens show that micro-level results change but the overall, macro-level narrative plot is unaffected. *Marianela* (1878) by B. Pérez Galdós, *The Froth* (1890) by A. Palacio Valdés, *One Hundred Years of Solitude* (1967) by G. García Márquez, and *The Handmaid’s Tale* (1985) by M. Atwood, however, show distinct results at both the micro and the macro level in the two languages, raising questions such as: Is it sufficient to generate raw translations of English datasets in order to conduct the same tests in Spanish, or should we generate our own datasets and methods? What effect do norms on punctuation have on this type of text analysis? How do informal expressions, which call for clearly different vocabulary to express the same emotion, affect the results of this method? And, as a consequence, how sound is the practice of relying on translations when testing methods developed for English?
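A minimal sketch of this comparison, assuming the syuzhet package (the file names below are placeholders, not the actual corpus files, and the `language` option is how syuzhet exposes the NRC lexicon’s non-English translations):

```r
# Compare the sentiment trajectory of a Spanish original against its
# English translation. File names are placeholders for illustration.
library(syuzhet)

# Spanish original, scored with the Spanish translation of the NRC lexicon
es_sents <- get_sentences(get_text_as_string("marianela_es.txt"))
es_vals  <- get_sentiment(es_sents, method = "nrc", language = "spanish")

# Published English translation, scored with the original English NRC lexicon
en_sents <- get_sentences(get_text_as_string("marianela_en.txt"))
en_vals  <- get_sentiment(en_sents, method = "nrc", language = "english")

# Normalize both trajectories onto a shared narrative-time axis (DCT
# smoothing), so the macro-level plot shapes can be compared directly
es_shape <- get_dct_transform(es_vals, scale_range = TRUE)
en_shape <- get_dct_transform(en_vals, scale_range = TRUE)

plot(es_shape, type = "l", xlab = "narrative time", ylab = "scaled sentiment")
lines(en_shape, lty = 2)  # dashed line: the translation's trajectory
```

Divergence between the two smoothed curves is what signals a macro-level disagreement of the kind described above, while sentence-by-sentence differences in the raw vectors capture the micro level.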
The ultimate goal of this presentation is thus twofold. On the one hand, I show the possibilities of sentiment analysis for literary works in Spanish. Most importantly, however, I show the need to break the tools before trusting them: I investigate the implications of relying on translation for text analysis by studying the differences in results when applying a translated version of the sentiment dictionary to original works, as well as when applying the original dictionary to works translated from other languages.
Jennifer Isasi (University of Texas at Austin)