Keynote Address

Lisa Gitelman (New York University)

Beginning with the computational contexts within which the term metadata was initially deployed, this talk first addresses ways that the idea may have achieved its belated power within the so-called archival turn and then explores its continued currency. If the notion of the archive can point us toward questions of power, truth, and fiction, then the concept of metadata stands to call our attention to matters of control. While suggesting the fantasy of a total description or a total ontology of information resources, the metadata concept helps to support a particular epistemic frame—vernacular, trenchant, inescapable—in which finding ostensibly equals knowing.


Bridget Whearty (Binghamton University)
Ruth Carpenter (Binghamton University)

University of Glasgow, Hunter MS 5 (a 15th-century copy of Lydgate’s Fall of Princes) has been defaced in strikingly queer ways. On fol. 197r, a reader rewrote the story of Pope Joan, changing all she/her pronouns to he/his, before scribbling out the entire thing. This paper begins with my attempts to digitally recover those revisions beneath the blanket obliteration of this queer medieval figure.

Using MS Hunter 5 as a launching point, I move on to explore the challenges of queering codicology in a BHDH context. As a queer medievalist, I love the clever defiance of queering the materiality of medieval books; but as a manuscript scholar, I recoil from readings that claim as “especially queer” practices that are in fact standard in manuscripts. As a digital theorist, I love the “make it unique, defy classifications” ethos championed in some queer+digital+book history work; but as a data curator, nonstandard metadata worries me, since norm-defying data can push queer subjects out of the visibility gained through interoperable data standardization. Ultimately, my goal is not to force peace between any (perhaps insurmountable) differences but, building on Heather Love, to tease out the fruitful messiness that can occur when digital medieval manuscript studies presses into queer BHDH.

Heidi Craig (Texas A&M)
Laura Estill (St. Francis Xavier)
Kris L. May (Texas A&M)

Our paper posits a framework for trans-inclusive (digital) bibliography, describing its necessity and challenges. Trans-inclusive bibliography shares many ethical concerns and strategies with trans-inclusive citation practices, yet differs in scale (comprehensive enumerative bibliographies take more time and effort to compile than individual works-cited lists) and in stakes. We focus on enumerative bibliography, using the World Shakespeare Bibliography (WSB) as our example. Enumerative bibliography is often taken for granted, especially in the age of digital searching, but it is the bedrock on which scholarly practice is built: enumerative, large-scale, digital bibliography shapes how and what we research. The issues we raise, however, also apply to other bibliographies, online reference works, and individual academic practice. The question seems simple: how do we cite scholars and their work? The answers, plural, require a flexible and deliberate practice of citation that prioritizes the scholar above the scholarship. We argue that trans-inclusive bibliography requires accepting bibliography's flexibility and contingency and relinquishing comforting myths about the stability of the historical record.

Jungeun “June” Lim (Harvard University)

Boys’ Love is a genre of male-male romance created and read by heterosexual women. Previous research has proposed conflicting interpretations of the motives of Boys’ Love fans. While some researchers suggest that Boys’ Love provides an alternative space for heterosexual women to enjoy freedom and equality in fictional romantic relationships without the influence of traditional gender roles, others argue that the genre reflects heteronormative stereotypes, mostly depicting romances between a “masculine” man and a “feminine” man. By computationally analyzing metadata about comics and novels collected from online book databases, this project aims to contribute to this discussion. Bookstores and book recommendation services in Japan meticulously categorize Boys’ Love books based on the protagonists’ characteristics and relationships so that readers can easily find the pairings they like. The patterns of co-occurrence of the keywords used for categorization, and how they have changed over time, are analyzed using network analysis. Other metadata, such as publication dates and reviews, are also examined. Data visualizations, including a timeline view of the network and its major clusters, will be shared along with an interpretation of the analysis results.
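The abstract's co-occurrence network can be sketched in a few lines: count how often each pair of keywords is applied to the same book, yielding a weighted edge list that a network-analysis library can then cluster and visualize. The keyword tags below are invented illustrations, not drawn from the actual Japanese book databases:

```python
from collections import Counter
from itertools import combinations

# Hypothetical keyword lists, one per book, standing in for the
# pairing/character tags assigned by online book databases.
books = [
    ["gentle_seme", "salaryman", "age_gap"],
    ["salaryman", "age_gap"],
    ["gentle_seme", "age_gap"],
]

def cooccurrence(tag_lists):
    """Count how often each unordered keyword pair appears on the same book."""
    pairs = Counter()
    for tags in tag_lists:
        # sorted() + set() gives each unordered pair a single canonical key
        for a, b in combinations(sorted(set(tags)), 2):
            pairs[(a, b)] += 1
    return pairs

edges = cooccurrence(books)
# edges is a weighted edge list, e.g. {("age_gap", "salaryman"): 2, ...},
# ready to load into a graph library for clustering and timeline views
```

Grouping the books by publication year before counting would produce the time-sliced networks the project describes.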

Ashley R. Maynor (New York University)
Amanda Belantara (New York University)

In a world where “library” and “book” have taken on vast new meanings, it's the last of Ranganathan's five “rules” of library science (“A library is a growing organism”) that prompts us to continuously respond to changing environments and deeply interrogate the ways we curate, collect, organize, and preserve information for generations to come. Rule N° 5 is a new take on the artist’s book, an interactive experience that lives at the intersection of public humanities, critical librarianship, and installation art. This collaboration among library workers offers a sneak peek into the magical, mysterious, complicated, and controversial work happening inside a place (the library) that users might not otherwise explore beyond its surface. Meticulously edited audio tracks draw upon over 50 interviews with library workers, dozens of archival recordings, and original music, including crowdsourced and curated voice and sound art submissions. Rule N° 5 sparks curiosity, wonder, and joy at the invisibility and vastness of library work, but it also engages listeners in difficult conversations about the labor, power, authority, and politics of seemingly innocuous work such as cataloging and information access.

Christine McWebb (University of Waterloo)

Our epistemological approach to medieval manuscript studies has changed with the proliferation of digital databases and tools of inquiry that have shaped our recent research. With a nod to Marshall McLuhan, the medium (that is, the internet's capacity for networked knowledge) has enabled us to investigate large sets of data comparatively, opening the field of medieval studies to a new range of research inquiries. We, and the cultural assets we produce, are integrated into networks, physically and virtually. The temporal and geographical contexts of production, and how one work (in our case the Roman de la rose) fits into and relates to these contexts, can now be studied thanks to the proliferation of digital databases and archives. I intend to discuss an example from my own research process in which access to digital corpora of its manuscripts has become absolutely indispensable. Having all these manuscripts at our fingertips is one thing, yet extensive metadata and advanced search capabilities are what make these digital collections so useful. The image delivery API defined by the International Image Interoperability Framework (IIIF) has advanced the development of ever more sophisticated search tools, and I will show how this has transformed the analysis of medieval manuscript corpora.

Elizabeth Schwartz (University of Illinois at Urbana-Champaign)

The story of trade publishing’s conglomeration has been told at length, yet its technological mechanisms remain little understood. Conglomeration depended on the book trade becoming scalable and the industry profitable, which was only possible once book distribution could be computerized. No technology was more critical to distribution’s automation than the Standard Book Number (SBN), the ISBN’s predecessor. In 1967, the British wholesaler W.H. Smith, together with the Publishers Association (PA), created the SBN to computerize its warehouse. Through analyzing Publishers Weekly coverage and the PA’s plan for the SBN, I uncover the history of the SBN’s invention and implementation from 1965 to 1969. I argue that the Standard Book Number was a critical conglomerating technology for two reasons: 1) it coordinated the activities of publishers, distributors, and retailers, which made them scalable, and 2) it created a trade publishing technocracy. When scholars fail to historicize conglomeration’s technologies, we risk naturalizing it. This brief history of the SBN provides a window into the conglomerate era’s technological dynamics.

Christopher Walsh

How do you do book history when you can’t access the book whose history you want? In 2021 and 2022, Wiktenauer, a project cataloguing Medieval, Renaissance, and Early Modern codices important to the practice of Historical European Martial Arts (HEMA), addressed this directly by funding the digitization of three copies of Joachim Meyer’s 1570 “Gründtliche Beschreibung/ der freyen Ritterlichen vnd Adelichen Kunst des Fechtens” held in European libraries and previously unavailable online. These digital scans meant that members of the HEMA community, who often collaborate to scan, transcribe, translate, and catalogue the texts (called Fight Books) they study for their martial arts practice, could explore the marginalia and annotations in those copies. This paper examines the Wiktenauer project to explore the relationship between communities of practice like HEMA and institutions, how such communities think about books and their relationship to libraries, and how their involvement in providing access to these materials is (or isn’t) represented in bibliographic data. Finally, it examines the implications of the cycle of digitization and reproduction for the physical practices the HEMA community seeks to learn and enact.

Laura Estill (St. Francis Xavier)

Building on the work of Galey and Murphy on the history of digital Shakespeare texts, as well as Rowe and Jenstad on the purposes and audience of digital Shakespeare texts, this presentation compares two key elements across digital Shakespeare editions: copytext and encoding (including TEI-XML). Turning to the Internet Shakespeare Editions, Open Source Shakespeare, the MLA New Variorum Shakespeare, and the Folger Shakespeare, as well as existing scholarship about the encoding and editorial decisions that shaped these projects (such as Jenstad; Niles & Poston; Galey; Mandell & Torabi; Connell; Jakacki; and Johnson), this paper demonstrates how the textual instabilities in early printed editions of the plays are replicated and amplified by encoding. When it comes to printed editions, we don’t expect a single edition to serve all purposes: we should not lay this burden on digital editions either. Yet these digital editions are often the first (or even last) resource for those who want to know more, from a non-specialist doing a quick internet search to advanced research projects undertaken by expert Shakespeareans. This presentation demonstrates how, taken together, digital texts construct Shakespeare and shape the assumptions that underpin our research, teaching, and performance.

Stephen H. Gregg

In a recent analysis I traced the history of some book-copies of Patrick Browne’s A Civil and Natural History of Jamaica (1756/89) from print, to record, to microfilm, and to digital image, combining bibliography and media archaeology. However, this approach was silent on how books like this were imbricated in structures of colonial power: it precluded other questions about the book's history and the situatedness of my own research. How did Atlantic enslavement enable the material existence of these book-copies? How was my access entangled with the colonial and neo-colonial conditions of both print and digital archives? These questions intersect with the postcolonial praxis and theorisation of digital archives by (among others) Nicole Aljoe, Kim Gallon, Jessica Marie Johnson, and Roopika Risam, and with transnational, intersectional, and indigenous bibliography by scholars such as Kate Ozment, Sydney Shep, and Matt Cohen. The decolonial and anti-racist work of Tanja Dreher and Poppy de Souza argues for a ‘politics of listening’ which ‘must be located within specific contexts of power and privilege’ (2018). This paper is an attempt to listen to, in terms of place and history, the kinds of colonial power and privilege that condition a book’s materiality and the situatedness of a digital bibliographic approach.

Simone Murray (Monash University)

BookTok denotes a community of readers who make one-minute videos, typically showing their strong emotional reaction to a particular book’s ending. The viral phenomenon has prompted newspaper features, sales spikes for backlist titles, and bookshop displays proclaiming ‘As Seen on TikTok’. BookTok lies on a continuum of readerly social-media use spanning bookblogs, litTwitter, BookTube, and bookstagram. Rather than foregrounding the written word, talking-head commentary, or still images, BookTok allows only a brief A/V run-time and minimal captions. The version of affective commentary BookTok performs is the antithesis of academic scholarship. Whereas literary criticism has long modelled a detached, isolated reader, BookTok revels in subjective, emotionally motivated, social reading. 1970s reader-response theory had to resort to generalizations about ‘the implied reader’; BookTok gives us a ready-made, inclusive, free archive of what real readers are consuming. But we need to exercise caution: BookTok participants are constrained by the affordances and competitive logics of the platform, and they perform a certain style of readership, often in response to user prompts. Also, given that most BookTokers are teens or young adults, what are the ethics of archiving such evanescent material?

Leigh Bonds (The Ohio State University)

Working on The Routledge Companion to Romantic Women Writers, I discovered unexpected absences of women writers’ works in digital library collections, even of those writers most reviewed in contemporary periodicals. Mary Robinson, the most-reviewed woman writer of the period, published twenty-seven works, yet HathiTrust (HT) has only two first editions, a partial first edition, and four later editions; the Internet Archive (IA), only two first editions. Two first editions and three later editions of Mary Wollstonecraft’s eleven works are available in HT, and three first editions are in IA, yet HT has only an American edition of her influential A Vindication of the Rights of Woman. While Jane Austen holds the distinction of having all five of her works available, only one first edition of Charlotte Dacre’s seven reviewed works is available. Although Gale’s collections have twenty-four of Robinson’s works (twenty-one first editions), all of Wollstonecraft’s first editions, and four of Dacre’s (three first editions), the subscription costs preclude access in many academic libraries. This presentation will examine several key absences; discuss the implications for access, inclusion, corpus analysis, digital libraries’ collection strategies, and academic libraries’ digitization strategies; and propose a creative, collective approach to correct these issues of due representation.

Sara Penn (Simon Fraser University)

This presentation examines England’s first known female chapbook publisher, Ann Lemoine (fl. 1795–1822), and her anonymous chapbooks to interrogate book history’s prioritization of deluxe print and the male authorial genius. Predominantly considered lowbrow productions in both their material form and textual content, these cheaply bound booklets comprising recipes, songs, and tales were widely disseminated among the late eighteenth century’s working class. In demonstrating how Lemoine and her chapbooks embody several bibliographical challenges within the larger context of eighteenth-century British book history, I will discuss the challenges of creating metadata on over 280 chapbooks for the WPHP, a relational database of eighteenth-century women’s books. A closer look at her data shows that her contributions to print as publisher, bookseller, and possibly printer far exceed what is listed in the BBTI and what other scholars have noted. The results not only reveal patterns in her contributions to print but also show that major institutions, often the main sources for digitization efforts, place more value on deluxe print forms and therefore reflect what book history considers ‘valuable.’

Kenneth Oravetz (Northeastern University)

This project accounts for risograph (riso) bookmaking through interviews with riso printmakers and hands-on printing. Risograph duplicators emerged in the 1980s for institutional use; recently, riso has been repurposed for zines, comics, and artist’s books. This project participates in that repurposing through a riso zine made for scholars and the riso community. The zine, containing insights into riso and print culture based on the interviews, is both an artifact and an account of risograph’s DIY, intimate, and hybrid materiality. This presentation foregrounds riso, interviews, and hands-on printing as a productive site for, and means of, book history scholarship. As a medium, riso evinces the presence of the hand throughout processes indebted to automated digital tools, pointing toward new frameworks for the post-digital value of print. Intimate, anti-capitalist printmaker-artist relationships in the DIY riso community offer a collaborative alternative to contemporary anonymized, commodified bibliographic networks. Riso studios pursue underrepresented and radical work rather than exponential financial gain and popularity, often merging high art and the lowbrow. This presentation touches upon these insights and others and details my research/making process.

Daniel J. Evans (University of Illinois at Urbana-Champaign)
Elizabeth R. Koning (University of Illinois at Urbana-Champaign)
Michael Dalton (University of Illinois at Urbana-Champaign)

This paper showcases “Type High is Type High: Maker Space Wood Type,” a novel project for streamlined, reproducible methods of making woodblock type, and offers ways for others to follow our techniques within the framework of minimal computing principles. The paper highlights the ways in which we combine digital and historic techniques of typographic production to reproduce a digital font as moveable type for a printing press. Purchasing woodblock type, whether new or historical, can be prohibitively difficult and expensive for local presses or schools. We will walk through our methods, results, and mistakes. The goal is to present a viable means for others to produce their own moveable type with similar digital methods using accessible materials and equipment. Furthermore, our paper will showcase the ways in which a proto-digital font has been reimagined and reused in a traditional print medium. Through this revitalization of a font, we hope to draw attention to an overlooked aspect of book and computing history. Finally, we consider our environmental impact and include these considerations in our choice of this method over other 3D printing techniques. We present these as important factors in thinking about sustainable textual production, as well as in furthering new directions in book history and digital humanities.

Alexander Huber
Emma Huber

This paper presents the motivation for and realisation of a new open scholarship platform (currently in public beta) named PRISMS, available open-access at http://www.prisms.digital. The aim of the PRISMS Open Scholarship platform is two-fold: 1. It offers a publication platform for digital scholarly editions, with full text (preferably encoded in TEI) and facsimiles, and any accompanying materials, such as an introduction, editorial statement, critical apparatus, contextual source materials, bibliography, and indices; 2. It facilitates the semantic annotation of these editions and their related scholarship (in any format) by enabling easy-to-perform formal ontological modelling (based on the CIDOC-CRM family of ontologies), and thus hopes to provide “connective tissue”* not only for reassembling scattered collections but also for overcoming the artificial print/digital divide. (*Whitney Trettien’s term summarizing the key benefit of book history done digitally, from her 2020 talk “A Hornbook for Digital Book History.”) Currently at 72,000 volumes from EEBO, ECCO, Evans, and other collections, PRISMS offers a nucleus for a digital book history platform and is easily extensible to include any primary sources and secondary materials.

S.C. Kaplan

One of the goals of my book, Women’s Libraries in Late Medieval Bourbonnais, Burgundy, and France, was to investigate what a large-scale study of French women’s books tells us about individual women’s reading habits. In consonance with this idea, I abandoned the traditional literature review in favor of a combined quantitative and qualitative approach. Simple searches of 24 pertinent terms in 3 key databases resulted in more than 1,800 hits published from the mid-nineteenth century to 2021. Of these, only 12.3% focused on women to a significant degree, and only 3.7% concentrated on women’s books; by contrast, men’s books comprised the subject matter of 9.2% of the corpus. A closer look at the bibliographical metadata reveals a discrepancy not simply in topical preference but also in real volume: women’s books merited discussion in only 5 book-length works, whereas men’s inspired at least 20. These results confirmed my anecdotal understanding of the gender disparities in research on late medieval France and revealed other gendered trends besides. This paper presents a more detailed explanation of the methodology and conclusions outlined above, arguing for the use of bibliography as data and exploring the advantages of quantitative approaches in concert with, and in place of, the typical literature review.

Marie Pruitt (University of Louisville)

What can readability studies tell us about the novel genre? By tracing both the history of readability studies, a partially abandoned field located at the intersection of education and literacy studies, and the history of the English-language novel, this project makes a case for the validity of conversations around readability within literary circles. One of the primary outcomes of readability studies is a set of formulas that measure various elements of a text, such as vocabulary and sentence structure. However, few formulas were created with fiction, or more specifically the novel, in mind. To determine the possible applications of classic readability formulas to the novel genre, this project uses a digital readability formula to measure the readability of a corpus of 127 English-language novels from 1800 to 1922. The resulting data, however, highlights the difficulty of measuring such a wide-ranging, unique literary genre. Finally, this project proposes a framework for using statistical analysis of novels to identify potential lines of inquiry favorable to close reading. By approaching novels through a quantitative lens, this project highlights how considering the bigger picture can help us determine which specific elements may lead to a richer understanding of the text.
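The abstract does not name the particular formula used, but the best-known classic metric, the Flesch Reading Ease score, illustrates how such formulas reduce a text to counts of words, sentences, and syllables (the corpus figures below are invented for illustration):

```python
def flesch_reading_ease(total_words, total_sentences, total_syllables):
    """Flesch Reading Ease: higher scores indicate easier texts.

    The classic formula weighs average sentence length against
    average syllables per word:
        206.835 - 1.015 * (words/sentences) - 84.6 * (syllables/words)
    """
    words_per_sentence = total_words / total_sentences
    syllables_per_word = total_syllables / total_words
    return 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word

# A hypothetical 100-word passage in 5 sentences with 130 syllables
score = flesch_reading_ease(100, 5, 130)
```

The formula's reliance on sentence length and syllable counts, calibrated originally on educational prose, is one reason its fit to long-form fiction is an open question, as the abstract suggests.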

Wenyi Shang (University of Illinois at Urbana-Champaign)
Ryan Cordell (University of Illinois at Urbana-Champaign)
J. Stephen Downie (University of Illinois at Urbana-Champaign)

According to Meredith McGill (2018), “from a publisher’s perspective, format is where economic and technological limitations meet cultural expectations” (p. 674). Bibliographers and book historians alike work to ascertain the technological processes through which books were made and the cultural ramifications of books’ structures for their readers. In this paper, we draw on HathiTrust’s collection of 133,266 books published between 1500 and 1799 to model format at scale. Using MARC field 300, subfield c (“dimensions”), we identify records where librarians cataloged books’ format and physical sizes, and we use regular expressions to standardize the data. We then analyze trends in book size over our corpus’s 300 years, build machine learning models to classify known formats based on book sizes, and estimate the prevalence of formats across records where that information is not recorded.
Using 61,107 books with quantifiable size data, we find that books overall decrease in size between 1500 and 1799 for 7 of the 11 most prevalent languages in the corpus (e.g., French, German, Chinese, and Japanese), with only four major counter-examples (English, Italian, Latin, and Ancient Greek). We note that the latter two of these counter-examples are non-vernacular languages in which we would expect high-prestige (i.e., larger and more costly) publications. We find that the most prevalent format by proportion among both 16th- and 17th-century books is the quarto, while the octavo is most prevalent among 18th-century books. These results suggest an overall increase in smaller-format books that corresponds to the change in book sizes. These corpus changes correlate with the emergence of vernacular genres, such as the novel, and with trends toward cheaper, mass publication across these periods.
While historically books’ formats do not have a simple correlation to their physical size, we find that a model using size to predict format has an accuracy of 66.1%, which is quite high for an 8-class classification task. Furthermore, the accuracy rises to 92.2% if we model only 16th-century books. We can therefore preliminarily conclude that book size is a good predictor of early modern book format (for the earliest books in particular), which could contribute to digital library metadata, where information about book size is better represented than format. This kind of modeling could help book historians analyze publication trends at scale for particular formats among our corpus’s European books. As for Asian books, to which format is not applicable, the change in book size also opens up possibilities for understanding change in print culture, as it aligns with case studies such as Hegel (1998) and Vierthaler (2016), who found a decrease in the sizes of late imperial Chinese books. Finally, we suggest that richer data about books’ physical sizes and formats might contribute to digital library interfaces, enabling comparative views that counteract the flattening effect digitization can have across widely various physical artifacts.
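The regular-expression standardization step described above can be sketched as follows: catalogers record dimensions in MARC 300 subfield c as free text, so a pattern that pulls out the first centimetre measurement yields a quantifiable size. The sample strings here are invented illustrations, not drawn from HathiTrust's records:

```python
import re

def extract_height_cm(dimensions):
    """Pull the first centimetre measurement out of a 300$c-style string.

    Returns the height as a float, or None when no "cm" value is present
    (e.g. when only a format term like "folio" was recorded).
    """
    match = re.search(r'(\d+(?:\.\d+)?)\s*cm', dimensions)
    return float(match.group(1)) if match else None

# Hypothetical dimension statements; real records vary widely in
# punctuation and wording.
examples = ["18 cm.", "22 cm (4to)", "15.5 cm.", "folio"]
heights = [extract_height_cm(d) for d in examples]
```

The resulting heights can then serve as features for a format classifier, with the records lacking a numeric size set aside or predicted.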


Sam Brooker (Richmond, American University London)
Alessio Antonini (The Open University)
Francesca Benatti (The Open University)

Christopher Ohge (School of Advanced Study, University of London)

Hypertext technologies and methodologies underpin much of our modern communications infrastructure. Whether we approach Hypertext as non-sequential writing that branches and allows choices to the reader, or as a body of written or pictorial material interconnected in so complex a way that it could not conveniently be represented on paper (Nelson), thinking in Hypertext has become a ubiquitous part of our reading and publishing lives. Initially framed against the printed work (the selectiveness of book publishing (Bolter); engendered notions of authorial property (Landow); the comparative instability of the object (Delany)), Hypertext is better understood as a sibling category, generative of new ways of thinking, new opportunities, and new threats. “Computers are not intrinsically involved with the hypertext concept,” clarified Nelson, citing the magazine as an example and suggesting that hypertext has never lived up to its inventive potential. Employing a series of case studies, this panel investigates the manner in which Hypertext as an approach has influenced our perspectives on book history and may continue to challenge it: in the webcomics publishing circuit, in communities of practice among queer authors of interactive fiction, in our reading of transmedia memory, in digital scholarly editions, and in wider studies of the book.

Justin Wigard (University of Richmond)
Spencer D. C. Keralis (Digital Cultural Studies Cooperative)
Nicole Huff (Michigan State University)
Zachary Rondinelli (Brock University)

This roundtable focuses on the nexus of the history of the comic book and digital humanities. It builds on significant, but infrequent, conversations between comics studies and DH that have emerged in the past decade, and it explores potential futures in which comics studies and digital scholarship can meaningfully converge to enrich both fields. We examine ways that DH approaches can disrupt normative understandings of comics across time, particularly of historical graphic narratives. Each participant will begin with a brief position paper describing their distinct methodological and theoretical innovations: media archaeology and software emulation with DIY queer comics; social media reader-response engagement with historical comics; comics collections as datasets that can codify race and representation; and digital visualizations of graphic narratives at a distant scale. From these points of departure, the roundtable opens into a facilitated discussion of our experiences working with DH and comics, as well as our hopes and plans for future endeavors, with generous time for audience engagement. Ultimately, we explore the possibilities of what digital technologies reveal about an inherently visual medium, presenting a syncretic vision of what a digital comics studies might be or become.


Giles Bergel (University of Oxford)

Computer vision has made significant progress in recent years, thanks in part to developments in machine learning (or ‘AI’), and is now an eminently practical tool for the book historian. Computers can now reliably match the same printed page or illustration, or visualise variant typesettings or images. More challenging applications, such as detecting illustrations, segmenting pages into meaningful parts and classifying their content, are within reach. This workshop will introduce participants to free and open-source software tools and demos maintained by the University of Oxford’s Visual Geometry Group and developed in collaboration with book historians and others. Attendees will leave the workshop knowing how to match, differentiate, classify and annotate images of various kinds of books and prints. No previous knowledge of computer vision or coding ability is required.

Andie Silva

This workshop will discuss ways to make digital book history more inclusive and accessible for students. We will begin by introducing participants to strategies for building and maintaining an inclusive syllabus and assignment ideas for student-centered book history work that need not rely on special collections, large budgets, or high bandwidth. In the hands-on portion of the workshop, participants will spend time learning how to adapt tools like Slack and Scalar for book history-specific projects and build a collaborative GoogleDoc of resources for inclusive DHBH pedagogy to be shared and expanded by members of our community.

Posters and Asynchronous Content

R.C. Miessler (Gettysburg College)

In the Gettysburg College First Year Seminar, “Oh! The (Digital) Humanities: Using Technology to Understand the Human Experience,” the first assignment students complete is not digital but analog: they must create a physical book. Over the course of three workshops under the guidance of our Special Collections and College Archives staff, students learn about the history of the book, create their own blank notebook, and compose a short reflection on the process. Students are introduced to the codex as an example of mass-produced information technology and discuss power structures in how information is disseminated. Through the book-making assignment, making, failure, and reflection are introduced as key components of the field of Digital Humanities. This poster will provide an overview of the assignment, showcase student reflections, and discuss how the book-making process informs students' work during the rest of the semester.

Alex Wingate (Indiana University)

Photogrammetry is a tried-and-true method for creating 3D models of cultural heritage objects and entire historical sites, but what about its use for rare books and book history? Books and manuscripts are generally digitized in 2D and made accessible as PDFs or in an online reader; 3D models are relatively rare. This poster will present the photogrammetric process for creating models of books, analyze the success of the resulting models, and discuss the challenges and advantages of creating 3D models of books. Through these models, I argue that there are specific use cases where 3D models produced via photogrammetry (and potentially other methods of 3D digitization) could enhance digital access for scholars and the general public to rare books and other objects related to the history of the book, such as woodcuts, inking balls, and binding tools. Whether they are used in exhibits, for presenting delicate materials to classes, or for providing book historians with virtual access to books, 3D models have the potential to bring us even closer to the physical object, enhancing existing 2D digitization methods.

Candace Reilly (Drew University)
Brynn Goldberg (Drew University)
Zoe Bowser (Drew University)
Sam Zatorski (Drew University)

The Pope Joan Project records and studies the various types of user engagement concerning the apocryphal popess in extant 1493 Latin copies of the Liber Chronicarum. The project aims to provide a comprehensive and accessible census of the imagery of Pope Joan, which scholars around the world can explore through DH tools such as Storiiies and Tableau on the project website, http://www.popejoanproject.com. Following the completion of the census, this study proposes to review the legend of Pope Joan through the lenses of historical silences, iconoclasm, and gender and feminist studies. Imagery of Pope Joan in the Liber Chronicarum is often noted to be corrected, erased, blotted, or pasted over. However, there had never been a complete census of this imagery before this project launched in May 2021. Some of the acquired imagery has revealed the addition of facial hair to the popess, as well as the erasure of the child in her arms. These findings have been in the minority. This project hopes to quantify interaction through two defining methods: addition to and removal from the image and text of Pope Joan on f. 169v. Noted methods of interaction with the image include drawing on, burning, cutting, scraping off, and the addition of marginalia. By studying this woodcut image of Pope Joan, this project will analyze centuries-long interactions with one of the most popular incunabula.

David Bishop (University of Illinois at Urbana-Champaign)

This project uses computational predictive modeling to parse data consisting of 557 mystery-fiction characters, 28 of whom are killers. At the center of this work is an investigation into the possibility of accurately predicting the outcome of mystery novels using computational methods. The eventual direction of the project will be to compare the predictions of actual readers with the predictions made by the statistical model and draw a conclusion concerning the accuracy of predicting elements of narrative using computational methods. Drawing on the methods of information science and critical literary analysis, the research done for this project sits at the intersection of Digital Humanities, book history, and literary criticism.

Cindy Boeke (Southern Methodist University)
Robert Walker

Many users may not think about how rare books and manuscripts were digitized before being made available for online reading, text analysis, re-publishing, and other uses. At the same time, the equipment and workflows used by the digitizing organization are fundamental to the quality of the online text. This poster will provide an overview of the main technologies used by the Norwick Center for Digital Solutions to digitize a variety of rare books and manuscripts from SMU Libraries special collections.

Elizabeth Schwartz (University of Illinois at Urbana-Champaign)
Elizabeth R. Koning (University of Illinois at Urbana-Champaign)

Founded in 1974 in Grinnell, Iowa, RFD is the longest-running reader-written gay magazine in the US, but it has received no scholarly attention. In an effort to preserve the magazine, we tried to run Tesseract, an OCR engine, on its earlier issues. We quickly realized the software’s binarizing algorithm could not maintain the queer, nonbinary integrity of the content. RFD focused on rural queer people, introducing visibility politics that rely on the invisibility of rural queer folk to non-queers and their hyper-visibility to each other. As such, RFD was both visible and invisible at the same time. Unlike those of RFD, OCR’s visibility politics are inherently binaristic, a problem scholars have yet to consider. To date, digital humanists have critiqued OCR’s inability to digitize non-print and non-English languages. These are all problems of training and software; however, we want to think beyond OCR’s technological limitations toward its visibility politics. This zine reflects on the binaristic visibility politics of OCR and its unsuitability for digitizing queer content, which is inherently non-binary. It will present OCR results and analysis and discuss digitization beyond binarization.
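The "binarizing algorithm" at issue can be sketched very simply. The snippet below is an illustrative global-threshold binarization, not Tesseract's own implementation (which derives its threshold from the image, e.g. via Otsu's method); it shows how every intermediate gray value is forced to pure black or pure white, discarding gradation.

```python
# Illustrative global binarization, the preprocessing step OCR engines
# apply before recognition: every grayscale value (0-255) is forced to
# black (0) or white (255), with no middle ground preserved.

def binarize(pixels, threshold=128):
    """Map each grayscale pixel to pure black or pure white."""
    return [0 if p < threshold else 255 for p in pixels]

# Faded print: strokes at many gray levels either side of the threshold.
row = [30, 120, 127, 129, 200, 250]
print(binarize(row))  # prints [0, 0, 0, 255, 255, 255]
```

Note that the values 127 and 129, nearly identical shades of gray, land on opposite sides of the cut: the threshold imposes a hard either/or on a continuous range, which is the technical fact the abstract's argument builds on.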

Trish Baer

This poster presents discoveries made while researching metadata details for My Norse Digital Image Repository (MyNDIR). A publication announcement in the 1857 issue of “The Publishers’ Circular” (75) is the only primary source identifying Charles Altamont Doyle (1832–1893) as the illustrator of The Heroes of Asgard and the Giants of Jötunheim, or, The Week and its Story (1857). Doyle’s illustrations portray the Norse gods in scenes reminiscent of fairy tales such as Thumbelina and Cinderella. David Murray Smith reused the illustrations in The Silver Star (1881), but the monogram within them is Doyle’s. Doyle is known as the father of Sir Arthur Conan Doyle and as an illustrator who believed in fairies and died in a mental asylum. Doyle’s connection with The Heroes of Asgard has gone unnoticed by biographers and bibliographers for decades, and the lack of acknowledgement has resulted in the illustrations being overlooked. The inclusion of Doyle’s illustrations in MyNDIR (https://myndir.uvic.ca/) facilitates comparisons with the illustrators of the subsequent editions of 1871 and 1930, Louis Huard (1814–1874) and C. E. Brock (1870–1938), and enables Book History insights concerning the transmission and reception of Old Norse mythology.