Humanities Computing and Big History
Two main topics were discussed this week: the debate over the mathematical methods in Matthew Jockers's Syuzhet package and their place in humanities computing, and the call in The History Manifesto for a return to the longue durée of history and its relevance to public conversations.
The Syuzhet debate centered on Matthew Jockers and Annie Swafford. Jockers created an R package that computes the overall plot structure of a work through sentiment analysis. He used the Fourier transform, a technique borrowed from physics and signal processing, together with a low-pass filter to smooth the sentiment graph into an overall plot arc, identifying six or seven common plot types. Based on her tests of the code, Swafford objected to applying this particular mathematical technique in the package because it produces ringing artifacts: spurious oscillations near abrupt changes in the signal. She argued against using the package as it stands for analyzing plot structure through sentiment analysis, a method that is itself still imperfect.
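The ringing Swafford identified is a general property of low-pass filtering, not something unique to Syuzhet. The following minimal Python sketch (an illustration of the idea, not Jockers's actual R code) filters a step-shaped "sentiment" series by keeping only its lowest Fourier components; the smoothed curve overshoots above the signal's maximum and dips below its minimum near the abrupt change, which is exactly the artifact at issue:

```python
import numpy as np

def low_pass_filter(values, keep=3):
    """Keep only the lowest `keep` Fourier frequency components,
    zero out the rest, and transform back to the time domain.
    A sketch of Fourier-based low-pass smoothing in general,
    not a reproduction of Syuzhet's implementation."""
    spectrum = np.fft.rfft(values)
    spectrum[keep:] = 0  # discard the high-frequency components
    return np.fft.irfft(spectrum, n=len(values))

# A "plot" that is flat and neutral, then abruptly turns positive:
sentiment = np.array([0.0] * 50 + [1.0] * 50)
smoothed = low_pass_filter(sentiment, keep=3)

# Ringing: the smoothed curve overshoots above 1 and dips below 0,
# even though the original sentiment never leaves the range [0, 1].
print(smoothed.max() > 1.0, smoothed.min() < 0.0)  # → True True
```

The artifact matters for interpretation: a reader of the smoothed curve would see emotional highs and lows in the text that the underlying sentiment scores never contained.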
As the two scholars argue about the package, it becomes clear that when engaging in humanities computing, whether by creating R packages or by testing and using them, the scholar needs to understand precisely how the mathematical principles or functions in the code shape the result. This directly relates to conversations in past weeks about needing to understand the mathematics in order to produce as accurate a result as possible, and about understanding the inputs and outputs of algorithms and how they relate to the specific questions being asked. Andrew Piper's blog post, "Validation and Subjective Computing," also speaks to how work in the digital humanities needs to be analyzed across perspectives, because the data and analysis are so subjective.
The issues in The History Manifesto center on big history, the return to the longue durée, and the historian's role in the twenty-first century. In the book, Armitage and Guldi argue that historians need to engage with more longue-durée histories that connect the past to the future, and that historians need to use that big history to engage with and inform public policy. In addition, they argue that big data and the corresponding digital tools, such as Paper Machines, lend themselves to that big history.
Most of the scholars who criticized the controversial work disagreed with the authors' de-emphasis, and near dismissal, of micro-histories as "short-termism." Matthew Francis and others noted that these micro-histories and subfields do engage with debate in the public sphere and enhance our understanding of the big picture. Armitage and Guldi later argued in the AHR Exchange that they did aim to praise those micro-histories, stating that "short-term analysis and the long-term overview should work together to produce a more intense, sensitive, and ethical synthesis of data" (130). But as Laura Sefton argues, they don't explain how to do that. I view the work as offering yet another way to view history and another way in which digital tools are useful. Yes, big data lets us analyze larger swathes of time, but it also allows us to piece together disparate information to form richer micro-histories that still tell us something meaningful about the past.