Humanities Computing and Big History

Two main topics were discussed this week: the argument over mathematical methods and humanities computing in Matthew Jockers’s Syuzhet package, and The History Manifesto’s call for a return to the longue durée of history and its relevance to public conversations.

The Syuzhet debate centered on Matthew Jockers and Annie Swafford. Jockers created an R package that computes the overall plot structure of a work through sentiment analysis. He applied the Fourier transform and a low-pass filter, techniques borrowed from signal processing, to smooth the sentence-level sentiment values into an overall plot shape, identifying six or seven common plot types. Based on her tests of the code, Swafford objected to this application of the mathematics because the low-pass filter produces ringing artifacts, and she argued against using the package as-is for analyzing plot structure through sentiment analysis, a method that is itself still imperfect.
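To make the technical point concrete, here is a minimal sketch (not Jockers’s actual implementation) of how a Fourier-based low-pass filter smooths a noisy sentiment trajectory and, at an abrupt emotional shift, produces the overshooting oscillations, or ringing artifacts, that Swafford describes. The sentiment values and the cutoff of five frequency components are invented for illustration.

```python
import numpy as np

# Toy "sentiment trajectory": one value per sentence, with an abrupt shift
# from negative to positive halfway through a hypothetical novel.
rng = np.random.default_rng(0)
sentiment = np.concatenate([np.full(100, -1.0), np.full(100, 1.0)])
sentiment += rng.normal(0, 0.3, size=sentiment.size)  # sentence-level noise

# Low-pass filter: keep only the lowest-frequency Fourier components.
components_to_keep = 5                     # arbitrary cutoff for illustration
spectrum = np.fft.rfft(sentiment)
spectrum[components_to_keep:] = 0          # zero out the high frequencies
smoothed = np.fft.irfft(spectrum, n=sentiment.size)

# The smoothed curve is the "plot shape"; around the abrupt midpoint shift it
# overshoots and oscillates, which is the ringing artifact at issue.
print(smoothed[95:106].round(2))
```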

As the two scholars argue about the package, it becomes clear that whether creating R packages or testing and using them, a scholar engaging in humanities computing needs to understand precisely how the mathematical principles or functions used in the code or methodology produce, or fail to produce, the desired result. This relates directly to conversations in past weeks about needing to understand the mathematics in order to produce as accurate a result as possible, and about understanding the inputs and outputs of algorithms and how they relate to the specific questions asked. Andrew Piper’s blog post “Validation and Subjective Computing” also speaks to how work in the digital humanities needs to be examined from multiple perspectives, because the data and the analysis are so subjective.

The issues in The History Manifesto center on big history, the return to the longue durée, and the historian’s role in the twenty-first century. In the book, Armitage and Guldi argue that historians need to write more longue-durée histories that connect the past to the future, and that they need to use that big history to engage with and inform public policy. They also argue that big data and the corresponding digital tools, such as Paper Machines, lend themselves to that big history.

Most of the scholars who criticized the controversial work disagreed with the authors’ de-emphasis, and near dismissal, of micro-histories, or “short-termism.” Matthew Francis and others noted that these micro-histories and subfields do engage with debate in the public sphere and enhance our understanding of the big picture. Armitage and Guldi later argued in the AHR Exchange that they did aim to praise those micro-histories, stating that “short-term analysis and the long-term overview should work together to produce a more intense, sensitive, and ethical synthesis of data” (130). But as Laura Sefton argues, they don’t explain how to do that. I view the work as yet another way to view history and another way in which digital tools are useful. Yes, big data lets us analyze larger swathes of time, but it also allows us to piece together disparate information to form richer micro-histories that still tell us something meaningful about the past.

Mapping and Historical Analysis

The readings on mapping this week discussed many ideas and issues. Tim Cresswell’s Geographic Thought: A Critical Introduction explains the history of geography as a discipline and the varying frameworks of geographical thought, including humanistic and Marxist geography. Various authors in The Spatial Humanities: GIS and the Future of Humanities Scholarship, such as Karen Kemp in “Geographic Information Science and Spatial Analysis for the Humanities,” elaborate on the different data models for maps, such as raster and vector, and the concepts and language of GIS. Mark Monmonier explains the need to critically assess maps in How to Lie with Maps. The readings also discussed the difficulty of representing space and time in mapping, deep mapping, and the advantages and difficulties of using mapping for historical analysis.
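As a concrete illustration of the raster and vector data models Kemp describes, here is a minimal sketch, with invented place names and coordinates, showing the same features represented both ways: as vector geometry with attributes, and as a raster grid of cell values.

```python
import numpy as np

# Vector model: discrete features stored as coordinates plus attributes
# (names, dates, and coordinates invented for illustration).
churches = [
    {"name": "First Church", "lon": -77.04, "lat": 38.90, "founded": 1803},
    {"name": "Mount Zion",   "lon": -77.06, "lat": 38.91, "founded": 1816},
]

# Raster model: the same features re-expressed as values in a grid of cells,
# here a count of churches per cell over a small bounding box.
lons = [c["lon"] for c in churches]
lats = [c["lat"] for c in churches]
grid, _, _ = np.histogram2d(lats, lons, bins=(6, 10),
                            range=[[38.88, 38.94], [-77.10, -77.00]])
print(grid)  # a 6 x 10 array of cell counts
```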

Historical GIS has many advantages and issues worth reviewing. Maps are useful for seeing patterns, making comparisons, changing the scale of analysis, and layering different base maps, timelines, and sources. However, one of the issues discussed across the readings was the tension between qualitative and quantitative data when combining geography and history. History, of course, involves sources of both types. Quantitative data can include census data, agricultural schedules, city directories, voting records, and the like. In “Scaling the Dust Bowl,” in Placing History: How Maps, Spatial Data, and GIS Are Changing Historical Scholarship, Geoff Cunfer expands the temporal and spatial frame of the Dust Bowl to argue that the environment, particularly drought, had more of an impact on the great dust storms than did capitalism. The entire “Part One” of Toward Spatial Humanities: Historical GIS and Spatial Humanities focuses on the use of quantitative data in mapping, such as Andrew Beveridge’s analysis of racial segregation during the Great Migration. But what about textual sources in historical research, such as diaries, letters, magazines, and newspapers?

In “Mapping Text” in The Spatial Humanities, May Yuan suggests three ways to reconcile this qualitative data with mapping: spatialization, georeferencing and place-name recognition, and geo-inferencing. These methods transform the relevant information in a text document, such as named places found in gazetteers, into tables that are amenable to spatial analysis. Another way to ease the tension between the two disciplines is deep, or thick, mapping. A deep map is a multi-layered map of an area, such as Ancient Egypt, that incorporates many different source types and timelines. HyperCities creates deep maps of cities or regions and their multi-layered records, providing multiple perspectives on their history. Another project that integrates different types of sources into a multi-layered map is Digital Harlem. This project maps the everyday life of Harlem in the early twentieth century using mostly newspapers and legal records, layering people, places, and events. The interactive, layered map allows for contextualization of that important urban location.
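A minimal sketch of the gazetteer-based georeferencing and place-name recognition Yuan describes: scan a passage for known place names and emit a table of mentions with coordinates amenable to spatial analysis. The sample sentence, gazetteer entries, and coordinates are all invented for illustration.

```python
import re

# Hypothetical gazetteer: place name -> (latitude, longitude)
gazetteer = {
    "Harlem":     (40.8116, -73.9465),
    "Richmond":   (37.5407, -77.4360),
    "Alexandria": (38.8048, -77.0469),
}

text = ("The letter, posted from Richmond, describes a week spent in Harlem "
        "before the writer returned home to Alexandria.")

# Place-name recognition by simple lookup, then georeferencing each mention.
rows = []
for place, (lat, lon) in gazetteer.items():
    for match in re.finditer(r"\b" + re.escape(place) + r"\b", text):
        rows.append({"place": place, "char_offset": match.start(),
                     "lat": lat, "lon": lon})

# The resulting table can be loaded into a GIS or plotted directly.
for row in sorted(rows, key=lambda r: r["char_offset"]):
    print(row)
```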

Mapping is such a useful tool for historians because it combines space and time, often in a multi-layered and multi-source representation. It can aggregate and display sources across space and time that otherwise couldn’t be analyzed simultaneously.

Visualization

Visualizations are powerful, and I had never understood how powerful until this week. They are not just a nice picture of the Columbian Exchange or a family tree. Visualizations are arguments in themselves, as Johanna Drucker asserts in Graphesis, and they do not sit passively on a printed or web page. Everything from their layout to the color of the text matters and affects the argument in some way. Someone looking at and using a visualization should approach it the same way as a scholarly work: critically. What is the argument? What are the methods? What is the data? Why present the information in this way and not another? Visualizations draw on the history of knowledge organization, the concepts of graphic design, and an awareness of how information is perceived. It is not a simple task to create useful and meaningful visualizations, let alone to employ them effectively in history.

It’s important for a scholar to consider why a visualization might be useful for the argument at hand. Visualizations, when done right, can be very useful for building and presenting historical arguments. In “Visualizing San Francisco Bay’s Forgotten Past,” Matthew Booker argues that visualization “helps us recapture the forgotten past.” He uses maps of the bay to uncover its environmental history and to argue that these visualizations of how humans and animals have interacted with the bay can impact the future. In “The Image of Absence: Archival Silence, Data Visualization, and James Hemings,” Lauren Klein uses visualization in the form of a network to explore James Hemings’s supposed absence in the archive of The Papers of Thomas Jefferson. These visualizations are explanatory rather than exploratory, but they were integral to the respective arguments. The scholars furthered their points through the visualizations.

Both Drucker and Scott Murray, in “Designing Kindred Britain,” assert that visualizations can be explanatory, exploratory, or both. Visualizations in digital history can use an interface that allows for multiple points of entry. This user engagement with the material, as Murray notes, depends on good graphic design and that exploratory interface. Kindred Britain, for example, not only offers the choice to investigate people, the connections between them, or the known stories, but also provides multiple ways to then visualize that data. The authors used network analysis, a timeline, and a map to contextualize the information. Visualizations used in this way are more generative of research questions and open to interpretation.

It’s also useful to know what constitutes a bad visualization. Edward Tufte in Beautiful Evidence claims that cherry-picking and chartjunk make for ineffective visualizations. John Theibault argues in “Visualizations and Historical Arguments” that “the challenge for visualization is to be transparent, accurate, and information rich” (paragraph 5). Cherry-picking involves using data that supports a point without necessarily conveying the context. Tufte states that there is an “evidence reduction, construction, and representation process” that comes between the data-gathering and presentation stages, and it’s important to be honest and transparent about that process (147). Chartjunk refers to chart elements that are void of content, or useful information. Just as historians should consider different points of view in their sources and provide a balanced analysis, so too should the visualization.

Network Analysis

A network is a representation of connections between entities, such as people or places. In Networks, Crowds, and Markets: Reasoning About a Highly Connected World, Easley and Kleinberg define a network as “a pattern of interconnections among a set of things,” explaining that networks are adaptable to any set of things and links, such as a kinship network or the spread of an epidemic. In Networks: An Introduction, Newman describes a network as a “collection of points joined together in pairs by lines.” Elijah Meeks defines a network as “a primitive object capable of and useful for the modeling and analysis of relationships between a wide variety of objects,” also noting the flexibility of this method of analysis. Scott Weingart defines networks as an “interlocking system” of “stuff and relationships.” He states that networks can be integral to the research conducted, because the relationships between entities render them interdependent and necessary for comprehending the topic. Network analysis is not merely a visual.

As Weingart notes, this flexibility is both an advantage and a disadvantage. Networks can be used in a variety of ways, but scholars should think carefully about the methodology before undertaking such a task. One particular network measure to think about is degree centrality. This measure identifies the entity, or node, with the most connections in the graph. It can be misleading, as “importance” can mean many things depending on what you’re visualizing. Another factor to consider is the type of data you want to represent. A network built from a fraction of the interactions experienced by jazz musician Roy Haynes is not useful; it becomes a Roy Haynes network graph, revolving around him. The visualization should represent the larger social network, such as the professional and social network of jazz musicians in Linked Jazz. However, there should not be too much data either, or the network graph will be unreadable. Along with that point, you should consider the reduction of the data necessary to render a readable graph. You have to negotiate between the entire data set and the amount, often reduced, required to produce a useful network graph.
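A minimal sketch of degree centrality using the networkx library; the musicians and collaborations are invented for illustration, and the output shows the caveat discussed above, since the hub of this tiny graph looks “central” only because the edges were sampled around him.

```python
import networkx as nx

# Hypothetical collaboration network: nodes are musicians, edges are
# documented collaborations (names and links invented for illustration).
edges = [
    ("Haynes", "Parker"), ("Haynes", "Monk"), ("Haynes", "Coltrane"),
    ("Haynes", "Davis"), ("Parker", "Davis"), ("Monk", "Coltrane"),
]
G = nx.Graph(edges)

# Degree centrality: a node's degree divided by the maximum possible degree.
centrality = nx.degree_centrality(G)
for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.2f}")
# Haynes dominates, but only because the edge list was gathered around him:
# the "ego network" problem described above.
```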

Despite those points of caution, network analysis is a very useful methodology for history. One of its most important features is the ability to see both the whole network and the individual nodes and relationships. With networks, you can “distantly read” the information and the overall structure of the connections. When studying the people of a small town in eighteenth-century Virginia, you can use network analysis to analyze the structure of the community. With textual data, you can graph the network of newspaper and magazine reprinting in the nineteenth century to study print culture and public consumption, as in Viral Texts. You can also focus on individual nodes within the graph, how they relate to each other, and how they are affected by certain factors.

Network graphs also connect to other methodologies in the digital humanities. You can combine mapping with network analysis, as in Stanford’s Orbis. You can also apply network analysis to research generated from topic modeling, creating powerful and useful visualizations. Scott Weingart’s insightful suggestion about contextualizing networks with textured base maps goes even further: he proposes using space, demographics, networks, and other relevant data to enhance and contextualize a digital humanities project, akin to deep mapping. Network analysis, along with text mining and mapping, is a very powerful tool for historical research.

Textual Analysis Hysteria

In the digital age, there exists an abundance of digitized sources. Text analysis deals with large corpora of sources and with how to access this vast and largely available historical record, from nineteenth-century British novels to Early American newspapers. Humanities scholars wishing to use these sources should know how to navigate this revolutionary digitization of material. Numerous methodologies allow for this navigation, such as topic modeling, word frequencies, token distribution analysis, and even network analysis, but they do come with caveats.

Most of these tools involve distant reading, discussed by the Italian literary scholar Franco Moretti. Distantly reading the entire run of an eighteenth-century newspaper could allow patterns to emerge that otherwise wouldn’t surface through traditional close reading. One of the central ideas in text analysis is that distant reading complements close reading. Textual analysis of over a million words provides context for each source that is closely read. It generates new research questions and situates the hundreds of sources within the thousands and millions of words of text. Ideally, the digital methodology then illuminates patterns or anomalies in the sources that prompt close reading and analysis. It is a matter of scale. The scholar can “read” a large amount, closely read a selection, then interpret and provide a representative argument for the corpus.
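As a minimal sketch of what even the simplest distant reading looks like in practice, here is a raw word-frequency count over a tiny invented corpus; at scale, the same counts across millions of words are what surface the patterns and anomalies worth reading closely.

```python
from collections import Counter
import re

# Hypothetical corpus: in practice, thousands of digitized issues or articles.
corpus = {
    "gazette_1790_03.txt": "The fever spread through the port, and the fever did not abate.",
    "gazette_1790_04.txt": "Trade resumed at the port once the fever had passed.",
}

counts = Counter()
for doc in corpus.values():
    counts.update(re.findall(r"[a-z']+", doc.lower()))

# The most frequent tokens across the whole corpus: the crudest distant reading.
print(counts.most_common(5))
```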

Another issue in text analysis is whether the technology drives the historical questions and whether distant reading really tells us anything surprising. When typing in a search term or terms, the scholar is already expecting to find that pattern in the text, as Cohen and Gibbs made clear in “A Conversation with Data: Prospecting Victorian Words and Ideas.” The supposed advantage of topic modeling is that the program generates topics of collocated words across the “bag of words,” rather than someone manually defining the search terms. Still, the scholar determines which words to omit from the computational analysis (the stop words), sets the number of topics generated, assigns meaning to those topics, and explores them across texts and time. In “Quantifying Kissinger,” Micki Kaufman states that she used forty topics for her corpus because Robert Nelson did so in Mining the Dispatch. Matthew Jockers, in Macroanalysis: Digital Methods and Literary History, generated about five hundred topics for his data set of over 3,000 nineteenth-century books, consulting with numerous colleagues across disciplines to name them. Would their research change with different numbers of topics, or with different ways of interpreting and visualizing those topics?
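A minimal sketch of the choices described above, using scikit-learn’s LDA implementation rather than the tools the cited projects used; the tiny corpus, the stop-word list, and the setting of two topics are all invented for illustration, and changing n_components is exactly the kind of decision at issue.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical corpus; real projects use thousands of documents.
docs = [
    "the patient suffered nervous fits and the physician prescribed rest",
    "the physician noted hysteria and prescribed a water cure for the patient",
    "the harvest failed and the farm suffered through drought and dust",
    "dust storms ruined the harvest on the farm for a second season",
]

# The scholar's choices: which stop words to drop, how many topics to fit.
vectorizer = CountVectorizer(stop_words=["the", "and", "a", "for", "on", "through"])
dtm = vectorizer.fit_transform(docs)            # document-term matrix ("bag of words")
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)

# Print the top words per topic; naming the topics is still up to the scholar.
terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    print(f"topic {i}: {', '.join(top)}")
```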

Benjamin Schmidt argues in “Words Alone: Dismantling Topic Models in the Humanities” that scholars need to focus on the actual words in a topic and compare them, because this methodology can be misleading. I have to agree with his caveats about the methodology, as I experimented with topic modeling in the second semester of digital humanities coursework at GMU and confronted problems. In Clio II, I used topic modeling in an attempt to distantly read a small corpus of nineteenth-century medical journal articles on hysteria. I wanted to compare case studies of male and female hysteria in the nineteenth century to explore the differences and/or similarities in symptoms, diagnoses, prognoses, and proposed cures and therapies. I wanted to analyze those results in the context of American medical discourse about the body, mind, and gender.

I learned a lot about how not to approach text analysis. First, I used only a limited number of search terms to find the articles in ProQuest and other databases with digitized medical journals: “hysteria” and “hysterical.” I found only about seventy case studies for males and females combined, though those often discussed more than one patient. In using only those search terms, I probably (definitely) missed out on other names for and ideas of “hysteria” in the nineteenth century. Amassing a much larger corpus of medical journals devoted to mental illness and then separating the articles by gender would provide a more exploratory and less contrived experience. It would also provide a better context in which to situate ideas about hysteria.

Second, because my corpus was not large enough to generate stable topic models, the topics changed substantially as I moved from ten to twenty to forty of them. In modeling the two genders together, a set of words suggesting the reproductive system appeared, but it did not appear in the topics generated for each gender separately. That seemed like an important anomaly, because the root of “hysteria” is the Greek word for uterus. Third, I had trouble naming the topics and was not wholly satisfied with the ones I had interpreted. The project resulted more in a proposal for a better use of topic modeling for those research questions. As in the projects for this week, text analysis seems to work best as a methodology for exploring an already amassed corpus or corpora of text. It is very useful for “reading” patterns and anomalies that then generate interesting research questions for close reading.

Evaluating Digital Scholarship

This week’s readings covered numerous issues in digital scholarship. Similar to the ongoing discussions about the definition of digital history, digital scholarship can take many forms. Trevor Owens outlined suggestions for digital exhibits and even games as scholarship. Digital scholarship must also make an easily accessible and clear argument, as Edward Ayers points out in his article “Does Digital Scholarship Have a Future?” In “Living in a Digital World: Rethinking Peer Review, Collaboration, and Open Access,” Sheila Cavanaugh discusses the difficulties and limitations of creating digital scholarly projects, specifically the need for institutional resources and collaboration across institutions.

It was also evident from the readings that there is a need for adequate guidelines and evaluation practices for creators and peer reviewers of digital scholarship. Digital historians need to defend their digital scholarship effectively, much as a grant proposal does in Sheila Brennan’s piece, and collaborators should be credited in every case. Tenure and promotion committees, the AHA, and even dissertation committees need to follow some set of guidelines for evaluating the intellectual rigor and innovation that such scholarship can bring to a field, and several articles and blog posts proposed guidelines. There is also a need to develop a better peer review system for online work, specifically in journals, in order to promote open scholarship, as in Writing History in the Digital Age, PressForward, and the Journal of Digital Humanities. Lastly, Melissa Terras discussed the digital presence that scholars need to cultivate in order to share ideas and disseminate their research beyond publication.

This topic of digital scholarship involves many people in and outside of academia, but it is critical for graduate students who want to pursue the digital humanities in their careers. As discussed in previous class sessions, digital humanities is about building something, or creating digital scholarship. If a graduate student is claiming to be a “digital historian,” then he or she should have something to show for it, right? That something can take many forms, as noted, such as a game or an online exhibit. A “good” piece of digital scholarship shows committees and employers that you can design, research, implement, collaborate, and contribute something innovative to your field, in both form and content. The work should contain many of the elements discussed in the articles, such as a clear, field-advancing argument and an effective interface for a defined audience, and perhaps be accompanied by a conference presentation or two.

Creating digital scholarship also requires you to articulate and defend what makes that scholarship “good” and a worthy endeavor when standing before a dissertation committee or in a job interview. A set of guidelines on the reviewing end is helpful, but it’s better if the candidate is proactive in this regard, as Brennan points out. Many of the authors stressed how much of a risk it is for emerging scholars to engage in digital humanities, because they do not yet have stable jobs and reviewing committees usually do not have set guidelines for evaluation. It seems more fruitful in these cases for candidates to bring the guidelines themselves than to wait for a set drawn up by the institution, because digital scholarship can take so many forms. Being prepared in this way is not that different from a regular dissertation defense. A graduate student must be able to articulate the digital scholarship clearly, both in the project itself and to reviewers.

Teaching Digital Humanities

This week’s readings on teaching digital history discussed many important themes. Two themes stood out in particular and go hand in hand: navigating the varied technology backgrounds of today’s supposed “digital natives,” and designing college and graduate-level courses that use digital media effectively to teach the content, the process of “doing history,” and the technology itself.

Ryan Cordell’s piece, “How Not to Teach Digital Humanities,” is a how-to guide for digital humanities courses, offering dos and don’ts for professors and institutions wanting to add digital humanities to the curriculum. In addition to suggesting that these courses start small, with one digital methodology or subject theme, and take advantage of local resources, he cautions instructors against assuming that their students are “digital natives.” He states that professors wishing to bring digital media into the classroom should be aware that students are still “technologically skeptical,” even though they live in a world in which digital media is prevalent. This is an excellent point, because though students may interact with Facebook on a daily basis or be adept Googlers, they may not understand how that technology really works or how they can harness a digital tool for school projects, let alone build upon it.

Mills Kelly argues in his book, Teaching History in the Digital Age, that in order to incorporate the digital into the classroom successfully, the instructor must meet the student in that digital world that fosters active content creation and social networking. The instructor and the students must converge and work together within that digital world to learn the history itself, historical thinking skills, and the technology. By meeting the students in the digital world in which they already interact, such as Twitter, Wikipedia, or blogging platforms, the instructor can diminish that “technological skepticism” and then introduce more technology, such as Omeka. The students can engage with the history as they create digital content and learn about the historical process and technology along the way.

The “Lying About the Past” course that Kelly discusses in his work, though controversial in its premise, did introduce digital media and the historical process to the students. They had to create a hoax, research real primary and secondary sources for context, write blog and Facebook posts about the topic and the research process, and learn how to critically assess sources, especially those on the internet. The students directly engaged with Wikipedia, a source they had undoubtedly consulted at one time or another, and learned about crowdsourced history in addition to evaluating content that others had created. That course seems to strike a good balance between introducing students to the process of history, showing how digital media interacts with that history, and teaching the history itself. One key theme for teaching the digital humanities is balance. The instructor must balance sophisticated technology and methodologies with the digital media that students already use, and the course must balance the history itself, the content, with the process of making history and the active use of historical thinking skills.

Digital Public History

This week’s readings on public history raised a number of issues, namely defining the audience and the “public,” building and maintaining that community of users, and the ability of digital exhibits to promote access, encourage historical research, and preserve museums’ collections. The issue that stood out the most was one Sheila Brennan argued on her blog: putting history on the web does not make it public. Defining the “public” for a digital public history project should be an integral part of the project’s conception, planning, implementation, and outreach. That definition must carry through from the project’s beginning to beyond its launch if the project is to be useful to a community of users and to remain so. As Dan Cohen and Roy Rosenzweig state in their chapter on building an audience in Digital History, a website or project’s creators should see the users as a community, not as a monthly statistic of views or an unidentified “other.” Additionally, the digital public history project must be flexible in adapting to new audiences.

Histories of the National Mall is a great example of the public, the audience, driving the project. As Sharon Leon notes in her blog post for AHA Today, the project was driven by the need of visitors to the Mall to engage with its historical context as they traverse the site. Everything from the mobile platform to the content was planned and created with that audience in mind. As Sheila Brennan also stated in her article, graduate students were sent to test the site on the Mall continually. Users are free to navigate the site from their mobile devices anywhere on the Mall and to explore the content.

Cleveland Historical, the digital public history project from the Center for Public History + Digital Humanities at Cleveland State University, is an app created for and by the local community. In his article “Listening to the City: Oral History and Place in the Digital Era,” Mark Tebeau states that they “moved toward curating the city in collaboration with the community, rather than curating it for the city’s many constituencies.” The project keeps its core audience engaged by publishing new stories and by reaching out to a new audience of educational leaders. It is a good example of integrating the audience from the beginning and using their strengths and interests to build and promote the project. It is also an example of adapting to new audiences, or extending the existing community of users, through teachers and students using the app to create new content in the classroom.

More than once, communities of users have appeared that were not previously expected, and the project should foster that new audience. One example from Cohen and Rosenzweig’s chapter is the Library of Congress digitizing primary sources and building a public history site intended for researchers. In reality, teachers and students constituted its main audience, and the Library responded by adapting its website of digital primary sources to include digital tools and resources for those users. The digital public history project, whether a mobile app or a digital exhibit, should define the audience in the beginning stages, integrate that audience into the design and implementation, and adapt to new communities of users.

Databases and Audience

This week’s readings discussed the advantages and disadvantages of databases and searching for presenting and conducting historical research. One aspect of these web-based databases that intrigued me was the usability of the database’s interface and the intended audience. In his review of The French Book Trade in Enlightenment Europe, digital historian Sean Takats takes note of the database’s user interface. He observes that the search options, such as the numerous choices in the drop-down menus, could hinder easy and quick searches, particularly for people new to the technology, even though the project seems aimed at a wide array of scholars with varying backgrounds in digital media. Scholars are also increasingly employing these databases in their research, as Caleb McDaniel points out. How should project leads and database designers decide upon and ensure a user interface that is easy to learn and not overwhelming, yet academically sophisticated?

The Trans-Atlantic Slave Trade Database is an impressive project, providing information on the slave trade spanning centuries and geographies. It contains millions of records in two different databases, one on the actual voyages and trade routes and the other on the enslaved people themselves. However, the user is presented with an overwhelming amount of information upon just opening the site’s home page. It’s not easy to decide where to start, not just with the database search options but also with the instructions for the database. The project is not easily navigable for a scholar, or anyone, relatively new to digital research methods. The site contains a great deal of information about simply using the database, and perhaps the effort behind those lengthy instructions could have been channeled into developing a better interface or database design.

I’ve been thinking about databases more critically since I became involved with the Rebuilding the House for Families Database project at Mount Vernon. This project is both a source-based and a method-based database, in that only data relevant to the enslaved community on the plantation is entered. For example, a letter that George Washington wrote to his farm manager might contain one paragraph concerning his enslaved laborers; that is the only paragraph entered into the database. It’s still a while until the database is launched online for the interested public, but I’m always thinking about how it will look on the web and how the public will interact with it. We have discussed presenting some constructed narratives and sample search queries for those less familiar with the content and database structure, alongside the entire database open for analysis. I think this is a good idea, because the user interface becomes more accessible while still providing ample opportunity for new historical analysis. The key is to present a database with an interface that isn’t so simple as to seem constraining, nor so overwhelming that users less familiar with navigating databases are lost.
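As a purely hypothetical sketch of that kind of structure (the table layout, field names, and sample query are my own invention, not the Mount Vernon project’s actual schema), an excerpts table keyed to source documents makes it possible to offer both constructed starting-point queries and open-ended search.

```python
import sqlite3

# Hypothetical schema: only the excerpts relevant to the enslaved community
# are stored, each tied back to its source document.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE documents (id INTEGER PRIMARY KEY, doc_date TEXT, author TEXT, recipient TEXT);
CREATE TABLE excerpts  (id INTEGER PRIMARY KEY, document_id INTEGER REFERENCES documents(id),
                        text TEXT, topic TEXT);
""")
conn.execute("INSERT INTO documents VALUES (1, '1793-06-02', 'George Washington', 'farm manager')")
conn.execute("INSERT INTO excerpts VALUES (1, 1, 'Paragraph concerning labor assignments...', 'labor')")

# A constructed query that a less experienced user might be offered as a starting point.
for row in conn.execute("""
    SELECT d.doc_date, d.author, e.text
    FROM excerpts e JOIN documents d ON e.document_id = d.id
    WHERE e.topic = 'labor' ORDER BY d.doc_date"""):
    print(row)
```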

Crowdsourcing as Community Empowerment

A few weeks ago I came across the Library of Virginia’s Making History transcription project. Anyone with an internet connection can view digitized primary source documents, from nineteenth-century African American freedom suits to letters penned by Patrick Henry, and transcribe them for public viewing. The project displays the digitized document and a plain-text box side by side. The user interface is easy to navigate, as the project is mainly interested in extracting plain text for searching purposes. Other users can review the finished transcriptions, and the library staff then adds them to the digital collections. The project description states that crowdsourcing “empowers communities to make their own history” and that this cultural institution “supports this empowerment by inviting the public to be our partners in making our collections more visible and more accessible.” The volunteer’s engagement with the sources carries as much weight as, if not more than, the end result of better access to the documents.

The article “Transcription Maximized; Expense Minimized? Crowdsourcing and Editing The Collected Works of Jeremy Bentham,” from the Transcribe Bentham project, seems to rest much of the project’s success on the cost and time of producing quality transcribed documents. While this is an important factor in discussing and evaluating digitization projects, a more important factor is whether or not the project involved the public in a meaningful way. The authors state that only 259 volunteers did any actual transcribing out of the 1,207 who registered, but that is still significantly more people engaging with the history than the project’s two full-time staff members alone. The authors state that this 21% participation rate partly resulted from the complexity of the text encoding the project initially required of its users. The transcription tool, which would eventually provide TEI XML encoding for the documents, hindered some of the voluntary contributions.
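To illustrate the kind of overhead at issue, here is a simplified, hypothetical comparison (not an actual Transcribe Bentham document) between a plain transcription and one marked up with a few common TEI elements such as p, del, and unclear; asking volunteers for the latter is a substantially higher bar.

```python
# Plain transcription a volunteer might type (text invented for illustration):
plain = "The panopticon plan, as I first conceived it, was a simple one."

# The same passage with a few standard TEI elements a volunteer might be asked
# to add; the specific markup choices here are invented for illustration.
encoded = (
    "<p>The <unclear>panopticon</unclear> plan, as I "
    "<del>originally</del> first conceived it, was a simple one.</p>"
)

print(len(plain), len(encoded))  # the markup alone adds noticeable typing overhead
```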

In his blog post “Crowdsourcing Cultural Heritage,” digital scholar Trevor Owens states that “at its best, crowdsourcing is not about getting someone to do work for you, it is about offering your users the opportunity to participate in public memory.” I agree: public, volunteer-based projects conducted digitally through cultural institutions are a great avenue for community engagement with history. A lot of the focus in these projects is on the end result of better access to important historical documents, available online in one digital repository. People from seasoned scholars to young students of history can then engage with the same primary sources across cities and countries.

Yet the digitization of the sources themselves by a community of volunteers results in engagement on another level. As Owens suggests, instead of merely reading the texts, manipulating the data, and analyzing their historical meaning, voluntary digital transcription allows users to contribute to the furthering of scholarship and to “participate in public memory” by creating history. The tools involved in such projects matter, as they can deter possible contributors. The tools provided need to support the democratized environment that these community engagement projects present and depend upon.