Ernst is interested in reading history and memory into media objects and forms themselves. This becomes an archaeological project: by looking at long-abandoned forms of communication, there is much to be learned about how design standards and normalized processes fell into place. The objects and digital formats that store and communicate information carry these rich histories. Even as the book focuses on “digital memory,” the work attends to the physical media and processes involved in creating these digital artifacts. For my own work, I am particularly interested in adapting Ernst’s emphasis on close readings of media forms to build a deeper understanding of how memory is conceptualized and materialized through data storage formats. Parikka notes that this approach allows Ernst to move away from the sociohistorical contexts of the devices he studies, and in doing so to avoid the “messy politics of technology.” I would argue that these contexts are inextricable from the kinds of forms Ernst studies, and that in studying digital archival forms and their origins, one must consider the social and historical contexts of their emergence in order to fully understand their design and uses.

I was fascinated by Trettien’s “Deep History of Electronic Textuality,” as it sets the process of digital reprinting within the much longer history of analogue reprinting, describing the ever-shifting materiality of textual works. The move to print-on-demand (POD) facsimiles shifts the process from repackaging to reproducing, thus “historically remediating” older works. These digital reproductions (particularly OCR reprints) carry residue from their physical origins, whether the formatting of the scanned edition or the remnants of previous readers, such as underlines and marginalia. When converted back to digital text, the scans often yield fragmented words and other errors that make the reader wonder where in the process of production each error arose. A typo in the original book? A smudge on the page? Low-contrast text that the algorithm failed to recognize? Trettien argues that these errors “dislodge the reader from her passivity,” returning her to the active role that Milton imagined.

How can we imagine digital humanities projects that acknowledge these kinds of residue across reproductions of texts? How could a historical project acknowledge these changes without merely bringing them into (or reproducing them within) its own digital context? Perhaps one would need to develop a “digital” humanities piece that also involved physical editions of the texts, allowing for cross-media examination.