Commentary
-
response // 11-19
It feels like media archaeology, according to Wolfgang Ernst’s theory, is always at the “in-between”. It isn’t satisfied with the mainstream subjective, narrative voice of media studies and history, while on the other hand it is also ambiguous because of its vague definition in academia and its approach to the multi-dimensional aspects of the humanities. I enjoyed the fact that Ernst focused greatly on “investigation” and “technicality” within this fluid subject, treating media as a somewhat “material” archive in order to study its interventions, mechanisms and evolution in space and time, and the extent to which it participates in changing our bodies, senses and cognition. However, the definition is still confusing: archaeology is about preserving artifacts and establishing relationships between them and past cultures, yet media archaeology, as it inevitably involves modern-day perception and understanding, is bound to add a new layer of subjectivity in the process. In addition, if past media is recreated and celebrated using modern technology, then to what extent does it become a process of mimicry? This also relates to the ideas of electronic textuality and the “primary record” in both of the digital humanities readings. I was especially intrigued by the example of the textual materiality of the image swapping upon click (the Milton text), and I think it is a very powerful demonstration of the paradoxical argument of context vs. content in media archaeology. How does software culture change our understanding of media? How is the interface and content-development toolbox continuously reshaping the aesthetics and visual language of contemporary media design? And is that a good reflection of the humanities as a technology-driven machine? Material restoration is the primary focus of traditional, conventional archaeology, while searching for heterogeneity and reassessing its value seems to be the primary value of media archaeology.
-
Ernst & Kirschenbaum: Reading Commentary
Wolfgang Ernst: Digital Memory and the Archive
Ernst’s perspective on media archaeology emphasizes “microtemporality”, a term he uses to describe how technical memory is a constant, active process rather than a stable and permanent object. I appreciate that he provides a reverse method of examining media archaeology, where he considers the object that creates the archive itself and how it impacts the archive, or “memory”.
Matthew Kirschenbaum: The .txtual Condition
This work addresses, and also references, the concept of a fluid archive that Ernst introduces in his work. Kirschenbaum makes a point that in today’s world, the difference between preservation of a digital object and its creation has become quite blurred due to the accessibility of immediately created digital objects.
-
Digital Archives
The readings offer a new perspective to me, as I often think of media as pure content deprived of its materiality. But after all, no content can be delivered without a coded format, which is bound to a certain type of decoding. Taking that perspective and looking over a longer time span, you start to realize how ephemeral the embodied technologies are: one moment we’re used to 3.5-inch disks and think they might be there forever, and 20 years later we can’t even find a computer that has a drive to read CDs.
It seems like the digital archiving activity itself then needs to be constantly renewed to catch up with the evolution of technology. With all the individuals constantly documenting their lives on earth in various mediums, you would expect human history to be readily evidenced and accurate from now on. But we find ourselves facing the fragmented realities caused by the lost objects of transitioning technologies. Are we able to carry them along with us and pass them on, or will they be lost like all the rest of the fragments that we dig out from underground, carrying pieces of truth about the lives of our time?
Maybe one day artificial intelligence will also be dedicated to the archiving of human histories. Will we then be relieved of the duty of deciding what is important enough to be documented and kept accessible, or will the task of designing and maintaining such a system cost so much that the loss of information is still inevitable?
-
11-19 Response
Ernst is interested in reading history and memory into media objects and forms themselves. This becomes an archaeological project; by looking at long-abandoned forms of communication, there is much to be learned about how design standards and normalized processes fell into place. The objects and digital formats that store and communicate information can offer these rich histories. Even as the book is focused on “digital memory,” the work is focused on the physical media processes involved in creating these digital artifacts. For my own work, I am particularly interested in adapting Ernst’s emphasis on close readings of media forms to build a deeper understanding of how memory is conceptualized and materialized through data storage formats. Parikka mentions that this approach allows Ernst to move away from the sociohistorical contexts of the devices that he studies, and in doing so avoid discussing the “messy politics of technology.” I would argue that these contexts are inextricable from the kinds of forms that Ernst studies, and that in studying digital archival forms and their origins, one must consider the social and historical contexts of their emergence in order to fully understand their design and uses.
I was fascinated by Trettien’s “Deep History of Electronic Textuality,” as it set the process of digital reprinting in a much larger context of analogue reprinting, describing the ever-shifting materiality of textual works. The move to POD facsimile shifts from repackaging to reproducing, thus “historically remediating” older works. These digital reproductions (particularly OCR reprints) include scans that carry residue from their physical origins, whether that is the formatting of the scanned edition or remnants of previous readers like underlines and marginalia. When converted back to digital text, these scans often include fragmented words and other errors that make the reader wonder where in the process of production these errors arose. A typo in the original book? A smudge on the page? Low-contrast text which the algorithm failed to identify? Trettien argues that these errors “dislodge the reader from her passivity,” and back into the active role that Milton imagined.
How can we imagine digital humanities projects that acknowledge these kinds of residue across reproductions of texts? How could a historical project acknowledge these changes without merely bringing them into (or reproducing) its own digital context? Perhaps one would need to develop a “digital” humanities piece that also involved physical editions of the texts, allowing for cross-media examinations.
-
11/19 Reading
The .txtual Condition
For me, one of the most interesting parts of this piece was how the usage of the word “archive” in the English language has changed to reflect some of the technological advances that came with the 20th century. Typically, when I hear the word archive, or think of an archivist, they seem like concepts that I am far removed from. Despite being aware of the increasing amounts of data created by our phones and computers and some of the problems that come along with storing and recording it, there never was a link for me between preserving this data and archives. While I think it’s interesting to consider how to store, access, and engage with this information, to a degree I also wonder how much value is added by doing so. In some cases, I think certain behaviors haven’t changed, but we now have a bigger capacity to store information, and I don’t know that it will always bring about new insights.
-
Zucker_Readings 11/18
The reading ‘Archival Media Theory’ begins by introducing the reader to a very important concept I want to open by discussing: the idea that while most people may not be actively engaged with archives, they are in fact familiar with them, due to archival language being incorporated into everyday tasks such as archiving emails. This notion that we utilize language from other forms of representation without being aware of its origin is important to note, as much of how we exist in the world and relate to the knowledge around us is informed by language we have embodied subconsciously. The author continues with the exploration via Ernst’s theory of definitions and relations to objects, for instance. The author poses ‘a question of where do we want to start?’, which is valid in many instances. One, in relation to archives, being the value of history and bringing past histories, often buried in archives, to the contemporary time, where that history may be deemed irrelevant. In order to understand where we came from and how we came to be, it is important to understand history and in particular the histories we are unfamiliar with, as they only exist in archives.
In contrast, the reading ‘A Deep History…’ discusses the recent trend of online publishing of out-of-copyright literary editions and the challenges it presents when discussing digital text. It builds on the reading ‘.txtual Condition’ and the larger topic of what the digital humanities are as they relate to digitizing archives, texts, and editions of literature, and the ethics behind the action, particularly as it relates to academia. The author frequently uses negatively associated words when referencing the subject, such as ‘digital pollution’ and the status of ‘facsimiles’. The mechanics of doing so, I feel, asserted an opinion in the piece that did not allow me as a reader to come to my own conclusions. To summarize, in my experience, the key argument the author makes is similar to one about reprints of famous artworks. While the representation and content are the same and true to the original form/intent, the reproduction calls into question the access, intent and purpose behind the original form. Maybe the author was less concerned with the reproduction of a text and more with limited access to technology and funds, resulting in fewer copies or singular editions? One might consider this. While the question may never be answered, the argument posed by the author is a key one I think about in regards to digital humanities, and the piece was a concrete example of one application.
-
Text mining
Serving as a clarification of text mining for people new to the topic, ‘What is Text Mining?’ provides Hearst’s definitions to differentiate text mining from related tasks with easy-to-understand analogies. To me this is something clear when you read it, but not that easy to come up with if you’re trying to articulate the concepts yourself. In ‘Untangling Text Data Mining’ Marti Hearst further contrasts different applications of non-textual and textual data processing tasks in terms of what kind of information is extracted at the end, where real text data mining lands within finding ‘nuggets’ with novel conclusions. It is a very nice conceptual model to start with, although I wonder if it’s too ideal to say that only a result with new findings constitutes real text mining: the attempt may not always end up in the result wanted. Should these attempts be rejected for not having the right results? Some boundaries may be blurry. For example, before secondary patterns are discovered, standard classification and segmentation may need to be performed. And sometimes new information doesn’t imply a meaningful finding. Moving beyond the terminologies, these classifications may be better discussed on a case-by-case basis.
In ‘”Raw Data” Is an Oxymoron’, the final part introducing ‘dataveillance’ particularly interests me. It goes back to our previous readings and discussions about big companies’ monopoly on data access. The competition of data accumulation has already begun. I’ve seen the phrase ‘digital twin’ being used in many industries. For instance, in healthcare, it refers to building a digital version of every person, where simulation of body conditions is possible for the prevention and treatment of illnesses. As promising as it may sound for more precision in treatment, whether our identities will be used in the wrong way when they’re in the wrong hands is another topic. And it’s probably just a matter of time before such a thing happens.
-
Text Mining Readings
Reading… into Text Mining
Starting with Marti Hearst’s ‘What is Text Mining?’, one can gain a fundamental understanding of the concept behind computer mining of text and its various applications for text extraction. When the words machine learning enter a conversation, my gut instinct is to cringe. Not because I believe it is wrong, unethical, or useless, but rather because I fear the removal of the human from the process and the care that comes with human oversight. While human error is oftentimes quite large, human nature plays a major role in social science, and understanding the person behind the text or data is always important. Hearst speaks to this when discussing the difference between data mining and text mining. In text mining, for instance, she states that it is based on natural language text instead of structured databases and argues this is because text is meant for people to read rather than to process automatically. This statement quickly put my apprehensions to rest and allowed me to objectively read and understand the entirety of the article. Furthermore, her analogy referencing crime fighting and the difference between discovering new knowledge vs. showing trends allowed me to apply her viewpoints to my personal work, which I appreciated. She strikes a balance between providing information and insightful critique that allows a reader like me, who is new to the topic, to feel as though I have a grasp on the general concept; an effective writing style, to say the least.
For the second reading I decided to read Marti Hearst’s ‘Untangling Text Data Mining’ in order to further my knowledge on the topic and explore the effectiveness of her writing style. This article, in terms of chronology, was written four years prior to the one mentioned above. She begins with the metaphor of finding information that meets the needs of a researcher as ‘looking for needles in a haystack’, and honestly I couldn’t agree more. The ability to apply a tool such as text data mining to initial research procedures could greatly reduce the amount of time spent jumping down rabbit holes, so to speak. On the other, more critical hand, the inherent biases that exist with computer text data mining are hard to overlook. As the article suggests, there are different ways for computers to extract text, and the one which might be more successful in my opinion is categorization. Similar to applying tags or subfolders in archives, by starting with categories the researcher would be able to navigate their personal bias rather than have it imposed from the beginning by a computer. Again, this brings up a range of issues; however, the ability for human guidance is important. One area of the article that I found difficult to overlook in terms of proving the applicability of the technology is the section addressing how to use text to uncover social impact. The case study references the mix of operations leading to the results and shows how this process can both simplify and complexify an argument. On one hand, it was able to process thousands of documents, but on the other, as noted by the author, much of the work was done by hand due to the data not all being available online. This speaks to my apprehension about applying this tool to history or non-contemporary research, as much of the documentation is handwritten or exists only in print. Lastly, she speaks to the untapped resource of large text collections and the need to not simply rely on artificial intelligence, which made for a rounded argument.
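To illustrate what I mean by starting with categories, here is a minimal sketch of supervised text categorization (my own invented documents and labels, not an example from Hearst), where the researcher defines the categories up front, much like applying tags or subfolders in an archive:

```python
# Minimal text-categorization sketch (illustrative only): assign documents
# to researcher-defined categories, as one might tag items in an archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Invented training data: a few labeled snippets per category.
train_docs = [
    "ledger of shipping manifests and port duties",
    "customs receipts for imported cloth and grain",
    "letter describing the harvest and family news",
    "personal correspondence about travel plans",
]
train_labels = ["commerce", "commerce", "correspondence", "correspondence"]

# TF-IDF features plus Naive Bayes is a standard, simple baseline.
model = make_pipeline(TfidfVectorizer(), MultinomialNB())
model.fit(train_docs, train_labels)

# Categorize new, unlabeled documents into the human-chosen categories.
print(model.predict(["invoice for wool shipments", "a note to my cousin"]))
```

The design point is that the categories themselves remain a human judgment; the machine only sorts into them.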
I thoroughly enjoyed the third reading, ‘“Raw Data” Is an Oxymoron’. The author, Lisa Gitelman, writes in an almost satirical voice to reiterate the complexity of defining and categorizing various types of data, using words like ‘sexy’ and ‘silly’ and comparing raw data to ‘jumbo shrimp’ in an effort to pull the reader into a critical discourse on contemporary applications of data. She takes us on a historic journey of data categorization from the 1960s until the present day to discuss various professions’ viewpoints on how data is used and spoken about. In particular, I enjoyed her view on the reduction of literary objects to ‘graphs, maps, and other data visualizations’; it was an important point that speaks to the resistance in some fields vs. others. The need to create universal definitions for terms is important and one which I constantly grapple with in a cross-disciplinary field. Her ability to represent various applications and points of view in a concise way allows the reader to see the different vantage points on the topic and come to their own conclusions. This is all said prior to her deep dive into objectivity, which creates a new layer of complexity that was more difficult for me to untangle. All in all, I think the paper provides a wonderful introduction to raw data and the complex discussions and arguments that arise when digging further into the topic and questioning the intent of its very definition.
-
Text Mining
In “What is Text Mining?”, the advantages and disadvantages of text mining are outlined clearly. It is a step closer toward understanding natural language processing methods, while it is still a long way from extracting insights rather than already-known facts. It raises a good question of how this technology can be used as an effective and accurate method of harvesting information in humanities studies, where a lot of the data are spoken words and behaviors of people that are not recorded in easy-to-parse verses. But what’s intriguing about text mining is that it lies in the middle ground between subjectivity and objectivity. Expression in textual form is still something that is dependent on the individual, yet analyzing the repetition and occurrence of key words and phrases in a “dry” and objective form creates a different narrative than reading through these texts themselves. MALLET, for example, outputs a summary of the topics extracted from the text, and that summary exists as a stand-alone, algorithm-generated interpretation that is neither the original text nor an opinionated response. While ‘Untangling Text Data Mining’ describes the heterogeneous mix of operations used in real efforts to extract information, the Alien reading was a lot more interesting, as it talks about the implications of context in relation to culture and history. It’s hopeful, in the sense that it hopes the unknown possibilities of natural language and humanities analysis can influence the way we act and record, hence paving the way for a future where our records are more closely aligned with what a machine is capable of processing.
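To make the MALLET example above more concrete: the sketch below is my own illustration, not MALLET itself (MALLET is a Java toolkit); it uses scikit-learn’s LDA implementation on an invented four-document corpus to produce the kind of stand-alone topic summary described.

```python
# Sketch of LDA topic extraction: a stand-in for MALLET-style topic summaries.
# The documents are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "the archive stores memory in magnetic media",
    "digital memory decays as storage formats change",
    "the poem speaks of rivers and autumn light",
    "light and shadow move through the verse",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[::-1][:4]]
    # The printed summary is neither the original text nor an opinionated response.
    print(f"topic {i}: {', '.join(top)}")
```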
-
Text Mining Reponse
While Hearst’s piece was a useful primer on the initial concepts involved in text mining, she ends her piece by saying that the fundamental limitation of text mining is that “we will not be able to write programs that fully interpret text for a very long time,” a concept which I think completely obscures the meaning and use of these text mining tools. Binder’s chapter on Alien Readings addresses the fundamental issue with this statement. Ultimately, there is no “full interpretation” of any text, not one that any single human could ever generate, let alone any machine. But when thinking beyond the human, the drive to use algorithms to understand text based on any concrete semantic understanding will always be futile, as the tools themselves will always have tendencies toward interpreting text in various ways based on the kinds of language and writing style for which they were designed, among many other implicit biases. In “Alien Reading,” we have the example of Latent Dirichlet Allocation (LDA), which is based on research from a DARPA initiative in the mid-nineties to conduct topic-based text analysis of news feeds. Though the updated topic-modeling tool can provide useful interpretations of texts beyond news feeds, all the way to eighteenth-century essays, it breaks down when looking at various forms of poetry.
Binder’s point here is one that is quite useful not only for this context, but particularly when looking at contemporary modes of machine learning and creation beyond text. He suggests that any output from these tools can absolutely have use and provide additional interpretations, but should always be understood within the context of the tool itself. Any text-scraping analysis can only be taken as part of the “truth” of a given text within the context of the tool that is being used to scrape and analyze. This idea isn’t particularly new – just as one would expect a psychoanalyst to provide a different reading of a text than a biologist, one must take these interpretive tools as interpretive! This understanding of the situatedness of tools for analysis is a practice that is pushed by both STS and Media Studies, and I think the tendency to forget this property of machine readings is one of the remaining frictions from the 20th century that we are slowly overcoming. I think there could be (and likely already has been) fascinating work comparing the outputs from a variety of different textual analysis tools when inputting the same text. I would be interested to examine some of these examples.
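As a toy illustration of what such a comparison might look like (my own sketch with an invented corpus, not an example from Binder): two decomposition methods run over the same documents will often surface different “topics,” each reading shaped by the tool’s own assumptions.

```python
# Two different machine "readings" of the same tiny, invented corpus:
# LDA (probabilistic) vs. NMF (matrix factorization). The point is only
# that the two tools can disagree about what the "topics" are.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation, NMF

docs = [
    "war and peace in the old empire",
    "the empire taxed grain and salt",
    "love sonnets written by candlelight",
    "a sonnet on grief and candle smoke",
]
vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
terms = vec.get_feature_names_out()

def top_terms(components, k=3):
    """Top-k weighted terms for each extracted topic."""
    return [[terms[j] for j in row.argsort()[::-1][:k]] for row in components]

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
nmf = NMF(n_components=2, random_state=0).fit(X)
print("LDA topics:", top_terms(lda.components_))
print("NMF topics:", top_terms(nmf.components_))
```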
-
Marti Hearst: Reading Commentary
What is Text Mining?
In this text Hearst defines text mining and distinguishes it from both web search and data mining. They also explain how it has been applied to various fields, specifically highlighting its frequent and promising use in bioscience. I previously had the misconception that text mining is solely used in the humanities and linguistics, so I was surprised that Hearst actually saw its value in science.
Untangling Text Data Mining
Although many researchers in fields related to computational linguistics and NLP are focusing on developing complex methods to analyze text effectively, Hearst makes a compelling argument that this may not be necessary to find significant patterns about the world. They cite various examples of successful text mining using simple methods, ranging from applications in health to the American patent community. It seems that if a researcher has a refined approach and knows what type of trends to look for, they would be able to effectively use simple text mining to uncover interesting and/or important correlations.
-
Anna 11/12 Reading Response
Both Hearst and Gitelman provided important background information and challenged the assumptions we may have about data. Hearst’s essay was relatively straightforward but was still very useful. I know that I have merged what Hearst calls “real” text mining with what she calls “approaches that find overall trends in textual data.” Based only on this piece, I can’t say exactly where the boundary is between these two types of work (and I’m sure there’s a grey area), but I do think it’s important to keep in mind that text mining is more specific than we often consider it to be. I’ve read the Gitelman introduction before, so it’s hard to respond to on its own, but I think it presents a lot of really useful ideas that set up discussions of data well. It makes sense that it’s referenced so often! She really emphasizes the agency that exists behind any data, whether that be in capturing or in mobilizing data graphically.
I found the Binder piece to be especially relevant to my own work. I definitely have used text mining carelessly in the past, focusing more on the implications of my results than on the implications of the method I used to get those results. The most alarming examples, for me, were those about ‘non-standard’ speech being excluded from a dataset.
As with a few of the pieces throughout this semester, I found myself agreeing with the premise, but wondering how the presented ideas would work in practice. I completely agree that humanists need to engage historically and critically with the tools we are using, but I am unsure how to do that without reducing that engagement to either a cursory comment/acknowledgment of the problems or to a significant diversion from a piece of scholarship. I don’t know how to engage in the type of interchange with media studies that Binder (and Alan Liu) suggest while maintaining a coherent humanistic argument, at least in shorter pieces of scholarship.
The closest I’ve come to doing the type of work Binder suggested is in a project last year on the Russian elegiac canon, which does not seem far from his example of Lisa Marie Rhody’s work. I think it was possible to do so as I was looking at a closed set of data and did not make claims about Russian speech or even Russian poetry. My paper listed the most commonly used “unusual” words, then explored why each of them was used in different contexts, how different poets used these words, and when they became part of the canon. It also looked into poets whose individual elegiac canons significantly differed from the ‘traditional’ elegiac canon. That said, this entire paper was about the sort of problems addressed here, so I’m not sure how one would be able to incorporate text mining strategies successfully, and ethically, without going into each word’s specific uses, contexts, and connotations. I’m sure that I have thrown in word frequencies or collocations as evidence in papers other than this elegiac one without fully thinking of the necessary explanation. Furthermore, even in this paper, I definitely didn’t fully explore the historical and cultural implications of text mining as a practice.
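For what it’s worth, the mechanical core of the frequency and collocation counts I’m describing is simple; here is a standard-library-only sketch over an invented mini-corpus (not data from my actual paper):

```python
# Stdlib-only sketch: word frequencies and adjacent-word pairs
# (crude collocations) over a small invented corpus of lines.
from collections import Counter
import re

poems = [
    "cold rain falls on the silent grave",
    "the silent grave keeps its cold secret",
    "rain and grief over the grave",
]
tokens = [w for line in poems for w in re.findall(r"[a-z]+", line.lower())]

freq = Counter(tokens)
# Note: this naive bigram count ignores line/poem boundaries.
bigrams = Counter(zip(tokens, tokens[1:]))

print(freq.most_common(5))     # most frequent words
print(bigrams.most_common(3))  # most frequent adjacent pairs
```

The hard part, of course, is everything the counts leave out: each word’s specific uses, contexts, and connotations, which my paper had to work through by hand.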
Across the pieces, something that seemed unfortunately consistent was, as Hearst writes, that “to get further though we need more sophisticated language analysis.” This is related to her point, and Binder’s, that the tools we have work best for certain types of texts and fields, especially in the sciences. This semester of digital humanities has definitely emphasized that nuance is important to humanistic inquiry, and that is definitely true in text mining.
-
Rodighiero response
“Printing Walkable Visualizations”
To me, the most interesting parts here are when people became upset that they weren’t represented in the visualizations; easily corrected in a digital environment, but when made physical, it is more permanent (or at least costly to amend). Thus when Dario visualized the collective, he was also, by necessity, excluding some people. I wonder what it could have looked like if it were more flexible, with the option of adding and removing people. Otherwise you end up with a defined, fixed, bounded group rather than a representation of the real ever-shifting, growing and shrinking network.
“Mapping as a Contemporary Instrument for Orientation in Conferences”
I was wondering what is meant by “Documentality.” Is there a definition to be included in later drafts? I like the idea of looking at the specific language used in conference publications as more than just a means to express an idea, but also as the expression of aspects of the author. Maybe that is what is meant by “Documentality.” I think there is more to unpack here too: institutional constraints, advisors/editors/coauthors perpetuating a certain standard language, etc.
I also noticed that the map is “dynamic and interactive” and as a digital artifact it can be amended. I wonder if that is a result of experiencing people’s feelings about exclusion from the previous project.
“The Daily Design of the Quantified Self”
I think this is really interesting, to unpack what it means for people to be constantly measuring, recording, quantifying themselves, knowingly and unknowingly. I see a couple of Foucault’s concepts coming into play here:
-
Technologies of the self: The idea of “self-design” attributed to Boris Groys (p.5) seems to get at this. Foucault writes that technologies of the self are those techniques “which permit individuals to effect by their own means or with the help of others a certain number of operations on their own bodies and souls, thoughts, conduct, and way of being, so as to transform themselves in order to attain a certain state of happiness, purity, wisdom, perfection, or immortality” (Foucault 1982). So where do those techniques, and the definition of those states (happiness, purity, etc) come from?
-
Governmentality: Foucault defines governmentality as the “contact between the technologies of domination of others and those of the self” (Foucault 1991 via Esteban-Guitart 2014). This is where others come in to make you regulate your own thoughts and behaviors, through the enforcement of norms. Governmentality is dictating to others what to aspire to and how to get there; in other words, the sanctioning of technologies of the self. It is a way to maintain power over people by making them obsessed with self-improvement according to a defined standard.
Foucault argues that these processes are how notions of self and identity are formed in society. And I think they are relevant here because we are talking about self-governing techniques. In this case it is a process of continuous recording that allows one to govern (“design”) oneself. Foucault actually uses the example of Marcus Aurelius to explain technologies of the self: recording his actions was a way for Marcus Aurelius to confront himself with the difference between thoughts and actions, and discipline his actions to be more in line with his thoughts (Foucault 1982). So is this really about design, or is it about governance and discipline? Is there a meaningful difference?
I also wanted the “unknowing” part of this to be drawn out a bit more, the nonconsensual or dubiously consensual collection of biodata, private communications, and usage patterns. But maybe that’s a topic for another paper entirely. Books have already been written on the subject, at any rate. But some of this is kind of dystopian. (For some reason the Biological Passport struck me as especially dystopic, but I could see how it’s useful to a sports team.)
-
-
Response : Rodighiero
It’s always fascinating when we begin to look at visualization across scale, both in the context of physical scale (the size of the representation and the impression that it leaves on viewers) and contextual scale (how complex or structured the visualization is, and how good it is at procedurally displaying the hierarchy of information rather than just mindlessly laying it out). Maps at large scale indeed present a visually impressive visualization, which allows the creator to easily encode multiple layers of information. I definitely agree with some of the limitations and challenges raised, and want to add that there are many perceptual issues associated with the representation: when viewers are not able to see the full picture (literally, in this case), we should question whether the visualization still has the intention of presenting data, or whether the intention has switched to creating data instead. However, one thing that I do like about the real-scale visualizations is that they push the viewer down to the data level, and eliminate the hierarchical bias the viewer might have when looking from a bystander perspective. The DH2019 draft was a little bit hard to understand, as the visual correlation between the multiple layers of information isn’t super clear, given the fact that the texts don’t align well with the boundaries, and the colors are more of an aesthetic choice than a functional one. However, I am fascinated with the idea of putting elevation data on top. It’s interesting that the conclusion starts with “Language is not only a means to convey one’s ideas, but also to express taste and background.”, which I feel slightly contradicts the idea that the goal of this visualization is to reflect a dynamic community, given that this seems to suggest a direct correlation between visual representation, interpretation, and “taste and background” with pre-existing author bias (but it could just be me).
-
Nov 5 reading comments
Seeing the building of a database and the visualization of data as two separate tasks, a lot of the issues we’re facing with data visualization are actually due to the lack of an organized database. Sometimes the pursuit of a good visual ends up being merely about format rather than good communication.
The walkable affinity mapping is an interesting experiment, as such scale is not often explored in data visualization. Interaction is often associated with pre-programmed animations imposed on the visuals by the designer, triggered by the viewer within the screen space. But in this case the data stayed passive, and interaction is only suggested by human movements and perceived subjectively from each person’s perspective. The control of the dynamics is now in the viewer’s hands, with various possibilities. But because it is static, the flexibility of dynamically responding to new inputs and simulating relationships is eliminated.
Reading the paper on the design process, the map is trying to incorporate many levels of information and affinities into one graphic. Looking back on the topic of affordance and the three questions they were trying to answer at the beginning of the project, I wonder how intuitive it is for people to read the rather complex diagrams and find out about affinities at first glance. Without a little explanation, the graphics may need some time to be understood properly, for example the meaning of the thickness of the rings and arcs.
-
11/5 Reading Commentary
I appreciate that both readings, but especially Printing Walkable Visualizations, were rooted in such concrete examples. In doing so, I felt they provided more useful context about the usage of such visualizations than we have often seen in previous readings in this class. Many of the readings we have done for this class have focused on identifying limitations and opportunities for improvement in certain visualizations, but then don’t offer an alternative, or those that are offered don’t seem nearly as practical. By contrast, Rodighiero took the time to explain why different choices were made, and also highlighted considerations such as the choice of materials to make a walkable network and the difference in price to install them. I found it much easier to wrap my mind around the concepts proposed than around other, more abstract examples.
-
Zucker_Reading Response 10/4
In the first reading, ‘Mapping as a Contemporary Instrument for Orientation in Conferences’, I found the discussion quite fascinating in connecting theory to conference visualization; however, I think the graphic itself is somewhat confusing. With little background information, it is difficult to understand how the zoom levels correspond with one another, as they disappear when zooming into another layer. The act of disappearance without an obvious or linked connection requires one to constantly zoom in and out to orient oneself, which became frustrating with time. Lastly, the author states that the rich information now dynamically presented as a map shows the community; however, I might argue the map only shows a portion of the community, as many attendees were undoubtedly unpublished but key actors in the community.
In the second reading, ‘Printing Walkable Visualizations’, the author begins by discussing a graphic at a DH conference that showed the networks of attendees. As mentioned above and by this article, complaints regarding the exclusion of non-published attendees further reiterate the need for a more inclusive dataset to populate the map. I appreciated the article’s discussion of theory about the environment, but again believe that inherent bias in this process may have been overlooked. When the topic of affordance is raised, for instance, the author states ‘more precisely, it refers to all the opportunities that a thing, a person, or a space makes available to others’. When the opportunity only presents the names of published individuals, the environment performs a bias, excluding many attendees from said community. The article focuses on physical case studies for visualization, which is important to note as technologies advance and, as noted, digital projections and data visualizations are available. It would have been interesting to see, had both case studies been projected digitally, whether their adaptation to public comment and feedback throughout the events would have allowed for a more inclusive network and community.
In the last reading, ‘Mapping Affinities in Academic Organizations…’, I struggled with similar critiques in regards to affinity mapping, as it leaves out many scholars who may be underrepresented in academic publishing, such as undergraduate and graduate researchers. On the other hand, I appreciated the way in which the authors provided context for the historic arc of this kind of mapping by providing background on the development of ideas, rather than primarily providing definitions of new terms as in the previous article. The entirety of the article presents very technical approaches to the concept of mapping affinities, which is helpful for readers with limited knowledge of the topic and Digital Humanities in general. The ability to unpack the methodologies used for the case studies and topic, combined with the interviews presented at the end, provides a more human understanding of an otherwise complex topic.
-
Rodighiero & Moon: Reading Commentary
Chloe Ye-Eun Moon & Dario Rodighiero: Mapping as a Contemporary Instrument for Orientation in Conferences
It is very interesting that the language “signatures” of authors of conference articles can be used to find links between authors, rank the strength of these links, and finally use this information to create a spatial visualization of the authors. It seems pretty obvious to consider the individual authors as nodes which can connect with one another as a network, since we tend to use this model when describing interpersonal relationships with words such as “social networks”. However, I was surprised that an elevation map was created out of this data, since it seems to be a less obvious, more creative representation of it.
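As a toy illustration of the general pipeline (certainly not the authors’ actual method, and using invented author names and texts), the linking step could be approximated by comparing word-use “signatures” as TF-IDF vectors, with cosine similarity as link strength and the strongest pairs kept as weighted edges for a spatial layout:

```python
# Toy sketch of linking authors by textual "signature" (not the paper's
# actual method): TF-IDF vectors per author, cosine similarity as link strength.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = {  # invented authors and texts
    "author_a": "network visualization of scholarly communities",
    "author_b": "visualizing communities and networks at conferences",
    "author_c": "archival theory and the materiality of media",
}
names = list(abstracts)
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts.values())
sim = cosine_similarity(X)

# Keep pairs above a threshold as weighted edges for a network layout.
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.1:
            print(names[i], "--", names[j], round(float(sim[i, j]), 2))
```

A real pipeline would add stemming, shared-keyword and co-authorship signals, and a layout step to turn the weighted edges into the elevation-map rendering described.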
Dario Rodighiero: Printing Walkable Visualizations
I appreciate that Rodighiero thoroughly examines challenges that large-scale, walkable visualizations face, such as limits on zooming out and multiple-user interaction. In addition, he notes challenges that visualizations of interpersonal relationships in general face, which mainly stem from how the individuals represented in them respond to them. I agree with Rodighiero that walkable visualizations can be compelling and engaging in ways that no other forms can, due to the scale, the multi-user capability, and its public quality. However, it seems that the production of these require many significant considerations in terms of economics, user experience, and technology. I am curious to see how walkable visualizations can advance in the future, and how creators of visualizations involving human relationships tackle the challenge of how to effectively and sensitively represent them.
-
Anna Ivanov 11/4 Response
I really appreciated the emphasis on process in all of these articles. They clearly illustrated the work that went into each visualization, as well as the inspirations, complications, and collaborations involved in that work. It was evident that everyone working on these projects had thought seriously about the users and implications of their work, and they explained it very clearly for someone unfamiliar with these projects, data or institutions. I was especially impressed by the honesty in which they discussed their evolution throughout the working process, as they learned which factors to consider more and which to leave behind.
I do feel that maps at this scale somehow leave something behind, even when claiming to focus so heavily on the individual, although I don’t know how to articulate what that is. I think this mostly comes to light in the discussion of possible affinities, as it seems impossible to truly map on this scale while still representing actual human connection in all of its forms. They address this by emphasizing that their work only utilizes that which leaves a digital trace, but I was still left feeling that the possible affinities were not complete representations. That said, I don’t think any of the visualizations are claiming to represent everything about an institution, nor are they attempting to craft truly individual narratives.
I’m curious about how these visualizations work in practice, especially on the level of a poster or a printed image. In the printed articles, it was clear that zooming would be necessary to understand all the information provided by the visualization, so I don’t see how all of that information would be accessible in a static image. I appreciated the idea of balconies above a static image on the floor, but that still basically only allows for two levels, whereas a dynamic, digital visualization allows the user to engage with levels and layers of information with much more nuance. I also was wondering whether such maps would work in non-academic settings, and if so, what sort of parameters and digital traces would be necessary to create them. Even in my own field, I find that people collaborate with others who work on very different topics from their own, and I don’t know that this would be represented in the sort of visualization that focuses on things such as keywords, institutions, and publications.
As I mentioned before, I felt uneasy with some of the ideas of representing individuals through large-scale visualizations. The authors addressed it at some level, but I think the implications of such maps could be not entirely positive. As far as I know, publishing and collaboration are still connected with bias in many institutions. I’m worried that in such a graphic, they would be emphasized even more than they are already, so members of an institution could be affected negatively for not having access to them. Of course, the article pointed out ways in which bias could be minimized through the use of their visualizations, and I’m sure there are many benefits, but with anything that represents nuanced activities through visualization, I’m worried about what can’t be represented.
These articles worked very well together to give a historical and theoretical understanding of the methods used, an exploration of their usage in this project, and a very clear explanation of their implementation and implications. I previously knew very little about any of the techniques or technologies mentioned, and they definitely succeeded in provoking my interest in them.
-
Tufte is undoubtedly the guru of modern-day visualization. All three chapters (color, narrative, layering) provide arguments and examples that one can readily agree with. However, one thing that Tufte rarely touches upon is the visualization context of the graphical representations he proposes, or the subjectivity of visual language. The cognitive and perceptual techniques he proposes are effective methods of distinguishing hierarchical data, yet they narrow the presentation of information to one method.
The potential of spatial humanities is limited by current, existing technology. GIS has potential, and holds massive amounts of information that can act as the background to humanities research. Yet I don’t believe that the platform should be pushed out of its current, absolute context (reliable accuracy) to accommodate a more fluid purpose such as humanities research.
Questions:
- Do you believe that Tufte’s principles of visualization are universal?
- How do we determine whether something is a research project vs. a representation project?
- Maps Activity:
https://cdn.mbta.com/sites/default/files/maps/2019-04-08-rapid-transit-key-bus-routes-map-v33.pdf
https://www.vanshnookenraggen.com/_index/wp-content/uploads/2012/04/Anim.gif
https://fathom.info/notebook/4756/
http://mbtaviz.github.io/
http://www.stonebrowndesign.com/uploads/9/7/6/9/9769402/t-time.jpg
https://www.tillberg.us/mbta
-
Tufte & Bodenhamer: Reading Commentary
Edward Tufte: Envisioning Information
Graphic design often faces the challenge of being seen as an arbitrary and merely subjective art; therefore, I appreciated that Tufte references Eduard Imhof’s Cartographic Relief Presentation, which defines concrete standards to guide and evaluate the way that information is represented graphically. One suggestion, however, that I don’t completely understand is to use colors of nature to represent information. The argument given was that these colors are familiar and coherent, and have a certain “definitive authority”. It seems to me that a palette of colors that are not drawn from nature can still be made to be coherent, and that familiarity with the colors is not necessary for representing information unless that information relies on some association of the color with an element in nature (such as in the use of beige and blue to represent land and water in the ocean map example). I don’t know what the “definitive authority” means.
In the satellite corkscrew diagram example, I found it fascinating that the inspiration for the graphic representation of the satellite orbits came from a physical, 3-dimensional tool (the Jovilabes). It goes to show that even in making a 2-dimensional graphic, taking into consideration what a 3-dimensional representation would look like can help produce more intuitive and illustrative products.
Bodenhamer: Spatial Humanities
In this text, Bodenhamer takes a critical view on GIS in the context of humanities. He acknowledges that when the data being represented is quantitative, precise, and official, it can translate well using GIS. However, one major problem is that this causes GIS to be very biased and convey a false sense of legitimacy to uncertain data, which is often the type that is encountered in humanities. The topic of deep maps was a particularly thought-provoking response to these shortcomings of GIS, although the term is quite broad and ambiguous. I am curious how “deep” maps can get - what levels of depth can be added to traditional maps to convey data that are not best represented using GIS?
-
Oct 22 reading comments
** Envisioning Information ** Many techniques about layering, visual hierarchy and color laid out by Tufte are quite familiar to me as common design principles. The chapter about narratives of space and time has offered many new insights for me. The rich variety of formats which people came up with for visualizing space-time data is remarkable, each with its own creativity. The example of David Hellerstein’s “The Slow, Costly Death of Mrs. K_,” though it appears in the chapter on layering and separation, is certainly also a great narrative. Reading it is almost like reading a novel. Showing details that we don’t normally pay attention to or have access to has an astounding effect. The power of data is revealed in this case.
** The Spatial Humanities ** The way David Bodenhamer envisions the future of the humanities is definitely worth pursuing. But just like he said, we still have a long way to go in terms of perfecting the integration of multiple views of history and culture into our spatial data representations. GIS nowadays is probably still not good enough to fulfill such a vision. After all, it’s just one tool. In fact, I’m not sure it is the right effort to criticize a tool with obvious limitations. In some way it almost feels like accusing a pencil of not being able to produce color. If there’s a problem, it must be how we use the tool rather than what it is. What we should pay more attention to is what we’re representing. With all the limitations in mind, maybe one day we’ll come up with the right tool for all the envisioned subtle representations.
-
Commentary 4
Tufte I don’t have much commentary to add to Tufte except that it is a very useful guide for thinking through visual design. It certainly reads as prescriptive at times, and Tufte likes to make universal claims as if they are unequivocally true (and perpetually quote dead white men to back them up). It would certainly be worth exploring the social construction of these design principles—Tufte tends to apply a scientific/essentialist lens to determining their origins. Still, this text is extremely valuable as a guide, rules of thumb that could then potentially be broken, once understood.
I mentioned dead white men flippantly, but I genuinely enjoyed the historical information and references in the Tufte. For example, I didn’t know that symmetric layout convention preceded asymmetric (83). The footnotes also provide excellent examples such as the Chou Pei Suan Ching Chinese mathematics book on 84. And Tufte makes clear the influence of the Bauhaus on today’s design language through repeated references. So I think even a historically-minded design scholar would find use in the references and footnotes here.
Bodenhamer First of all, great definitions of epistemology, ontology, and positivism (18-19). Second of all, I had no idea about the contentious history of GIS in academia—though I’m not entirely surprised as it typically reflects an extreme form of positivism.
I like the phrasing: “The real question is how do we as humanists make GIS do what it was not intended to do, namely, represent the world as culture and not simply mapped locations?” (23) He doesn’t reject GIS or accept it unconditionally, but instead asks how it might be a tool for critical humanities. He posits some examples, and I could see the “deep mapping” concept especially useful for some of the group work in this course, as a frame and a guiding principle. Though I wish there were more thinking through solutions to this postmodern challenge.
Side note, it’s interesting that this article is all about space and geography, and yet it uses “the West” as a monoculture. Especially curious is the contrasting of “Western” and “American Indian” approaches (20)—I suppose in this conception, indigenous culture in the Americas is neither Western nor Eastern.
-
10/22 Reading Commentary
Color and Information: I found this reading to be incredibly interesting because, unlike some of the other readings we have done thus far, the topic is more present in our everyday lives. One doesn’t have to be an academic or a researcher to have some experience with conveying information with color. Yet, despite how salient this topic was, I found myself a little surprised by the observations Tufte laid out. It’s really easy to see a graphic or image and think that it’s bad, but not be able to pinpoint exactly what it is that is confusing; Tufte was able to bring to light several rules, which I had never thought about, that greatly impact how effectively information is shared. For example, when
Narratives of Space and Time: Reading this chapter, I think I could appreciate some of the challenges inherent in trying to represent 4 dimensions in a 2-dimensional space; however, I was not altogether convinced that some of the alternatives Tufte offered were better. For example, his rework of the bus schedule inspired by Ybry’s work, while a little more visually appealing than a standard schedule, made it a little difficult to tell precisely when a bus was supposed to arrive. Sure, it had times broken down into 10-minute intervals, which was sufficient when looking at when the bus was supposed to leave Hudson Place Terminal, but the arrival time at the New York Port Authority Bus Terminal, and especially the arrival at 14th and Washington, are further down the page away from the time labels, making it difficult to trace the timing. For something like a schedule, where I think the typical user wants to know precisely when their bus is leaving in order to plan around it, I don’t think this is a better practice.
The Potential of Spatial Humanities: As with many of the texts that we have read this semester, I was a little surprised by how much this piece challenged my current way of thinking about mapping. Mapping was one thing that I thought was pretty straightforward and not very contentious, because I always thought that 2D representations were fairly representative of the real world without much room for argument. But as Bodenhamer explains, I didn’t realize how much information can truly be lost, and I never really gave much thought to how other cultures, and even just other people, interpret what is being shown.
-
Reading Response for 10/22
I really enjoyed the Bodenhamer reading and thought it was very relevant to my group’s project. Already on page one, he writes “We recognize our representations of space as value-laden guides to the world as we perceive it, and we understand how they exist in constant tension with other representations from different places, at different times, and even at the same time” (14). This claim is central to our project, so reading a piece that works with that claim was very valuable to my thinking about it. Later he writes “All spaces contain embedded stories based on what happened there. These stories are both individual and collective, and each of them link geography (space) and history (time). More important, they all reflect the values and cultural codes present in the various political and social arrangements that provide structure to society” (16). I think this idea of both embedded values and embedded stories, as linked concepts, is what we want to express, although we maybe haven’t thought critically enough about all the ways we want to accomplish the storytelling part of that project. Near the end of the selection, Bodenhamer wrote about deep maps, which are “meant to be visual and experiential, immersing users in a virtual world in which uncertainty, ambiguity, and contingency are ever-present, influenced by what was known (or believed) about the past and what was hoped for or feared in the future” (28). I love this idea, although it seems somehow far-off, but perhaps it is something we will approach in our work!
The Tufte reading was very helpful; I hadn’t thought too much about a lot of the design principles he laid out. I feel like I typically make graphics hoping to express an idea but don’t think too much about the side effects of the graphical choices I make. I definitely will think about color differently, and the 1+1=3 idea was also very influential. I did think it was funny that he wrote “words and pictures belong together, genuinely together. Separating text and graphic, even on the same page, usually requires encoding to link the separate elements” (116), as the separation of graphic from text was the most confusing part of this reading for me. Of course, it all worked out, but it was sometimes difficult to understand what he was referencing when graphics were on different pages from the description text or when related/compared graphics were across several pages. Otherwise, I found the reading very useful, and I’m sure it will seem even more fundamental when I actually implement some of the techniques and ideas he presented.
-
Reading response 10-22
Parts of “Colour and Information” apply to all visualization and have threads in common with photography and videography, which are also very influenced by art theory. For example, the eye will be drawn to the lightest object in a frame, the brightest colour, and sharp objects over blurry objects. In this way, some of the principles he discusses in mapping are similar to the ways artists manipulate the eye in a painting or photographers manipulate the eye in a photograph. Poorly done, the eye is drawn to the wrong object in the photo, or none at all. It was interesting to think of the way colour theory can aid or fail the process of displaying information. I think prior to this I only thought of colour on maps as either pleasing or not pleasing, not as “missing the point” or distracting from the information.
I think there is a similar thread in the design piece, but while colour is an almost universal language (except for the nuanced differences we see), with design, having an audience skilled at interpreting the design also matters, because not all design is intuitive. In the mapping of the moons, we move from Galileo’s simple recording to the modern one, which has much more detail but requires skill to “read.” When visual representations work, they are faster to understand than texts; the tension is in displaying information meaningfully. Some of the examples here remind me of the earlier reading where they become complicated to decode and more “art than information,” in Will’s words.
-
Envisioning Information Response
As I was reading this text and looking at the illustrations, I was particularly compelled by how Tufte leaps between images whose function is to represent information and images designed more to be received as art. Presenting these works in parallel allows the reader to see the fundamentally artistic qualities involved in representing information, and the kinds of considerations of an audience that are always involved in creating these images. The design decisions involved in representing information on a flat plane are not entirely aesthetic, nor are they entirely functionalist, but somewhere in between. And conversely, this text surprised me with how much functional value one can draw from pieces that would typically be considered art-objects.
I was most drawn to the chapter on Layering and Separation, particularly how Tufte builds on Josef Albers’ formulation “One Plus One Equals Three or More,” because I have often found myself thinking about different acts of layering information at different moments in my life. As a game designer, I also have to think a lot about layering information. Obviously, interactive games are a completely different environment from what Tufte is working with here, but I think there are some relevant connections between how game designers deal with information and how we imagine similar kinds of representation in the digital humanities and beyond, as they are both interactive mediums.
Designing a game that’s meant to be played in the first person means that you have to constantly ensure that a player can see the information you want to present them, whether that information is instructions, hints, representations of the player’s health, values of time, maps, etc. At the same time, you need to consider that presenting the player with any visual information will inherently obscure other aspects of gameplay, and thus it needs to be presented very strategically. To approach this kind of information presentation, there are a few different registers to play with when layering data: diegetic, non-diegetic, and meta.
Representing information diegetically means putting it in the gameplay space, while non-diegetic information is represented in the two-dimensional space of the screen and is separate from the “world” – like UI pop-ups, or instructions presented via text that doesn’t move with the player. Meta representations are like non-diegetic ones but maintain the fantasy of the game. To make this more concrete, consider representing a player’s health value. A diegetic strategy could involve the player walking over to a doctor character who tells them their health status. A non-diegetic strategy would be a simple health meter overlaid on top of the screen. And a meta representation of health would be the screen starting to tint red when a character’s health gets low, which presents the information in non-diegetic space but maintains the game fantasy.
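As a toy illustration of these three registers, here is a minimal Python sketch of my own (all names are hypothetical; this is not drawn from Tufte or from any real game engine), routing the same health value through each register:

```python
from enum import Enum, auto

class Register(Enum):
    """Three registers for presenting information to a player."""
    DIEGETIC = auto()      # lives inside the game world itself
    NON_DIEGETIC = auto()  # overlaid on the screen, outside the fiction
    META = auto()          # outside the world, but preserves the fantasy

def present_health(health: float, register: Register) -> str:
    """Describe how a health value in [0.0, 1.0] would be shown."""
    if register is Register.DIEGETIC:
        # An in-world character reports the player's condition.
        return f'Doctor NPC: "You look about {health:.0%} healthy."'
    if register is Register.NON_DIEGETIC:
        # A plain UI meter drawn on top of the scene.
        filled = int(health * 10)
        return "HUD meter: [" + "#" * filled + "-" * (10 - filled) + "]"
    # META: tint the screen red as health drops, keeping the fiction intact.
    return f"Screen tint: red overlay at {1.0 - health:.0%} opacity"

for reg in Register:
    print(present_health(0.3, reg))
```

The point of the sketch is only that the same datum can occupy very different layers of the display, which is exactly the kind of layering decision Tufte is describing on the page.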
This was a very long-winded way of describing that I think there could be some valuable lessons to borrow from this kind of thinking as we develop interactive work in the digital humanities. When should we layer information or other interfaces on top of representations of data, both non-diegetically and in ways that are meta? When should the representations of data be interactive themselves, in a mode that is more diegetic? How does each of these ways of representing and interacting with information create the “1+1 = 3 or more” effect differently?
-
Commentary: Rosenberg & Grafton
These readings make me reflect on my own education, because the historically and socially situated representations of the timeline and the map were so prevalent. It definitely points toward a positivist tendency, as Rosenberg and Grafton bring up on p. 21. Especially when it came to older historical events such as the American Revolution and World War I, we focused a lot on maps and dates, data from the measurement of time and space. These data silently normalized colonial naming conventions and filtered our thinking through the lens of Euro-American ethnocentrism. It wasn’t really until we got to JFK and Vietnam that any sort of human subjectivity was explicitly offered in our history education. And I have to imagine that’s partly because that’s when my teachers or their parents were alive to experience it firsthand. It’s weird how cultural memory works: at least in classroom settings, history gets flattened and loses its emotional intensity after a few generations. I personally could not stand history class because I hated memorizing names and dates. But historical knowledge is so much deeper and more interesting than that.
I think Rosenberg & Grafton are a little too in love with the time-map idea; it provides only some additional affordances. However, I did really enjoy the breakdown of the matrix-style representations, toward timelines and then toward time-maps. I’m especially fascinated by the ideas in Chapter 4 of condensing all of history into single charts. It was a much less specialized time in terms of academia and historical study. But it’s obvious how much is lost in such a condensation.
-
Commentary - Cartographies of Time
Cartographies of Time (Chapter 1 & 4)
The historical charts presented in the book are quite fascinating to look at. I especially like the diagram of Napoleon’s march. The chart is clear and informative, with no redundant information, and even has a sense of modern minimalist aesthetics. It’s certainly difficult to precisely represent history with time and space in a single graphic. Some attempts to do that may have been limited by the tools and technology of the time, but some are conceptually innovative and effective to look at even now. It must have taken elaborate effort for the original authors to calculate and draw the graphics all by hand, and to build the supplementary tools for examining the charts. It would be interesting to think about how we could take those ideas and rebuild the concepts digitally with contemporary tools. Seeing the layouts of the charts, I can almost imagine the intentions of the original authors and visualize some of the interactions that I find strongly implied by the charts. The New Chart of History presented at the end of chapter 4 is a great concept for representing world history as a ‘whole’, yet I wonder whether the accuracy is sufficient for actual reference at such a level of condensation. The areas occupied by different empires seem disproportionate to some extent. But such a layer of subjectivity also makes it interesting to interpret.
-
Cartographies of Time [ Chapter 1+4 ]
The argument of both chapters can be summarized by the phrase “time is a social construct”: Rosenberg and Grafton push forth the concept of using the motives of social development to inform cartographical decisions when constructing timelines, and show how the construction and the representations feed back into each other.
Although this might be slightly off-topic, the relationship between geography and time echoes a lot of the older philosophical discussions around categorial conceptualism and the “discovery” aspect of our access to the intrinsic divisions and categories that govern human understanding and cognition (a.k.a. Kant and Kantian conceptualism). Both are concrete, somewhat “absolute” categories that are currently universally defined, yet when discussed from a humanities / digital humanities perspective, they are at the core of subjective interpretation. Time is constantly moving, and geographical identities contribute to (and are affected by) socio-cultural movements. It was interesting when they talked about accuracy in association with non-linear narratives and mapping potentials. On one hand, these categories have become “laws”: any representation or visualization created with data relevant to these two aspects is not supposed to be doubted or questioned in terms of graphical integrity. On the other hand, they limit our interpretations.
We use time as a method of interpreting the reasoning behind history, to map our current position within a proposed narrative. Not only is its cartography influenced by our sense of what is important to consider in history, but technological advancement also creates new discoveries that push the existing boundaries of time into new dimensions. Essentially, more is different.
The Discus chronologicus’s pivoting arm considers time from a multidimensional perspective (somewhat analogous to time zones). We’re used to making correlations between time, geography, and the humanities in a fluid manner, but for the audience of these chronographies from roughly 150 years ago, the concept of understanding time as a tree with multiple branches rather than a single list of events required these kinds of figurative and interactive interfaces, and it reveals some rather interesting things about their methods of justification and the precision of their spatial information.
-
Zucker_ Reading Commentary 10/08
Cartographies of Time
Let me just start by saying I found the chapters from this book, written by Daniel Rosenberg and Anthony Grafton, to be fascinating. So much of the work I have focused on over the past year or so has been about spatializing the histories of marginalized communities. Most known histories, while limited in terms of representation, often exist only in textbooks as dates or locations. The theoretical and academic exploration of visual representation over time posed by the two chapters we read exposes the harsh realities of linear timelines, lost narratives, and technical limitations. The fixation of Western historians, as argued by Hayden White, on thinking of chronology as merely a ‘rudimentary form of historiography’ is important to note, as cultural importance is lost. My work in archives has revealed the dire need for the critical role of chronology to be explored in Western histories, particularly when applied to larger social movements in the US, in an effort to explain how marginalized communities have existed and fought over time. This is further reiterated by the Eusebian Model, which, before the 18th century, for the first time allowed scholars to ‘facilitate the organization and coordination of chronological data from a wide variety of sources. It provided a single structure capable of absorbing nearly any kind of data and negotiating the difficulties inevitable when different civilizations’ histories, with their different assumptions about time, were fused’ [pg 16].
This point is crucial, as it calls for a universal method for combining diverse histories that otherwise exist only as prescribed by the systems or institutions that tell them. In other words, when marginalized communities do not have access to their histories, or their histories have not been formalized, those histories often remain unknown or hold lower value in terms of legibility. By applying a universal method or matrix for spatializing historic events, people, and places over time, we may finally begin to understand how certain groups have occupied space over time and explain their movements within cities due to urban renewal or gentrification. It is my belief that by overlaying histories and cartography with public realm plans, we may begin to show how our communities have existed over time and shift the burden away from marginalized communities having to exploit their narratives to prove they have always been here. Because in doing so, the maps would speak for themselves.
-
Reading Commentary 10/8
I enjoyed reading the history of the timeline as presented by Rosenberg and Grafton. As they implied, I hadn’t really thought much about the story of cartography, so it was good to get that background, although it honestly wasn’t as shocking as they implied it would be. It was interesting to consider the extent to which technological innovation has influenced cartography, and influenced our perception of history and time in doing so. I liked the idea of an accessible but informative chronology: on p. 117, they write “Priestley designed his charts for the curiosity and pleasure of a general reader, but they were also meant to serve the scholar—and Priestley believed that the two aims were well served by the same approach.” It feels like graphics are sometimes produced with only one of these aims in mind, so I appreciate the goal of keeping both in mind from the very beginning. I’m not sure it’s always doable, and it depends on the data being used, but it’s a nice idea. I was also somewhat disappointed in a similar way that I was with the Drucker reading: I was looking forward to seeing new possibilities for displaying information and didn’t really see those. That said, I imagine that’s mostly because we read historical chapters! I’d like to see more of what’s being done now.
-
Commentary on Cartographies of Time
As Rosenberg and Grafton point out, it is true that the linear representation of time has been so normalized in today’s society that it is difficult to imagine another way to represent it. I thought it was interesting that even in the case of a digital clock, it is argued that the numeric representation of the display undergoes a subconscious mapping in our minds onto a linear representation. Since our perception of time and the qualities of a line seem to align almost perfectly, it appears to me that it is almost innate for us to find this relationship with such ease. The reading includes the opinion of Sterne, who was a skeptic of the linear representation of time; I understand that a straight line is usually not the most representative way to draw relationships between points in a narrative, but I don’t understand what the problem is with using a line if the goal is to illustrate the flow of time. It makes sense to me if a line is paired with some other graphic representation of the relationship between different parts of a narrative.
-
Cartographies of Time Commentary
There were a couple of points that I found particularly interesting in the excerpts from Cartographies of Time. I was curious about the point during the 19th century when chronography and the linear timeline were becoming more solidified alongside the simultaneous development of new imaging technologies. Film is of particular interest here, and I think one could ask “chicken and egg” kinds of questions about these early timelines. There are a few material qualities of film that I think could be relevant when considering timelines. First, and perhaps most importantly, the new ability to look at frozen moments of time not just individually but as separated fragments in a sequence could reinforce the positivist tendency toward linear thinking about time. Relatedly, I wonder if the ability to play film recordings backward had any effect on considering time as linear. Rosenberg and Grafton only mention this point briefly, but I am curious whether there is any written evidence of the influence of film on the concretization of timelines, or alternatively, of visual timelines influencing the creation of film.
I also found myself fascinated by Olaf Stapledon’s timeline in Last and First Men. Was this the first use of a historical timeline projecting into the future, even if a fantastical and speculative one? Could one trace a history of the correlation between linear timelines and equally linear projections of human progress?
-
10/8 Reading Commentary
One thing I found interesting is that by adopting a timeline, our focus is much narrower, and we aren’t directly comparing what is happening in different places at the same time. This is particularly interesting because a lot of our focus in history is not just on what events occurred, but on what led to them happening, and it’s often necessary to contrast one event with what is happening elsewhere that may have influenced it. Yet the adoption of timelines as a standard removes a dimension that would make these comparisons more apparent in certain situations. This reading seemed to echo a bit of what we talked about with big data, especially when comparing the timeline with Priestley’s Chart of Biography. In choosing to use timelines, we focus on overarching themes and lose some of the specificity and insight that looking at short time periods can provide.
-
Reading Commentary
Six Provocations for Big Data One of the analogies made by the authors, which I find to be extremely apt, is that data is like air, with both the oxygen we breathe and the pollution we exhale. It addresses the urgency of no longer taking data as mere facts with inherent legitimacy, and the need for great caution. The provocations warn us about naivety in dealing with data, such as the inevitable subjectivity within interpretation, or assumptions too easily made about correlations between different factors. We need to know how to read properly as well as think comprehensively to avoid such things. Another question raised that is worth further thought is the uneven distribution of power around data. As the authors point out, massive amounts of data are not easily accessible to each and every one of us, but mostly to the big companies that generate the data. A dystopian view would be a future with a monopoly of companies and technological elites owning and manipulating the data, exacerbating segregation in society. Taking precautions against such a future requires a holistic understanding of what’s at stake. Yet the question remains whether our political and social mechanisms allow drastic measures to be taken when necessary.
Humanities Approaches to Graphical Display The importance of the differentiation between data and capta wasn’t intuitively obvious to me when I first read the terms, nor was the distinction between realist and constructivist approaches to data visualization. It was only when the author talked about the distance between the phenomenal world and its interpretation that the subject became illuminating. Relating to the similar topic of inevitable subjectivity in data interpretation in Six Provocations for Big Data, this article takes it further by illustrating how such subjectivity can’t be ignored but instead needs to be taken seriously for its value. The preface is very engaging in a way that makes readers curious about what the principles of humanistic approaches are in this case, and what the visualization graphics would look like when such approaches are implemented.
Data as Capta The rest of Data as Capta engages us further with illustrated cases where information is inflected with humanistic aspects. The summarized functions of temporality and spatiality are especially demonstrative. Treating time and space as contextual variables emphasizes the priority of human factors, but also implies to some extent that scientific methodologies are still valuable. This makes me start to imagine whether there could be a way to improve the tools we have and integrate humanistic approaches to serve a better purpose. The promise of data has always been, at least in our ideal conception, an utter clarity and rationality of evidence, and the ability to reveal truth where it’s not immediately visible. In the course of generating, gathering, and disseminating data, we have been somewhat carried away, even intimidated by the overwhelming amounts of data already out of our control, and have somehow lost our sense of what’s more important. The tools at hand for data analysis are still primitive. Only by acknowledging the problem do we move one step closer to a better understanding of how to live with data. Having humanistic approaches as a starting point and as guiding principles along the way may be a safeguard against deviations brought by unpredictable advances in technology.
-
Commentary 10/01
Data as Capta
It’s definitely interesting to dive into the definition of data (especially within humanities data and data humanism). However, I’m not too confident about the difference between the “taken” and the “given”, as both data and capta can be representation as well as interpretation, if we consider not just the numbers and observations themselves but also the methodology of collection, the types of analysis and filtering, and the marks and channels through which the data presents itself to the researcher as well as the reader. Whether or not one works in humanities / digital humanities studies, one should always consider presentation integrity, data-ink ratio, cognitive association, etc. throughout the analysis (something like Edward Tufte’s principles). So that raises a question: when Drucker writes “Humanistic inquiry acknowledges the situated, partial, and constitutive character of knowledge production, the recognition that knowledge is constructed, taken, not simply given as a natural representation of pre-existing fact,” does she take accuracy into consideration, or is she simply looking at knowledge as a personal gain based on the interest and opinion of the researcher? When we consider capta (knowledge “taken”), we are suggesting that knowledge should never be taken for granted and that there shouldn’t be assumptions about the “truth” of data. However, what allows us to make the additional moves to process data into capta, and what then supports our claim that the “new data,” capta, is still representative of the native / raw data? I agree with Drucker’s approach, that in info-vis / data-vis we should be selective and mindful of what information we process and present. Yet I think there’s a lot of ambiguity and objectivity that she doesn’t necessarily address in the piece.
Six Provocations for Big Data
Large data vs. smart data was the focus of last week’s reading, and Six Provocations for Big Data follows that discussion and goes deeper into the context of these types of data and their limitations, especially regarding what big data alone cannot answer within humanities research. I agree with boyd and Crawford on the fact that numbers do not speak for themselves. However, similar to my response to Drucker above, to what extent should our views on how data should be presented, and on what within the larger dataset matters, be considered in the evaluation of infographics and data visualization? Data filtering is almost always subjective, as it neglects components of the data that we often define as completely obsolete based on personal experience or generalized claims. Also, I’m curious about their view on data accessibility. What contributes more to the authenticity of data: smart filtering of a specific population, or an overall consideration of the large population? And what about different humanities research approaches (idiographic / normative)?
-
Zucker_Reading Commentary 10/1
D. Boyd, K. Crawford: Six Provocations for Big Data
Boyd and Crawford begin this article by questioning the validity of Big Data and the potential for misinterpretation, as some researchers believe they can view data from a ‘30,000-foot view’. Furthermore, the patterning of networked data interpreted through the processes of aggregation and data scraping often yields biased results. The following statement sums up this point quite accurately, in my opinion: ‘Data is increasingly digital air- the oxygen we breathe and the carbon dioxide that we exhale. It can be a source of both sustenance and pollution’ [Boyd Crawford, 2]. While access to Big Data sets is increasing, there are many concerns about who is gaining access, how they are using and interpreting the data, and the ethics behind publishing findings. The authors provide six provocations for discussing the issues of Big Data, of which I will speak to a few.
The first is the presence of Big Data and the lack of historic context. Due to the newness of Big Data, it tends to focus solely on data collected in recent decades rather than include historic data. While the task of data input is tedious, the lack of historic context is problematic when discussing social networks, for instance, and trying to make an argument for how millennials are more connected than ever. It would be impossible to determine, as we do not have comparable data for previous generations, nor do we have the methodologies for comparing social networks before and during the onset of social media applications such as Twitter and Facebook.
Big Data has always been and will continue to be ‘subjective’ until we have an accurate understanding of how humans function socially. In addition, the heavy gender imbalance in the field further leads to bias in determining what questions are being asked and of whom. How can we begin to understand marginalized populations or diverse identities when they are not represented among those who are interpreting, cleaning, and publishing data? One of my greatest concerns with Big Data is just this: the role of bias and how Big Data is being misinterpreted, even as it is this very data that is used to determine how we design the built environment.
-
Response to Drucker, boyd and Crawford
Drucker:
I agree strongly with Drucker’s central argument: that data are not simply collected but created and interpreted, and that a critical lens is rarely applied to data visualization, often in the rush to adopt digital platforms and make eye-catching visuals. However, I take issue with many of the alternative visualizations she uses as examples to illustrate that argument. Trying to demonstrate migration, “gender ambiguity,” and cultural constructionism in one chart seems unnecessarily ambitious, and the image itself communicates little without its long accompanying text (15). In contrast, the “hours as a function of time pressures” chart (19) doesn’t suffer these problems and is perhaps the best and most concise illustration of Drucker’s argument. The publishing timeline is also great (24).
Of course, these are not thoroughly researched projects but rather quick sketches to make a point. But I could easily see that being better accomplished by citing others’ successful work in this area, such as Native Land or the Zuni map project. As Drucker’s writing suggests, critical data visualization is not just about messing with the plot and the coordinate system. It is also about making interventions into the way we think about data collection, what rhetorical modes we unconsciously adopt, and how we treat the materials (and people) we work with. And in chapters 3 and 4, Drucker shows us that it is also about deconstructing some fundamental ontologies. That’s much more experimental work that actually benefits from her more unconventional techniques.
One other note: I also think Drucker’s usage of the word “affect” (and “affective”) is a bit confusing, or maybe it is just too different from how I usually see it used. She writes that it “is used as shorthand for interpretative construction, for the registration of point of view, position, the place from which and agenda according to which parameterization occurs” (25). I prefer the following from Shaka McGlotten paraphrasing Eric Shouse:
affect is related to but distinct from emotions and feelings. As Eric Shouse parses these distinctions, feelings are personal and biographical, emotions are socially performed and circulating forms of feelings, and affects are pre‐subjective or pre‐personal “experience[s] of intensity.”
McGlotten, Shaka. 2013. Virtual Intimacies: Media, Affect, and Queer Sociality. Albany, NY: State University of New York Press, page 64.
boyd and Crawford:
As for the boyd and Crawford piece, it’s interesting to see what’s changed in only the past 8 years. They were ahead of their time in many respects, as the public seems much more wary of Big Data now, especially after high profile breaches, leaks, and scandals. Their point number 1 bothers me a bit because it seems to discourage programmatic approaches to knowledge construction. Following Wolfgang Ernst, I would rather say that researchers should build their own research tools or be highly critical and skeptical of the ones they are relying on. But computational approaches aren’t all of one piece.
Point number 2 seems to be related to the Drucker piece. Points 3 and 4 still offer valuable methodological insights, especially the idea of “small data.” Points 5 and 6 get at issues of data ethics and data justice before those terms were coined, or at least popularized. It seems like this is what everyone is talking about right now, so it was quite prescient in 2011.
A piece I also enjoyed was Andre Brock’s response to this article, which can be read here: Brock, Andre. 2015. “Deeper Data: A Response to boyd and Crawford” Media, Culture & Society 37 (7): 1084–88.
PS:
Note, I definitely wrote way more than what’s necessary. So don’t feel like I’m setting an example you have to follow… everyone has been writing the proper amount so far!
-
Reading Commentary 10/1
D. Boyd, K. Crawford: Six Provocations for Big Data I thought this article accurately established problems that arise when we focus too much on big data as a goal. I especially appreciated the attention to ethics near the end, as that is not something I have paid enough attention to in the past, and I think it’s very important to consider in the future. I also liked the way in which the article tackled specific issues, such as the difference between articulated networks and behavioral networks, as well as broader theoretical questions, such as how knowledge has changed. It was nice to read alongside Drucker, as her piece made me feel confident that at least some aspects of some of these provocations are being addressed in smart ways.
Johanna Drucker, Humanities Approaches to Graphical Display and Data as Capta I thought Drucker did a very compelling job of challenging existing ideas about data, and especially about data in the context of the humanities. Already on the first page, she writes “So naturalized are the Google maps and bar charts generated from spreadsheets that they pass as unquestioned representations of ‘what is.’” I definitely struggled with moving beyond these unquestioned representations, although I really agree with most of her ideas, especially when she introduced capta. I certainly agree that a ‘humanistic approach to the qualitative display of graphical information’ is necessary, and I think it makes sense that the interpretive nature of knowledge is emphasized. That said, while I was really intrigued by Drucker’s ideas, I couldn’t stop thinking about how they seem to make graphics less accessible than they are currently. I could certainly see them as useful and understandable in some contexts; for example, figure 6d represents well the difference between the time of telling and the narrated time of a document. That said, I struggled more with other examples. Perhaps I only see typical bar graphs or other charts as more accessible because my brain has been trained on them, but I still feel that it would be very difficult to make graphical representations of the type Drucker is suggesting that would be accessible to a wider audience. I definitely agree with the premises that led to the graphics, but when they were put into practice, I often found them difficult to understand at first. I’d like to think that if such practices were implemented more widely, people would come to understand them, but I don’t know enough to know if that’s true.
-
Commentary on Big Data Readings
Boyd & Crawford
This essay challenges the misconceptions that naive advocates for “big data” may have. These misconceptions seem to stem mainly from incorrect assumptions about the nature of the data, as well as from biases that other types of research tend to be prone to as well. One point of the essay that I found particularly thought-provoking and not immediately obvious is the fifth provocation, that “just because it is accessible doesn’t make it ethical.” I wonder what standards people can adopt to make sure that the data they are using is ethical to use, since the essay also mentions the shortcomings of standardized rules such as those from the IRB.
Drucker
Drucker makes the insightful claim that what we commonly refer to as “data” is in fact better described as “capta,” since the mere act of capturing data requires at least some sense of purpose in deciding to capture it in a certain format. I read up to the end of “Humanities Approaches to Graphical Display”; it makes me wonder whether Drucker will provide convincing examples to justify her skepticism of our enthusiasm for digital visualization, as I didn’t feel this argument was sufficiently supported, at least in this part of the reading.
-
10/1: Interpreting the Graphical Representations of 'Capta'
Drucker begins by describing all data as capta: the act of recording is inherently a process constructed in relation to the uses and expectations of the recorder. Relatedly, graphically representing this knowledge is inherently tied to its production. In this piece, she outlines a series of new approaches that use humanities principles to define capta and imagine its display, primarily focusing on graphical representations of experiences of temporality and spatiality. Fundamentally, I agree with her overarching argument and added terminology.
However, I found myself wondering where “validity” fits into this frame. In saying this, I am not referring to any objective validity that would pretend to be “truth,” which I agree is always constructed and relative when referring to data, particularly humanities capta. But I do wonder if there is a limit to the degree to which ambiguity can be accepted in these subjective graphical representations in the humanities.
I suppose what I’m trying to question is this: by embracing this shift from certainty to ambiguity or performativity in displaying graphical models, by what metrics does an observer judge them? For example, I am looking at Figure 8 in the section on spatiality, where Drucker draws deformations in the standard Cartesian coordinate system to show the transformation of the beach in terms of “attention-getting.” Were this to represent a real event, how might one determine the limits by which one could deform the plane before the representation ceased to represent the actual occurrence? Is that entirely subjective to the viewer or creator? I think that setting limits, or having some metrics by which to interpret these graphical representations, would be useful in establishing their usefulness, as they are meant to have some function beyond an infinitely interpretable art object. Of course, setting any metrics to judge these representations, aesthetic or otherwise, would pose a serious challenge. Who sets these standards? The community of practitioners working in the Digital Humanities?
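To make the idea of “deforming” an axis concrete, here is a toy Python sketch of my own (this is not Drucker’s method, and the attention weights are invented) in which a uniform timeline is stretched by how much attention each moment received:

```python
from itertools import accumulate

# Ten evenly spaced moments (say, hours of a day at the beach).
hours = list(range(10))

# Invented "attention" weight per moment: higher means the moment
# felt longer, or mattered more, to the observer.
attention = [1, 1, 3, 5, 5, 2, 1, 1, 4, 1]

# The deformed coordinate is the running total of attention, normalized
# to [0, 1], so stretches of high attention occupy more of the axis.
total = sum(attention)
positions = [s / total for s in accumulate(attention)]

for hour, pos in zip(hours, positions):
    print(f"hour {hour} -> {pos:.2f} on the deformed axis")
```

Even this toy suggests one candidate metric for judging a deformation: the mapping here is monotonic, so however stretched the intervals become, the order of events is never falsified.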
I hope that in looking for some method for interpreting these graphical representations, I am not misunderstanding Drucker’s point here. But I do think that these graphical representations are situated somewhere between the rendering of statistical information in the natural sciences and the far more openly interpretable works of the visual arts. We are taught strategies for reading and evaluating a graph, just as we are taught strategies for reading and evaluating a painting. In this way, if we are to form a new mode of interpretable production of capta in the humanities, I think one must also propose new strategies for reading and interpreting it.
-
Commentary - Big? Smart? Clean? Messy? Data in the Humanities
I found this reading to be a fascinating examination of how data has changed our understanding of the humanities, and it raised several points about data’s role in the humanities that I had never considered. I never thought that some of the challenges surrounding understanding large amounts of data in more technical fields were also paralleled in the humanities. My assumption was that increased access to data only contributed to our understanding of the human condition. Yet Schöch argues that Big Data is contributing to a paradoxical shift where we are more concerned with distant reading and comparing trends among collections than with close reading. I disagree with Schöch’s opposition to big data; while I think in the future we need to improve our capacity to draw insights from big data, I think there is some value in using it to understand broad trends and ideas in the humanities that close reading doesn’t always allow us to reach so easily.
-
Reading Commentary 9/24
1) “Big? Smart? Messy? Clean? Data in the Humanities” by Christof Schöch
I was surprised by how much this piece challenged my existing thoughts. Even on the first page, I had to rethink which resources and objects I consider ‘data,’ which I found to be a valuable exercise. I’m also excited to read more about Drucker’s idea of ‘capta,’ as it seems necessary for the framework we’re establishing here. That said, I didn’t find it entirely necessary for the two ‘kinds’ of data to be spelled out as separate ideas, as the eventual conclusion was that there needs to be a type of data that can be bigger and smarter. I do see how it is important to consider those two aspects of data, and which aspect we prioritize when we create a tool or work with data. I especially liked, in the elaboration of big data, when he wrote “What really counts, however, from my point of view, is less the volume than the methods used for analysis. And these can be successfully applied to smaller sets of data as well…” This point helped to emphasize the truly important aspects of big data. When he writes “I believe the most interesting challenge for the next years when it comes to dealing with data in the humanities will be able to transgress this opposition of smart and big data,” I was curious about whether this is a challenge specific to the humanities, or whether Schöch would consider it a necessary step for other fields as well.
2) “Humanities Data: A Necessary Contradiction” by Miriam Posner
I appreciated the way in which Posner was very honest about the ways in which humanists work: for example, she writes “humanists have a very different way of engaging with evidence than most scientists or even social scientists. And we have different ways of knowing things than people in other fields.” She elaborates that humanists are often able to draw conclusions through deep immersion in source material, instead of with traditional notions of ‘data.’ Frequently, I’ve found that humanists will try to explain their work in other fields’ terms, or in terms that make their work appealing to others in those fields. Of course, Posner made her case for humanistic inquiry very well, but doing so did not necessitate portraying her work through the lens of other fields. I really appreciated her very accessible example centered on silent film; I think it demonstrated very clearly how humanists (tend to) think, which I feel points to her conclusion (about another example) that “it’s quantitative evidence that seems to show something, but it’s the scholar’s knowledge of the surrounding debates and historiography that give this data any meaning. It requires a lot of interpretive work.” I also found it important that she emphasized not only the strengths humanists can bring to data science, but also the help that they will need, not only from data scientists but also from librarians! I thought she very clearly and accurately laid out (some of) the problems facing humanists who work with data, and I’m curious to see whether we consider any of them to have been addressed in any way in the last three years, or solvable in the future. Her final paragraph begins with “It requires some real soul-searching about what we think data actually is and its relationship to reality itself; where is it completely inadequate, and what about the world can be broken into pieces and turned into structured data?” I think this is an important question for us going forward.
-
Commentary on Schoch Reading
Schöch’s paper about the paradox of smart vs. big data in the humanities makes some important points about current limitations in data collection and analysis. While smart data provides more specific points of research, the methods of generating such data are specialized to the purposes it serves, making those methods difficult to generalize. On the other hand, big data can be used for a wider variety of purposes, but requires more advanced analysis to draw meaningful conclusions. As Schöch argues, there is a need for data to become both smart and big.
I found it interesting that although this challenge exists beyond the field of the humanities, it is a particular challenge for humanities research because of how difficult data collection is in this field. For example, data used in humanities research often involves texts that come from inconsistent sources (e.g., books, newspapers, online blogs), so it is challenging to create a structured method of data collection. Other fields also struggle with the challenge of smart vs. big data, such as HCI research. The challenge in those fields, however, lies in creating technology that can better structure and analyze data; in the humanities, there is the additional challenge of creating more efficient data collection methods in the first place.
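To show the structuring problem in miniature, here is a hypothetical Python sketch of my own (not from Schöch; all field names are invented) that normalizes records from differently shaped sources into one minimal schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Record:
    """A minimal common schema for heterogeneous humanities sources."""
    title: str
    year: Optional[int]  # missing metadata is the norm, not the exception
    text: str
    source_type: str

def from_book(raw: dict) -> Record:
    # Books in this toy example carry full metadata.
    return Record(raw["title"], raw["published"], raw["body"], "book")

def from_blog(raw: dict) -> Record:
    # Blog posts often lack a reliable date entirely.
    return Record(raw.get("headline", "untitled"), None,
                  raw.get("content", ""), "blog")

records = [
    from_book({"title": "Walden", "published": 1854, "body": "..."}),
    from_blog({"headline": "On archives", "content": "..."}),
]
for r in records:
    print(f"{r.source_type:5} | {r.year} | {r.title}")
```

Every new source type needs its own normalizer, and every normalizer makes interpretive decisions about what to keep and what to discard, which is exactly where the smart/big tension lives.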
-
9/17: Associative Trails and Data
I thoroughly enjoyed reading Vannevar Bush’s “As We May Think,” and found it a nice opening reading for this course, as Bush’s imagined future mode of knowledge work feels very “digital” even though he was writing in a pre-digital paradigm. Reading about Bush’s imagined Memex machine, as it creates these “associative trails” which thinkers can return to later, feels like a mode of performing research that we have still not reached. I felt myself longing to create the kinds of linkages that Bush describes as I work through multiple texts at a time for my own research. One could draw parallels between how Bush treats reading and moving between texts on the Memex and how one might move through online databases today, or even land in “Wikipedia holes” of fascinating associative chains of connection. Regardless, I can’t think of any contemporary methods that exist to store associative chains like this in a format that would make links between texts useful for later research.
I’m interested in exploring Bush’s project in Digital Humanities terms, and specifically in relation to the forms of “data” as were described by Christof Schöch. In what way might associative chains be considered data, particularly “smart” data? Are there data analysis projects in the humanities that specifically focus on intertextual linkages? Would these associations need to be gathered manually, like the (quite literal) manual linking that Bush imagines on the Memex, or are there forms of analysis that could automatically find references between texts? Certainly texts like Wikipedia that contain hyperlinks could be analyzed for association with relative ease. Could one perform this kind of analysis on visual media?
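As a small gesture toward the automatic end of that spectrum, here is a sketch of my own (using only Python’s standard library; the filename is hypothetical) that harvests the outbound links from a saved HTML page, the crudest raw material for an associative trail:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href targets from anchor tags in an HTML document."""
    def __init__(self) -> None:
        super().__init__()
        self.links: list = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag.
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# Hypothetical local copy of an article; any saved HTML page would do.
with open("article.html", encoding="utf-8") as f:
    collector = LinkCollector()
    collector.feed(f.read())

print(f"{len(collector.links)} outbound links found")
for href in collector.links[:10]:
    print(" ->", href)
```

Real intertextual analysis would of course need far more than hrefs, but even a list like this hints at how machine-readable some associations already are.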
-
Example Commentary
Here’s an insightful response to the assigned reading from Digital_Humanities - etc., etc. If you edit this post in Prose and click the Meta Data button, you’ll see it’s been given the Digital_Humanities tag. Other readings will show up as available tags too, as we get further along.
(By the way, here’s the url for the open access edition of the book: mitpress.mit.edu/books/digitalhumanities-0)