Commentary

  • Cool Data Uses

    A cool application of Flickr image extraction! If CurateMe had a filter option, you could choose to explore all the street art in a given location.

    Art effects on London property prices (http://rsos.royalsocietypublishing.org/content/3/4/160146)

    And the data visualization from the NYT I showed in class for reference: NYT visualization

  • Understanding the world through sound

    The article on sonification explored how sound has become an integral part of our technology. From OS sounds when clicking buttons to alerts for new notifications, sound has made its way into every aspect of computer software. Its widespread use in computer programs shows that sound is necessary to create a complete experience for users. In the physical world, we use sound to enhance our perception of our surroundings more than we may realize. Even typing on my keyboard, for example, I expect to hear the click of the keys; if I didn’t hear these sounds, I would assume something was wrong and look down at my keyboard to inspect the problem. Sound is thus necessary to create a feeling of immersion for users. This has important implications for software engineers: a product may feel incomplete without sound. Although that is obvious for products such as video games, sound can also add a sense of completeness to virtual reality products, webpages, and even office software. Designers should always ask themselves how sonification could make their product more useful and engaging.

  • The Future of Sonification

    I find sonification to be an interesting field with a lot of potential left to explore. Many of the current applications of sonification try to map data values and quantities directly to quantifiable aspects of music, such as pitch. In many cases, however, these sonifications can turn out slightly arbitrary or irrelevant, and while they might be artistically expressive, they can be unhelpful in terms of providing another perspective or answer regarding the data. As a musician, I feel particularly attached to sounds and music – hearing certain background noises can recreate an atmosphere (such as a coffee shop or the T station), and hearing certain songs and voices can trigger poignant memories and emotions. I think there is still so much potential in sonification, and I can see it adding a very valuable emotional aspect to data that today’s data representations lack.

  • Sonification

    I thought that the way Sterne and Akiyama use the term “sonification” to mean making non-sonic information audible has many parallels to the way non-visual data are made visual. When we visualize information, certain patterns pop out and certain questions become easy to answer. When we sonify information, different patterns emerge and different questions can be answered. Both visualization and sonification have their merits, and I believe the method of representing data should depend on the analogy between the data and the patterns you are looking for, and on the types of information we are used to either hearing or seeing.

    One of my favorite examples of sonification is the Radioactive Orchestra, which converts the pattern of low-energy radioactive decay into music by mapping different decay rates and energy levels to musical notes. Radioactive decay is both non-visual and non-sonic, and using sonification to experience and gain intuition about a phenomenon that is normally hidden from all of our senses is a compelling use case. While there would have been some benefit to presenting this information visually, I think that in this case being able to hear the data gives you a better experience, because sound is a closer analogy to particles being emitted all around you than an image is.

    Radioactive Orchestra: http://www.foxnews.com/science/2012/11/14/radioactive-isotopes-used-to-create-live-music.html
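    The core idea of this kind of mapping can be sketched in a few lines. This is a minimal illustration, not the Radioactive Orchestra’s actual scheme: the energy range, scale, and example gamma energies below are assumptions chosen for demonstration.

```python
# Sketch: map decay energies onto pitches by linear binning.
# Energy range, scale choice, and example energies are hypothetical.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, one octave

def energy_to_note(energy_kev, e_min=0.0, e_max=2000.0, scale=C_MAJOR):
    """Map a decay energy (keV) onto a note of the scale by linear binning."""
    frac = min(max((energy_kev - e_min) / (e_max - e_min), 0.0), 1.0)
    index = min(int(frac * len(scale)), len(scale) - 1)
    return scale[index]

decays = [121.8, 344.3, 778.9, 1408.0]  # example gamma-ray energies in keV
notes = [energy_to_note(e) for e in decays]
```

    Feeding the resulting note numbers to any MIDI synthesizer would produce an audible rendering of the decay spectrum; higher-energy events land on higher pitches.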

  • Sonification, Transformation and Art

    In the reading, the authors argue that for sonification to be successful, “the listener must believe that the sound produced is, for all intents and purposes, made of the same stuff as the object it is meant to represent”. They bring up the point that the mapping is flexible - a creator must decide how the object will be mapped to sound - but don’t treat this as a serious problem. I think the problem of arbitrary mapping should be considered more seriously. The rules of transformation are as important, if not more important, than the object being transformed. The same object transformed using different rules can produce wildly different sounds. Two different objects transformed using different rules can produce the same sound. If an artist is free to choose any mapping, they can produce whatever sound they want from any input object. The “Aesthetics” section of the reading mentions the “Life Music” project, which “sonifies DNA sequences by assigning pitches to amino acid sequences”. In order to do this, the authors had to decide which pitches correspond to which amino acids. If they had chosen a different mapping, the resulting track could have had a completely different mood and conveyed a different message. Of course, “Life Music” is an interesting and thought-provoking project. But does it in any way, artistically or scientifically, represent the underlying DNA data that was used as a source? I don’t believe it does.
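    The arbitrariness argument is easy to demonstrate concretely. In this sketch, two invented pitch assignments (not the actual “Life Music” scheme) turn the same short amino acid sequence into entirely different melodies:

```python
# Sketch: the same data under two arbitrary mappings yields two
# unrelated melodies. All pitch assignments here are invented.

sequence = "MKTAY"  # a short hypothetical amino acid sequence

mapping_a = {"M": 60, "K": 64, "T": 67, "A": 72, "Y": 65}
mapping_b = {"M": 71, "K": 58, "T": 62, "A": 66, "Y": 69}

melody_a = [mapping_a[aa] for aa in sequence]  # one "mood"
melody_b = [mapping_b[aa] for aa in sequence]  # a completely different one
```

    Nothing in the DNA itself privileges either mapping, which is exactly why the choice of transformation rules carries so much of the artistic weight.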

  • Adding Sonic Understanding to Our Repertoire

    Sterne and Akiyama’s essay is filled with fascinating insights, but one line in particular stood out to me: “data are fluid and not necessarily tethered to any one sense.” This is a crucial idea, as we often associate certain data with individual senses rather than understanding and utilizing the potential interactions between the senses to fully appreciate and comprehend data.

    When you receive a text, you likely absorb the data and metadata about the message through a variety of senses. Tactile feedback through the phone’s vibration and auditory information from a notification beep communicate metadata that a message has come through. The content of the message itself is typically absorbed visually. Taste and smell are utilized very infrequently in textual data comprehension.

    What if we could learn to absorb data through a variety of senses? Sonification is a great place to start, as people are generally familiar with and receptive to auditory cues. Many forms of important data are already communicated via sound. A straightforward example is the Geiger counter. The concept is simple—as the level of radiation increases, the rate of clicking increases. This communicates radiation data in an easily understandable way that has the further benefit of not requiring rapt attention to absorb. It would be fascinating to expand this idea to use cases other than alerts.
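    What makes the Geiger counter distinctive is that it encodes data in click *rate* rather than pitch. A rough simulation of that encoding, with hypothetical count rates, might look like this:

```python
import random

# Sketch: Geiger-counter-style sonification encodes intensity as click
# density. Clicks are simulated as a Poisson process; the counts-per-second
# values are hypothetical.

def click_times(counts_per_second, duration_s, seed=0):
    """Return simulated click times: higher rate => denser clicks."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(counts_per_second)  # gap to the next click
        if t >= duration_s:
            return times
        times.append(t)

background = click_times(counts_per_second=1.0, duration_s=10.0)
hot_spot = click_times(counts_per_second=50.0, duration_s=10.0)
# A listener perceives the difference in density without watching a display.
```

    Playing a short click sound at each timestamp would reproduce the familiar effect: sparse ticking for background radiation, a near-continuous buzz near a source.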

  • Seeing is Believing, Hearing is Feeling

    Coming into this reading, I had a very different idea of what “sonification” meant, and after learning that the direction of flow is from nonsonic information to sonic, I had a hard time understanding how data could be better represented through sound. Sitting in class, listening to a professor teach, I have an extremely hard time internalizing any knowledge that is said but not written on the board; it takes a conscious effort for me to listen to and write down what is said, and if I don’t write it down, I will surely forget. Sound, to me, is for music–for enjoyment. I don’t particularly enjoy receiving concrete information through sound. When I read about the three categories of sonification, however, I realized that sound is in fact an important means of signifying information–more important than I thought when first introduced to the idea of sonification.

    Sterne and Akiyama discussed the importance of the ear for its ability to “detect anomalies that might ‘pop out’ of a continuous stream of information” and the potential for sonification as an art form, but I was not convinced that sonification is a useful tool for scientific interpretation and manipulation of datasets. The musical DNA and Atmospherics projects seem like intriguing art pieces, but as Polli’s statement makes clear, the primary benefit is gaining an emotional understanding of the data: “In my artwork, I have tried to develop strategies for the interpretation of data through sound that has both narrative and emotional content because I believe that an emotional connection with data can increase the human understanding and appreciation of the forces at work behind the data.”

    In short, I believe that sound as a medium is best to convey emotion, while visualization is best to convey more “concrete” (for lack of a better word) information. Seeing is believing, hearing is feeling.

  • The Cobweb: Archiving the Internet

    I’m torn regarding how I feel about the Internet Archive. On one hand, I think it is incredibly important and useful to keep some collection of past versions of web pages, because if we don’t, we are essentially letting a significant part of history (in the form of online content) slip through our fingers. On the other hand, I am concerned that the Wayback Machine, along with any other internet archive, will mainly become a dump of information that is nearly impossible to wade through. I agree with Lepore’s point that if everything on the Internet is saved, there might be too much of it for anyone to make sense of any of it. This problem is exacerbated by the fact that the contents of the archive are currently not sorted, indexed, or catalogued, and are hardly searchable. If even researchers find it too cluttered to put to good use, that should be a warning that perhaps a change should be made to the organization of internet archives, placing higher emphasis on their utility rather than simply crawling for more and more information.

  • The Cobweb and Decentralized History

    When reading Jill Lepore’s “The Cobweb”, I remembered following the Ukraine conflict and the MH17 story live. I also remembered seeing Strelkov’s post on VKontakte, as it had started spreading virally before it was taken down. Since I saw it with my own eyes, I am certain it is actually legitimate. However, this got me thinking about how the story would have turned out if the post hadn’t spread. What if only a few people had happened to notice it, and a few days later someone found the post archived in the Wayback Machine? Would people believe it was true? Because the post was reported on the Russian news before it was taken down, even Russian people found it difficult to doubt its validity. In the hypothetical scenario, would Russians have believed it? Would other non-Western countries?

    Currently, the history of the Internet is decentralized. It’s collected by national libraries, non-profit organizations that might or might not be politically tied to their countries’ governments, and for-profit organizations that, even if they aren’t politically influenced, can be bought. Moreover, none of these organizations holds the authoritative historical record - as Niels Bruegger writes in his essay, each of them has either different pieces of a website or separate, imperfect copies of it. Bruegger considers this an opportunity for future historians to investigate multiple sources and come to a conclusion. This is a valid approach in history, where uncovering the actual, ontological truth is not as important as reaching the right conclusions. However, in the legal system the former is absolutely crucial, and a mechanism to uncover the truth does not currently exist.

    In the past few years, the tech world has been embracing decentralization. So far, the only successful means of enabling it have been purely technical. Bitcoin enables decentralized money, OpenBazaar enables decentralized commerce, and Ethereum enables decentralized governance and other interesting applications. They all achieve this using mathematics and cryptography. Maybe it’s time to look at web archiving the same way, by developing a decentralized system that provides formal, mathematically provable guarantees of truth and accuracy.

  • Internet and Offline Archives: Apples and Oranges

    It seems to me that trying to compare internet archives and physical archives is somewhat like comparing apples and oranges. In the end, I think that it is important for both to exist, as they play different roles in history and pose different sets of advantages and disadvantages.

    As Lepore pointed out, so much of the internet is junk that gets turned over within months. The internet provides an easy way to broadcast any idea that comes to mind, no matter how stupid. While internet archives allow information to be stored, organized, and retrieved more easily, I don’t think this overcomes the fact that most of the information on the internet is junk. That’s not to say that the junk shouldn’t be archived–I just think that the accessibility of internet archives is at risk of being bogged down by the content.

    On the other hand, publishing a book or another publicly viewable document is relatively hard to do. You can’t be the only person who thinks your ideas are worth writing about–at least one other person has to think so as well. The information we find in books, though constrained by the physicality of the book, is tame enough to process relatively easily. In other words, I think the constraints of the book are redeemed by its (ideally) curated, polished content.

  • Do we care about preserving our data?

    I found the article “The Cobweb” to be an interesting treatment of an important issue–how we can prevent the data on the Internet from being lost. The dilemma of how to archive information on the Internet is yet another consequence of the shortsightedness of not making the Internet more government-regulated when it was created. Furthermore, who knows if a non-profit organization like archive.org will still be able to archive the Internet 50 years from now; the article stated that just in the last two years the size of the archive grew from 10 to 20 petabytes.

    Perhaps a good solution would be for governments, instead of ICANN, to be in charge of top-level domain names and to be in charge of keeping accurate archives of all sites created using those domain names. Or perhaps before a page can be hosted on the Internet, it should pass through a central authority that will register it on the archive to start being recorded. Of course, any solution will be a tradeoff between liberty and effectiveness, but this is an issue that is worth addressing if we care about preserving the history of our civilization.

  • Do We Want to Remember Everything?

    Jill Lepore’s piece, “The Cobweb,” is extremely interesting because it contradicts many of my assumptions about the internet. In the Age of the Internet we’re reminded over and over again to be extremely cautious about what we post online, because anything that goes into the ether is there for good. Lepore points out that in many cases this widespread assumption is simply false. Fascinatingly, the average life of a web page is only about 100 days. This short existence is not limited to memes, short blog posts, or social media posts; scientific papers and online evidence presented in court cases also disappear quickly.

    This issue must be given thought, because references are a key component of knowledge, and if a referenced work can disappear with such ease, we’re in danger of losing substantial components and links of our knowledge base. I knew that MLA and AMA citation formats discourage the use of web URLs in citing online resources, but I didn’t fully understand the rationale behind this convention until reading “The Cobweb.”

    I’m extremely curious how efforts like the Wayback Machine are viewed in the context of the right-to-be-forgotten debate. There is a difficult dichotomy between these two sides. On one hand, there are clear benefits to archives and the preservation of knowledge. It’s challenging to be reliant on information that is changing beneath your feet, and the burning of the library at Alexandria is a tragedy still remembered. On the other hand, there are privacy issues associated with this type of mass collection and retention of information. Under what circumstances is the Wayback Machine required to remove records? As an example, if a nude photo of an underage girl gets posted on a web page, presumably the Wayback Machine would be required to remove that page from its archive. But who makes and executes these decisions?

    Hopefully technology like the Wayback Machine will become less necessary for things like personal data preservation in the case of a hosting website shutting down. It seems the world should be moving towards a model where people have more control of their personal data, but it’s unclear to me how a technology like the Wayback Machine fits into this world.

  • The Flight From Conversation

    Recently, my friends and I have started a new game whenever we go out to dinner: we all take out our phones and stack them in the middle of the table. If anyone reaches for their phone during the entire dinner, then they have to pay for everyone’s meal. Now, we were definitely not the first people to come up with this idea, but ever since we started doing it, everyone has noticed what a positive impact it has made on our conversations. Putting our phones away forces us to maintain more real, more human face-to-face interactions, which my friends and I all appreciate the importance of. It gives us a chance to have longer, uninterrupted conversations without the constant interruptions of texts and notifications. Technology obviously has given us countless benefits, but there is definitely a point to be made about lifting our heads up from our screens and taking in the “real world,” like Turkle writes.

    Another part of Turkle’s article that I found interesting was her statement that “We think constant connection will make us feel less lonely. The opposite is true. If we are unable to be alone, we are far more likely to be lonely.” I strongly agree with this – the way we use current technology, we constantly feel some attachment to our online messages, texts, and social media. Through my own experimentation, I’ve found that cutting off some of these (such as deactivating Facebook for a while) can feel suffocating at first, but after a slightly longer period, it feels extremely liberating not to have to depend on, or even think about, that other online, digital world.

  • Reading response: Why can't we all just unplug?

    As someone who hates having conversations over text, can’t stand it when the person I’m talking to whips out their phone mid-conversation, and cringes watching people at an event spend more time posing for selfies for their new profile pictures than talking to other people or having a good time, I found Turkle’s article to be music to my ears. I like the idea of trying to have more “sacred” spaces or times that are meant to be device-free, but I do wonder how realistic this is to impose as technology develops, and how quickly we will find ourselves in a world where no one is ever unplugged.

    In some situations, I find that having my phone out during a conversation facilitates it: with my family, we are always talking about movies and using our iPhones to find out more details or interesting tidbits to discuss with each other. These kinds of interactions make me think twice about being so cynical about technology, but deep down, I don’t think I’ll ever get over all the points Turkle makes about our flight from conversation.

  • Is Google Making Us Stupid?

    I thought that the article “Is Google Making Us Stupid?” raised a number of interesting points. I have personally experienced the phenomenon the author describes of spending most of my reading time skimming and seeking information rather than deeply reading a single source. Unlike the author, I am not convinced that this is always a bad thing. Before the Internet, it was much more difficult to gain exposure to such a broad range of information. When people only had access to a few books at a time, and information was typically recorded in the form of lengthy books and articles, it was both easier and more important to read written texts cover to cover. Today, both the medium and the content of written information have changed. Although readers today spend less time reading each article, they gain much wider exposure to different types and sources of information. Articles have also changed to better suit this reading style by conveying information concisely. This allows readers to encounter a wider variety of perspectives on a larger amount of information faster, which can help accelerate learning. While it is true that something is lost when readers don’t take the time to fully delve into a topic, something is also gained when readers are able to expose themselves to more, faster. I also agree with the author’s observation that the medium used to write, whether pen and paper or a computer keyboard, affects how you express your thoughts. I have found that my language and writing style vary depending on whether I am texting on my phone, typing on my computer keyboard, or writing with a pen and paper.

  • How Google, Facebook, etc. Could Become Big Brother

    Should search algorithms be moral? A conversation with Google’s in-house philosopher

    In a recent research study, sixty percent of Facebook users had no idea that their newsfeed was filtered. This is a scary phenomenon. It means that a majority of people are unaware that the information they see is a biased subset of the total information available to them through their network. This effect is dubbed the “filter bubble”, and there’s a great TED Talk about it.

    In December 2009, Google began customizing its search results for all users. This new era of personalization changes our online experience, and in particular, how we learn and what information we’re exposed to. In an effort to please users, the news we see is often pleasant and fits our beliefs. The filtering done by Google, Facebook, and the like is virtually invisible, leaving us unaware of what we’re missing.

    Left unchecked, this hidden filtering can begin to morph into a Big Brother-like online world in which large tech giants selectively choose what information to show you, and subsequently, what you identify as the truth. In February 2012, the Pew Internet and American Life survey found that “73% of search engine users say that most or all the information they find as they use search engines is accurate and trustworthy.” The coupling of these two phenomena - faith in returned search results, and results that are both personalized and carry no guarantee of truthfulness - leaves people with a false sense of confidence in their view of the world and the events happening around them.

    As a simple example, say you search for nearby synagogues and have purchased a menorah on Amazon. Google’s algorithms then classify you, with a high level of confidence, as Jewish. Say that when you then search Google for the West Bank, all you see are anti-Palestinian news articles. It could be hard to formulate a comprehensive, informed opinion on the issue, and you might not even realize that your news is being filtered. This is extremely dangerous. On a similar note, while visiting Hebron (a PA-controlled town in the West Bank), I saw this shirt in the market:

    IMG_0059.jpg

  • Comment on "Is Google Making Us Stupid?"

    I have definitely noticed the effect Nicholas Carr describes in his article in my own reading behavior. I consume the majority of my content on the Internet: I read tweets, Facebook posts, news and scientific articles, PDFs, and audio books. I do notice that I often find it difficult to concentrate on a single piece of writing, especially if I don’t find the text all that interesting. However, I’d like to think it’s not because my deep reading skills have completely atrophied. There are still some texts - from short articles to 700-page books - that I find captivating. When reading them, I unconsciously ignore the notification panels popping up on my screen, only to realize after an hour that three urgent emails have come in. However, I don’t come across such texts that often. Most of the time, I skim through the first few paragraphs and ‘evaluate’ the article to decide if it’s worth concentrating on.

    While the shift is definitely happening, I don’t completely agree with the assertion in the article. Google isn’t making us stupid - we’re adapting to a world of information abundance. In the past, if you had a good book in front of you, you would read it because there wouldn’t be many alternatives - or if there were, they wouldn’t be popping up every few minutes. In the age of the Internet, there are hundreds of ways for a person to discover new content: friends emailing you links, tagging you in a Facebook post, tweeting links to articles, content aggregation platforms filtering texts and using machine learning to suggest content you would like. Every time you commit to reading an essay, article, or book, you’re saying “no” to thousands of other texts which might be more interesting and otherwise valuable than the one you chose. There simply isn’t enough time to practice deep reading on every piece of content you find on the Internet - you need to prioritize. And while there is no doubt that this makes our deep reading skills worse - after all, we’re using them less - it improves our comparison and estimation skills and makes it easier for us to see connections between various pieces of knowledge. And even if deep reading is somehow unique in creating deep thoughts, I don’t see an alternative. Either you need to completely trust someone to curate your content, or you have to “waste” time skimming and comparing. There is no other choice.

  • Becoming more objective-minded citizens

    The article “Should search algorithms be moral?” presents the dilemma of whether search engines should strive to present the most factual information rather than the most popular (or rather, “search-engine optimized”) content.

    I believe that the author’s suggestion that search engine companies should censor information they deem non-factual is a dangerous one. The reason is that practically everybody is convinced that what they believe is the truth, and shunning articles deemed non-factual would simply promote whatever happens to be “the truth” from Google’s point of view. Let us not forget that just over a century ago, scientists had “proven” it was physically impossible to create an airplane, or that abiogenesis, the scientific consensus on the origin of life, is a phenomenon we have failed to replicate in the lab despite decades of attempts to do so.

    It is true that the Internet is filled with (from my point of view, at least), non-factual information, such as the belief that dinosaurs and humans co-existed or that vaccines cause autism. But it is shortsighted to assert that what one believes must be unequivocally true, regardless of one’s academic qualifications.

    If we are worried that the mind of the public could be contaminated with false information on the Internet, the solution is to train the public to be objective-minded and ready to research their questions down to first-hand evidence. Furthermore, we must work to make such first-hand evidence, especially artifacts, fossils, and geological findings, readily available on the Internet. Censoring articles that we don’t deem to be true will only work to hamper the intellectual progress of our society.

  • Manovich -- Keeley

    The Interplay Between Reductionism and Ornate Augmentation

    Manovich’s “The Poetics of Augmented Space” presents a very interesting dichotomy between a modernist desire for reductionism and our desire for an ever more comprehensive, ubiquitous supply of information. It is interesting to consider our desire both for minimalism and for having all information available immediately at our fingertips. The most modern of current cities display an interesting mix of intense sensory information - lights, sounds, and images - and minimalistic architecture of stainless steel, glass, and concrete.

    I am curious to see in which direction architectural design goes, and what balance is achieved between data overlay and relief from information overload.

  • The Dark Side of Augmented Space

    The Manovich paper explores how technology has overlaid physical spaces with a technological landscape and how hardware and software are beginning to integrate the physical world with a virtual one. I believe this is an important discussion to have because of the grave security concerns it poses. As more of our environments (such as home security systems, commercial billboards, and government surveillance systems) are integrated with technology, they become vulnerable to the threat of network intrusion. They also affect individual privacy and give more power to those in command of technological architectures. It is important to consider the ramifications of augmented space and to establish principles to abide by; otherwise, we will be forced to deal with the problem once it is too late to make easy changes.

  • Surveillance and Augmented Reality

    It was really interesting to read Manovich’s insights about the duality of augmented reality and the surveillance that is necessary to populate it with data. As the essay was written in 2002, the author could only see surveillance creeping in through very specific channels - video camera footage, cellphone conversation interception, and a few other examples. Nowadays, there are many times more examples of this surveillance.

    In addition, what Manovich calls “surveillance” isn’t necessarily only that when looked at in a contemporary context. By the term he refers to third parties collecting the information necessary to provide their services through augmented reality objects. While traditional surveillance is a big part of this, I think the word ‘surveillance’ does not account for a much bigger part of information sharing: information we voluntarily choose to share on various social networks - our employment information on LinkedIn, our location on Foursquare, our life events on Facebook. This is concerning, because by actively using augmented reality in architecture as well as in our personal lives, we, both as a society and as individuals, implicitly consent to our information being used this way. We care very little about the safeguards when sharing this information, and we probably will for a long time to come. Augmented reality is just so useful and compelling.

  • Lev Manovich Commentary

    Humans have always tried to augment their environments to be easier to understand, navigate, and interact with. As Lev Manovich mentions, we have always used signs and other static labels to add a layer of information to our environments. With recent advances in augmented and virtual reality, I think our environment has started to matter less and less as we spend more time processing information that is part of the new virtual layer instead of the physical layer. With new augmented reality devices, physical spaces can be augmented to suit any purpose, from a movie theater to an art gallery or a classroom. In general, I think this will cause our physical worlds to become less important and simpler while our augmented worlds become more exciting and complex. Virtual reality takes this to an extreme by completely replacing the idea of multiple layers of reality, creating an entirely new reality independent of your physical environment.

  • Augmented spaces, AR, and why tech is leading

    It was interesting to read about what Manovich defined to be augmented spaces, including surveillance, cell phone media, and urban shopping spaces, because these are not usually the first things that pop into my mind when I think of augmented spaces or augmented reality. This is probably because I have been so accustomed to seeing these aspects for the majority of my life that they have become blended with “physical reality.”

    The first thing that did pop into my mind when I thought of augmented reality and spaces was the incredibly fascinating work that Microsoft is doing with the HoloLens (an augmented reality device). If you haven’t heard of it, here are a few short videos that show some of its potential. I would highly recommend checking them out:

    Microsoft HoloLens - Transform your world with holograms

    Hololens Holoportation: Virtual 3D Teleportation in Real-time

    This is perhaps one of the most extreme examples of augmentation, but in my opinion one of the most exciting and promising. The examples Manovich mentioned are much more subtle in comparison. Also, the videos highlight more functional aspects of the technology rather than the cultural and aesthetic purposes, which Manovich chooses to focus on.

    Specifically, Manovich writes, “In a high-tech society, cultural institutions usually follow the technology industry… Can this situation be reversed? Can cultural institutions play an active, even a leading, role, acting as laboratories where alternative futures are tested?” I think it is possible, to an extent. There is definitely a huge amount of potential in expanding the current applications of augmented spaces for more cultural and aesthetic purposes. But ultimately, the reason that tech is leading is that creating genuinely new augmented spaces/reality is technically challenging. Advancing this frontier, in the way that technology like the HoloLens does, requires years’ worth of significant technical knowledge and research.

  • What's wrong with actual reality?

    I really enjoyed Lev Manovich’s “The poetics of augmented space” because I think it pointed out some of the good and bad of augmentation. I found his discussion of augmentation in art galleries, airports/train stations, contemporary architecture, retail spaces, etc. interesting because it made me think about how much augmented realities add to our lives. I also found it interesting to make the connection between augmentation and monitoring, and I don’t disagree with Manovich’s statement that augmentation means surveillance.

    What I think, however, is that surveillance isn’t the only bad thing about augmentation nor is it the worst thing. What frightens me about things like wearable technology isn’t the privacy issue but the idea that reality isn’t good enough–that reality needs to be augmented at all times. There are times when augmented reality is powerful and effective, like in the “audio walks” mentioned in the article or other museum/education type experiences, but I think that there are times when reality should not be augmented. Even a cell phone augments your reality by connecting you to people who are far away, and many people can’t live without their cell phone within reach–hence the market for the Apple Watch. It should be concerning to cell phone users that their constant dependence on their phones means constant monitoring, but I think it should also be concerning as a mark of how detached from the physical world they need to be to just make it through the day.

    I recently had a heated discussion with an artificial intelligence researcher who believed that we will, in the future, be able to upload our brains into a computer and live forever as a machine in a simulated world and that we need to make this happen to save people from death. I know that augmented reality is different than virtual reality, but sometimes talk like this makes me think, “What’s wrong with actual reality?”

  • Rosenberg

    Rosenberg – Keeley

    I found Cartographies of Time an extremely interesting read, as it made me contemplate how I view time. I had never thought about how I visualized time, and didn’t appreciate how ingrained the linearity of time is in Western culture. Mitchell’s insights into how spatial analogies permeate our temporal language are fascinating, in particular his focus on the words we use when discussing events in time: intervals, long, short, before, and after. These words all refer to a linear representation of time. When I began to appreciate how ingrained this representation is, I tried to visualize time in some other way. Instead of thinking of time as a linear process, I tried to think of it as cyclical or flexible. I found this virtually impossible, and thought Reynolds gave a convincing argument for how significantly our cultural background affects how we view time (http://consultingsuccess.org/wp/?page_id=1204).

  • Databases + Georges Adeagbo

    Further follow-up on The Database and how we present information

    Over spring break, I was lucky enough to get a chance to visit Israel. While there I went to the Israel Museum, and one exhibit in particular stood out to me. The exhibit featured Georges Adeagbo, a Beninese sculptor known for his work with found objects. Walking through his exhibition, I immediately thought of this class, and in particular of the reading on databases. When originally reading the chapter on databases, I was quite convinced that almost everything is now arranged in a hierarchical fashion, whether it be information or photos or stories. Adeagbo has a much different take. At first sight, his work is a seemingly haphazard assortment of found objects. A New York Times article described his work well in saying, “He’s a cultural anthropologist and a storyteller, who uses objects to reveal parallel and intertwined narratives.” I thought it was incredibly interesting how Adeagbo manages to tell a convincing story in such a non-hierarchical manner.


  • TimeWimey

    Timelines

    “Time is like wax, dripping from a candle flame. In the moment, it is molten and falling, with the capability to transform into any shape. Then the moment passes, and the wax hits the table top and solidifies into the shape it will always be. It becomes the past, a solid single record of what happened, still holding in its wild curves and contours the potential of every shape it could have held.” - Welcome to Night Vale (EP 21)

    The topic of time is what I consider a ~creepy~ topic; looking back at it, there is nothing that can be done to change what has already happened. The quote above is from a popular podcast I love to listen to called Welcome to Night Vale, which is a mixture of NPR and The Twilight Zone. Time doesn’t work 100% normally in Night Vale (“Clocks and calendars don’t work in Night Vale. Time itself doesn’t work.”), which leads to some interesting analyses of the function of time and what time represents to society. Days are literally canceled and just don’t happen (“Wednesday has been cancelled due to a scheduling error.”). Some people grow old, while others just stay the same age for what we would consider years.

    Now to tie this to this week’s reading. Imagine if our perception and understanding of time were like that in Night Vale. While we would be used to it, it would still be super confusing. Timelines, while occasionally lacking details like the one on page 12 of the reading, are useful for allowing society to perceive our past. If you look at someone’s personal planner, the way they organize their time may be different from yours, but eventually both will fit into a consistent representation of how time was, or is planned to be, used on a timeline.

  • Rosenberg

    This reading really made me think about how we generate visualizations for concepts and data, and how those visualizations then shape how we think about the data. Since I have always been exposed to timelines that are linear, with a clear scale and one event following the next, I hadn’t really thought to ask how else time could be visualized. I found it fascinating that the visualization of time as a line is a fairly recent invention.

  • Line of Time

    The reading assignment explains that the visual representation of chronology as a line is a relatively modern invention. I found this to be interesting. It is natural to think of time as a line or arrow since time does not diverge, cycle, or move backwards. It is also not surprising that the linear representation of time is a recent invention, since the representation of mathematical functions using a pair of axes was not always a technique that humans employed, although today we take it for granted. I believe the representation of time as a one-dimensional axis is the most useful and appropriate, but it would be interesting to explore more novel forms of representing chronology.

  • Cartographies of Time, Chapter 1: Time in Print

    I have to admit that my train of thinking was originally similar to that of the Western historians, who the author claims “think of chronology as [nothing] more than a rudimentary form of historiography.” Thus, it was interesting to see the author’s deliberate emphasis on the importance of lines and chronology.

    The example of the Annals of St. Gall depicts the variance that can occur in even the most basic forms of documentation. I appreciated the point that the form was “closely calibrated to both the interests and the vision of their users,” or in other words, that the form of the annals revealed the “forces of disorder” mindset of that time.

    Another interesting idea mentioned in the reading was the inherent connection between time and space (which is then extrapolated to the line). It makes me wonder, is it even possible to separate the two concepts in our minds? I’m not sure if it is if we continue to regard time in a linear frame of mind.

  • Annals and Indie Films

    What really stuck out to me in this reading was the discussion of disorder, scarcity, and seeming randomness in written timelines. When I first saw the timeline on page 12, I admit I wrote it off as badly written and difficult to follow. When I read the reasoning behind these sparse, disjoint events–a collective reflection of the chaotic and disorderly times–I understood the importance of these annals in addition to the ones that tell more classic narratives. I think that this disorderly, random quality of the world is still present now–maybe just less so than it used to be.

    With this thought, I couldn’t help but think of a parallel to movies. Independently produced films (often known as Indie films) can usually be distinguished from mainstream movies because their narratives feel raw and unpolished–more realistic. The films are usually composed of two types of scenes: those meant to move the story along and those meant to give life to the story by depicting random bits of everyday life that are often left out when recounting a story. These films are less enjoyable to people who like narratives to be polished, tweaked, even engineered to create the most compelling or thrilling story–as they often are in mainstream movies. I am not an Indie film fanatic, but I do appreciate them. Thinking of annals as the Indie films of the timeline world made me appreciate them more.

  • Thoughts on “Six Provocations for Big Data”

    2. Claims to Objectivity and Accuracy are Misleading

    I mostly agree with this claim. Often big data is made out to be almost a magical source of information and analysis, as if it provides the truth simply because of its sheer quantity. However, like the paper says, “a dataset may have many millions of pieces of data, but this does not mean it is random or representative.” In other words, just because we look at a large dataset, this large sample could still inherently be biased toward a certain subset of the population (such as in the case of public tweets or tweets that have been filtered). The paper also advises that “researchers must be able to account for the biases in their interpretation of the data.” I do think that big data is still a very useful tool as long as those who use it are aware of the assumptions and implications of the biases in their analyses.
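
    The point that a large sample can still be unrepresentative can be made concrete with a toy simulation. In the sketch below (all numbers and the age-based selection rule are invented for illustration), a platform’s users skew young, so the sample mean stays biased no matter how many users are observed:

    ```python
    import random

    random.seed(0)
    # True population: ages drawn from a normal distribution (invented numbers).
    population = [random.gauss(45, 15) for _ in range(100_000)]

    # Platform sample: younger people are far more likely to be included,
    # so the sample is biased regardless of how large it grows.
    platform = [age for age in population
                if random.random() < max(0.05, 1 - age / 60)]

    true_mean = sum(population) / len(population)
    biased_mean = sum(platform) / len(platform)
    print(f"true mean: {true_mean:.1f}")
    print(f"biased sample mean (n={len(platform)}): {biased_mean:.1f}")
    ```

    Even with tens of thousands of sampled “users,” the estimate stays well below the true mean; collecting more data from the same skewed source only makes the wrong answer more precise.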

    6. Limited Access to Big Data Creates New Digital Divides

    I found this point interesting as it is something I’ve never consciously thought about before. While I don’t necessarily disagree with the issues that are raised regarding the digital divides, I wonder why this is being highlighted especially for big data, when it seems like it could be applied to almost any kind of specific technical tool or methodology. It is true that there is “unevenness in the system” as the author states - some people have more access to data than others, and “new hierarchies around ‘who can read the numbers’”. But the same can be said for essentially all specialized types of analysis. What about statisticians who have been trained how to use R and SQL to run very powerful queries, or biologists who have access to the most cutting-edge equipment and technologies? The fact that not everyone has complete access to “all big data” does not imply that big data is not valid or useful. I do commend the author on bringing up this point, however, because it is something that I have overlooked, and perhaps we should do more to ensure that more people at least have the technical skills to do computational analyses on datasets, regardless of their career.

  • D. Boyd's Provocations - Karrie

    The first provocation, “Automating Research Changes the Definition of Knowledge,” is of interest to me because it is very relevant to the “new entity” at MIT now known as IDSS. Some of the things we want to know about require interdisciplinary work, so the scholarly traditions of one discipline may clash with the norms and protocols of another. I think this is likely to happen for a while as people attempt to make sense of everyday life from Big Data stemming from popular sources – is this sociology? humanities? something in between?

    A lot of the way Danah Boyd expresses her point seems more urgent than understandable to me, but here’s what I get out of it: Once you start using really transformative research methods and tools, you need to rethink what it is to “know” something. When you move from cause-and-effect claims to statistical claims, knowledge has shifted a bit. When we start using methods associated with Big Data, what counts as scholarly rigor, and what has to be transparent about methods and data collection, for something to count as knowledge? Her point that we need to be able to describe and understand the limitations of our data sources (Twitter, Google, consumer footprints on the web, etc.) seems pretty straightforward as an exhortation – settling on norms will take some time.

    Not All Data Are Equivalent. When you move something from a more ethnographic, individual realm to a very public source, meaning shifts. Her comparison of behavioral and articulated networks with personal networks is a good example. What I can do in a massively connected virtual environment is so different from what I can do when limited by physical geography that we need to be very careful not to conflate concepts across small and Big Data.


  • [by Nicole] thoughts on Big Data

    No. 5: “Just Because it is Accessible Doesn’t Make it Ethical”

    As someone who “deactivated” their Facebook years ago and has to repeatedly “re-deactivate” it after someone tells me, “Your Facebook got reactivated again!”, I understand the issues about having personal information in the hands of social media sites. Being a fairly private person, it bothers me that Facebook refuses to let me cut the ties and remove my profile from the internet permanently, but I also acknowledge the fact that I put my personal information on the web, a notoriously public domain, in the first place.

    I feel that people sometimes forget that the internet is public. There are very few spaces online, if any, that have any semblance of privacy. Any information you willingly put online, you should be prepared to share with the world. In other words, I am sympathetic to the researchers who have to tiptoe around the “ethicality” of accessing public, online information, as if the onus is completely on them and not the people who actually posted that information.

    Accessing information that was not voluntarily made accessible, like online shopping habits, is a little more ethically ambiguous to me. With data like this, I am more sympathetic to the “anonymous” subjects. It is still a little difficult for me to believe that the internet should be deemed a “private” place.

    No. 3: “Bigger Data are Not Always Better Data”

    The issue of understanding the inconsistencies of big datasets is particularly interesting to me as a computer scientist, because we are limited by the algorithms used to process the data. While I understand that it is important to understand what the data is telling us about the people who created it, I wonder how this is really a new problem. The authors state that social media data is skewed by overseeing organizations like Twitter, but when is public data ever not skewed? They also say that errors in data sources are magnified, but doesn’t this just make errors easier to find? Finally, they note that small data is sometimes best, but this does not mean that big data is not powerful. There are many cases where small datasets lead to inappropriate conclusions.

    I guess my point is: big data comes with new challenges, but isn’t this to be expected? I barely see this as a provocation, but a statement of the obvious.

  • Six Provocations for Big Data commentary

    Do numbers speak for themselves?

    Boyd and Crawford criticize the idea that numbers speak for themselves, but that’s a pretty overarching statement. The question of whether or not numbers speak for themselves doesn’t have a binary answer. Boyd and Crawford are correct in identifying that numbers do need interpretation, but even without human-driven data wrangling, machine learning techniques have provided some incredible insights into the details and predictions that can be made given enough data. The canonical Target example (http://www.nytimes.com/2012/02/19/magazine/shopping-habits.html) reveals how computers processing “Big Data” can reveal trends without human training or interpretation. The Target example is interesting because it also ties into the question of “just because it’s accessible, doesn’t make it ethical.” Our shopping habits and behaviors are public to the stores we shop at, and people in general appreciate relevant advertisements, but where is the line?

    Is Big Data self-explanatory?

    On this question, I am inclined to agree with Boyd and Crawford. When unstructured Big Data is computationally analyzed, it is necessary for humans to provide some constraint and structure to the massive quantities of information. Any form of filtering is subjective, and even data “cleaning” introduces inherent biases. I worked on a project analyzing Twitter tweets, and by their nature, the messages are messy and unstructured. In order to gain any meaningful insight I needed to clean the data, for example by eliminating “stop words”, and any words I included in this list were in one way or another subjective. This form of analysis further touches on Boyd and Crawford’s critique that limited access to Big Data creates a digital divide. Some companies have more access to data than others, and it is extremely valid to point out that research done using better datasets and more comprehensive data has an advantage.
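
    To make the subjectivity of “cleaning” concrete, here is a minimal sketch of stop-word removal in the spirit of that kind of tweet analysis (the stop-word list, function name, and sample tweet are all invented for illustration, not taken from the actual project):

    ```python
    import re

    # An invented stop-word list; real lists differ, and the choice matters.
    STOP_WORDS = {"the", "a", "an", "is", "to", "and", "of", "in", "not"}

    def clean_tweet(tweet):
        """Lowercase, strip @mentions and URLs, tokenize, drop stop words."""
        tweet = re.sub(r"@\w+|https?://\S+", "", tweet.lower())
        tokens = re.findall(r"[a-z']+", tweet)
        return [t for t in tokens if t not in STOP_WORDS]

    # Dropping "not" quietly inverts the apparent sentiment of this tweet:
    print(clean_tweet("@user This movie is not good http://t.co/x"))
    # prints ['this', 'movie', 'good']
    ```

    The moment “not” lands on the stop-word list, a negative tweet reads as positive downstream – exactly the kind of quiet, subjective bias a cleaning step can introduce.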

    Conclusion

    Although I disagreed with many of the points brought up in the reading, the general message is commendable. More and more research is moving toward computational methods of analysis on “Big Data”, and researchers need to do a more thorough job of understanding how to perform this analysis, what its limitations are, the associated ethical concerns, and what biases and values might be propagated through its expanding use.


  • Six provocations for big data - David

    I agree with the point made in section 1, “Automating Research Changes the Definition of Knowledge,” that using big data for research and learning changes the way we learn from information. Big data analysis is a powerful tool, and as it becomes even more prevalent due to the increasing availability of rich datasets, its strengths and limitations will influence the way we learn from information. Just as even simple statistics can be very misleading, the results of big data analysis emphasize certain types of patterns and deemphasize others. Big data analysis may cause researchers to avoid fields that have little data already available, or to miss knowledge from outlier data points that are often omitted in big data analysis.

    I also agree with the points made in the “Not All Data Are Equivalent” section. It is extremely easy to assume that the amount of data indicating a trend means the trend is present, without considering the strength of each data point. The usefulness of a big dataset is difficult to quantify. I have heard claims from companies that use big data analysis to find trends that their dataset is extremely useful because of the number of terabytes of data they have. Often they provide very little information about exactly what those terabytes contain. More data is definitely better when it comes to spotting trends, but the insightfulness of a dataset in its context is affected by much more than just its size. When analyzing a large dataset, you have to consider how meaningful each data point is in the context of the question you are trying to answer.

  • Big Data Provocations - Commentary

    It was a little difficult trying to understand the exact message of this paper without knowing the context. I assume it was written as a counterpoint to the idea that Big Data is a wonderful and irreplaceable new tool. Maybe that idea was particularly strong and well argued, and there was a need for its critique, but for me most of the points made in the paper looked obvious. Yes, when working with data, one needs to carefully consider its origin and what exactly it does and does not represent. Yes, most data needs pre-processing, and that pre-processing might inject subjectivity into one’s analysis. Yes, using public or semi-public data for research without its authors’ explicit consent is sometimes ethically questionable. But the way I think about Big Data already takes these things into account. To me it’s just a single tool that can be used in social research along with interviews, polls, and other more standard data-gathering methods. When writing a paper that uses survey data as evidence, the authors are expected to comment on why they believe the data is accurate and representative, both qualitatively and using statistical arguments. The same should be true for Big Data.

    The thing I found interesting in this paper was the idea that, since using Big Data usually requires technical expertise, there is now a shift in what training researchers need to have and how this affects their perspective. While I can’t comment on the academia side of things, I have definitely noticed a growing culture among programmers/developers/hackers of working on small side projects that use modern technologies to reflect on social sciences/art/humanities - ‘the real world’. An example of this is a project that uses machine learning and natural language processing to auto-generate a politician’s speech based on the transcripts of all their previous speeches. Projects exploring political speeches would traditionally have been done by academia. Instead, this was done by an independent developer in their spare time and published on Medium, a blogging platform. As someone who’s part of the hacker culture, I see this as the democratization of the humanities. However, I can definitely understand why people traditionally trained in the humanities would find this suspicious - most programmers’ knowledge of social theory and analysis is as amateurish as most academics’ knowledge of deep learning techniques.

  • Analysis of Six Provocations of Big Data

    4) Not All Data Are Equivalent

    In this subsection, the author explained how researchers need to be careful when interpreting the results of big data, since it is not always obvious what statistics gleaned from large data tells us about the nature of personal relationships. I agree with the author’s analysis. It is not uncommon for individuals to be ‘friends’ with people on social media who they are not truly friends with in real life, either because these connections are professional acquaintances or relative strangers whose friend request was accepted out of politeness. A user may also choose to connect to someone on social media purely out of interest in following what that person publishes, rather than from having amicable feelings. Lastly, it is important for researchers to remember that there are many people who don’t frequent the internet and whose behavior may not be exhibited when big data is analyzed.

    5) Just Because it is Accessible Doesn’t Make it Ethical

    This section raises the question of how ethical it is to analyze data consisting of social content created by individuals who are not aware that their data is being studied. While traditional humanistic studies usually required researchers to obtain the consent of the research subjects, the sheer scale of big data makes it impractical to ask for the permission of each individual involved. Perhaps a solution to this problem is to make users more aware of the possibility that their content may be used for studies. While I assume that most social media sites have terms of agreement that specify this in fine print, perhaps it is their duty to be more direct with users by occasionally displaying infomercials and videos, in place of commercial advertisements, to increase user awareness of how their data may be used.

  • The New Trend for Museums

    The article “Curatorship as Social Practice” explained how museums have evolved in the modern day to become more focused on explaining the social meaning of their artifacts rather than simply displaying them. Museums are now more focused on engaging visitors in thinking about the displayed objects, and this has given new responsibilities to the curators of those objects. I think the recent trend toward emphasizing social context is good for museums. Using artifacts to educate visitors can only create a more informed public, aware of its history and surroundings. Visitors to a museum may easily forget the objects they saw, but they will always remember some important lesson they learned during their visit. Additionally, the emphasis on educating the audience makes the job of a curator more complex, giving them a chance to use a more varied set of skills to create a richer experience for museum visitors.

  • Curatorship as Social Practice - Commentary

    As someone who grew up in Vilnius, Lithuania, I was able to experience the shift in museum attention as it happened. Since the restoration of independence in 1990, Lithuanian museums have been trying to change and catch up with Western museum culture. In only 25 years they have successfully moved from ideological, bureaucracy-driven institutions to social and educational spaces that very closely resemble the ‘new museums’ described in the text. Not every museum has adapted - in smaller towns you can still find places that are just rooms with things to look at. But such museums are usually empty or, at best, full of students who came on a school trip. In the age of the Internet, if you’re only interested in looking at things, you can usually find everything you’re interested in online. Museums will have to offer something more - an experience, not just a view.

    The first museum visit I remember was to the Vytautas the Great War Museum in Kaunas. As a typical six-year-old boy, I liked warfare and guns. But guns - and other artifacts - are all I can remember from that visit: a set of medieval armor, a WW1 rifle, and a model warship. I enjoyed seeing them, but it wasn’t much different from just looking at their pictures in some history book. I also visited the same museum just before I left Lithuania, in 2013. It was a totally different experience - there were still cool-looking guns, but they were all presented in context. There were expositions about warrior cultures around the world throughout history, prehistoric warfare, and Lithuanian inter-war diplomacy. I still enjoyed seeing the guns. But I also loved the stories they told.

  • Regarding "Curatorship as Social Practice"

    I have never been a huge museum-goer. Even when I lived in Paris, a hotbed of interesting museums, I only voluntarily went when there were photography exhibits that I knew I would enjoy. I had also never really thought about why I am not a huge museum-goer–I just assumed I was not “of the type”–until reading “Curatorship as Social Practice”.

    Most of my previous museum experiences are bland. I go to an exhibit, I look at some pieces that either hold my attention for a minute or two or do not. I glance over the plaque next to the piece, hoping for something that will contextualize the piece and draw me in–usually I am disappointed.

    After reading “Curatorship as Social Practice”, I think I finally understand why I have, in the past, not enjoyed going to museums. When I look at a piece of art, I want to learn something new or understand something about life just a little bit better. The kind of photographers I enjoy most (Leon Gimpel, Diane Arbus, etc.) are the ones that make me feel like I understand humanity better when I look at their work. In the outdated “object-oriented” model of museums/curatorship, teaching the viewer about something was not the goal of an exhibit. It is exciting to me to read that the idea of museums and curatorship is changing to focus more on the people behind the objects being displayed. To me, half the interest of an object lies in the story of its creation (I happen to think this about movies as well–despite the fact that the object, or movie, tells a story on its own!). I hope that as museums allow the people-focused aspect of an object to shine through, I will enjoy the exhibits more and more!

  • Zygi - Three Questions

    1. Do the scam letter narratives change over time? How much are they affected by major social and political events or the global media’s portrayal of those events?
    2. As the spam filters are getting more and more accurate, it is becoming much harder for scammers to reach their potential victims. What are the techniques they use to avoid having their mail flagged as spam? Do the scammers have to alter their stories, or are the changes mostly technical?
    3. While scamming people is a crime in almost all countries, some countries, like Nigeria, are known for a huge number of scammers. What is the public perception of scammers in these countries? Are they considered to be shady fraudsters or Robin Hoods?

  • Three Questions - Alan

    1) Where do the authors of scam emails draw their inspiration from? Could it be from personal experiences, from their creative imagination, or do they simply copy other well-known scam messages?

    2) If we were to perform an educational campaign to reduce the number of scam victims, what type of message would be most effective to send? In other words, what general guidelines would help people avoid becoming prey to such emails?

    3) Do you believe people fall victim to email scams because of a lack of wisdom or because of an emotional need? Do you think someone who is very smart and internet savvy could find themselves falling victim to such a scam because of emotional stress and insecurity?

  • Reading Commentary - David

    The reading section on “Distant/Close, Macro/Micro, Surface/Depth” describes how the seemingly opposed methods of analyzing a small amount of information up close and analyzing a large amount of information from far away actually complement each other. Tools for examining information carefully and tools for getting a high-level view of a data set are both valuable for analyzing humanities data. Traditional humanities techniques rely primarily on close readings, and insights are thought of as being excavated from texts through repeated, rigorous readings. In modern times we have created a number of tools that rely on computational power to help researchers uncover high-level trends in previously unimaginably large datasets. While these tools may not offer the same kinds of insights as close readings, they can be used in conjunction with traditional techniques to identify regions of interest worthy of further study.

    I agree with the points made in this reading, in particular that a high-level statistical analysis of a data set is often not a substitute for traditional close reading techniques. I think that data analysis tools can be used very effectively to gain both insights into trends that couldn’t be seen at the macro level and to identify where in a large data set close reading techniques should be applied.

  • Fluid Textuality and New Challenges

    The “Augmented Editions and Fluid Textuality” section introduces the concept of fluid textuality in the Digital Humanities context. Digital works make it easier than ever before to manipulate data and transform it into new forms, types, and objects. Text can be copied and pasted, its visual representation can be changed by switching to a different typeface or color, and it can be transformed into pictures, videos, and sounds. The growing selection of open-access works has made it simple to cut and mix other people’s words, creating digital “collages”.

    While this fluidity has brought many new opportunities, both in creating texts and in analyzing them using digital techniques, I believe it also comes with a lot of challenges. As the authorial identity starts shifting “from the age of the individual voice to that of the collaborative, collective, and aggregated voice of the fluid text”, it’s becoming increasingly difficult to judge texts in context, if only because there usually isn’t a single context. The texts can be used by many different people of different backgrounds, beliefs, and political opinions, eventually merging into a huge common, ‘globalized’ synthesis, and recovering its parts becomes just as difficult as analyzing the texts themselves.

  • Three Questions - David

    1. Are there good records of scam messages over time, and if so, where do these records come from? It seems like, since scam messages are viewed as undesirable or even dangerous and are often automatically deleted, it would be difficult to curate them.
    2. What trends are there in terms of changes in the types of people that are targeted by scam messages over time? How have those changes been reflected in the narratives of scam messages? Are those changes more reflective of a greater number of people becoming reachable by scam messages, or of certain groups of people becoming more or less susceptible to scam messages over time?
    3. How do different genres of scam emails emerge? Are there a large number of unique scam message narratives that are unique to particular scammers or are there only a few unique scam message narratives that have been slightly modified and copied by a large number of scammers?
  • The Challenges and Opportunities In Data Analysis Created By the Internet

    The textbook section “Scale: The Law of Large Numbers” explains how new opportunities and challenges have been created by the sheer quantity of data that is being produced on the Internet. Computers give us the ability to analyze this data faster than ever before, and the World Wide Web makes it possible for individuals to publish things that may have stayed private in earlier time periods. The challenges, however, lie in the fact that data is being produced faster than we can analyze it, and in the ethical dilemma created when we subject emotionally charged content to the ‘dehumanizing’ process of computer analysis.

    I personally welcome the challenges and newfound abilities that the internet has given us. Just as the Law of Large Numbers states that conducting multiple trials of an experiment leads you closer to its mean value, analyzing the large amount of data that we are producing can help us uncover truths about human needs, desires, and aspirations that we may not have discovered before. I don’t believe that algorithmically analyzing data such as Holocaust videos would upset many people. Perhaps what we should be more wary of is the possibility of a person or group using data extracted from the Internet to influence society in a malevolent way.

  • KarriePeterson-3Questions

    The “sameness” of so many stories surprises me - they aren’t very original, and once you’ve seen a couple there aren’t many surprises or even interesting stories. Why would that be? It wouldn’t be very hard to make different stories, even if the formula is still preying on victims in the same way. (Street people asking for $$ in Cambridge seem far more original to me!)

    Does the distance and anonymity of the internet change the sort of people who prey on individuals? Do societal ethics change - is it “less bad” to bilk a single person intentionally than to participate in unsafe schemes (like risky mortgages) that end up bilking hundreds of thousands of people?

    Does the data “match up” with other kinds of activities that are not illegal - such as marketing that appeals to or relies on people’s loneliness, lack of sophistication, pleasure at feeling special, willingness to help, etc.?

  • Scam/Spam: Lingering Questions

    Question 1: What do you believe are the “deal-breakers” and “deal-makers” in a scam message? i.e. What kinds of information do you think give away the scammers, and what information really solidifies “trust” between the reader and scammer?

    Question 2: Is there a trend in terms of the times/places the most convincing scams are born out of? Have you noticed that times of true strife yield the most artful scams?

    Question 3: When you act out email scams, it gives them a certain life, yet the scammers still seem like they are speaking at a distance. If you have ever been approached in person by a stranger looking to scam you, you know the feeling of a more personal, bodily threat. What role do you think the email environment plays in the success/failure of these scam artists?

  • Scam questions

    1. Do you think dialogue with in-person scammers would depict the same underlying story as the scam emails you have compiled? What kind of character is the typical scammer/ is there a common voice?

    2. What was the “turning point” that led you to start further analyzing scam emails rather than just collecting them?

    3. From your work, how do you think “trust and the internet” has evolved so far? Where do you think internet scams will go in the future?

  • What E-mail Scams Teach us About Society

    The assigned articles present e-mail scams as something more than just criminal activities and claim that these scams reveal much about the society we have created. The book The Rumors of the World explains that e-mail scams appeal to the victim’s desire to get rich quick. While humans have always been attracted to fast rewards, the Internet may be aggravating this behavior. Internet users are able to get what they want faster than ever before by accessing information and entertainment almost instantly. The billions of dollars that victims lose to e-mail scams may be symptomatic of how the Internet has affected the mindset of our society.

    The articles also associate e-mail scams with characteristics of our economic and political landscape. The article “Dearest One, My Sincere Greetings To You, And How Are You Doing?” seems to promote the idea that e-mail scammers are no different from the corrupt bankers and businessmen who swindled the common folk and led to the recession of 2008. I personally do not know enough about the true level of corruption in the world’s dominant financial institutions, so I am not prepared to agree with the author on this. However, I do concur with his claim that the political corruption of third-world countries like Nigeria creates a desperate and dishonest atmosphere that easily breeds the criminals who distribute scam e-mails.

    I also enjoyed reading about the film that Joana Hadjithomas and Khalil Joreige have created. It seems to be an effective way of making the viewer understand the way in which scammers emotionally connect with their victims through their message.

  • Visually Representing Scam Messages

    After looking at the scam messages on LMO and the two messages I found in my personal Gmail account, I was definitely able to identify recurring themes. Many scam messages try to make the reader believe they have won a lottery reward that needs to be claimed. Other messages attempt to evoke the empathy of the reader by claiming family members have died or that the sender is fleeing political conflict. Another theme I noticed was the report of a financial error, in which the scammer tries to make the reader think that they must urgently respond to the message or risk losing money from their bank accounts due to some bookkeeping error. Some of the places that messages usually claimed to originate from were Sudan, Nigeria, the United States, and even the United Nations. The monetary amounts at stake ranged from a few thousand dollars to several million dollars, although if the email asked the reader to pay an amount, it was usually small, on the order of $100.

    To visualize this data, one potential idea would be to create an interactive online map of the Earth to show where scam messages purport to originate from. Each nation would be shaded to represent how many scam messages ‘originated’ there. When a user clicks on a nation, a pop-up would appear displaying how the scam messages were distributed among themes and narratives.
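    Such a map could be fed by a simple per-country tally. A minimal sketch of the underlying aggregation (the sample messages and theme labels here are invented for illustration, not drawn from the actual LMO corpus):

    ```python
    from collections import Counter

    # Hypothetical sample: (claimed origin, narrative theme) pairs
    # extracted from a set of scam messages.
    messages = [
        ("Nigeria", "lottery"),
        ("Nigeria", "inheritance"),
        ("Sudan", "refugee"),
        ("United States", "banking error"),
        ("Nigeria", "lottery"),
    ]

    # Tally messages per claimed country of origin (the map shading).
    by_country = Counter(country for country, _ in messages)

    # Tally themes within each country (the pop-up breakdown on click).
    themes_by_country = {}
    for country, theme in messages:
        themes_by_country.setdefault(country, Counter())[theme] += 1

    print(by_country)
    print(themes_by_country["Nigeria"])
    ```

    The same two data structures would drive both views: the country totals determine each nation’s shade, and the per-country theme counters populate the pop-up.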

    Another idea could be to shoot photographs representing the situations depicted by these scam messages. For example, one could photograph a distressed elderly woman to evoke the emotions of a scam message purportedly sent by a widow whose family has died. A website could display these photos and let the user click through to the scam message each image is based on.

    I could definitely imagine a spam story generator. Such a generator could have a set of pre-defined templates available. Users would then have to fill in the blanks left in the story, which could be missing names, countries, bank names, addresses, or dollar amounts. By creating these messages, users may be able to better understand the tactics that scammers resort to in their deceptions.
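    The fill-in-the-blanks generator described above could be sketched in a few lines; the templates and field names below are invented for illustration:

    ```python
    import random
    import string

    # Hypothetical scam templates with named blanks for the user to fill in.
    TEMPLATES = [
        "Dear $name, I am writing from $country regarding the sum of "
        "$amount held at $bank. Please respond urgently.",
        "Congratulations $name! You have won $amount in the $country "
        "national lottery. Contact $bank to claim your prize.",
    ]

    def generate(fields):
        """Fill a randomly chosen template with user-supplied field values."""
        template = string.Template(random.choice(TEMPLATES))
        return template.substitute(fields)

    story = generate({
        "name": "Alex",
        "country": "Nigeria",
        "amount": "$2,500,000",
        "bank": "Global Trust Bank",
    })
    print(story)
    ```

    By filling in the blanks themselves, users would see firsthand how interchangeable the details of these narratives are, which is part of what the exercise is meant to teach.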

  • I Must First Apologize - Interpretations

    While I’m hesitant to comment on the readings without actually having seen the “I Must First Apologize…” exhibition itself, I found it surprising that both commentaries approached it from the same angle, by contextualizing the scam messages and their narrative with the contemporary neoliberal world - specifically, the “ghostly”, “neurotic” capitalism. While the angle is certainly interesting and thought-provoking, it’s not the only approach, and isn’t necessarily the most fitting.

    Commenting on the increase in the number of scam victims in China, Laura Marks writes: “What does this suggest to you? That the “Chinese dream” is quickly turning into a nightmare, for Chinese citizens are both scamming each other and buying into scams, despairing of surviving the new teeth-baring economy?” To me, it instead suggests that the growing Chinese middle class is now empowered to spend some money on investment decisions and, as a consequence, to make more mistakes in those decisions. There obviously isn’t one correct answer - it would have been interesting to see other interpretations in either of the essays, especially because a different context would likely give new insights about the motives, narratives, and ethics of the scam emails.

  • The Good, The Bad, and The Ugly of Scams

    To be honest, when I first heard that our topic of discussion was scam/spam data, I was confused. I thought that these spam emails deserved as much thought as I had previously given them–little to none. I was sure that the senders were simply crooks trying to make some easy money and that any person with half a brain could avoid falling victim to these scams.

    After reading these two chapters from “The Rumors of the World”, I find myself much more conflicted. I see a whole new side of scams–more accurately put, I see the many facets of scams–and I feel silly for not understanding that these scams reflect larger issues, like political unrest, socioeconomic divides, financial corruption and bullying, etc. Still, I have a hard time being wholly sympathetic towards these scammers.

    I agree that there is an art to scamming that is fascinating to study. I agree that scam/spam collectives provide insight into international cash flow that is not as “fair and square” as many people think it is. I agree that many scammers are victims themselves. What I don’t necessarily agree with are claims such as “there are no good or bad guys” or “fraudsters sound like modern Robin Hoods”. While many scammers have good reason to attempt to steal money, so do many criminals–that doesn’t make the act just. It seems to me that saying things like, “Advance-fee scams, then, are no more toxic and only slightly more dubious than the securitized debt schemes that demolished the global economy in 2008” diminishes the depravity of exploiting another person’s suffering to swindle someone else of their money. As Marks pointed out, it is difficult to determine the exact motivation and intention of every scammer, thus making it difficult to make an “ethical evaluation”. I agree with this statement and am potentially being too judgemental, but I would still say that these pieces are perhaps being too indulgent of scam artists. The fascination with scam/spam data, I understand. The coddling of scammers, I do not.

  • Example Commentary

    Here’s an insightful response to the assigned reading from Digital_Humanities - etc., etc. If you edit this post in Prose and click the Meta Data button, you’ll see it’s been given the Digital_Humanities tag. Other readings will show up as available tags too, as we get further along.

    (By the way, here’s the url for the open access edition of the book: mitpress.mit.edu/books/digitalhumanities-0)