It was enlightening to read how much cited content has simply disappeared and how prevalent "link rot" and "reference rot" have become, which is especially troubling in legal and scientific spheres. While Perma.cc seems a promising solution, it will take time and effort to reduce, let alone eliminate, the problem.

As universal as the Internet Archive seems and is, with its comprehensive collection, generally open access, and relatively high usage, I was quite disappointed to read that it does not always completely archive or preserve a webpage (the 2012 presidential campaign ad example) and that the archive is recorded and searchable only by specific URLs. This makes it far less accessible than its openness suggests, which is a real shame, as it undoubtedly houses a vast quantity of potentially useful information.
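The Wayback Machine's public availability API makes this URL-centric design concrete: you can only ask for snapshots of a URL you already know, not search the archive by topic or content. Below is a minimal Python sketch of such a lookup; the example page URL at the end is purely hypothetical.

```python
import json
import urllib.parse
import urllib.request

WAYBACK_API = "https://archive.org/wayback/available"

def closest_snapshot(page_url: str, timestamp: str = "") -> str | None:
    """Return the URL of the closest archived capture of page_url, if any.

    Lookups are keyed to the exact URL string; there is no way to ask
    the API for "pages about topic X", only for captures of a known URL.
    """
    query = {"url": page_url}
    if timestamp:
        query["timestamp"] = timestamp  # desired date, format: YYYYMMDD
    with urllib.request.urlopen(f"{WAYBACK_API}?{urllib.parse.urlencode(query)}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest")
    return closest["url"] if closest and closest.get("available") else None

# Hypothetical example: you must already know the page's original URL.
print(closest_snapshot("http://www.example.com/2012/campaign-ad", "20121101"))
```

Even this small example shows the limitation: a researcher who does not already have the original URL in hand has no straightforward way in.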

This issue really highlights the role and importance of digital humanities as a field in developing tools that enable and improve access to and use of digital material. However, this endeavor would clearly require much time and effort, starting, for example, with “[seeing] what [scholars] tried to do, and why it didn’t work.” The process can also be highly challenging because “it’s a chicken-and-egg problem: ‘We don’t know what tools to build, because no research has been done, but the research hasn’t been done because we haven’t built any tools.’” Nonetheless, accessing and integrating web archives into research is, and will likely remain, an intriguing and worthwhile venture.