I loved reading andypowe11's post about persistent identifiers on the Polaroid Blipfoto blog https://www.polaroidblipfoto.com/entry/465380. Once again it points out an issue that until now I hadn't considered all that much. Sure, I am well aware that web domain names change and content gets taken down as quickly as it is posted, and I am positive everyone reading this has experienced their share of broken links. But for me that mostly happens with websites that feel outdated or a little junky anyway. So how do you ensure that search and findability persist for the really important and culturally valuable stuff (a minority, perhaps?) that exists on the web? The blogger's use of what we can only assume is an identifier for a UK railway bridge is spot on. These systems work because there are people who know how to make them work, and HTTP URLs have become something of a universal language that a majority of folks around the world now speak and understand. This blog post also made me think of a really interesting New Yorker story on the Internet Archive and the Wayback Machine that came out this week. It boggles my mind to imagine the web changing as much in the decades ahead as it has since it began. Does that sentence make any sense?
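The core idea behind persistent identifier schemes (handles, DOIs, and the like) is a layer of indirection: the identifier stays fixed forever, while the location it resolves to can be updated when content moves. Here is a minimal sketch of that idea in Python; the identifier and URLs are invented purely for illustration:

```python
# A persistent identifier system is essentially a managed indirection table:
# published references use the identifier, never the location.
# The identifier "bridge:uk:12345" and both URLs are made up for this example.

resolver = {
    "bridge:uk:12345": "https://example.org/bridges/12345",
}

def resolve(pid):
    """Look up the current location for a persistent identifier."""
    return resolver.get(pid)

# When the content moves, only the table entry is updated;
# every link that cites the identifier keeps working.
resolver["bridge:uk:12345"] = "https://example.net/new-home/12345"
```

Link rot, in this framing, is just what happens when people publish locations directly instead of identifiers, so there is no table left to update.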
Terry Jones makes some excellent observations in his blog post “The future of publishing is writable”. Side note: if you are on campus and get the chance to take Dr. MacCall's CIS 656 Electronic/Contemporary Publishing class, I highly recommend it! A lot of time is spent discussing these same topics. One thought I had while reading this piece, though, is how traditional methods of packaging information are becoming increasingly irrelevant in the digital age and the Web 2.0 environment. At some point in history, books, journals, and newspapers became standards for distributing print information, just as record albums (be they analog or digital) became standards for music. At present it feels like we are witnessing a somewhat violent end to many of these de facto standards. People read what they want to know through blog posts and status updates, and care less about obtaining whole albums than they do about individual songs. It leaves me wondering what the implications of all this disruption of old practices are for metadata. Standardization seems like a critical prerequisite for interoperable metadata and for effective librarianship in general. While certain characteristics make it possible to categorize different information types, the lack of standardization seems to me like a major challenge for the road ahead. Do any of my classmates have thoughts about this?
Rebecca Guenther's presentation “Change in the Digital Age: Metadata Trends for Libraries” provided a solid overview of many of the topics we have been discussing in class and helped illuminate a few more points for me. While she spoke a good deal about descriptive metadata (our focus for the class), I really appreciated her explanation of the other kinds of metadata (administrative, preservation, technical, and structural) that are equally vital to the access and use of a digital object. Her point about meta-metadata struck a particular chord. In my mind digital objects are characterized most by fragility, corruptibility, and impermanence. It is critical, then, to keep some kind of record of the information embedded within them so that items can be recovered and traced. Guenther's discussion of new trends was also interesting. Seeing as the presentation is now over three years old, I am curious to find out how much progress has been made with the Bibliographic Framework Transition Initiative, as well as how much more widespread the practice of linked data has become.
Metacrap was a fun read for this week. Maybe I'm overthinking it, but it felt like all of the really dated references (AltaVista, Palm Pilots, etc.) were inserted with some magical foreknowledge by the author just so that people like me would get to read this fifteen years down the road and have a chuckle. The seven problems that Doctorow introduces do come off as a little simplistic, and they are delivered in a delightfully snarky manner. Yet for someone new to this field, I think the piece serves as a good primer on the pitfalls and traps to watch out for. It is amazing to me how much the concept of metadata has entered the general public's consciousness in the aftermath of the Snowden whistle-blowing and NSA surveillance scandal. While a meta-utopia is absolutely unattainable, I think it's great that more people are showing interest in the topic, and perhaps this strength in numbers will serve the practice well in the future.
This blog post is somewhat of a continuation of my last one, in which I brought up a few of this week's course readings that deal with staking out the library's relevance in a networked age and then reflected on my own informational future. Last night I got to see the movie Citizenfour here in Tuscaloosa at the Bama Theater, a documentary made by one of the journalists Edward Snowden first contacted, who broke the NSA domestic spying story a year and a half ago. Like a lot of people, I think those revelations forever changed my knowledge and perception of “metadata,” and that's one of the reasons I was interested in taking this course in the first place. During an interview from his Hong Kong hotel room, Snowden explains that what motivated him to leak details about the government's top-secret monitoring and tracking program was a profound sense of disillusionment. He mentions that in post-9/11 America self-policing is the norm and there is a constant expectation of being watched. What still shocks me is that I remember, when the scandal first broke, thinking that this kind of extreme action by our elected officials wasn't unexpected and was just part of our collective and growing lack of concern for our own privacy. Another lesson I quickly learned from this whole event is that metadata in the aggregate basically is content: even if NSA spies aren't listening in on every word of your phone calls, a lot of inferences about your life and behavior can still be made from just the metadata collected over time. I know that for our purposes we are dealing with metadata about objects and not about people, but I think a lot of the underlying issues regarding rights and privacy are the same.
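The "metadata in the aggregate is content" point is easy to demonstrate. A toy sketch: given only hypothetical call records (who called whom, and when, with no audio or content at all), a simple tally already suggests something sensitive about a person's life. All of the records below are invented:

```python
from collections import Counter

# Hypothetical call metadata: (caller, callee, timestamp). No content at all.
calls = [
    ("alice", "oncology-clinic", "2015-02-03 09:00"),
    ("alice", "oncology-clinic", "2015-02-10 09:05"),
    ("alice", "insurance-co",    "2015-02-11 14:30"),
    ("alice", "oncology-clinic", "2015-02-17 09:02"),
]

# Without hearing a single word, the pattern of contacts is revealing:
# repeated weekday-morning calls to an oncology clinic invite an obvious inference.
contacts = Counter(callee for _, callee, _ in calls)
most_common = contacts.most_common(1)[0]
```

Scaled up to months of records across phone, email, and location data, this is exactly why aggregated metadata can be as sensitive as the underlying content.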
So while we talk about how libraries can take advantage of big data, and how soon they will be able to offer recommendation and predictive search services just like Google and Amazon, let's not forget that the world of metadata is a really sensitive place right now. Libraries are supposed to promote free thinking. How much of that freedom are we willing to concede?
I also wanted to mention that at one point in the movie the camera panned across Snowden’s hotel room and I caught a glimpse of the Cory Doctorow novel Homeland on his bed. I had never heard of him before this week when we read “Metacrap”. I might have to check him out some more now.
A few of the primary readings for this week (Dempsey, Torkington) were concerned with the future of libraries and their fate within an increasingly networked society. While the arguments are tough to refute, there is something about the timing and urgency of these types of pieces that never ceases to irk me. While I'm reading, I inevitably begin to picture a room full of out-of-touch businessmen desperately grasping for ways to keep their product and label cool with the kids. Nevertheless, these readings got me thinking about my own “informational future” and existence within a networked world. I realize now that it's kind of difficult for me to classify my online presence. I'd say that I occupy a weird border zone between Generation X and Gen Y. I participate in social media, but not enthusiastically. I'm a bit older than Mark Zuckerberg, so I didn't have a Facebook account until I was a good while out of college. I definitely recognize the value in being connected on FB. For me it's great being able to see pictures of my brother and his family where they live in Australia, and I, like many people, get a fair amount of my news there. A lot of it depresses me, though, mainly because at the end of the day I see it as just a big time suck and brain drain (not to mention creepy and evil). I know, though, that there is no turning back, and opting out isn't really a very practical or wise option – consider the requirements for this course as an example. Alas, here I am: I just joined Twitter and, for the first time in my life, have started a blog (this one). While it's interesting to think about the impact libraries will (or won't) have moving forward, I think it's equally imperative for us all to pause and reflect on our own online presence and responsibilities as we enter the future.
I really appreciated and enjoyed the discussion of how to be creative in metadata creation in the Diao and Hernandez piece from the Journal of Library Metadata. It drove home the notion that the purpose of metadata is really to enhance access to an item, not simply to describe it. This is something that was running through my mind earlier today, but in relation to archival finding aids: a finding aid shouldn't just document the archive but should also serve as an access tool for the user. In addressing metadata creation, Diao and Hernandez state that “creative cataloging means doing users' work for users, in advance.” While I agree with and support this idea, it also leaves me wondering where exactly the line is drawn. There is no way to predict all the different meanings or uses any individual might extract from an item. Furthermore, getting overly creative and being too detailed could easily lead to bias. It seems to me that how much of the users' work is done in advance really rides on the philosophy of the particular institution, especially with regard to description versus access. As with everything, finding a balance is key.