Perspective on Google

I appreciated Ricky Erway’s post, “May I speak openly about mass digitization?”, for putting Google Books and mass digitization into perspective. The bottom line is that the more freely accessible information becomes, the better off we all are. With all the paranoia over a full-blown Google takeover, it’s easy to overlook the fact that Google is a business and approaches its projects with a pretty single-minded business mentality. I am reminded of another Managing Technology piece I read, by William C. Dougherty, published in January 2010, in which he addresses the question of whether Google Books will make libraries obsolete. Dougherty points to the example that in Google’s schema Hamlet is classified under “Antiques and Collectibles,” because the company uses Book Industry Standards and Communications (BISAC) subject headings rather than Library of Congress Subject Headings. For serious research needs, libraries will always be relevant.

Why I avoid digital music

This post is inspired by Bruce Lazorchak’s blog post about what’s being done to enhance metadata for the music industry, one of our reading options for the week. Growing up, music played a pretty big role in my life, and before coming here to Alabama I spent over a decade volunteering with two college radio stations – WHPK in Chicago and WTUL in New Orleans. While digitization has made obtaining music a lot more accessible and convenient, I am (like a lot of radio people I know) one of those analog holdouts. I realize now, though, that it’s not just about sound quality but about the “aura” of the object, and one of the basic functions of that object is to communicate information. As Lazorchak points out, that function is really lacking in digital music. Back in the day, an album’s liner notes, song credits, and album art were how I learned about music. I followed certain artists’ careers and record labels’ output through the info I gathered from the packaging, as if it were linked data. I was happy to learn about the Recording Academy’s “Give Fans the Credit” initiative. It seems like one small step in the right direction toward improving the quality and robustness of metadata for digital music.
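To make the point a little more concrete, here is a rough, made-up sketch of the gap I mean. All of the field names and values below are my own invention (not any official tag standard or schema): one dictionary stands in for what a typical digital download carries, the other for the kind of credit information that used to live in the liner notes.

```python
# Illustrative only: the field names and values below are invented examples,
# not any official tagging schema. A typical digital download carries little
# more than this:
typical_track_tags = {
    "title": "Example Song",
    "artist": "Example Band",
    "album": "Example Album",
    "year": "1998",
    "genre": "Rock",
}

# Liner notes, by contrast, carried the kind of credit data that initiatives
# like "Give Fans the Credit" want attached to the file itself:
liner_note_credits = {
    "songwriters": ["A. Writer", "B. Writer"],
    "producer": "C. Producer",
    "recording_engineer": "D. Engineer",
    "session_musicians": {"bass": "E. Player", "drums": "F. Player"},
    "studio": "Example Studio, New Orleans",
    "label": "Example Records",
    "catalog_number": "ER-001",
}

# The difference between the two is, roughly, the metadata that gets lost
# when an album is reduced to a folder of files.
missing = set(liner_note_credits) - set(typical_track_tags)
print(sorted(missing))
```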

Metadata and Documentation Strategy

I really appreciated what Dr. MacCall said in class tonight about how metadata indexers aren’t interested in sharing content but are much more concerned with sharing context. I think I was trying to make a point to this effect in my previous blog post on the foundations of resource description, and I am grateful that the idea can be summed up in such a clear and concise way. I also thought it was interesting that Dr. MacCall brought up the concept of “documentation strategy,” which I remember learning about in Prof. Riter’s Archival Appraisal course last spring. I am inspired now to go back and brush up on this approach, since I can see a correlation between gathering records from far and wide with the mission of documenting a certain subject and establishing a protocol for indexing metadata for records that share a similar context. Now that I think about it, though, I also remember that documentation strategy drew some criticism for being overly ambitious and not totally realistic. I guess, as Dr. MacCall said, it all comes down to scale. Anyway, in case anyone else is interested, I recall this article as being a good starting point:

Thinking about the foundations of resource description

I enjoyed reading the D-Lib Magazine article on the foundations of Dublin Core, especially in light of my classmate Tonya’s post from a couple of days ago. I think that 20+ years of experience and practice with DC have influenced a lot of professionals’ and users’ opinions of the schema, and of course hindsight is always 20/20; inevitably something emerges that nobody was fully prepared for. The DC origins reading from 1995 really got my attention because it offered me a fresh perspective on the subject, and I think it has helped me better appreciate how this all began and the rational, practical thought process of the innovators of systematized metadata. I also liked my classmate Tonya’s musings on why all the schemas that have since been developed to suit more particular needs can’t be combined into one superpower schema. It seems to me now that this is essentially how Dublin Core originated, or at least that it was initiated to describe as wide a range of electronic records as possible. Thinking back to my own limited experience with DC in last semester’s Digital Libraries course, I see now that by adding qualifiers I was able to make my metadata very specific very fast, both in the context of sports photos and images of digitized artwork and ephemera. I don’t feel seasoned enough in this realm yet to say whether that makes me a defender of DC as Tonya’s fantasy schema, but suffice it to say that I have a clearer understanding of and appreciation for the intention behind the schema.
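To give a rough idea of what I mean by qualifiers, here is a minimal sketch. The photo and all of its values are made up, but the dc: and dcterms: element names come from the DCMI vocabularies; refinements like dcterms:created and dcterms:spatial pin down what a plain dc:date or dc:coverage leaves ambiguous.

```python
# A minimal sketch of how qualifiers narrow a Dublin Core record.
# The photo described here is invented; dc: holds the simple elements and
# dcterms: the refinements from the DCMI vocabularies.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"
DCTERMS = "http://purl.org/dc/terms/"
ET.register_namespace("dc", DC)
ET.register_namespace("dcterms", DCTERMS)

record = ET.Element("record")

# Simple (unqualified) DC: broad but ambiguous -- which date? coverage of what?
ET.SubElement(record, f"{{{DC}}}title").text = "Homecoming football game"
ET.SubElement(record, f"{{{DC}}}date").text = "1952"
ET.SubElement(record, f"{{{DC}}}coverage").text = "Tuscaloosa"

# Qualified DC: the refinements say exactly what each value means.
ET.SubElement(record, f"{{{DCTERMS}}}created").text = "1952-10-18"           # date the photo was taken
ET.SubElement(record, f"{{{DCTERMS}}}spatial").text = "Tuscaloosa, Alabama"  # place depicted
ET.SubElement(record, f"{{{DCTERMS}}}extent").text = "1 photographic print; 8 x 10 in."

print(ET.tostring(record, encoding="unicode"))
```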

OAIster and DPLA

It was interesting to learn about OAIster and the Digital Public Library of America, two retrieval services for digital material that I had never heard of before this class. OAIster was developed first and, by harvesting metadata from hundreds of repositories across the world, strives to be the first union catalog for all varieties of scholarly digital information items. The particular article that I reviewed, “Looking for Pearls,” provides a pretty straightforward explanation of how OAIster works as well as some of its “quirks,” which, as I understand it, result mainly from the scale it has to operate at and the inescapable matter of dealing with non-normalized metadata entries. After comparing this synopsis to the DPLA, it seems like the DPLA aims to create the same type of service that OAIster offers, but exclusively for institutions within the US. I am curious to know, then, to what degree there is collaboration between the two organizations. Is there any competition arising, or is one filling a niche or meeting a demand that the other is unable or not meant to fulfill? What am I missing here?
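For anyone curious, here is my rough sketch of the kind of OAI-PMH request a harvester like OAIster sends out to gather records in unqualified Dublin Core. The function and the example URL are my own placeholders, not a real endpoint or anyone’s actual harvesting code.

```python
# A rough sketch of the OAI-PMH request a metadata harvester issues; the
# endpoint passed in would be a repository's OAI base URL.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def harvest_titles(base_url):
    # Ask the repository for records in unqualified Dublin Core.
    params = urllib.parse.urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urllib.request.urlopen(f"{base_url}?{params}") as response:
        root = ET.fromstring(response.read())

    # Pull out whatever titles came back -- this is where non-normalized
    # metadata shows up, since every repository fills the fields differently.
    for record in root.findall(".//oai:record", NS):
        title = record.find(".//dc:title", NS)
        print(title.text if title is not None else "(no title supplied)")

    # A real harvester would also follow the resumptionToken in each response
    # to page through the full record set, and repeat this for every repository.

# Example (placeholder URL, not a real endpoint):
# harvest_titles("https://repository.example.edu/oai")
```

Multiply that by hundreds of repositories, each filling in the Dublin Core fields a little differently, and the scale and normalization “quirks” the article describes start to make sense.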

Uphill both ways!

The impetus for this post comes from a couple of my classmates’ comments on our assigned reading for this week, “How far should we go with ‘Full Library Discovery’?” I couldn’t agree more with Madam Librarian and MetaWhat! Data!, but I thought I’d post my own comment for a little added emphasis. In reflecting on this piece I am reminded of the old cliché, “With great power comes great responsibility” (and after typing the phrase into Google I see that the quote is best known from Spider-Man, so now we know it’s from a reputable source!). But seriously, how many times so far in this class have we asked whether a capability is really smart and cool or really creepy? As the commenters on the original blog post note, there definitely needs to be transparency about how “full discovery” is enabled, along with a clear and easy way to opt out. Furthermore, I really think the author is on to something in bringing up the value of serendipity in the stacks. That is an idea that very quickly gets overlooked when all the oh-so-brilliant, creative, and innovative minds out there start fantasizing about all the ways computers and data can “make life easier” for us. While I don’t necessarily agree that the hard way is the only good way, knowledge and understanding are certainly amplified when they are worked for rather than passively received.