Soda or pop?

I began the reading for this week with Mary S. Woodley’s section in her Introduction to Metadata about crosswalks and harvesting: http://www.getty.edu/research/publications/electronic_publications/intrometadata/path.pdf. I must say that Woodley made these common practices seem a lot more understandable and practical than they were to me when I first began encountering them. “Mapping” can be understood as the intellectual work of identifying equivalences between different metadata schemas, while “crosswalks” are the actual side-by-side tables of relationships between schemas that serve as keys to interoperability. All of this is conceptually very straightforward. However, I was intrigued by Woodley’s point that there is more going on beneath the surface, since these practices are often thought of as dealing only with metadata structure. What happens when different schemas use different vocabularies within their data values? How can like metadata be harvested or aggregated when there isn’t consistency in terminology and language? Woodley states, “Crosswalks have been used to migrate the data structure of information resources from one format to another, but only recently have there been projects to map the data values that populate those structures” (p. 5). I am curious to learn more about these recent efforts to develop common thesauri and controlled vocabularies for metadata schemas.
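To make the crosswalk idea a little more concrete, here is a minimal sketch in Python of how a side-by-side mapping might be applied to a record. The Dublin Core–to–MODS correspondences are simplified illustrations rather than a complete or authoritative crosswalk, and the sample record is invented.

```python
# A toy crosswalk: Dublin Core elements mapped to (simplified) MODS paths.
# These correspondences are illustrative; a real crosswalk handles many more
# elements and edge cases.
DC_TO_MODS = {
    "dc:title":   "mods:titleInfo/mods:title",
    "dc:creator": "mods:name/mods:namePart",
    "dc:date":    "mods:originInfo/mods:dateIssued",
    "dc:subject": "mods:subject/mods:topic",
}

def crosswalk_record(record, mapping):
    """Re-key a flat metadata record according to a crosswalk mapping.

    Fields with no target in the mapping are collected separately so a
    cataloger can review what would otherwise be lost in migration.
    """
    migrated, unmapped = {}, {}
    for field, value in record.items():
        target = mapping.get(field)
        if target:
            migrated[target] = value
        else:
            unmapped[field] = value
    return migrated, unmapped

# Invented sample record purely for illustration.
dc_record = {
    "dc:title": "Letter from a Union soldier",
    "dc:creator": "Unknown",
    "dc:date": "1863",
    "dc:rights": "Public domain",   # no target in this toy mapping
}

migrated, unmapped = crosswalk_record(dc_record, DC_TO_MODS)
print(migrated)   # fields carried across the crosswalk
print(unmapped)   # fields needing human review
```

Note that this only re-keys the data structure; reconciling the data values themselves against shared thesauri or controlled vocabularies, the harder problem Woodley points to, would require an additional lookup step that this sketch doesn’t attempt.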

Microformats

The readings on microformats for this week were good, but the NY Times blog was the most helpful for me. It wasn’t really until I installed the Operator Firefox plugin per the author’s recommendation and started checking for microformats on different web pages that I got a clearer understanding of what these were all about. I encourage other newcomers to web development like me to check it out. The SEJ blog post on hCard format tools was helpful as well. I’m a little reluctant to create my own hCard with my personal email, phone number, etc. included in it, but I can see how the microformat can be a handy resource, especially for folks looking to grow their personal learning networks!

Links:

http://open.blogs.nytimes.com/2007/12/05/the-magical-minimalism-of-microformats/?_r=1

http://www.searchenginejournal.com/tools-to-use-and-learn-hcard-format-learning-microformats/15875/

https://addons.mozilla.org/en-us/firefox/addon/operator/
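To make the hCard idea more concrete, here is a minimal Python sketch that assembles an hCard snippet from a few contact details. The class names ("vcard", "fn", "email", "tel") come from the hCard microformat; the contact values are placeholders, not anyone’s real information.

```python
from html import escape

def build_hcard(name, email=None, tel=None):
    """Assemble a minimal hCard snippet.

    hCard reuses ordinary HTML with agreed-upon class names ("vcard", "fn",
    "email", "tel") so tools like the Operator plugin can recognize and
    extract contact information from a page.
    """
    parts = ['<div class="vcard">']
    parts.append(f'  <span class="fn">{escape(name)}</span>')
    if email:
        parts.append(f'  <a class="email" href="mailto:{escape(email)}">{escape(email)}</a>')
    if tel:
        parts.append(f'  <span class="tel">{escape(tel)}</span>')
    parts.append("</div>")
    return "\n".join(parts)

# Placeholder contact details -- swap in real ones only if you are comfortable
# publishing them, as noted above.
print(build_hcard("Jane Q. Student", email="jane@example.edu", tel="+1-555-0100"))
```

Pasting the output into a test page and viewing it in Firefox with Operator installed should show the contact information being detected.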

Under the influence of Schema.org

I had the same reaction as WossaMetaU when coming across the Joho blog post about the Bogota Manhattan recipe. It brought my attention to schema.org and to the fact that there is a standardized vocabulary for HTML markup that lets programmers label and categorize certain elements on a page so that search engines can return the right information more efficiently. I’ve experienced marking up text documents with TEI, where it’s possible to similarly tag certain contextual elements, but until now I had never realized that this capability can and will naturally be exploited by everyone in the information business. Now it seems so obvious to me how I often search Google for a recipe based on a couple of ingredients that I have, and then am most drawn to the first result I see that has a “rich snippet” telling me a few key aspects of the recipe along with a positive review. While this discovery has opened my eyes to a whole new aspect of coding that I hadn’t really considered before, it also shows me just how much influence a specific item can have over me simply because of the more sophisticated and standardized markup behind it.
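As a rough illustration of the kind of markup that powers those rich snippets, here is a small Python sketch that wraps an invented recipe in schema.org microdata. The itemtype and itemprop names are drawn from the schema.org Recipe type as I understand it; the recipe itself is made up, and real pages can carry the same information as microdata, RDFa, or JSON-LD.

```python
from html import escape

def recipe_microdata(name, ingredients, rating):
    """Wrap a recipe in schema.org microdata so search engines can parse it.

    itemscope/itemtype declare the thing being described; each itemprop labels
    one of its properties (name, recipeIngredient, aggregateRating).
    """
    lines = ['<div itemscope itemtype="http://schema.org/Recipe">']
    lines.append(f'  <h2 itemprop="name">{escape(name)}</h2>')
    lines.append('  <ul>')
    for item in ingredients:
        lines.append(f'    <li itemprop="recipeIngredient">{escape(item)}</li>')
    lines.append('  </ul>')
    lines.append('  <div itemprop="aggregateRating" itemscope'
                 ' itemtype="http://schema.org/AggregateRating">')
    lines.append(f'    <span itemprop="ratingValue">{rating}</span> / 5')
    lines.append('  </div>')
    lines.append('</div>')
    return "\n".join(lines)

# Invented example: the sort of cocktail recipe the Joho post discusses.
print(recipe_microdata("Bogota Manhattan",
                       ["rye whiskey", "sweet vermouth", "bitters"], 4.8))
```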

Looking forward to more TEI

Last semester in Dr. MacCall’s CIS 656 course we had a few sessions with DH center staff and metadata librarians downstairs in the Alabama Digital Humanities Center that focused on TEI. We each completed a TEI assignment in which we did our own encoding of a Civil War letter preserved in Acumen and available to view here. The purpose was to electronically mark up the letter exactly as the handwritten text appeared and to add contextual tags such as person, place, and organization names, as well as dates and salutations. While I know that we only barely scratched the surface of what TEI can do through this exercise, I really enjoyed the activity in that it felt a lot like being a detective, and maybe even a time bandit to boot. Here we were attempting to capture the exact content and meaning of someone else’s words from 150 years ago in this utterly unfathomable computation device, while at the same time doing our best to interpret and decipher terminology and language from a long-ago era. I would be delighted to have the chance to do more of these kinds of exercises for this class, but I’m also wondering if anyone else in the course has experience with TEI and can maybe offer a different perspective?
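For anyone curious what that kind of encoding looks like, below is a minimal Python sketch that builds a tiny TEI-flavored fragment with xml.etree.ElementTree. The sentence being encoded and the attribute choices are my own invention for illustration; a real TEI document would carry the TEI namespace, a full teiHeader, and project-specific encoding guidelines.

```python
import xml.etree.ElementTree as ET

# Build a tiny TEI-style fragment: a line from an imagined Civil War letter
# with a person, a place, and a date tagged so they can be indexed later.
p = ET.Element("p")
p.text = "Dearest "
persName = ET.SubElement(p, "persName")
persName.text = "Sarah"
persName.tail = ", we arrived at "
placeName = ET.SubElement(p, "placeName")
placeName.text = "Tuscaloosa"
placeName.tail = " on "
date = ET.SubElement(p, "date", when="1863-04-12")
date.text = "the twelfth of April"
date.tail = "."

# Prints the encoded fragment as a single line of XML.
print(ET.tostring(p, encoding="unicode"))
```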