Archive for the ‘library and information science’ Category

Error reporting: it’s easier in Kindle

May 9th, 2012

One thing I can say about Kindle: error reporting is easier.

You report problems in context by selecting the offending text. There’s no need to explain where the problem is, just what it is.

Feedback receipt is confirmed, along with the next steps for how it will be used.

By contrast, to report problems to academic publishers, you often must fill out an elaborate form (e.g. Springer or Elsevier). Digging up contact information often requires going to another page (e.g. ACM). Some make you *both* go to another page to leave feedback *and* then fill out a form (e.g. EBSCO). Do any academic publishers keep the context of what journal article or book chapter you’re reporting a problem with? (If so, I’ve never noticed!)

Posted in future of publishing, information ecosystem, library and information science | Comments (0)

Karen Coyle on Library Linked Data: let’s create data not records

January 12th, 2012

There have been some interesting posts on BIBFRAME recently (I’ve noted a few of them).

Karen Coyle also pointed to her recent blog post on transforming bibliographic data into RDF. As she says, for a real library linked data environment,

we need to be creating data, not records, and that we need to create the data first, then build records with it for those applications where records are needed.
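
To make that concrete, here’s a toy Python sketch (my illustration, not Karen’s; the identifiers are made up). A record is a monolithic description that can only be used whole; data is a set of independent statements, from which a record is just one possible view:

    # A "record": one monolithic description, usable only as a whole.
    record = {
        "title": "The Book of Dragons",
        "creator": "Edith Nesbit",
        "date": "1900",
    }

    # "Data": independent statements (subject, predicate, object) that can
    # be shared, linked, and recombined. Identifiers here are made up.
    data = [
        ("work:1", "dcterms:title", "The Book of Dragons"),
        ("work:1", "dcterms:creator", "person:nesbit"),
        ("person:nesbit", "foaf:name", "Edith Nesbit"),
    ]

    def build_record(subject, triples):
        """A record becomes just one possible view assembled from the data."""
        return {pred: obj for subj, pred, obj in triples if subj == subject}

    print(build_record("work:1", data))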

Posted in information ecosystem, library and information science, semantic web | Comments (1)

Code4Lib 2012 talk proposals are out

November 21st, 2011

Code4Lib 2012 talk proposals are now on the wiki. This year there are 72 proposals for 20-25 slots. I pulled out the talks mentioning semantics (linked data, semantic web, microdata, RDF) for my own convenience (and maybe yours).

Property Graphs And TinkerPop Applications in Digital Libraries

  • Brian Tingle, California Digital Library

TinkerPop is an open source software development group focusing on technologies in the graph database space.
This talk will provide a general introduction to the TinkerPop Graph Stack and the property graph model it uses. The introduction will include code examples and explanations of the property graph models used by the Social Networks in Archival Context project, and show how the historical social graph is exposed as a JSON/REST API implemented by a TinkerPop Rexster Kibble that contains the application’s graph theory logic. Other graph database applications possible with TinkerPop, such as RDF support and citation analysis, will also be discussed.
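
If property graphs are new to you, here’s a toy sketch of the model in plain Python (illustrative only: TinkerPop’s actual stack is JVM-based, with Gremlin for traversals and Rexster for the REST layer; the example data is invented, in the spirit of SNAC):

    # A toy property graph: vertices and edges each carry arbitrary
    # key-value properties.
    vertices = {
        1: {"type": "person", "name": "Emma Goldman"},
        2: {"type": "corporateBody", "name": "Mother Earth (periodical)"},
    }
    edges = [
        # (out_vertex, in_vertex, label, edge properties)
        (1, 2, "associatedWith", {"source": "SNAC", "confidence": 0.9}),
    ]

    def out_neighbors(vertex_id):
        """Follow outgoing edges from one vertex."""
        return [(label, vertices[dst], props)
                for src, dst, label, props in edges if src == vertex_id]

    print(out_neighbors(1))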

HTML5 Microdata and Schema.org

  • Jason Ronallo, North Carolina State University Libraries

When the big search engines announced support for HTML5 microdata and the schema.org vocabularies, the balance of power for semantic markup in HTML shifted.

  • What is microdata?
  • Where does microdata fit with regards to other approaches like RDFa and microformats?
  • Where do libraries stand in the worldview of Schema.org and what can they do about it?
  • How can implementing microdata and schema.org optimize your sites for search engines?
  • What tools are available?
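
For a taste of what microdata markup looks like, and how little code it takes to pull it out, here’s a minimal sketch using only Python’s standard library (a real project would use a dedicated microdata parser; the schema.org/Book snippet is made up):

    from html.parser import HTMLParser

    # A made-up schema.org/Book snippet, the kind a library page might carry.
    PAGE = """
    <div itemscope itemtype="http://schema.org/Book">
      <span itemprop="name">The Book of Dragons</span>
      by <span itemprop="author">Edith Nesbit</span>
    </div>
    """

    class MicrodataSketch(HTMLParser):
        """Collects itemprop values; ignores nested itemscopes for brevity."""

        def __init__(self):
            super().__init__()
            self.current_prop = None
            self.item = {}

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if "itemtype" in attrs:
                self.item["@type"] = attrs["itemtype"]
            if "itemprop" in attrs:
                self.current_prop = attrs["itemprop"]

        def handle_data(self, data):
            if self.current_prop and data.strip():
                self.item[self.current_prop] = data.strip()
                self.current_prop = None

    parser = MicrodataSketch()
    parser.feed(PAGE)
    print(parser.item)
    # {'@type': 'http://schema.org/Book', 'name': 'The Book of Dragons',
    #  'author': 'Edith Nesbit'}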

“Linked-Data-Ready” Software for Libraries

  • Jennifer Bowen, University of Rochester River Campus Libraries

Linked data is poised to replace MARC as the basis for the new library bibliographic framework. For libraries to benefit from linked data, they must learn about it, experiment with it, demonstrate its usefulness, and take a leadership role in its deployment.

The eXtensible Catalog Organization (XCO) offers open-source software for libraries that is “linked-data-ready.” XC software prepares MARC and Dublin Core metadata for exposure to the semantic web, incorporating FRBR Group 1 entities and registered vocabularies for RDA elements and roles. This presentation will include a software demonstration, proposed software architecture for creation and management of linked data, a vision for how libraries can migrate from MARC to linked data, and an update on XCO progress toward linked data goals.
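
I haven’t seen XC’s code, but for a flavor of what “preparing MARC metadata for exposure to the semantic web” can look like, here’s a minimal sketch with the pymarc and rdflib libraries (the file name and URI pattern are invented):

    from pymarc import MARCReader
    from rdflib import Graph, Literal, Namespace, URIRef

    DCTERMS = Namespace("http://purl.org/dc/terms/")
    g = Graph()
    g.bind("dcterms", DCTERMS)

    # "records.mrc" stands in for any file of MARC21 records.
    with open("records.mrc", "rb") as fh:
        for i, record in enumerate(MARCReader(fh)):
            work = URIRef(f"http://example.org/work/{i}")  # invented URIs
            for title in record.get_fields("245"):
                if title["a"]:
                    g.add((work, DCTERMS.title, Literal(title["a"])))
            for name in record.get_fields("100", "700"):
                g.add((work, DCTERMS.creator, Literal(name.format_field())))

    print(g.serialize(format="turtle"))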

Your Catalog in Linked Data

  • Tom Johnson, Oregon State University Libraries

Linked Library Data activity over the last year has seen bibliographic data sets and vocabularies proliferating from traditional library sources. We’ve reached a point where regular libraries don’t have to go it alone to be on the Semantic Web. There is a quickly growing pool of things we can actually “link to”, and everyone’s existing data can be immediately enriched by participating.

This is a quick and dirty road to getting your catalog onto the Linked Data web. The talk will take you from start to finish, using Free Software tools to establish a namespace, put up a SPARQL endpoint, make a simple data model, convert MARC records to RDF, and link the results to major existing data sets (skipping conveniently over pesky processing time). A small amount of “why linked data?” content will be covered, but the primary goal is to leave you able to reproduce the process and start linking your catalog into the web of data. Appropriate documentation will be on the web.
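
For a flavor of the “link the results to major existing data sets” step: it can be as simple as asserting owl:sameAs against an identifier you’ve matched externally. A minimal rdflib sketch (the VIAF identifier below is illustrative, not looked up):

    from rdflib import Graph, URIRef
    from rdflib.namespace import OWL

    g = Graph()
    local = URIRef("http://example.org/author/nesbit")  # a URI in our namespace
    viaf = URIRef("http://viaf.org/viaf/12345678")      # illustrative identifier
    g.add((local, OWL.sameAs, viaf))

    # Once the link is asserted, SPARQL can follow the equivalence.
    query = """
        SELECT ?external WHERE {
            <http://example.org/author/nesbit> owl:sameAs ?external .
        }
    """
    for row in g.query(query, initNs={"owl": OWL}):
        print(row.external)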

NoSQL Bibliographic Records: Implementing a Native FRBR Datastore with Redis

  • Jeremy Nelson, Colorado College, jeremy.nelson@coloradocollege.edu

In October, the Library of Congress issued a news release, “A Bibliographic Framework for the Digital Age,” outlining a list of requirements for a New Bibliographic Framework Environment. Responding to this challenge, this talk will demonstrate a Redis (http://redis.io) FRBR datastore proof-of-concept that, with a lightweight Python-based interface, can meet these requirements.

Because FRBR is an entity-relationship model, it is easily implemented as key-value pairs within the primitive data structures provided by Redis. Redis’ flexibility makes it easy to associate arbitrary metadata and vocabularies, like MARC, METS, VRA, or MODS, with FRBR entities, and to interoperate with legacy and emerging standards and practices like RDA Vocabularies and Linked Data.
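
I haven’t seen Jeremy’s proof-of-concept, but the shape of the idea is easy to sketch with the redis-py client: FRBR entities as Redis hashes, relationships as Redis sets. All key names below are invented:

    import redis

    r = redis.Redis()  # assumes a Redis server on localhost

    # FRBR entities as hashes; all key names in this sketch are invented.
    r.hset("frbr:work:1", mapping={"title": "The Book of Dragons", "form": "text"})
    r.hset("frbr:expression:1", mapping={"language": "eng"})
    r.hset("frbr:manifestation:1", mapping={"format": "EPUB"})

    # Entity-relationship edges as Redis sets.
    r.sadd("frbr:work:1:expressions", "frbr:expression:1")
    r.sadd("frbr:expression:1:manifestations", "frbr:manifestation:1")

    # Walk Work -> Expression -> Manifestation.
    for expression in r.smembers("frbr:work:1:expressions"):
        for manifestation in r.smembers(expression + b":manifestations"):
            print(r.hgetall(manifestation))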

ALL TEH METADATAS! or How we use RDF to keep all of the digital object metadata formats thrown at us.

  • Declan Fleming, University of California, San Diego

What’s the right metadata standard to use for a digital repository? There isn’t just one standard that fits documents, videos, newspapers, audio files, local data, etc. And there is no standard to rule them all. So what do you do? At UC San Diego Libraries, we went down a conceptual level and attempted to hold every piece of metadata and give each holding place some context, hopefully in a common namespace. RDF has proven to be the ideal solution, and allows us to work with MODS, PREMIS, MIX, and just about anything else we’ve tried. It also opens up the potential for data re-use and authority control as other metadata owners start thinking about and expressing their data in the same way. I’ll talk about our workflow, which takes metadata from a stew of various sources (CSV dumps, spreadsheet data of varying richness, MARC data, and MODS data), normalizes it into METS (our Metadata Specialists create an assembly plan), and then ingests it into our digital asset management system. The result is a beautiful graph of RDF triples, with metadata poised to be expressed as HTML, RSS, METS, and XML, and it opens up linked data possibilities that we are just starting to explore.
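
Without claiming this is UCSD’s actual mapping, the core move is a crosswalk from each source’s field names into one shared vocabulary; after that, everything is just triples. A toy rdflib sketch with invented field names:

    from rdflib import Graph, Literal, Namespace, URIRef

    DCTERMS = Namespace("http://purl.org/dc/terms/")

    # Invented crosswalk: each source schema's field name -> one shared predicate.
    CROSSWALK = {
        "csv_title": DCTERMS.title,
        "mods_titleInfo": DCTERMS.title,
        "marc_245a": DCTERMS.title,
        "csv_creator": DCTERMS.creator,
    }

    def normalize(rows):
        """Turn per-source field/value dicts into one graph of triples."""
        g = Graph()
        g.bind("dcterms", DCTERMS)
        for i, row in enumerate(rows):
            obj = URIRef(f"http://example.org/object/{i}")  # invented URIs
            for field, value in row.items():
                predicate = CROSSWALK.get(field)
                if predicate:  # unmapped fields get a human pass instead
                    g.add((obj, predicate, Literal(value)))
        return g

    g = normalize([
        {"csv_title": "Scripps pier, 1926", "csv_creator": "unknown"},
        {"mods_titleInfo": "La Jolla kelp survey"},
    ])
    print(g.serialize(format="turtle"))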

UDFR: Building a Registry using Open-Source Semantic Software

  • Stephen Abrams, Associate Director, UC3, California Digital Library
  • Lisa Dawn Colvin, UDFR Project Manager, California Digital Library

Fundamental to effective long-term preservation analysis, planning, and intervention is the deep understanding of the diverse digital formats used to represent content. The Unified Digital Format Registry project (UDFR, https://bitbucket.org/udfr/main/wiki/Home) will provide an open source platform for an online, semantically-enabled registry of significant format representation information.

We will give an introduction to the UDFR tool and its use within a preservation process.

We will also discuss our experiences of integrating disparate data sources and models into RDF: describing our iterative data modeling process and decisions around integrating vocabularies, data sources and provenance representation.

Finally, we will share how we extended an existing open-source semantic wiki tool, OntoWiki, to create the registry.

saveMLAK: How Librarians, Curators, Archivists and Library Engineers Work Together with Semantic MediaWiki after the Great Earthquake of Japan

  • Yuka Egusa, Senior Researcher of National Institute of Educational Policy Research
  • Makoto Okamoto, Chief Editor of Academic Resource Guide (ARG)

On March 11th, 2011, the biggest earthquake and tsunami in Japan’s history struck a large area of the country’s northeastern region. Many people have worked together to help those in the affected area. For the library community, a wiki named “savelibrary” was launched the day after the earthquake to share information on damage and rescue efforts. Later, museum curators, archivists, and people from community learning centers started similar projects. In April we joined these efforts together in the “saveMLAK” project and launched a wiki site using Semantic MediaWiki at http://savemlak.jp/.

As of November 2011, information on over 13,000 cultural organizations has been posted to the site by 269 contributors since the launch. The gathered information is organized using wiki categories for each type of facility, such as library, museum, and school. We have held eight edit-a-thons to encourage people to contribute to the wiki.

We will report on our activity: how the libraries and museums were damaged and have been recovered through great effort, and how we can build a new style of collaboration among the MLAK community, wiki contributors, and other volunteer communities in a crisis.


Conversion by Wikibox, tweaked in TextWrangler. Trimmed email addresses; otherwise these are as-written. Did I miss one? Let me know!

Posted in computer science, library and information science, scholarly communication, semantic web | Comments (0)

Support EPUB!

November 7th, 2011

EPUB is just HTML + CSS in a tasty ZIP package. Let’s have more of it!

That’s the message of the 3-minute spiel I gave when David Weinberger interviewed me at LOD-LAM back in June. The resulting video is on YouTube and embedded below.
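
To see how thin the packaging really is, here’s a minimal sketch that builds a skeletal EPUB 2 file with nothing but Python’s standard library (a production book would want richer metadata, a table of contents, and some CSS):

    import zipfile

    CONTAINER = """<?xml version="1.0"?>
    <container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
      <rootfiles>
        <rootfile full-path="content.opf" media-type="application/oebps-package+xml"/>
      </rootfiles>
    </container>"""

    OPF = """<?xml version="1.0"?>
    <package version="2.0" xmlns="http://www.idpf.org/2007/opf" unique-identifier="uid">
      <metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
        <dc:title>Hello EPUB</dc:title>
        <dc:identifier id="uid">urn:example:hello-epub</dc:identifier>
        <dc:language>en</dc:language>
      </metadata>
      <manifest>
        <item id="c1" href="chapter1.xhtml" media-type="application/xhtml+xml"/>
      </manifest>
      <spine><itemref idref="c1"/></spine>
    </package>"""

    CHAPTER = """<?xml version="1.0"?>
    <html xmlns="http://www.w3.org/1999/xhtml">
      <head><title>Chapter 1</title></head>
      <body><h1>Hello, EPUB!</h1><p>Just HTML in a ZIP. Add CSS to taste.</p></body>
    </html>"""

    with zipfile.ZipFile("hello.epub", "w") as zf:
        # The mimetype entry must come first and be stored uncompressed.
        zf.writestr("mimetype", "application/epub+zip", zipfile.ZIP_STORED)
        zf.writestr("META-INF/container.xml", CONTAINER, zipfile.ZIP_DEFLATED)
        zf.writestr("content.opf", OPF, zipfile.ZIP_DEFLATED)
        zf.writestr("chapter1.xhtml", CHAPTER, zipfile.ZIP_DEFLATED)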

Posted in books and reading, future of publishing, information ecosystem, library and information science | Comments (0)

Web of data for books?

November 5th, 2011

If you were building a user interface for the Web of data, for books, it just might look like Small Demons.

Unfortunately you can’t see much without logging in, so go get yourself a beta account. (I’ve already complained about their asking for a birthday. My new one is 29 Feb 1904; you can help me celebrate in 2012!)

Their data on Ireland is pretty sketchy so far. They do offer to help you buy Guinness on Amazon, though. :)

Posted in books and reading, library and information science, semantic web, social semantic web | Comments (0)

Frank van Harmelen’s laws of information

November 1st, 2011

What are the laws of information? Frank van Harmelen proposes seven laws of information science in his keynote to the Semantic Web community at ISWC2011.1

  1. Factual knowledge is a graph.2
  2. Terminological knowledge is a hierarchy.
  3. Terminological knowledge is much smaller3 than the factual knowledge.
  4. Terminological knowledge is of low complexity.4
  5. Heterogeneity is unavoidable.5
  6. Publication should be distributed; computation should be centralized, for speed: “The Web is not a database, and I don’t think it ever will be.”
  7. Knowledge is layered.
What do you think? If they are laws, can they be proven/disproven?
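
To make laws 1-3 concrete, here’s a toy gloss in Python (mine, not Frank’s): facts as ground binary predicates forming a graph, terminology as a much smaller hierarchy above them:

    # Law 1: factual knowledge as ground binary predicates, i.e. edges in a graph.
    facts = [
        ("Dublin", "capitalOf", "Ireland"),
        ("Liffey", "flowsThrough", "Dublin"),
        ("Ireland", "memberOf", "EU"),
        ("Galway", "locatedIn", "Ireland"),
    ]

    # Law 2: terminological knowledge as a hierarchy (a tiny T-box).
    subclass_of = {"City": "Place", "River": "Place", "Country": "Place"}

    # Law 3: in real datasets, facts outnumber terms by orders of magnitude.
    print(f"{len(facts)} facts, {len(subclass_of)} terminological axioms")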

Semantic Web vocabularies in the Tower of Babel

I wish every presentation came with this sort of summary: slides and transcript, presented in a linear fashion. But these laws deserve more attention and discussion, especially from information scientists. So I needed something even punchier to share (prioritized thanks to Karen).

  1. He presents them as “computer science laws” underlying the Semantic Web; yet they are laws about knowledge. This makes them candidate laws of information science, in my terminology. []
  2. “The vast majority of our factual knowledge consists of simple relationships between things, represented as a ground instance of a binary predicate. And lots of these relations between things together form a giant graph.” []
  3. by 1-2 orders of magnitude []
  4. This is seen in “the unreasonable effectiveness of low-expressive KR”: “the information universe is apparently structured in such a way that the double exponential worst-case complexity bounds don’t hit us in practice.” []
  5. But heterogeneity is solvable through mostly social, cultural, and economic means (algorithms contribute a little bit). []

Posted in computer science, information ecosystem, library and information science, PhD diary, semantic web | Comments (0)

The Legacy of Michael S. Hart

September 16th, 2011

ship sinking into a whirlpool near the Lone Tower

Sometimes people are important to you not for who they are, but for what they do. Michael S. Hart, the founder of Project Gutenberg, is one such person. While I never met him, Michael’s work has definitely impacted my life: The last book I finished1, like most of my fiction reading over the past 3 years, was a public domain ebook. I love the illustrations.


The first personal computer: KENBAK-1 (1971)

In 1971, the idea of pleasure reading on screens must have been novel. The personal computer had just been invented; a KENBAK-1 would set you back $750 — equivalent to $4200 in 2011 dollars2.


Xerox Sigma V-SDS mainframe

Project Gutenberg’s first text — the U.S. Declaration of Independence — was keyed into a mainframe, about one month after Unix was first released34. That mainframe, a Xerox Sigma V, was one of the first 15 computers on the Internet (well, technically, ARPANET)5. Project Gutenberg is an echo of the generosity of some UIUC sysadmins: the first digital library began as a gift back to the world, in appreciation of access to that computer.

Thanks, Michael.

Originally via @muttinmall

  1. The Book of Dragons, by Edith Nesbit: highly recommended, especially if you like silly explanations or fairy tales with morals. []
  2. CPI Inflation Calculator []
  3. Computer history timeline 1960-1980 []
  4. Project Gutenberg Digital Library Seeks To Spur Literacy: Library hopes to offer 1 million electronic books in 100 languages, 2007-07-20, Jeffrey Thomas []
  5. Amazingly, this predated NCSA. You can see the building (Thomas Siebel) hosting the node thanks to a UIUC Communication Technology and Society class assignment []

Posted in books and reading, future of publishing, information ecosystem, library and information science | Comments (0)

Understanding Wikipedia through the evolution of a single page

August 26th, 2011

“The only constant is change.” – Heraclitus

How well do you know Wikipedia? Get to know it a little better by looking at how your favorite article changes over time. To inspire you, here are two examples.

Jon Udell’s screencast about ‘Heavy Metal Umlaut’ is a classic, looking back (in 2005) at the first two years of that article. It points out the accumulation of information, vandalism (and its swift reversion), formatting changes, and issues around the verifiability of facts.

In a recent article for The Awl1, Emily Morris sifts through 2,303 edits of ‘Lolita’ to pull out nitpicking revision comments, interesting diffs, and statistics.

  1. The Awl is *woefully* distracting. I urge you not to follow any links. (Thanks a lot Louis!) []

Posted in books and reading, future of publishing, information ecosystem, library and information science, social web | Comments (0)

Forking conversations, forking documents

August 7th, 2011

When the topic of discussion changes, how do you indicate that? Tender Support seems clunky in some ways, but its forking mechanism helps conversations stay focused on their topic:

Forking with Tender Support

Lately forking has also been on my mind as the Library Linked Data group edits and reorganizes our draft report: wiki history and version control are helpful, but insufficient. What I miss most is a “fork” feature, where you could temporarily take ownership of a copy (socially, this indicates that something is a possibility rather than the consensus; technically, it indicates provenance, would allow “show all forks of this”, and might help in merging changes back). Perhaps naming and tagging particular history items in MediaWiki could help address this, but I think what I really want is something like git.

I’ve seen a few examples of writing and editing prose with git; I’d like to get a better understanding of the best practices for making collaborative changes in texts with distributed version control systems. Surely somebody’s written up manuals on this?

Posted in argumentative discussions, library and information science, PhD diary, random thoughts | Comments (2)

091labs again!

August 4th, 2011

Yesterday, our local hackerspace/makerspace re-opened!

For a while now, Fiacre O’Duinn has been talking about the shared purpose between libraries and these spaces:

The ideas that fuel hackerspaces, such as cooperation, resource and information sharing, self-directed education, and a diversity of views are concepts that are central to our profession’s ethos.

Not to mention the cool tech (3D printers, laser engravers, tool lending libraries, …) we’d like to see in libraries in the not-so-distant future.

It’s a conversation I hope to pick up with Willow & others (thx!) at CCCamp.

Posted in library and information science, random thoughts | Comments (0)