Re: "Third Order"--was Libraries & the Web

From: Corey A Harper <corey.harper_at_nyob>
Date: Wed, 16 May 2007 19:56:16 -0400
To: NGC4LIB_at_listserv.nd.edu
> Tim said:
> Ideally, copy cataloging would be deeply atomic and versioned—each
> record a fielded, forked Wiki. So not "get the MARC record for X" but
> "use me the MARC record for a small public library, and make sure to
> include any LCSHs assigned by Doug; he's a good guy."

To bring this back to Dan Mullineux's earlier statement about RDF, this
is an example of *exactly* what RDF is designed to do. It has the
potential to break bibliographic information, and all sorts of other
metadata, down to the statement level, and to allow further assertions
to be made about those statements, such as who contributed them. The
framework for a system that can algorithmically select which bits of
MARC data to use is already coming together. I'd argue this should be
even more atomic than Tim suggests: down to the specificity of data
currently found in a single subfield or leader byte.
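
To make that concrete, here is a rough sketch in Python (using rdflib)
of how a single MARC subfield might be expressed as one RDF statement,
with a second, reified layer recording who contributed it. The record
URI, the statement URI, and the choice of Dublin Core properties are my
own illustrative assumptions, not any established model:

from rdflib import Graph, Literal, URIRef
from rdflib.namespace import DC, RDF

# Hypothetical URIs for one bibliographic record and one of its statements.
record = URIRef("http://example.org/record/123")
stmt = URIRef("http://example.org/record/123/statement/1")

g = Graph()

# The bibliographic assertion itself: one subject heading, one statement.
g.add((record, DC.subject, Literal("Cataloging--Data processing")))

# A reified copy of that statement, so we can say things *about* it --
# here, who contributed it.
g.add((stmt, RDF.type, RDF.Statement))
g.add((stmt, RDF.subject, record))
g.add((stmt, RDF.predicate, DC.subject))
g.add((stmt, RDF.object, Literal("Cataloging--Data processing")))
g.add((stmt, DC.contributor, Literal("Doug")))

print(g.serialize(format="turtle"))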

Additionally, it could be used to integrate data from outside our walls
into the library data corpus more effectively, and vice versa.

Imagine if OCLC's database were built around this principle: a nightly
SPARQL query could retrieve any statement Doug made about a resource
when he added a field or subfield, and fold it into the data pool of a
local catalog. Imagine, then, another query pulling in the tags Bob
added on LibraryThing and the reviews Sarah posted on Amazon.
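
As a sketch of what that nightly pull might look like, here is one way
to ask a SPARQL endpoint for every statement Doug contributed, assuming
the reification-style provenance modeled above. The endpoint URL, the
graph layout, and the property choices are all invented for
illustration; this is not a real OCLC service:

from SPARQLWrapper import SPARQLWrapper, JSON

# Hypothetical endpoint; not an actual OCLC service.
sparql = SPARQLWrapper("http://example.org/oclc/sparql")
sparql.setQuery("""
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    PREFIX dc:  <http://purl.org/dc/elements/1.1/>
    SELECT ?record ?property ?value
    WHERE {
        ?stmt rdf:type      rdf:Statement ;
              rdf:subject   ?record ;
              rdf:predicate ?property ;
              rdf:object    ?value ;
              dc:contributor "Doug" .
    }
""")
sparql.setReturnFormat(JSON)

# Each binding is one statement Doug contributed; a local catalog could
# merge these into its own data pool.
for row in sparql.query().convert()["results"]["bindings"]:
    print(row["record"]["value"], row["property"]["value"],
          row["value"]["value"])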

-Corey

>
> Speaking of which, I really want to build this—a fielded wiki for
> bibliographic data. Anyone want to help me?
>
> Tim

--
Corey A Harper
Metadata Services Librarian
Bobst Library
New York University
70 Washington Square South
New York, NY  10012
212.998.2479
corey.harper_at_nyu.edu