Quick reply to Tim's email.
> I really think it would be short-sighted to create a system and not pay
> attention to the fact that centralized housing is what got us into this
> mess in the first place, with both LC and OCLC.
>
> Using peer-to-peer here strikes me as a technical solution to a social
> and legal problem. There can be no question that the data must be
> free, downloadable and forkable at will. Does baking that into the
> technical structure really add that much, except complexity?
****
Something in me says yes, it does. I agree that the data must be
free/downloadable, etc. What I'm thinking is that if you build
sharability into the system itself, you don't need a "master" or
central repository. If every node is capable of reproducing the whole
via link-following of some sort, that just strikes me as a more
efficient use of space, power, and processing.
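
To make the link-following idea concrete, here's a toy Python sketch
(my own illustration, not a spec): each node holds some records plus
links to other nodes, and any node can rebuild the full set by walking
those links. The node names and the record/link shapes are made up.

    # Toy illustration of "reproduce the whole via link-following":
    # every node holds some records plus links to other nodes, and any
    # node can rebuild the full set by walking those links.
    nodes = {
        "library_a":    {"records": {"rec1", "rec2"}, "links": ["library_b"]},
        "library_b":    {"records": {"rec2", "rec3"}, "links": ["librarything"]},
        "librarything": {"records": {"rec4"}, "links": ["library_a"]},
    }

    def rebuild_from(start):
        """Breadth-first walk of node links, unioning records as we go."""
        seen, queue, records = set(), [start], set()
        while queue:
            node = queue.pop(0)
            if node in seen:
                continue
            seen.add(node)
            records |= nodes[node]["records"]
            queue.extend(nodes[node]["links"])
        return records

    print(sorted(rebuild_from("library_a")))  # any node recovers the whole set
    # -> ['rec1', 'rec2', 'rec3', 'rec4']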
I'm imagining this sort of workflow:
*****
Library A buys a new book and goes to put it into circulation.
When they enter it into their circ system (what that should look like
is a whole other conversation), the system asks "How should I
LCSH/tag/classify this?" and then queries other systems chosen by the
library...could be 3 or 4 other libraries of similar size, one really
huge academic library, LibraryThing, and Random Database #4. The
specific systems don't really matter and can be swapped around at whim.
The local system builds a classification from the results of that
query, along with de-duping rules and such (a rough sketch follows
below).
*****
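
Here's a rough Python sketch of that query-and-merge step. It's purely
illustrative: the peer names, the suggestion format, and the
majority-vote de-duping rule are stand-ins for whatever the real
protocol would actually use.

    # Ask several catalogs chosen by the library for subject headings,
    # then merge by simple majority vote (crude de-duping).
    from collections import Counter

    def suggestions_from_peers(isbn, peers):
        """Collect candidate headings for one ISBN from each peer catalog."""
        candidates = []
        for name, lookup in peers.items():
            try:
                candidates.extend(lookup(isbn))
            except Exception:
                # A peer being down shouldn't block cataloging; skip it.
                continue
        return candidates

    def classify(isbn, peers, min_votes=2):
        """Keep headings that at least `min_votes` peers agree on."""
        votes = Counter(h.strip().lower() for h in suggestions_from_peers(isbn, peers))
        return [heading for heading, n in votes.items() if n >= min_votes]

    # Stub peers standing in for other libraries' catalogs / LibraryThing:
    peers = {
        "similar_library_1": lambda isbn: ["Cataloging -- Data processing"],
        "big_academic":      lambda isbn: ["Cataloging -- Data processing",
                                           "Peer-to-peer architecture (Computer networks)"],
        "librarything":      lambda isbn: ["cataloging -- data processing", "metadata"],
    }

    print(classify("9780000000000", peers))
    # -> ['cataloging -- data processing']

The point of the stub design is that any given peer is disposable: swap
one source out and the merge still works, which is the whole argument
for not having a single master.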
I'm using P2P as shorthand for: no single central master repository.
The classification can be automated (to a degree) and driven by
multiple channels of metadata on the backend. The creation of a new,
distributed-effort LCSH system is tied to how I see the actual items
being classified.
Or perhaps I'm missing something. :-) Which is entirely possible when
I'm out of my depth.
Jason