> to online systems, we lost that collective-ness, and we began to justify
> local practice as if it were based on sound theory and not the accident of
> how we moved from paper to online systems.
So, I agree with this in the main, but I also worry that a really
complete copy-cataloging regime irons out interesting differences.
There's noise, but also signal in the noise.
For example, the Boston Athenaeum and Northampton's Forbes Library
both have large collections in the original Cutter Classification.
They used different versions of Cutter, and didn't coordinate either
the data or updates to the classification itself. (Over time the
changes grew so large that the two versions amount to different
systems.) They made different choices along the way, choices that took
time and left a mess. But today that mess of data has value.
I'm anxious to turn a clustering algorithm loose on it, as I've done
with the intersections of DDC, LCC, and LCSH. If everyone had the same
data, the noise would vanish, but so would some of the signal.
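
To make that concrete, here's roughly the kind of experiment I mean,
as a toy sketch (the sample records are invented, and it assumes
scikit-learn; the real thing would run over a full catalog dump):

    # Cluster works by the intersection of their classification
    # assignments. Each work is a bag of tokens drawn from DDC, LCC,
    # and LCSH; identical data everywhere would collapse exactly the
    # variation this picks up.
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.cluster import KMeans

    works = [
        {"ddc:510": 1, "lcc:QA": 1, "lcsh:Mathematics": 1},
        {"ddc:512": 1, "lcc:QA": 1, "lcsh:Algebra": 1},
        {"ddc:973": 1, "lcc:E": 1, "lcsh:United States--History": 1},
        {"ddc:974": 1, "lcc:F": 1, "lcsh:New England--History": 1},
    ]

    X = DictVectorizer().fit_transform(works)  # sparse one-hot matrix
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(X)
    print(labels)  # e.g. [0 0 1 1]: math works vs. history works
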
To take an example closer to me, LibraryThing doesn't centralize.
Members choose their record's source, then edit their own copy to
their heart's content, creating 200,000 private MARC repositories!
Ideally, copy cataloging would be deeply atomic and versioned, with
each record a fielded, forked wiki. So not "get the MARC record for X"
but "get me the MARC record for a small public library, and make sure
to include any LCSHs assigned by Doug; he's a good guy."
Speaking of which, I really want to build this—a fielded wiki for
bibliographic data. Anyone want to help me?
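
To sketch what I have in mind (a toy, not a design, and every name in
it is invented): each field value carries its source and a version,
and a "view" assembles a record from the sources you trust.

    # Fielded, forked records: full per-field history, with views
    # assembled from preferred and always-trusted sources.
    from dataclasses import dataclass, field

    @dataclass
    class FieldVersion:
        value: str
        source: str   # who contributed this version
        version: int

    @dataclass
    class Record:
        fields: dict = field(default_factory=dict)  # tag -> [FieldVersion]

        def assign(self, tag, value, source):
            history = self.fields.setdefault(tag, [])
            history.append(FieldVersion(value, source, len(history) + 1))

        def view(self, prefer, always_include=frozenset()):
            # Latest value from the preferred source, plus every value
            # from sources you always want (e.g. Doug's LCSHs).
            out = {}
            for tag, history in self.fields.items():
                picks = [v.value for v in history if v.source == prefer][-1:]
                picks += [v.value for v in history
                          if v.source in always_include and v.source != prefer]
                if picks:
                    out[tag] = picks
            return out

    rec = Record()
    rec.assign("245", "Moby Dick", source="LC")
    rec.assign("650", "Whaling--Fiction", source="LC")
    rec.assign("650", "Sea stories", source="doug")
    print(rec.view(prefer="LC", always_include={"doug"}))
    # {'245': ['Moby Dick'], '650': ['Whaling--Fiction', 'Sea stories']}
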
Tim
Received on Wed May 16 2007 - 16:56:16 EDT