Well, maybe they don't need to be full MARC, but what's the harm in
keeping the entire MARC record? They average about 1K, so storage isn't
an issue. The thing is, it's hard to determine ahead of time what you
can throw away. For example, I'd like to keep all of the language
codes, which appear only in the 008 and 041 fields, and I'd want to
store the meanings of those codes because I'd want to do a display that
reads: "In English. Translated from the German." That may be in a note
in the MARC record, but you can't necessarily find it among the notes.
I bet that music folks would like to see (or be able to search) the 048
data on the number and type of musical instruments ("for 2 cellos, 1
piano, 1 bassoon," but in coded form). If you look through the MARC
record, there is some useful stuff. I bet that if we threw anything
away, we'd regret it later.
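
To make that concrete, here is a minimal sketch of the kind of language
display I mean, using pymarc; the input file name and the tiny
code-to-name table are just illustrative stand-ins:

    from pymarc import MARCReader

    # Tiny illustrative slice of the MARC language code list.
    LANG_NAMES = {'eng': 'English', 'ger': 'German', 'fre': 'French'}

    with open('records.mrc', 'rb') as fh:  # hypothetical input file
        for record in MARCReader(fh):
            f008s = record.get_fields('008')
            # 008 positions 35-37 hold the language of the item
            lang = f008s[0].data[35:38] if f008s else None
            f041s = record.get_fields('041')
            # 041 $h holds the original language of a translation
            originals = f041s[0].get_subfields('h') if f041s else []
            orig_lang = originals[0] if originals else None
            if lang in LANG_NAMES:
                display = "In %s." % LANG_NAMES[lang]
                if orig_lang in LANG_NAMES and orig_lang != lang:
                    display += " Translated from the %s." % LANG_NAMES[orig_lang]
                print(display)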
kc
Ross Singer wrote:
> Why do these have to be full MARC records for the sort of solution
> we're talking about? Isn't this just some sort of indirection service
> to get the user to the local library? Do we really need another MARC
> record brokerage service?
>
> What I would really rather see is this sort of uber-catalog that
> associates useful value-added services (that would most likely be
> outside of a MARC record) such as summaries, links to reviews,
> syndicated TOCs, dust jackets, etc. That /could/ be a wiki and that
> way such a project wouldn't have to get bogged down with who can
> authoritatively edit a MARC record correctly.
>
> But, really, for the sorts of services that we're trying to capture
> (namely, Google PageRank, Wikipedia, etc.), would we need anything
> more granular than DC?
>
> -Ross.
>
> On 4/25/07, Hahn, Harvey <hhahn_at_ahml.info> wrote:
>> Casey Bisson wrote:
>> |Diane I. Hillmann wrote:
>> |> Why can't we just use OAI to pass them around? Much less overhead,
>> |> and the aggregators and services can develop on their own once the
>> |> records are available. ...
>> |
>> |A problem here is licensing. While many of us are creating some
>> |records and fixing many more, our catalogs are filled with records
>> |derived from any number of sources and controlled by a greater
>> |number of licenses. We can't legally share our records (especially
>> |into a copyleft collection) if we don't own them or control the
>> |license.
>>
>> The thing to do would be to get a significant number of large
>> libraries, both public and academic, to cull out of their local
>> databases all the bib records for which they are the originating
>> library (tag 040 subfield a). Since they are the originating library,
>> their local records would not carry any possibly questionable
>> shared-database additions or modifications from a copyright
>> standpoint, but would almost certainly be "pristine"--unless they
>> redownloaded for some reason. The LC bib database could be included,
>> too, since it's freely available to Americans--though I believe there
>> are restrictions on access by non-U.S. libraries, since it was
>> created with American tax money and not with funds from other
>> countries. (You'd have to check on how to deal with that.) Anyway,
>> that combination ought to give a good start to such a cooperative
>> project.
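>>
>> A minimal sketch of that cull, assuming pymarc; the file names and
>> the "XYZ" originating-library code are hypothetical stand-ins:
>>
>>     from pymarc import MARCReader, MARCWriter
>>
>>     MY_CODE = 'XYZ'  # this library's MARC organization code
>>
>>     with open('local_db.mrc', 'rb') as src, \
>>          open('originals.mrc', 'wb') as out:
>>         writer = MARCWriter(out)
>>         for record in MARCReader(src):
>>             f040s = record.get_fields('040')
>>             # 040 $a names the original cataloging agency; keep only
>>             # the records this library created from scratch
>>             if f040s and MY_CODE in f040s[0].get_subfields('a'):
>>                 writer.write(record)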
>> Of course, other considerations come into play beyond what has
>> already been mentioned. If two or more libraries (in different shared
>> cataloging cooperatives) created bib records from scratch for the
>> same item, which record goes into the repository? What's the process
>> for correcting errors, and who has the authority to make them? Or is
>> the repository like a wiki, where anybody can change records
>> willy-nilly? How would duplicate records, if any, be handled? Just
>> read any shared cataloging cooperative's manual(s) to get an idea of
>> all that's involved. It's easy to look at big pictures, but (in a
>> digital world) all the little pixels have to fit together correctly
>> for the big picture to exist.
>>
>> I'm rather sure the wished-for repository is quite possible, but it
>> won't be large unless *lots* of libraries (especially big ones)
>> participate. For example, the current typical rate for local creation
>> (from scratch) of book bib records in OCLC is about 2% (AV is around
>> 10%-20%)--at least in a public library environment. That's not a lot
>> of books for any one typical local institution to contribute (for us,
>> that would be only around 7000-7500 titles). Compare that to the 4-5
>> million or more records (I no longer know how many LC MARC records
>> exist) created by the Library of Congress since 1968. Without the LC
>> bib database, you're likely talking about a piddly-sized
>> repository--nothing approaching OCLC's 70+ million bibs, even setting
>> aside all the article records using up numbers. FWIW.
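>>
>> Back-of-envelope, using only the figures above (the participant count
>> is purely hypothetical):
>>
>>     per_library = 7250     # ~7000-7500 locally created titles each
>>     lc_records = 4500000   # 4-5 million LC records since 1968
>>     participants = 500     # hypothetical number of large libraries
>>     print(participants * per_library + lc_records)  # 8125000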
>>
>> Harvey
>>
>> --
>> ===========================================
>> Harvey E. Hahn, Manager, Technical Services Department
>> Arlington Heights (Illinois) Memorial Library
>> 847/506-2644 - FX: 847/506-2650 - Email: hhahn(at)ahml(dot)info
>> OML & Scripts web pages: http://www.ahml.info/oml/
>> Personal web pages: http://users.anet.com/~packrat
>>
>
>
--
-----------------------------------
Karen Coyle / Digital Library Consultant
kcoyle@kcoyle.net http://www.kcoyle.net
ph.: 510-540-7596
fx.: 510-848-3913
mo.: 510-435-8234
------------------------------------