Re: MARC structure (Was: Re: Ceci n'est pas un catalogue)

From: KREYCHE, MICHAEL <mkreyche_at_nyob>
Date: Sun, 26 Aug 2007 08:35:44 -0400
To: NGC4LIB_at_listserv.nd.edu
> Date:    Sat, 25 Aug 2007 07:27:49 -0700
> From:    Karen Coyle <kcoyle_at_KCOYLE.NET>
> Subject: Re: MARC structure (Was: Re: Ceci n'est pas un catalogue)
>
> Once again, from the "reality check" files.... ;-)
>
> KREYCHE, MICHAEL wrote:
>
> >
> > The 99999 limit is maybe the most burning problem with the current
> > MARC21 implementation. Other possible changes--length of tags,
> > fields, indicators, etc.--just aren't going to happen because there
> > is no immediate practical need for them.
>
> There is a great need to expand the subfields. There are a
> number of fields where all or nearly all of the subfields
> have been assigned, but there are desired additions to the
> field. I ran into this in the 773 field when I was asked to
> modify it to be more in line with the data coming from A&I
> databases. (773 is where you would put a journal citation in
> MARC). There were not enough subfields remaining to create
> separate data elements for volume, number, part, and pagination.

I agree completely. I meant that the problem--or rather the
solution--isn't immediate enough to make it worth implementing in the
Z39.2/ISO2709 context. It will take a little time to agree on what the
changes should be, and by then we should be firmly in an XML
mindset--not necessarily MARCXML, which is really just plain old MARC
written in a different hand; to my eye, in a trendy colored ink with a
lot of extra flourishes.
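
For the curious, the ceilings are baked right into the Z39.2/ISO2709
carrier itself. A minimal sketch in plain Python (no MARC library
assumed; the record content is invented) of where the limits live:

    # The leader stores the total record length as five ASCII digits,
    # so no record can describe itself as longer than 99999 octets.
    leader = "99999nam a2200301 a 4500"
    record_length = int(leader[0:5])
    assert record_length == 99999       # the hard ceiling

    # Subfield codes are a single character--in MARC21 practice a
    # lowercase letter or digit--so a busy field like Karen's 773 has
    # at most 36 codes to hand out, ever.
    import string
    codes = string.ascii_lowercase + string.digits
    assert len(codes) == 36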

...

> However, once I get my mind around marcXchange, I start
> wondering about the field/subfield/indicator pattern
> altogether. This is what leads me to want to try the "RDA in
> RDF" approach, creating something much more atomistic and
> allowing systems to build data stores where, rather than
> large, complex records, you have interacting "entities." It's
> still a vague picture in my mind, and I don't know how it
> works for exchange.

Please keep thinking out loud!
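
To keep the thinking out loud going myself, here's a toy sketch of
what that atomistic shape might look like, with plain Python tuples
standing in for subject/predicate/object statements. Every name in it
is invented for illustration--this is not RDA or any real vocabulary:

    # A 773-style journal citation decomposed into small statements
    # about two linked entities, instead of a pile of subfields.
    article = "ex:article/123"                  # hypothetical IDs
    issue = "ex:issue/456"
    statements = [
        (article, "ex:title", "Some article title"),
        (article, "ex:pages", "35-42"),
        (article, "ex:publishedIn", issue),     # a link, not a blob
        (issue, "ex:volume", "12"),
        (issue, "ex:number", "3"),
    ]
    # Volume, number, and pagination each get their own statement, so
    # running out of subfield codes stops being a question at all.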

...

> Then again, I'm beginning to think that we might want to
> think less about sending large amounts of data into many
> thousands of separate databases and more about sharing data
> that lives someplace we can all get to. Again, somewhat
> vague, but here are some examples: Do we want to import into
> our catalogs records for Google Books, or do we want a
> service that allows us to link to Google books from
> bibliographic records and from searches? Do we want to import
> tables of contents for all of our books, or have a service
> that allows us to access them from bibliographic records and
> from searches? Does saving all of this data redundantly in
> thousands of catalogs make sense today? That's what I'm
> thinking about.

The TOCs are a great example. Some of the records in our catalog have
the good old 505, vendor-supplied TOC fields, and an 856 linked to the
LoC TOC data, not always marked as such. The 505 is pretty much
obsolete; all we should need is one link, either from LoC or from our
vendor.
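
Just to make that concrete, here's a rough sketch of the cleanup rule
I have in mind (the field tags are real MARC21; the helper and the
sample values are invented):

    # A record can carry TOC data several ways at once: a 505 contents
    # note, a vendor TOC field, and an 856 link. Keep one.
    def pick_toc(fields):
        """Given (tag, value) pairs, return the single TOC we'd keep."""
        links = [value for tag, value in fields if tag == "856"]
        if links:
            return links[0]                    # prefer the link
        notes = [value for tag, value in fields if tag == "505"]
        return notes[0] if notes else None     # fall back to the 505

    record = [("505", "Contents: Introduction -- Ch. 1 ..."),
              ("856", "http://www.loc.gov/catdir/toc/...")]
    print(pick_toc(record))                    # the 856 link wins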

As I've worked on my bilingual subject headings database, I've come to
see that the redundancy issue is even more critical for authority data.
At first I was happy just to collect it; now I'm wishing the kind of
services you're talking about already existed. I've got some ideas
about building custom agents to keep my copies of the data updated,
but for the moment I'm resigned to keeping copies. On the other side,
I plan to offer some services and will be seeking input on them in a
month or two. Not sure what the data format will look like, but it
won't be Z39.2/ISO2709!
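
For what it's worth, the "custom agent" idea amounts to little more
than a loop like this--a purely illustrative sketch, where the service
URL and the JSON response format are both invented:

    import json
    import urllib.request

    SERVICE = "http://example.org/authorities/changed"  # hypothetical

    def sync(local_copies, since):
        """Fetch authority records changed since a timestamp and
        overwrite the stale local copies."""
        url = "%s?since=%s" % (SERVICE, since)
        with urllib.request.urlopen(url) as response:
            for record in json.load(response):  # assumed: a JSON list
                local_copies[record["id"]] = record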

Mike
--
Michael Kreyche
Associate Professor
Libraries and Media Services
Kent State University