Re: MARC structure (Was: Re: Ceci n'est pas un catalogue)

From: Karen Coyle <kcoyle_at_nyob>
Date: Sat, 25 Aug 2007 07:27:49 -0700
To: NGC4LIB_at_listserv.nd.edu
Once again, from the "reality check" files.... ;-)

KREYCHE, MICHAEL wrote:

>
> The 99999 limit is maybe the most burning problem with the current
> MARC21 implementation. Other possible changes -- length of tags,
> fields, indicators, etc. -- just aren't going to happen because there
> is no immediate practical need for them.

There is, though, a great need to expand the subfields. There are a
number of fields where all or nearly all of the subfield codes have
been assigned, but additions to the field are still needed. I ran into
this in the 773 field when I was asked to modify it to bring it more in
line with the data coming from A&I databases. (773 is where you put a
journal citation in MARC.) There were not enough subfields remaining to
create separate data elements for volume, number, part, and pagination.
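
To make that concrete, here's a toy sketch in plain Python (the
citation values are made up, and I'm ignoring the real MARC
transmission format). The point is just that the data arrives from the
A&I source as discrete elements, and the field forces them back into a
packed string:

    # What the A&I source gives us: discrete data elements.
    ai_citation = {
        "journal_title": "Library Hi Tech",
        "volume": "25",
        "issue": "2",
        "pages": "260-271",
        "year": "2007",
    }

    # What 773 can hold: $t for the title, plus $g ("related parts"),
    # a free-text subfield that swallows volume, number, and pages,
    # because there aren't enough codes left to give each its own.
    field_773 = {
        "tag": "773",
        "indicators": ("0", " "),
        "subfields": [
            ("t", ai_citation["journal_title"]),
            ("g", "Vol. %(volume)s, no. %(issue)s (%(year)s), "
                  "p. %(pages)s" % ai_citation),
        ],
    }

    print(field_773["subfields"][1][1])
    # Vol. 25, no. 2 (2007), p. 260-271

Getting volume, number, and pages back out again means parsing free
text -- exactly the round trip that separate subfields would avoid.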

There's another problem: subfields have sometimes been treated as
mnemonics -- $u for the URL -- and people don't want to work with a
different subfield code for the same data element in different fields.
What this tells me is that there are certain "universals" that need to
be part of the field definition (we already have that with some of the
numeric subfields), and that input is easier when those common elements
always look the same. This is one of the things MODS is able to do --
by using terms rather than subfield codes -- although I can think of
other ways to do it, such as defining the field as having certain
characteristics (linking, URLs, identifiers, etc.) rather than treating
those as subfields.
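
Here's the kind of collision I mean, plus a sketch of the
"characteristics" idea. The first dict reflects the MARC21 subfield
definitions as I recall them; the "universals" design below it is
purely hypothetical, not anything in MARC or MODS:

    # The same code, different meanings, because codes are assigned
    # field by field:
    subfield_meaning = {
        ("856", "u"): "Uniform Resource Identifier",
        ("773", "u"): "Standard Technical Report Number",
    }

    # Hypothetical alternative: declare shared characteristics once,
    # and let each field opt into them instead of re-assigning letters.
    universal_elements = {
        "uri": "a URL/URN associated with the field",
        "identifier": "a standard identifier (ISSN, ISBN, etc.)",
        "linkage": "a link to a related field or record",
    }

    field_definition = {
        "tag": "773",
        "local_subfields": {"t": "Title", "g": "Related parts"},
        "universals": ["uri", "identifier", "linkage"],
    }

An input interface could then present "URI" the same way in every field
that declares it, which is the consistency catalogers are really asking
for when they defend the mnemonic $u.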

MARCXML, although it solves the length problems, carries with it all of
the limitations of the current subfielding. What intrigues me about the
ISO marcXchange format is that it is a step toward solving the
subfielding problem.
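
I won't try to reproduce the marcXchange schema from memory, so take
this as an illustration of the direction rather than the actual syntax:
once the carrier is XML, nothing in principle limits you to a single
character per subfield identifier, and that is where the room for
growth comes from. (The identifiers below are hypothetical.)

    # Hypothetical, XML-ish field expressed in Python -- NOT actual
    # marcXchange syntax. Multi-character identifiers mean we'd never
    # run out of codes the way 773 did.
    datafield = {
        "tag": "773",
        "subfields": [
            {"id": "title", "value": "Library Hi Tech"},
            {"id": "volume", "value": "25"},
            {"id": "issue", "value": "2"},
            {"id": "pages", "value": "260-271"},
        ],
    }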

However, once I get my mind around marcXchange, I start questioning the
field/subfield/indicator pattern altogether. This is what leads me to
want to try the "RDA in RDF" approach: creating something much more
atomistic and letting systems build data stores where, rather than
large, complex records, you have interacting "entities" (there's a
rough sketch at the end of this message). It's still a vague picture in
my mind, and I don't know how it works for exchange.
Then again, I'm beginning to think that we might want to think less
about sending large amounts of data into many thousands of separate
databases and more about sharing data that lives someplace we can all
get to. Again, somewhat vague, but here are some examples: Do we want to
import into our catalogs records for Google Books, or do we want a
service that allows us to link to Google Books from bibliographic
records and from searches? Do we want to import tables of contents for
all of our books, or have a service that allows us to access them from
bibliographic records and from searches? Does saving all of this data
redundantly in thousands of catalogs make sense today? That's what I'm
thinking about.
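
For what it's worth, here is roughly the shape I have in mind, sketched
with rdflib and Dublin Core terms standing in for whatever the RDA
vocabulary turns out to be. Every URI and value here is made up:

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import DCTERMS

    EX = Namespace("http://example.org/")  # hypothetical URIs

    g = Graph()
    article = EX["article/123"]
    journal = EX["journal/456"]

    # Small, free-standing statements instead of one large record.
    # The journal is its own entity; any number of articles (in any
    # number of data stores) can simply point at it.
    g.add((article, DCTERMS.title, Literal("An example article")))
    g.add((article, DCTERMS.isPartOf, journal))
    g.add((journal, DCTERMS.title, Literal("An example journal")))
    g.add((journal, DCTERMS.identifier, Literal("1234-5678")))  # fake ISSN

    print(g.serialize(format="turtle"))

Nothing here is a "record" until you decide to pull some set of
statements together, which is why I say the exchange question is still
open.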

kc

--
-----------------------------------
Karen Coyle / Digital Library Consultant
kcoyle@kcoyle.net http://www.kcoyle.net
ph.: 510-540-7596   skype: kcoylenet
fx.: 510-848-3913
mo.: 510-435-8234
------------------------------------
Received on Sat Aug 25 2007 - 08:06:04 EDT