> .... One of our number mentioned that the system made very poor use of
> MARC fixed-length coded data elements for narrowing search results. The
> lead company representative then launched into a 10 minute diatribe, at
> the end of which she said "I can't imagine that *anyone* would want to
> limit by *that* [specific element]. How many in this room believe their
> institutions would find this functionality useful?"
>
> Thirteen hands went up.
Just out of curiosity, what were the elements in question, and were
the audience members high or just nerds? ;)
How many people need to limit things by whether they are a
Festschrift, whether the book contains illustrations, a bibliography,
or an index? How about by the language in the table of contents or
whether a portion of the work includes a translation? What about the
place of reproduction for a microfilm reel or the regularity with
which it is produced? There are special codes for those things, as
well as a lot
of other crazy stuff that a system could use for limiting.
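Just to make the above concrete, here's a rough sketch (not any real ILS's code) of how a system *could* pull limit facets out of the books-format 008 fixed field. The byte positions follow my reading of the MARC 21 books 008 definition -- 008/30 Festschrift, 008/31 index, 008/18-21 illustration codes, 008/24-27 nature of contents -- so double-check against the spec before trusting them:

```python
# Hedged sketch: derive yes/no limit facets from a MARC 21 books-format
# 008 fixed field. Positions assumed (per MARC 21 books 008):
#   008/30    Festschrift indicator ('1' = is a Festschrift)
#   008/31    Index indicator ('1' = index present)
#   008/18-21 Illustration codes ('a' = illustrations)
#   008/24-27 Nature of contents ('b' = bibliographies)

def book_limits(field_008: str) -> dict:
    """Turn a books-format 008 string into simple boolean facets."""
    if len(field_008) < 40:
        raise ValueError("a books-format 008 should be 40 characters")
    return {
        "festschrift": field_008[30] == "1",
        "index": field_008[31] == "1",
        "illustrations": "a" in field_008[18:22],
        "bibliography": "b" in field_008[24:28],
    }

# Hypothetical record coded as an illustrated Festschrift with an
# index and a bibliography:
sample = "890316s1989    nyua     b    011 0 eng d"
```

Of course, this only works to the extent the coding is there and correct in the first place -- which is exactly the problem.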
Meanwhile, publisher -- of all fields -- is not controlled, and use
of the fields that actually link related works is really sporadic...
IMO, the fixed field elements are among the more unfortunate things
found in a catalog record. Even the elements that would seem useful
are applied too unreliably, or are too vague, to be worth much. For
example,
authorized headings in the 6XX's are a FAR better way to deal with
geography than GACs in the 043's.
One thing that has always left me scratching my head is why good
quality cataloging is believed to require a large number of fields
that are not used in any system, nor are they likely to be. There is a
reason why the newer systems fall back to string parsing and
heuristics rather than relying on inconsistent and overengineered
metadata -- the former delivers significantly better results the vast
majority of the time.
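The kind of heuristic I mean: rather than trusting the 008/18-21 illustration codes, scan the free-text physical description (the 300 field) for "ill." and its variants. A toy sketch, with a made-up function name:

```python
import re

# Hedged sketch of a string heuristic of the sort newer systems fall
# back on: guess "has illustrations" from the 300 physical description
# text (e.g. "xii, 300 p. : ill. ; 24 cm.") instead of the fixed field.
# Matches "ill", "illus", "illustration(s)" as whole words.
_ILL_PATTERN = re.compile(r"\bill(us(trations?)?)?\b", re.IGNORECASE)

def has_illustrations(field_300: str) -> bool:
    """Heuristic: does the physical description mention illustrations?"""
    return bool(_ILL_PATTERN.search(field_300))
```

It is crude, but it works on the records as they actually exist, which is why this approach keeps winning.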
If there is one thing that is very clear, it is that assigning a
separate field for every imagined (as opposed to real) use case is
unwieldy as well as unreliable at the encoding end, and it cannot
result in a decent interface unless you normalize the heck out of
everything -- which effectively requires you to dump large parts of
the structure.
One thing that makes designing discovery interfaces tricky is that
people use the catalog for different reasons, and the quality of
metadata is highly variable. To help people find the good stuff
easily, you first have to be able to say in plain English what needs
to be done with the data that we actually have. Those instructions
must then be converted to code that can be used in a practical
environment.
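To illustrate what I mean by going from plain English to code, take the geography point from earlier. The English rule might be: "index geography from authorized 651 headings when the record has them; only fall back to 043 GACs when it doesn't." A toy translation, where `record` is just a hypothetical dict of tag -> list of values:

```python
# Hedged sketch: one plain-English indexing rule turned into code.
# Rule: prefer authorized 651 geographic headings; fall back to the
# 043 geographic area codes only when no headings are present.

def geographic_facets(record: dict) -> list:
    """Pick geographic facet values for one record (hypothetical shape)."""
    headings = record.get("651", [])
    if headings:
        return headings
    return record.get("043", [])
```

The point is less the code than the exercise: if you can't state the rule in a sentence like that, no amount of extra coded fields will save the interface.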
kyle
Received on Fri Mar 06 2009 - 18:24:03 EST