The discussion so far has more than once meandered a bit and
run into less than fruitful controversies.
Let me try to outline where I think it should be going, i.e.,
what questions the debate should try to answer. These questions
can probably be discussed presently without too much speculation.
This list is not meant to be exhaustive or to reflect priorities.
1. Where are current LIS OPACs actually deficient? Can a
checklist be drawn up against which to match the specifics
of actual systems? (And should the results be sent to the vendors?)
2. Are there ways to improve the situation significantly without
a revolution in rules and/or formats? Keeping in mind that
we can do precious little to improve the quality and content
of legacy data. Or can we? With what money?
2a. Is the current situation so bad that we should accept a break
in metadata consistency in order to begin something entirely new?
3. Will global collaboration and standardization help a lot?
Keeping in mind that it is the actual documents that readers
want, not the metadata, and that transborder ILL is a
costly and time-consuming option. It is the local collections,
the stuff that readers can at once lay their hands on, that
matter most to them. Or is it not?
Projects like VIAF, however, aim at improving search options,
not at amassing more title data in ever larger files!
4. Can new partnerships be forged to open up new opportunities?
I mention here the Google-WorldCat alliance, which might be
extended with more functions at the local level, to enhance
search options for local collections. Keeping in mind that
libraries have no means to do significant amounts of local
digitization plus OCR plus full-text indexing themselves.
Google Books, on the other hand, could definitely profit from
the inclusion of more library catalog metadata. OPACs and
Google Books could make great complements; neither of them
profits from going it alone.
5. Can ToC harvesting and indexing be done collaboratively,
sharing the results internationally, on a large scale, to provide
local catalogs with extra fodder for indexing without extra
manual input? Considering that ToC data are probably the next
best thing to full text, and supposedly full of relevant terms
for searching. Arguably even better than the full text?
(In fact, the occurrence of a term in the ToC is likely to
enhance ranking in Google Books - or if not, it should.)
A small indexing sketch follows after this list.
6. What new communication functions should catalogs be able
to support? Surely they ought to speak XML, but keep in mind
that XML is no replacement for MARC, only for ISO 2709.
Is there an XML schema that is likely to advance to a standard
and thus be worth investing in (to make the LIS "speak" and
"understand" it)? A MARCXML sketch follows after this list.
7. Can AI products help improve legacy data and quality of searching?
8. Are there AI products that can provide new input for catalogs,
to augment or replace human input (considering the results of
question 1)? Can this input augment, improve, or even
revolutionize authority control in the near term? Classification?
LCSH? Or something new altogether? A crude matching sketch
follows after this list.
9. Will catalogs of the future need index browsing as an extra
option? If yes, just for authority data (names, subjects) or also
for descriptive data (title strings, keywords, series titles)?
A browsing sketch follows after this list.
10. Will RDA be a step in the right direction? Will it be more
than that, or less? Or is FRBR rather an academic concept with,
on the whole, not much impact on real-world search situations?
A small sketch of the FRBR entities follows below.
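
Regarding question 5: a minimal Python sketch of what local ToC
indexing could look like once ToC text has been harvested and shared
somewhere. Everything here (the toc_store mapping, the record IDs, the
tokenizer) is invented for illustration; a real system would have to
cope with OCR noise, many languages and proper stopword lists.

import re
from collections import defaultdict

STOPWORDS = {"the", "of", "and", "a", "an", "in", "to", "for"}

def tokenize(text):
    """Lowercase, split on non-letters, drop very short words and stopwords."""
    return [t for t in re.split(r"[^a-z]+", text.lower())
            if len(t) > 2 and t not in STOPWORDS]

def build_toc_index(toc_store):
    """toc_store maps a local record ID to its harvested ToC text."""
    index = defaultdict(set)
    for record_id, toc_text in toc_store.items():
        for term in tokenize(toc_text):
            index[term].add(record_id)
    return index

def search(index, query):
    """Return the IDs of records whose ToC contains every query term."""
    terms = tokenize(query)
    if not terms:
        return set()
    return set.intersection(*(index.get(t, set()) for t in terms))

if __name__ == "__main__":
    toc_store = {
        "rec001": "1. Cataloguing principles  2. Authority control  "
                  "3. Subject access",
        "rec002": "Chapter 1: Full-text indexing  Chapter 2: Ranking",
    }
    idx = build_toc_index(toc_store)
    print(search(idx, "authority control"))   # expected: {'rec001'}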
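
Regarding question 6: MARCXML (the Library of Congress "MARC 21 slim"
schema) is one existing answer to the schema question, since it carries
MARC content unchanged and merely replaces the ISO 2709 container. A
minimal sketch using only the Python standard library; the record
content is invented, and a real record would also carry the leader and
control fields.

import xml.etree.ElementTree as ET

MARCXML_NS = "http://www.loc.gov/MARC21/slim"
ET.register_namespace("", MARCXML_NS)

def marcxml_record(author, title):
    """Build a very skeletal MARCXML record with a 100 and a 245 field."""
    record = ET.Element("{%s}record" % MARCXML_NS)
    f100 = ET.SubElement(record, "{%s}datafield" % MARCXML_NS,
                         tag="100", ind1="1", ind2=" ")
    ET.SubElement(f100, "{%s}subfield" % MARCXML_NS, code="a").text = author
    f245 = ET.SubElement(record, "{%s}datafield" % MARCXML_NS,
                         tag="245", ind1="1", ind2="0")
    ET.SubElement(f245, "{%s}subfield" % MARCXML_NS, code="a").text = title
    return record

if __name__ == "__main__":
    rec = marcxml_record("Doe, Jane", "An imaginary handbook of cataloguing")
    # Same field content as the MARC record, but in XML instead of ISO 2709.
    print(ET.tostring(rec, encoding="unicode"))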
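
Regarding questions 7 and 8: reduced to its crudest form, the task such
tools would face is grouping variant name strings from legacy data under
one heading. The sketch below is not an AI product, only a plain
string-similarity stand-in from the Python standard library; real
matching would need dates, titles, co-author context and much more. All
names and the threshold are invented.

from difflib import SequenceMatcher

def similar(a, b, threshold=0.75):
    """Crude similarity test; the threshold is arbitrary."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def cluster_headings(headings):
    """Greedily assign each heading to the first cluster it resembles."""
    clusters = []
    for heading in headings:
        for cluster in clusters:
            if similar(heading, cluster[0]):
                cluster.append(heading)
                break
        else:
            clusters.append([heading])
    return clusters

if __name__ == "__main__":
    legacy = ["Meyer, Hans", "Meier, Hans", "Meyer, H.", "Smith, John"]
    for group in cluster_headings(legacy):
        print(group)
    # Groups the Meyer/Meier variants together and leaves Smith alone -
    # and would just as happily conflate two genuinely different persons,
    # which is exactly where human (or far better automated) judgment
    # is still needed.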
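
Regarding question 9: index browsing itself is technically trivial,
which is one argument for keeping it as an option; the real question is
whether readers still want it. A minimal sketch over a sorted list of
headings (the headings are invented; a real browse index would sort on
normalized forms).

from bisect import bisect_left

def browse(headings, start, window=5):
    """Return a window of the sorted index, beginning at or after `start`."""
    pos = bisect_left(headings, start)
    return headings[pos:pos + window]

if __name__ == "__main__":
    index = sorted([
        "authority control", "catalog code", "cataloguing rules",
        "classification", "series titles", "subject headings",
    ])
    # Like opening a card catalog drawer at "cat..." and reading onward.
    print(browse(index, "cat"))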
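
Regarding question 10: to make the FRBR discussion a bit more concrete,
here are the Group 1 entities (work, expression, manifestation, item) as
bare Python data structures. Whether grouping records along these lines
actually improves real-world searching is precisely the open question;
the example data are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Item:
    barcode: str                  # the copy a reader can lay hands on

@dataclass
class Manifestation:
    publisher: str
    year: str
    items: List[Item] = field(default_factory=list)

@dataclass
class Expression:
    language: str
    form: str                     # e.g. "text", "audiobook"
    manifestations: List[Manifestation] = field(default_factory=list)

@dataclass
class Work:
    title: str
    creator: str
    expressions: List[Expression] = field(default_factory=list)

if __name__ == "__main__":
    work = Work("Hamlet", "Shakespeare, William", [
        Expression("ger", "text", [
            Manifestation("Example Verlag", "1999", [Item("BS-0001")]),
        ]),
    ])
    # A FRBR-aware display would collapse many catalog records into one
    # "work" line and expand to editions and copies on demand.
    print(work.title, "-", len(work.expressions), "expression(s) on file")
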
Bernhard Eversberg