Re: Link resolvers as loosely coupled systems for holdings?

From: Stephens, Owen <o.stephens_at_nyob>
Date: Fri, 14 Sep 2007 15:21:44 +0100
To: NGC4LIB_at_listserv.nd.edu
Thanks for this - definitely looks interesting.

Owen

Owen Stephens
Assistant Director: e-Strategy and Information Resources
Imperial College London Library
Imperial College London
South Kensington
London SW7 2AZ


Tel: 020 7594 8829
Email: o.stephens_at_imperial.ac.uk


> -----Original Message-----
> From: Next generation catalogs for libraries
> [mailto:NGC4LIB_at_listserv.nd.edu] On Behalf Of Jonathan Rochkind
> Sent: 14 September 2007 14:52
> To: NGC4LIB_at_listserv.nd.edu
> Subject: Re: [NGC4LIB] Link resolvers as loosely coupled
> systems for holdings?
>
> Stephens, Owen wrote:
> > I wasn't making an argument particularly for standards to solve all
> > the problems of recording serials holdings, but rather wondering if
> > there was common ground between vendors for forming a standard for
> > recording holdings in a machine parsable format.
> There are several, yes. I am particularly enamored of the new ONIX
> standard for Serials Coverage. While it is originally intended and
> presented as a standard for 'electronic' serials coverage, to my
> reading it would work wonderfully well for any medium of serial
> coverage - certainly much better than what most of us have now. It
> _does_ cover 'missing issues' and such; don't think that because it's
> called 'electronic' it only takes the broad-stroke approach that most
> of our link resolver software takes.
> Here's a blog post I wrote about the ONIX standard:
> http://bibwild.wordpress.com/2007/06/21/onix-for-serials-coverage/
>
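> To give a feel for it, here's a rough sketch (in Python, and
> emphatically not the ONIX syntax itself) of the kind of information
> such a coverage statement can carry - a continuous span plus
> explicitly recorded gaps:
>
> from dataclasses import dataclass, field
> from typing import List, Optional
>
> @dataclass
> class Issue:
>     year: int
>     volume: Optional[int] = None
>     number: Optional[str] = None
>
> @dataclass
> class Coverage:
>     issn: str
>     start_year: int
>     end_year: Optional[int] = None    # None = coverage is open-ended
>     gaps: List[Issue] = field(default_factory=list)   # known gaps
>
> # 'everything since 1985, except vol. 22, issue 3, in 1990'
> coverage = Coverage(issn="1234-5678",   # illustrative ISSN
>                     start_year=1985,
>                     gaps=[Issue(1990, volume=22, number="3")])
>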
> Meanwhile, there are several other standards efforts going on too, as
> well as the existing standards (Z39.71; MARC Holdings), which have not
> served us very well so far.
>
> Jonathan
>
>
> > No doubt this wouldn't give us perfect systems, but it would tackle
> > the issue that Ross raised about the risk of putting print holdings
> > into current Link resolvers.
> >
> > I would agree that there are certain problems that are not going to
> > go away (and believe me, I in no way overestimate the ability of
> > Content Providers to give us good data - that was one of the real
> > benefits of outsourcing electronic journals data to other people: we
> > didn't have to deal with the vagaries of reports from Content
> > Providers any more).
> >
> > I wonder how good our machine parsable holdings have to be. In
> > general the broad brush approach of stating a holding (we have
> > everything published in this journal since 1985) works quite well
> > for e-journals. Although there can be 'missing issues', these are
> > less likely with e-journals, so if you record 'everything since
> > 1985' there generally isn't the need to add exceptions to that.
> >
> > On the other hand, with local holdings of print journals, I guess
> > missing issues are going to be slightly more common, and perhaps
> > this increases the need to be able to state exceptions, missing
> > issues, etc. easily. But I think there are some questions to be
> > answered here about value for money.
> >
> > We've also got to be realistic about the information we get from
> > OpenURLs - it is not always complete, and can be problematic,
> > especially with odd issues, supplements, etc. By far the best
> > indicator is the year of publication, as we can easily say 'if it
> > was published anywhere in this journal, during this year, it is
> > available'; this can also easily be checked automatically, and the
> > amount of metadata needed is minimal. Of course, I'd be naïve to
> > think this solved all our problems - I'm just trying to think around
> > the realities of the environment we are working in. Even if we have
> > some way of stating exceptions (everything since 1985 except vol 22,
> > issue 3, in 1990), will the metadata we get from an OpenURL be good
> > enough to measure this?
> >
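> > Just to make that concrete, here's a minimal sketch (names and
> > structures entirely hypothetical, not any vendor's schema) of the
> > sort of check I mean - match on year where that's all the OpenURL
> > gives us, and only consult the exception list when volume and issue
> > are actually present:
> >
> > def covered(citation, start_year, end_year=None, gaps=()):
> >     """citation: OpenURL-ish dict, e.g. {'date': 1990, 'volume': 22,
> >     'issue': '3'}; gaps: set of (year, volume, issue) tuples."""
> >     year = citation.get("date")
> >     if year is None:
> >         return None                  # not enough metadata to decide
> >     if year < start_year:
> >         return False
> >     if end_year is not None and year > end_year:
> >         return False
> >     vol, iss = citation.get("volume"), citation.get("issue")
> >     if vol is None or iss is None:
> >         return True                  # broad brush: year match is enough
> >     return (year, vol, iss) not in gaps
> >
> > # 'everything since 1985 except vol 22, issue 3, in 1990'
> > print(covered({"date": 1990, "volume": 22, "issue": "3"},
> >               start_year=1985, gaps={(1990, 22, "3")}))  # False
> > print(covered({"date": 1990}, start_year=1985))          # True
> >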
> > I know I keep posing questions at the end of these discursive
> > emails, but I still don't feel that we've got to the bottom of where
> > we should be going here:
> >
> > Do people agree that we should be trying to express holdings in a
> > machine parsable format? (I know that I've got a yes from Ross and
> > Jonathan on this - so perhaps I should be asking, does anyone
> > disagree, and if so, why?)
> >
> > How specific does the machine parsable bit need to be? (Just
> > thinking here: as a compromise, could we have a broad brush machine
> > parsable statement, and a more detailed human parsable statement
> > where we want to be completely, 100%, accurate?)
> >
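> > (Something like this, say - the first three fields are what a
> > machine acts on, and the note is where the 100% accurate detail
> > lives; all the values here are hypothetical:)
> >
> > holding = {
> >     "issn": "1234-5678",        # illustrative only
> >     "coverage_start": 1985,     # machine parsable, broad brush
> >     "coverage_end": None,       # open-ended
> >     "note": "Lacks v.22 no.3 (1990); some supplements print only.",
> > }
> >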
> > I'm still interested in the fact that some sites have gone to some
> > length to bring 'print' holdings into their link resolver menus by
> > writing code etc., rather than putting the print holdings directly
> > into the link resolver. Although I take the point that in MFHD you
> > can express the exact details of your holdings more accurately, is
> > it really less work for us, and for our users, to grab this holdings
> > data from our ILS on the fly rather than duplicate data entry in the
> > two relevant systems? (Witness the work and difficulties described
> > in getting print holdings into the Umlaut menu - wouldn't it have
> > been easier, and possibly cheaper, to just enter the details in the
> > link resolver?)
> >
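> > (The on-the-fly alternative needn't be elaborate - something along
> > these lines, where the endpoint and response shape are hypothetical
> > stand-ins for whatever your ILS actually exposes:)
> >
> > import json
> > from urllib.request import urlopen
> > from urllib.parse import urlencode
> >
> > def print_holdings(issn):
> >     """Ask the OPAC (hypothetical endpoint) for print holdings."""
> >     query = urlencode({"issn": issn, "format": "json"})
> >     url = "http://opac.example.ac.uk/holdings?" + query
> >     with urlopen(url) as resp:
> >         data = json.load(resp)
> >     # e.g. [{"location": "Central Library", "summary": "v.1 (1985)-"}]
> >     return data.get("holdings", [])
> >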
> > One of the ingenious things about link resolvers is that they
> > recognise that, most of the time, the user isn't interested in
> > knowing whether we have some parts of a journal, but in whether we
> > have the part they are looking for at that moment. Perhaps in our
> > print holdings we have sacrificed functionality (for the user) for
> > completeness?
> >
> > Owen
> >
> > -----Original Message-----
> > From: Next generation catalogs for libraries
> > [mailto:NGC4LIB_at_listserv.nd.edu] On Behalf Of Eric Hellman
> > Sent: 12 September 2007 17:48
> > To: NGC4LIB_at_listserv.nd.edu
> > Subject: Re: [NGC4LIB] Link resolvers as loosely coupled systems for
> > holdings?
> >
> > On Sep 12, 2007, at 4:41 AM, Stephens, Owen wrote:
> >
> >> Is there any uniformity to how existing commercial link resolvers
> >> handle this? Do they (as with SFX) generally work on a broad brush
> >> approach, or do any of them make serious attempts to tackle the
> >> issues (excuse the pun) of missing issues and extra supplements
> >> etc.? (And how much detail is required in a holdings statement?)
> >> Are there any standards, either inside or outside the library
> >> sphere, that we should be looking at? (Anyone familiar with ONIX
> >> for Serials - does any of this cover this area?)
> >>
> >
> > Owen,
> >
> > There are lots of issues associated with, for example, missing
> > issues and supplements, and a great variation among vendors. For
> > example, there is a popular vendor that doesn't even try to consider
> > journal enumerations in their knowledgebase, let alone consider
> > supplements and missing issues.
> >
> > Knowledgebase vendors like us are to some extent at the mercy of
> > content providers to provide accurate holdings statements. A common
> > problem is that a supplement has no representation online, or
> > perhaps an ejournal which the provider represents as having fulltext
> > online has only the abstracts of supplements online. Less common is
> > the case where a supplement (either print or electronic) contains
> > only abstracts of a conference and the abstract gets indexed as if
> > there is fulltext somewhere. In this case, the link server has
> > performed perfectly but there is user disappointment. Content
> > providers often do not report that the e-version is missing
> > supplements. When supplements are present, they often link
> > differently than non-supplements. We strive to make these links work
> > wherever possible (I'm sure other vendors do the same), but there
> > are some provider sites that handle supplement linking poorly - we
> > do the best that we can. Supplements also tend to have non-standard
> > page and issue numbering, which can lead to a higher incidence of
> > transcription error in the indexing and citation information chain.
> >
> > Missing issues can strain the representational ability of
> > knowledgebases - in ours these are handled using multiple records
> > for continuous spans, but the more serious difficulty is that
> > sometimes new issues are not mounted sequentially due to provider
> > production flows. This can result in an issue being missing one week
> > and found the next. Provider process quality is not always as good
> > as you might hope (there always seem to be humans in the chain, and
> > dang if they don't sometimes do stupid things), and sometimes issues
> > get missed.
> >
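> > (To illustrate the 'multiple records for continuous spans' point, a
> > toy sketch - not our actual record format:)
> >
> > def split_into_spans(issues, missing):
> >     """issues: ordered (volume, number) pairs; missing: set of same.
> >     Returns continuous runs, breaking wherever an issue is missing."""
> >     spans, current = [], []
> >     for item in issues:
> >         if item in missing:
> >             if current:
> >                 spans.append(current)
> >                 current = []
> >         else:
> >             current.append(item)
> >     if current:
> >         spans.append(current)
> >     return spans
> >
> > issues = [(22, n) for n in range(1, 7)]          # v.22 no.1-6
> > print(split_into_spans(issues, missing={(22, 3)}))
> > # [[(22, 1), (22, 2)], [(22, 4), (22, 5), (22, 6)]]
> >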
> > Standards are designed to help, but it is a mistake to look to
> > standards to solve your problems, especially the hard ones. A vendor
> > that relies completely on standards for interoperability is a vendor
> > who does a lot of blaming other people for not following or
> > implementing standards. More important in the world of electronic
> > information is the engineering of the support process and
> > infrastructure. Problems in linking of electronic information will
> > ALWAYS occur, so vendors need to build their systems to efficiently
> > capture problems, fix them, and implement the fixes. Take
> > a look at the tools you use that don't work as well as you would
> > like. Chances are, there will be no way for an end-user to report a
> > problem. Do you think that's a coincidence?
> >
> > Eric
> >
> >
>
> --
> Jonathan Rochkind
> Digital Services Software Engineer
> The Sheridan Libraries
> Johns Hopkins University
> 410.516.8886
> rochkind (at) jhu.edu
>
Received on Fri Sep 14 2007 - 08:24:25 EDT