Karen wrote:
>> And in terms of load, isn't the issue that searches will be done not
>> at the point of retrieval of the digital item, but at the point of
>> display of every potential "hit"? So if you want to show your users
>> which of your library's books are available in Google Books, you have
>> to look up every book that is displayed in the catalog. Like the use
>> of book covers, positive hits can be cached, but since both the
>> library catalog and Google are moving targets, some ongoing querying
>> must take place.
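(To make that concrete, here is a minimal sketch of the cache-with-requery
pattern Karen describes. The endpoint and parameters are my assumptions
based on the Google Book Search Dynamic Links API, and the cache lifetime
is arbitrary; the point is that positive and negative answers alike expire
and get re-queried, because both sides change.)

    import json
    import time
    import urllib.request

    CACHE = {}              # isbn -> (timestamp, viewability)
    CACHE_TTL = 24 * 3600   # re-query daily: both sides are moving targets

    def google_books_viewability(isbn):
        """Look up one ISBN; cache the answer for CACHE_TTL seconds."""
        now = time.time()
        if isbn in CACHE and now - CACHE[isbn][0] < CACHE_TTL:
            return CACHE[isbn][1]
        # Endpoint and parameters assumed from the Dynamic Links API;
        # it returns JSONP, so strip the callback wrapper before parsing.
        url = ("https://books.google.com/books?jscmd=viewapi"
               "&bibkeys=ISBN:%s&callback=cb" % isbn)
        body = urllib.request.urlopen(url).read().decode("utf-8")
        data = json.loads(body[body.index("(") + 1 : body.rindex(")")])
        info = data.get("ISBN:" + isbn, {})
        viewability = info.get("preview", "noview")
        CACHE[isbn] = (now, viewability)
        return viewability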
I think what you describe is a problem similar to the (frequent) request
that we display 'full-text' links in A&I databases, rather than just an
OpenURL link. This is possible: a link resolver API can decide whether we
can link the user (in their current context) to the full text of a
journal article, and the appropriate link can then be displayed within
the user interface. However, if the user is at a brief results screen
with 10, 20, etc. hits on it, there tends to be a significant delay in
the display, since each hit needs its own resolver lookup.
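(A sketch of where that delay comes from and one mitigation. The resolver
base URL and the response marker below are invented for illustration;
what matters is issuing the per-hit OpenURL lookups in parallel rather
than serially.)

    from concurrent.futures import ThreadPoolExecutor
    import urllib.parse
    import urllib.request

    # Hypothetical link resolver endpoint; real resolvers differ.
    RESOLVER = "https://resolver.example.edu/api?"

    def has_full_text(openurl_params):
        """One resolver round trip per hit -- the source of the lag."""
        url = RESOLVER + urllib.parse.urlencode(openurl_params)
        with urllib.request.urlopen(url, timeout=2) as resp:
            return b"full_text" in resp.read()   # assumed response marker

    def decorate_results(hits):
        """Check a whole brief-results screen against the resolver.

        Serially, 20 hits at ~300 ms each is a ~6 s delay before
        display; with a thread pool the user waits roughly one
        round trip."""
        with ThreadPoolExecutor(max_workers=10) as pool:
            flags = pool.map(has_full_text, (h["openurl"] for h in hits))
        for hit, flag in zip(hits, flags):
            hit["full_text_link"] = flag
        return hits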
Is this an inevitable consequence of working with distributed systems? Is
it just a matter of improving the response times of the systems we are
working with? Or do we need to consolidate our systems as much as
possible to avoid this 'lag'?
The article describes four possible approaches, one of which involves
harvesting the metadata for local indexing. This would probably be the
way to eliminate the 'lag' for repository data (although how much do we
want to replicate this metadata across many sites?)
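(For repository data the obvious harvesting route is OAI-PMH, which most
repositories already expose. A minimal sketch; the base URL is a
placeholder and error handling is omitted:)

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "http://www.openarchives.org/OAI/2.0/"
    BASE = "https://repository.example.edu/oai"   # placeholder base URL

    def harvest(base_url=BASE):
        """Pull every Dublin Core record, following resumption tokens,
        so availability checks hit a local index instead of a remote
        service at display time."""
        params = {"verb": "ListRecords", "metadataPrefix": "oai_dc"}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            tree = ET.parse(urllib.request.urlopen(url))
            for record in tree.iter("{%s}record" % OAI):
                yield record                      # index this locally
            token = tree.find(".//{%s}resumptionToken" % OAI)
            if token is None or not (token.text or "").strip():
                break
            # Subsequent requests carry only the token, per the protocol.
            params = {"verb": "ListRecords", "resumptionToken": token.text}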
Owen
Received on Tue Sep 11 2007 - 10:43:47 EDT