Sometimes I think we need to make our search interfaces "smarter",
and, ironically, at the same time maybe we don't need to think about
them very much at all.
I have been playing with my Alex Catalogue of Electronic Texts. As of
right now this is a collection of roughly 14,000 ebooks. Each ebook
is wrapped in rudimentary HTML and includes Dublin Core metadata;
the subject fields have been generated automatically by a number of
computer programs. Google (as well as a number of
other indexers) has crawled the site. I have also created a full-text
index of the content accessible via a Web interface as well as an SRU
interface. See:
http://infomotions.com/alex/
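
For the curious, an SRU searchRetrieve request against the Catalogue
looks something like the rough sketch below; the endpoint path and
CQL index names are illustrative assumptions, not necessarily the
actual configuration.

  # Rough sketch of an SRU searchRetrieve request; the endpoint path
  # and CQL index names are assumptions, not the actual configuration.
  import urllib.parse, urllib.request

  BASE = 'http://infomotions.com/alex/sru'        # hypothetical endpoint
  params = {
      'operation'     : 'searchRetrieve',         # standard SRU operation
      'version'       : '1.1',
      'query'         : 'dc.creator="thoreau"',   # CQL query
      'maximumRecords': '10',
      'recordSchema'  : 'dc',                     # return Dublin Core records
  }
  url = BASE + '?' + urllib.parse.urlencode(params)
  with urllib.request.urlopen(url) as response:
      print(response.read().decode('utf-8'))      # raw SRU XML
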
The local search engine includes only one box and one button. As
queries are entered, results are returned. Based on the type of
query submitted and the number of results returned, the results
pages suggest and create alternative queries for the user: limit to
author, limit to subject, Boolean AND, truncation, alternative
spellings, synonyms, etc. In this regard I have made the search
interface "smarter".
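
To make that a bit more concrete, the suggestion logic works roughly
like the sketch below; the function name, thresholds, and field
syntax are illustrative only, not taken from the actual code. Few
hits broaden the query, many hits narrow it, and field-limited
variations are always offered.

  # Illustrative sketch of the suggestion logic; names, thresholds,
  # and field syntax are examples only, not the Catalogue's code.
  def suggest(query, hits, authors, subjects):
      suggestions = []
      terms = query.split()
      if hits == 0:
          # no hits: broaden with truncation and alternate spellings
          suggestions += [term + '*' for term in terms]
      if hits > 100 and len(terms) > 1:
          # many hits: narrow with an explicit Boolean AND
          suggestions.append(' AND '.join(terms))
      # always offer author- and subject-limited versions of the query
      suggestions += ['author="%s" AND (%s)' % (a, query) for a in authors[:3]]
      suggestions += ['subject="%s" AND (%s)' % (s, query) for s in subjects[:3]]
      return suggestions

  # e.g. suggest('walden pond', 250, ['Thoreau, Henry David'], ['Nature'])
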
As I watch the Web server logs, I see that about 80% of the hits to
the collection come from Google; very little of the traffic comes
through my local index. Why should I spend all
that effort indexing my content when nobody is going to use the
search interface?
All is not lost. There are many links to the Alex Catalogue from
around the world. Each ebook points to the Alex Catalogue as well. As
I watch people use the local index, I see them getting only a few
hits at first but then using the suggestions to find more. In that
way I am improving
access.
Here in academic libraries, where fewer and fewer people physically
come to the library to acquire information for their learning,
teaching, and scholarship, we might think more about putting some of
our expertise into our Internet interfaces.
--
Eric Lease Morgan
University Libraries of Notre Dame