You have your entire library catalog indexed by Google?
And it actually works? A book title plus your library's name actually
works as a Google search? Can you show us any examples? Do you have
any sense of whether Google is really indexing every single one of your
record detail pages?
I could add a sitemap creator to Blacklight, and maybe I will, if this is
actually useful.
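(For anyone who hasn't looked at one: the sitemap standard Gilad describes
below is just a plain XML file listing the URLs you want crawled. A minimal
one looks roughly like this, with made-up URLs:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <url><loc>http://catalog.example.edu/catalog/12345</loc></url>
      <url><loc>http://catalog.example.edu/catalog/12346</loc></url>
    </urlset>

You then point Google at the file from robots.txt or through Webmaster
Tools.)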
IF your discovery software is actually constructed in a sane way (which
is a BIG if for actual library software), this is not a difficult thing
to add, and it requires such small resources that a lot of people
won't bother asking for ROI justification. But, again, that is a very
big if. With Blacklight, or I'm sure VuFind, it would be pretty easy to
add a feature to create a Google sitemap.
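The core of such a feature, independent of Blacklight internals, is just:
walk the index, write out one URL per record. Here is a rough sketch in
Python against a plain Solr index; the Solr URL, the 'id' field, and the
/catalog/<id> URL pattern are all assumptions about a local setup, not
anything Blacklight actually ships:

    # Rough sketch only: dump every record URL from a Solr index into sitemap.xml.
    # Assumptions: Solr answers at SOLR_URL, each record has an 'id' field, and
    # the public catalog page for a record is CATALOG_BASE/<id>.
    import json
    import urllib.request
    from xml.sax.saxutils import escape

    SOLR_URL = "http://localhost:8983/solr/select"        # hypothetical
    CATALOG_BASE = "http://catalog.example.edu/catalog"   # hypothetical

    def record_ids(batch=1000):
        """Yield every record id, paging through Solr's select handler."""
        start = 0
        while True:
            url = "%s?q=*:*&fl=id&wt=json&rows=%d&start=%d" % (SOLR_URL, batch, start)
            docs = json.load(urllib.request.urlopen(url))["response"]["docs"]
            if not docs:
                break
            for doc in docs:
                yield doc["id"]
            start += batch

    with open("sitemap.xml", "w") as out:
        out.write('<?xml version="1.0" encoding="UTF-8"?>\n')
        out.write('<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
        for rid in record_ids():
            out.write("  <url><loc>%s/%s</loc></url>\n" % (CATALOG_BASE, escape(str(rid))))
        out.write("</urlset>\n")

A real catalog would also need to split the output into multiple files,
since the protocol caps each sitemap at 50,000 URLs, and to regenerate the
files on a schedule as records change.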
Jonathan
Shirley Lincicum wrote:
> Gilad,
>
> Thanks for providing this overview. I'm aware of these methods. I guess what
> bugs me is that many libraries aren't yet doing this, and I think it's
> largely because it's very challenging to do, and it's difficult to
> demonstrate return on investment to library directors and other
> administrators. Also, in the current budget climate, coming up with
> thousands of dollars per year to fund subscriptions to valuable services
> like SerialsSolutions products, Primo, etc. makes it an even harder sell.
> That's despite the fact that it's highly inefficient, if not impossible,
> for individual libraries to aggregate sufficient metadata to expose the
> majority of the resources they own, license, etc., and to provide links
> via link resolver services, which require yet more investment in system
> infrastructure and metadata management.
>
> I have ideas about what I'd like the final product to look like: search
> results from a mainstream search engine that match up with library
> availability and access metadata in a seamless way. I'm not sure about the
> extent to which the mainstream search engines currently provide a technical
> infrastructure to support this. I'm also incredibly confused about what
> roles the various library "discovery" systems/products/tools can play, and
> what combination of these (if any) would enable libraries to expose their
> content most effectively for the least amount of money.
>
> My bottom line is that libraries really need to push the metadata they have
> regarding availability and access to content out to where the users are.
> Primo, Summon, WorldCat, OpenURL resolvers, etc. are all valuable and
> important, but they still don't seem to be enough.
>
> Shirley
>
> On Thu, Sep 2, 2010 at 2:27 AM, Gilad Gal <Gilad.Gal_at_exlibrisgroup.com> wrote:
>
>
>> Shirley,
>>
>> I think what you mean is to enable users to search for your library's
>> material through Google. We do this with Primo.
>>
>> Search engines (like Primo) create a normalized format of all the local
>> material, which is very beneficial for discovery, because you prepare
>> exactly the information you want to expose (we have a whole publishing
>> platform intended for this). On the other hand, search engines create
>> the pages (SERPs) dynamically upon request, which makes it impossible
>> for crawlers (like Google's crawlers) to index the material.
>>
>> The solution is a standard named 'Sitemap' (sometimes referred to as
>> 'Google Sitemap', but all major search engines use it), which provides
>> the crawler with the list of relevant URLs. Primo supports this standard
>> and allows libraries to publish their material to web search engines.
>>
>> The result is that when using Primo you can expose your library's
>> material to end users through web search engines. In that way, a user
>> can search in Google for a book title, include the library's name in the
>> search string, and get the item in the library as a result. It is also
>> useful for exposing unique collections the library may have that do not
>> exist elsewhere on the web and for that reason will appear high on
>> Google's result list.
>>
>> Best regards,
>> Gilad
>>
>>
>
>
Received on Thu Sep 02 2010 - 15:37:03 EDT