Re: linked data

From: Harden, Jean <Jean.Harden_at_nyob>
Date: Tue, 11 Mar 2014 20:13:01 +0000
To: NGC4LIB_at_LISTSERV.ND.EDU
Interesting. I entered a name with initials, and it seems to have found records under the full name, which would appear as a cross-reference in an authority record.

But that name turned up almost 400,000 resources. I searched it precisely because it was likely to produce a large result set, and I wanted to see how the catalog would handle one. I couldn't find any way to limit the set. The results also almost certainly included multiple expressions of the same work, and there didn't appear to be a way to navigate by work and then choose an expression, which is one way to make such a large set usable. Are there any plans to expand the capabilities of the catalog along these lines? A flat list of results isn't very useful unless a search pulls up only a handful of things.
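Navigating by work and then choosing an expression amounts to grouping a flat result list on a work identifier before display. A minimal sketch of that grouping step, assuming each indexed record carries hypothetical `work_id`, `title`, and `expression` fields (the names are illustrative, not taken from the Tiger Catalog schema):

```python
from collections import defaultdict

def group_by_work(results):
    """Collapse a flat result list into works, each listing its expressions.

    Each result is a dict with hypothetical 'work_id', 'title', and
    'expression' keys; real field names would depend on the index schema.
    """
    works = defaultdict(lambda: {"title": None, "expressions": []})
    for rec in results:
        work = works[rec["work_id"]]
        work["title"] = rec["title"]
        work["expressions"].append(rec["expression"])
    return dict(works)

# Invented sample data: two expressions of one work, one of another.
flat = [
    {"work_id": "w1", "title": "Hamlet", "expression": "English text, 2003"},
    {"work_id": "w1", "title": "Hamlet", "expression": "French translation, 1998"},
    {"work_id": "w2", "title": "Macbeth", "expression": "English text, 1992"},
]
grouped = group_by_work(flat)
```

A user interface could then show the (much shorter) list of works first, expanding a chosen work into its expressions.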

In case it matters, I am at home using an iPad. Not the ideal platform, I'm sure, but one that many patrons will use.

Jean Harden
University of North Texas

Sent from my iPad

> On Mar 11, 2014, at 12:56, "Eric Lease Morgan" <emorgan_at_ND.EDU> wrote:
> 
>> On Mar 11, 2014, at 1:34 PM, Jeremy Nelson <Jeremy.Nelson_at_COLORADOCOLLEGE.EDU> wrote:
>> 
>> I just did a soft release of a new catalog based on a design from Aaron Schmidt of Influx Design at http://catalog.coloradocollege.edu/ (code repository is available on GitHub at https://github.com/jermnelson/tiger-catalog). I'm using both BIBFRAME and schema.org vocabularies along with MARC in a semantic storage backend (a combination of MongoDB, Redis, and Solr). This catalog, part of what I'm calling a catalog pull platform, is under active development and I would be very interested in getting feedback from this community.
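For readers unfamiliar with mixing these vocabularies: a bibliographic record can be expressed as schema.org JSON-LD, a structure a document store such as MongoDB can hold as-is. A minimal sketch (the field values and MARC mappings in the comments are invented; the actual Tiger Catalog document shapes may differ):

```python
import json

# A schema.org Book description as JSON-LD. The MARC tags in the
# comments are a typical mapping, given here only for illustration.
record = {
    "@context": "http://schema.org",
    "@type": "Book",
    "name": "Example Title",                             # from MARC 245$a
    "author": {"@type": "Person", "name": "Doe, Jane"},  # from MARC 100$a
    "isbn": "9780000000000",                             # from MARC 020$a
}

# Serialize for storage, then parse it back.
doc = json.dumps(record)
round_trip = json.loads(doc)
```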
>> 
>> --
>> Jeremy Nelson
>> Metadata and Systems Librarian
>> Tutt Library, Colorado College
>> Colorado Springs, CO 80903
>> (719) 389-6895
> 
> 
> Interesting.
> 
> Jeremy, when you did this, what was the overall strategy? Something like this:
> 
>  * export MARC records
>  * transform them into RDF
>  * load RDF into triple store
>  * index triple store
>  * provide search engine against index
> 
> —
> Eric Morgan
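The five steps Eric lists can be sketched end to end in a few lines. This is a toy illustration, with plain tuples standing in for RDF triples and a dict standing in for the search index; a real pipeline would use actual MARC parsing, a triple store, and an indexer such as Solr, none of which appear here:

```python
from collections import defaultdict

# 1. "Export" a MARC record (here just a dict of tag -> value).
marc = {"100": "Morgan, Eric Lease", "245": "Linked data and libraries"}

# 2. Transform it into RDF-style (subject, predicate, object) triples.
subject = "http://example.org/record/1"
predicates = {"100": "dc:creator", "245": "dc:title"}
triples = [(subject, predicates[tag], value) for tag, value in marc.items()]

# 3. Load the triples into a (toy) triple store.
store = set(triples)

# 4. Index the store: map each word in an object literal to its subjects.
index = defaultdict(set)
for s, p, o in store:
    for word in o.lower().replace(",", "").split():
        index[word].add(s)

# 5. Provide a search engine against the index.
def search(query):
    return sorted(index.get(query.lower(), set()))

hits = search("linked")
```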
Received on Tue Mar 11 2014 - 16:13:20 EDT