Re: The next generation of discovery tools (new LJ article)

From: Beacom, Matthew <matthew.beacom_at_nyob>
Date: Mon, 28 Mar 2011 16:31:44 -0400
To: NGC4LIB_at_LISTSERV.ND.EDU
Karen,

I don't see how the evidence David provided or Jonathan's analysis would lead us to conclude that ranking is a crapshoot. In any half-sensible ranking of a half-sensible search, the first ten results will not be merely as likely to be relevant as the tenth ten; they will be more likely to be relevant. A list in which position made no difference is what I take you to mean by "a crapshoot."

The rankings are crude approximations of relevance, but they are often pretty helpful. And a savvy searcher who is after more than the first likely thing that comes up can sort through what rose to the top of the rankings, run more suitable searches, re-sort the results by another vector such as date, or reduce the results by one or more facets.
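
To make that concrete, here is a minimal sketch of the kind of follow-up request a discovery interface might send to Solr when a searcher re-sorts and narrows a result set. The endpoint URL and the field names (pub_date, format, subject_topic, title) are placeholders, not any particular catalog's schema; only the query parameters themselves (q, sort, fq, facet, facet.field, rows, wt) are standard Solr parameters.

```python
# Sketch only: a re-sorted, facet-narrowed Solr request.
# Field names and the core URL are hypothetical placeholders.
from urllib.parse import urlencode
from urllib.request import urlopen
import json

SOLR_URL = "http://localhost:8983/solr/catalog/select"  # hypothetical core

params = {
    "q": "climate change",        # the original keyword search
    "sort": "pub_date desc",      # re-sort by another vector: newest first
    "fq": 'format:"Book"',        # reduce the results by a chosen facet value
    "facet": "true",              # ask for facet counts to support further narrowing
    "facet.field": "subject_topic",
    "rows": 10,
    "wt": "json",
}

with urlopen(SOLR_URL + "?" + urlencode(params)) as resp:
    results = json.load(resp)

for doc in results["response"]["docs"]:
    print(doc.get("title"), doc.get("pub_date"))
```

The point is that the relevance ranking is only the starting order; the same result set can be re-cut along whatever other vectors and facets the index exposes.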

Matthew  

-----Original Message-----
From: Next generation catalogs for libraries [mailto:NGC4LIB_at_LISTSERV.ND.EDU] On Behalf Of Karen Coyle
Sent: Monday, March 28, 2011 4:20 PM
To: NGC4LIB_at_LISTSERV.ND.EDU
Subject: Re: [NGC4LIB] The next generation of discovery tools (new LJ article)

Thank you, David. This confirms Jonathan's analysis: the set is compared to itself and therefore does not flatten out into a long tail as I expected. That said, the most important part of what Jonathan said was that there is no particular correlation between Solr's determination of ranking and what the user experiences when looking at the results in a linear fashion.
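
As a rough illustration of "the set is compared to itself," the sketch below uses a toy tf-idf calculation (not Lucene's actual scoring formula) to show that the same document can receive very different scores depending on the query and on the rest of the index. The numbers only order results within one search; they carry no absolute meaning about relevance across searches.

```python
# Toy illustration (not Lucene's real scoring): scores are relative to the set.
import math

def toy_score(query_terms, doc, corpus):
    """Sum of tf * idf over the query terms; idf depends on the whole corpus."""
    n_docs = len(corpus)
    score = 0.0
    for term in query_terms:
        tf = doc.count(term)                      # term frequency in this document
        df = sum(1 for d in corpus if term in d)  # documents containing the term
        idf = math.log((n_docs + 1) / (df + 1)) + 1
        score += tf * idf
    return score

doc = ["cat", "dog"]
corpus_a = [doc, ["cat", "fish"], ["cat", "cat", "bird"]]  # "cat" is common here
corpus_b = [doc] + [["fish", "heron"]] * 50                # "cat" is rare here

print(toy_score(["cat"], doc, corpus_a))  # small score: the term is everywhere
print(toy_score(["cat"], doc, corpus_b))  # much larger score for the same document
```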

Can we just conclude that, with a few exceptions, ranking is a crapshoot?

kc
Received on Mon Mar 28 2011 - 16:32:13 EDT