On 29/08/2011 14:31, Meloni, Julie (jcm7sb) wrote:
<snip>
Training is done in a custom system using already-rated search results;
if you get X number correct, you can move on in the process. All ratings
have several sets of eyes on them, and even more if the ratings differ
(say between a 3 and a 5 on a 5-point scale). There is room (and a
requirement) to argue for your rating in that situation. In my
experience, fellow raters were educated, tech-savvy individuals with the
ability to make logical arguments; they look for people with broad
knowledge since you have to be able to rate results for Lady Gaga,
tsunamis, cricket results, and space exploration equally well (as an
example).
</snip>
That is very interesting! You mentioned "if you get X number correct".
In your opinion, was it pretty clear what was "correct" and what was
"incorrect"? Although I am only imagining since I haven't seen it, it
seems as if it would easier to figure out if one is "incorrect" instead
of "correct". For instance, a query of "Mona Lisa" that retrieved a
resource on herding reindeer in Finland could be labelled incorrect
pretty safely. But determining what would be "correct" would seem to be
more difficult: the painting or the song, or perhaps some words from a
poem. For example, when evaluating the search "Mona Lisa", how would a
highly ranked page about Nat King Cole be considered? Or is this not
the way it works?
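Just to make sure I understand the "ratings differ" rule you describe,
here is a rough sketch (in Python) of how I imagine it might work; the
1-5 scale, the 2-point gap, and every name below are only my guesses,
not a description of your actual system:

# Rough guess at the adjudication rule described above: independent
# ratings that diverge by 2 or more points (say a 3 vs. a 5) trigger a
# discussion in which each rater must argue for his or her rating.
RATING_SCALE = range(1, 6)  # assumed 1-5 relevance scale
DISAGREEMENT_GAP = 2        # assumed gap that forces adjudication

def needs_adjudication(ratings):
    """True when independent ratings on one query/result pair diverge
    enough that the raters must defend their scores."""
    assert all(r in RATING_SCALE for r in ratings)
    return max(ratings) - min(ratings) >= DISAGREEMENT_GAP

print(needs_adjudication([4, 4]))  # False: close agreement, ratings stand
print(needs_adjudication([3, 5]))  # True: raters argue for their ratings

Is something along those lines roughly what happens, or does the system
weigh disagreements differently?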
I find this fascinating, by the way!
--
James Weinheimer weinheimer.jim.l_at_gmail.com
First Thus: http://catalogingmatters.blogspot.com/
Cooperative Cataloging Rules: http://sites.google.com/site/opencatalogingrules/