Hi,
Jim W. wrote:
>Google does not allow any kind of "guaranteed" or "standardized"
>access--just the opposite. If the results vary for you and me, and
>even vary for each of us depending on where we are searching from,
>plus the algorithm is tweaked almost twice a day, I think the public
>could understand the argument for a more standardized means of access.
I think personalized results are better, from the perspective of most patrons. If you're doing research in medicine, you probably want recent material privileged over older material. That doesn't mean the metadata needs to be personalized, though. The underlying data needs to be standardized, but the presentation of that data (including search result ranking) doesn't have to be one-size-fits-all.
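To make that split concrete, here's a rough Python sketch (the schema, names, and recency formula are all my own invention, just to illustrate the idea): the records live in one standardized schema, and personalization exists only as a scoring layer on top.

    from dataclasses import dataclass
    from datetime import date

    # Standardized metadata: every record uses the same schema,
    # no matter who is searching.
    @dataclass
    class Record:
        title: str
        pub_date: date

    # Hypothetical patron profile -- the only place personalization lives.
    @dataclass
    class Profile:
        prefers_recent: bool = False

    def base_relevance(record, query):
        # Patron-independent score computed from the standardized metadata.
        return 1.0 if query.lower() in record.title.lower() else 0.0

    def personalized_score(record, query, profile):
        # Same underlying data; only the ranking layer knows the patron.
        score = base_relevance(record, query)
        if profile.prefers_recent:
            age_years = (date.today() - record.pub_date).days / 365.25
            score += 1.0 / (1.0 + age_years)  # recency boost, decays with age
        return score

    def search(records, query, profile):
        return sorted(records,
                      key=lambda r: personalized_score(r, query, profile),
                      reverse=True)

The medical researcher and the historian search the same standardized records; only the sort order differs.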
Why does Google tweak their algorithm constantly? Lots of reasons, I'm sure, and not all of them would be comforting to us. But they have shown an ability to produce useful results, so I'd argue against aiming at standardized access for all patrons. Returning personalized results sends a message to the patron: "we're trying to help you." In many cases, our standardized results tell patrons, "We think we have the answers, and one of those answers is that there's a whole skillset you need to learn before you can do what you thought you wanted to do."
I've had reservations about the value of social features in library technologies. Having someone who uses the library all the time "friend" us gives us a good channel for communicating with them, but the library is not their friend. (Unless you waive fines for people who "like" your library on Facebook :D) However, if we could let people work with our data alongside their (self-identified) colleagues, there's probably a lot of value in that. The first benefit that comes to mind is the return of serendipity: in addition to finding related material on the nearby shelves, you could find related material through your connections to people you know are working on things that interest you.
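In code, that kind of serendipity could be as simple as walking one hop out through self-identified colleagues (all the data below is made up for illustration):

    from collections import Counter

    # Hypothetical data: who has self-identified as whose colleague,
    # and which items each person has saved.
    colleagues = {
        "alice": {"bob", "carol"},
        "bob": {"alice"},
        "carol": {"alice"},
    }
    saved_items = {
        "alice": {"item:42", "item:7"},
        "bob": {"item:7", "item:99"},
        "carol": {"item:99", "item:13"},
    }

    def serendipitous_items(patron):
        # Items your self-identified colleagues are working with that
        # you haven't seen yet, most widely shared first -- the social
        # analogue of browsing the nearby shelf.
        seen = saved_items.get(patron, set())
        counts = Counter()
        for colleague in colleagues.get(patron, set()):
            for item in saved_items.get(colleague, set()):
                if item not in seen:
                    counts[item] += 1
        return [item for item, _ in counts.most_common()]

    print(serendipitous_items("alice"))  # ['item:99', 'item:13']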
The best part of the video was its emphasis on large-scale, systematic testing and evidence-based decision making. One engineer mentioned that for every time a feature didn't work, they wanted to be sure it worked 50 times. I suspect there's no deep justification for that particular ratio; it's just a practical line in the sand to shoot for.
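I have no idea how Google actually implements that bar, but as a back-of-the-envelope check it could be as simple as this (the function name and the example counts are my own invention; only the 50:1 ratio comes from the video):

    def passes_launch_bar(wins, losses, required_ratio=50.0):
        # Ship a change only if it helped at least `required_ratio`
        # times for every time it hurt -- the arbitrary-but-practical
        # line in the sand.
        if losses == 0:
            return wins > 0
        return wins / losses >= required_ratio

    # e.g. counts from side-by-side comparisons of old vs. new ranking:
    print(passes_launch_bar(212, 3))  # True: about 71 wins per loss
    print(passes_launch_bar(120, 3))  # False: only 40 wins per loss

The hard part, of course, isn't the arithmetic; it's collecting honest win/loss counts from real patron interactions.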
How can we get that kind of production testing?
Joe M.