Re: The "A" in RDA

From: Karen Coyle <lists_at_nyob>
Date: Mon, 29 Jul 2013 16:53:58 -0700
To: NGC4LIB_at_LISTSERV.ND.EDU
Alexander, I'd like some references on that, as well as some examples. 
E.g. a photo with no related text that can be found in a Google search 
by subject matter, or a photo that can be found by subject even though 
that subject is not in the pages related to the photo or image.[1] I 
realize that once images are retrieved, they can be analyzed for 
patterns that imply nudity - this is fairly standard stuff. You can also 
do a "more like this" once you have retrieved images, and you get ones 
with the same subject terms (applied by humans) and similar image 
patterns (color, lines, major areas). Google's facial recognition
algorithm detects that there is a face, not whose face it is. [2]
That's why when
you do a search for a person you get lots of images that aren't of that 
person, but that were on the same page as that person's name.
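
To make that concrete, here's a rough sketch of the standard "more
like this" technique on color patterns alone: compare color
histograms between images. This is not Google's actual pipeline, just
an illustration; it assumes OpenCV is installed, and the file names
are made up.

    # A minimal sketch of "more like this" by color pattern alone.
    # Not Google's code; just the standard histogram-comparison idea.
    # Assumes OpenCV (pip install opencv-python); file names made up.
    import cv2

    def color_histogram(path):
        # Summarize the image's color distribution as a 3D HSV
        # histogram (8 bins per channel), normalized so that images
        # of different sizes are comparable.
        img = cv2.imread(path)
        hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0, 1, 2], None, [8, 8, 8],
                            [0, 180, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    def similarity(path_a, path_b):
        # Correlation between the two histograms: 1.0 means an
        # identical color profile, near 0 means unrelated. It says
        # nothing at all about subject matter.
        return cv2.compareHist(color_histogram(path_a),
                               color_histogram(path_b),
                               cv2.HISTCMP_CORREL)

    print(similarity("query.jpg", "candidate.jpg"))

A sunset and a nude can score as "similar" here, which is exactly why
this isn't subject indexing.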

What I see is that images go through an interpreter for certain 
characteristics (lots of flesh color, looks like a nipple, there's a 
face in here), but that isn't subject indexing.
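
For the curious, those characteristic detectors are simple in
principle. Here's a toy version of two of them, again just an
illustrative sketch (assuming OpenCV; the skin-tone range below is a
commonly used convention, not anyone's production values):

    # Toy "characteristic" detectors: skin-tone pixel fraction and
    # face *presence* (not identity). Illustrative only.
    import cv2

    def skin_fraction(path):
        # Fraction of pixels falling in a commonly used YCrCb
        # skin-tone range. A high value suggests "lots of flesh
        # color" and nothing more.
        img = cv2.imread(path)
        ycrcb = cv2.cvtColor(img, cv2.COLOR_BGR2YCrCb)
        mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
        return cv2.countNonZero(mask) / float(mask.size)

    def has_face(path):
        # Haar-cascade detector: reports that a face is present,
        # not whose face it is.
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades
            + "haarcascade_frontalface_default.xml")
        return len(cascade.detectMultiScale(gray, 1.1, 5)) > 0

    print(skin_fraction("photo.jpg"), has_face("photo.jpg"))

Neither function tells you what the image is *of*; they flag
features, and that gap is my point.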

kc

[1] https://en.wikipedia.org/wiki/Google_Images
[2] http://arstechnica.com/uncategorized/2007/05/facial-recognition-slipped-into-google-image-search/

On 7/29/13 3:54 PM, Alexander Johannesen wrote:
> Hiya,
>
> So, um, to play the part of the sour-puss;
>
> Karen says;
>> Google uses the text provided by web page creators to interpret
>> the meaning of the images; it doesn't interpret the images themselves.
> Just a quick correction; this was probably true a couple of years ago, but
> nowadays that's simply not true. All (well, most; there are filters that
> apply) images you find through Google are indexed after an image
> interpreter has gone through them. This won't tell you what's going on in
> super detail, of course, like interpreting the meaning of some scene, but
> it can detect people, recognize them, tag them, detect female nipples
> (important to the US for some reason :) ), some attributes of the weather
> (sunny, raining, etc.), similarities to other pictures (yesterday I
> uploaded a bunch of pictures of our local classical music ensemble, and
> similar pictures were automatically merged to form animated samples, for
> example, in addition to automatically tagging faces it recognized),
> dominant colors, some shape recognition and a few other bits. And note:
> this is only the beginning.
>
> I find it odd, though, to use pictures as an example of how Google isn't as
> good. Does the library truly do better? Last I remember, the library, too,
> didn't interpret pictures.
>
> Now, if I walked into the library and asked for pictures of happy people
> playing in the sun, could you deliver? No, not a chance. Or a picture of
> a cat that looks like Hitler? (In your face, Library!) Could you show me
> all existing pictures of Wittgenstein? Pictures of Einstein hanging
> washing? Or the engine block of a 2001 Volvo V70? There are far more
> instances of Google giving me the right answer with pictures than not, and
> I can't for the life of me understand what service you actually think
> you're bringing to the table anymore.
>
> And of course, if Google is doing this to images now (and voice; have you
> tried the latest voice search? Pretty cool), what else are they doing
> with written text? We all know about citation tracking in Google Scholar.
>
> And you guys are still talking about the "A" in RDA? Good grief.

-- 
Karen Coyle
kcoyle@kcoyle.net http://kcoyle.net
ph: 1-510-540-7596
m: 1-510-435-8234
skype: kcoylenet
Received on Mon Jul 29 2013 - 19:54:30 EDT