Weinheimer Jim wrote:
>
> at least on my machine in Italy, I only get the metadata. But if you go through a proxy server, e.g. Sly at http://www.slyuser.com/, copy and paste the URL for this same book and you can see it, obviously because the proxy machine is in the U.S.
>
While it works on that example, it doesn't work in general. I just checked 10
titles from my list - none was available in full text.
> I am still not sure why Google does that for books that are clearly in the public domain. And, *I believe* that whenever we see only the metadata in Google Books, the scans are always there, only we are not allowed to see them. When the Google agreement goes through, all of these books should be viewable. Someone please correct me if I am wrong!
> In that case, only 37 out of 500 books (barring problems with editions) would not be available (or 92% availability if my math is correct). That seems like a pretty reasonable rate even for a normal library, and especially for a non-US institution looking for non-English language materials.
>
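As a side note, the availability figure quoted above can be checked quickly. A minimal sketch (the 37-of-500 numbers are taken from the message above; the exact rounding is mine):

```python
# Check the quoted availability rate: 37 of 500 books unavailable.
total = 500
unavailable = 37
available = total - unavailable
rate = available / total * 100
print(f"{available}/{total} available = {rate:.1f}%")  # 463/500 = 92.6%
```

So the quoted "92%" is actually slightly conservative; 463/500 rounds to about 93%.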
Well, I'm afraid they have zillions of metadata records with no full text
behind them. If it is true that they obtained and indexed all of OCLC's
titles, then it must be so. I found example records where "Find in a
library" brought up just one copy in WorldCat - and that copy was ours.
(Conversely, some of our copies were not found, even though Google
received the data.)
Thus, the high hit rate in GBS results from the century-long cataloging
efforts of countless libraries and librarians - who are now eager to get
rid of their catalogs...
B.Eversberg
Received on Mon Mar 29 2010 - 05:03:21 EDT