Re: Are MARC subfields really useful ?

From: Laval Hunsucker <amoinsde_at_nyob>
Date: Sat, 5 Jun 2010 09:12:35 -0700
To: NGC4LIB_at_LISTSERV.ND.EDU
> . . ., and I'm not sure how optimistic I can be
> about us having them before it's "too late",
> . . . . 

I have, myself, been working for quite a while 
now on the assumption that it already *is* too 
late [ sic : no quotes ], for better or worse. I am 
sure that I can be pretty optimistic that I won't 
be disabused, later on.

Anyway -- that seems, today, a more productive 
( because realistic ) basis on which to prepare 
now for the future.

And can't we, in fact, be grateful that 'total 
feasibility from a technical standpoint' has 
never been an assurance ( or justification ) 
that something will actually get done ?


- Laval Hunsucker
   Breukelen, Nederland




----- Original Message ----
From: Jonathan Rochkind <rochkind_at_JHU.EDU>
To: NGC4LIB_at_LISTSERV.ND.EDU
Sent: Sat, June 5, 2010 1:01:24 AM
Subject: Re: [NGC4LIB] Are MARC subfields really useful ?

"My professional bias is toward fine granularity... But looking at how much time my (rare books) catalogers are spending on marking some details only for presentation, with detriment to the subject headings, for instance, I doubt the cost effectiveness."

Part of our general issue is our broken cooperative cataloging infrastructure.

To me, a little bit of human-created metadata, with the appropriate structure and granularity for machine use, is better than a LOT of metadata without that appropriate structure and granularity.

In my utopian cataloging/metadata universe, a record would possibly start out with just a little bit of "description" -- at whatever level the body doing the cataloging thought was appropriate cost/benefit for _their_ needs. But the metadata would be recorded _well_, with the proper structure and granularity (unlike what we have now). Then someone else might come along and add another data element or two or a dozen -- with the proper structure and granularity -- and when they did, everyone else sharing that record would automatically get their additions with little human intervention (because human intervention is a cost).  And that "someone" could maybe even be a "patron" or "general public" -- you can get "good enough" data out of the general public if you have the right data model and the right software. Neither of which we have now.  And without that, you don't get good enough data even out of trained professionals.  Even with that, trained professionals might give you _better_ data -- and in my ideal world, if a trained professional improves data (because their employing organization thought the improvement was justified by the cost-benefit to their organization), those improvements would also immediately be automatically shared with everyone else with little or no human intervention.
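[The mechanism described above -- granular, well-structured fields that later contributors can extend, with everyone sharing the record picking up the additions mechanically -- can be sketched in a few lines. The field names and the merge rule below are hypothetical, purely for illustration; this is not any existing cataloging system's API.]

```python
def merge_record(base: dict, contribution: dict) -> dict:
    """Fold a contributor's new data elements into a shared record.

    Existing fields are never overwritten, so the merge needs no
    human intervention; only genuinely new elements are added.
    """
    merged = dict(base)
    for field, value in contribution.items():
        merged.setdefault(field, value)
    return merged

# A record cataloged only to the level its creator needed.
record = {"title": "An Example Title", "date_published": "1537"}

# A later contributor (another library, or even a patron) adds
# more granular elements; sharing institutions receive them
# automatically because the structure makes the merge mechanical.
addition = {"place_published": "Venice", "date_published": "1537?"}

merged = merge_record(record, addition)
# merged now has the new "place_published" field, while the
# original "date_published" value is left untouched.
```

[The point of the sketch: with a flat display string instead of discrete fields, no such automatic merge is possible -- which is the granularity argument in miniature.]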

This sort of universe is totally feasible from a technical standpoint -- it just takes the will to create it, resources put into its creation (including skilled people to invent, design, and create it), and coordinated action by the library community toward the goal.  None of which we seem to have, and I'm not sure how optimistic I can be about us having them before it's "too late", and the library cataloging/metadata tradition essentially dies through neglect and the unsustainability of a current system whose cost-benefit is, as Kyle suggests, NOT effective or sustainable.

Jonathan


Received on Sat Jun 05 2010 - 12:13:51 EDT