I am developing a programming / scripting language for creating and
managing semantic maps and data maps of any size and structure. I see it
being used in library catalogs, but also in many applications outside of
libraries. It descends from an open source OPAC that I wrote for small
libraries, and which actually got some users. I saw the underlying idea I
was reaching for in that project and kept working at it on and off for
several years. Lately it became clear that this thing wants to be a
programming language for managing semantic and messy data structures, so
that is the direction I am taking it.
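To give a concrete, if crude, sense of what I mean by a semantic map, here
is a tiny sketch in Python. This is not the language itself; every name in
it (SemanticMap, relate, the example records) is invented for illustration
only:

    # A minimal, purely illustrative "semantic map": labeled nodes
    # connected by labeled edges (subject, predicate, object triples).
    from collections import defaultdict

    class SemanticMap:
        def __init__(self):
            self.nodes = {}                 # node id -> dict of properties
            self.edges = defaultdict(list)  # predicate -> [(subject, object), ...]

        def add_node(self, node_id, **props):
            self.nodes.setdefault(node_id, {}).update(props)

        def relate(self, subject, predicate, obj):
            self.edges[predicate].append((subject, obj))

        def find(self, predicate):
            # All (subject, object) pairs joined by this predicate.
            return self.edges.get(predicate, [])

    # A tiny, deliberately irregular catalog-like map.
    m = SemanticMap()
    m.add_node("moby-dick", type="work", title="Moby-Dick")
    m.add_node("melville", type="person", name="Herman Melville")
    m.relate("moby-dick", "created_by", "melville")
    m.relate("moby-dick", "mentions", "whale")
    print(m.find("mentions"))   # [('moby-dick', 'whale')]

The point of the language is to make structures like this, at any scale and
with any degree of messiness, first-class and scriptable.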
On Tue, Jul 30, 2013 at 4:12 PM, john g marr <jmarr_at_unm.edu> wrote:
> On Tue, 30 Jul 2013, Stephen Paling wrote:
>
>> My working hypothesis is that exploitable document structures and
>> internal metadata are of more use to members of the literary community than
>> the mainstays of the metadata we provide, i.e., author, title, and subject.
>>
>
> Definitely a step forward, since our traditional "mainstays" (catalogs)
> were designed only to act as finding guides to whole entities using the
> most general descriptive terms. But how much more expensive would the
> process be than simple cataloging, and would there be obstructions to its
> funding (probably not, since literature is a relatively non-threatening
> field to predators :))?
>
> Is the purpose of your proposed software to automatically both capture
> the "exploitable document structures and internal metadata" *and* place
> them in a comparative context (e.g. "every mention of an animal, every
> instance of a speech by a character"), or would a great deal of human
> intervention be required to objectively identify the metadata and feed it
> to the software?
>
> One of my "working hypotheses" is that manipulative speech (containing
> such things as fallacies of reasoning, glib rhetoric, emotional
> manipulation, callousness, distortion, misdirection, projection,
> self-obsession, etc.) can be objectively described, permitting computer
> analysis (filtering?) of verbal and written statements.
>
> The idea would be to develop objective ways of "scoring" such statements
> as to *likely* veracity and constructiveness. Interestingly, manipulative
> criticism of the scoring system would also be identifiable.
>
> Sounds like your algorithm concepts might work well for those purposes.
> The "objective" elements themselves would come from collaborations between
> discourse analysts, psychologists, logicians, philosophers, historians,
> etc., as well as input from people who have experienced the feeling of
> having been manipulated.
>
>
>> As for the rest, I re-invoke Godwin's Law
>>
>
> In Godwin's sense that the lessons of history should not be forgotten?
>
>> and the concept of bikeshedding
>>
>
> Forgive me for this: "We could develop tools that defuse the
> effectiveness of manipulative rhetoric in controlling modern societies and
> perhaps human predation itself. OK, let's start by studying common threads
> in fables." :)
>
>
> Cheers!
>
> jgm
>
> John G. Marr
> Cataloger
> CDS, UL
> Univ. of New Mexico
> Albuquerque, NM 87131
> jmarr_at_unm.edu
> californiastop_at_hushmail.com
>
> ** Forget the "self"; forget the "other"; just
> consider what goes on in between. **
>
> Opinions belong exclusively to the individuals expressing them, but
> sharing is permitted.
>