Information Retrieval List Digest 006 (December 1989)
URL = http://hegel.lib.ncsu.edu/stacks/serials/irld/irld-006

IRLIST Digest   December 1989   Volume VI Number 6 Issue 6
***************************************************************
Continued from Volume VI Number 5, Issue 5
***************************************************************
IV. PROJECTS: Initiatives and proposals / Bibliographies Abstracts / Miscellaneous
C.2. Dissertation Abstracts

IV.C.2.
Fr: "Susanne M. HUMPHREY"
Re: dissertation abstracts

[ AN University Microfilms Order Number ADG88-15305. AU UGALDE ARIAS, LUIS ALBERTO. IN University of Minnesota Ph.D. 1988, 258 pages. TI EFFECTIVE INFORMATION MANAGEMENT IN FORESTRY: AN APPLICATION TO FUELWOOD AND MULTI-PURPOSE TREE SPECIES RESEARCH IN CENTRAL AMERICA. DE Agriculture, Forestry and Wildlife. AB The main goals of this study were: (a) the development of a methodology to collect and organize silvicultural and environmental information in forestry research on fuelwood and multi-purpose tree species (MPTS) production, and (b) the design of a Management Information System (MIS) to supply decision-support for different end-users. This study was supported by a project of USAID for fuelwood research in six Central American countries. Uniform standards and guidelines for implementing fuelwood and MPTS experiments were established in order to permit global exchange and transfer of information on MPTS research. These standards for data collection and field measurements, and minimum data sets were developed in coordination with scientists to ensure the collection of useful information and to gain acceptance of these standards. The minimum data sets were developed to reflect what can be achieved at a reasonable logistic expense with an acceptable degree of consistency. The approach emphasized flexibility and simplicity in the database implementation to allow for added complexity as the demand for information and models increases. A menu-driven retrieval process for the information was developed using a microcomputer. For the implementation of the database KnowledgeMan/2 software was used. Establishment of MPTS information databases will permit improvement in all phases of forest management, including seed procurement and species selection for environmental zones. The database can support the decision on what, where, and how to plant trees. Assessments of trade-offs among various production systems in terms of different types of biomass components are also possible. It should elucidate climatic, biophysical, and social constraints in tree seedling production and establishment, and improved nursery operation and stand-management techniques. Finally, the MIRA (Manejo de Informacion sobre Recursos Arboreos) system developed in this study can be considered as a pioneering effort in database and MIS technology in the tropical regions of the world for the application of silvicultural research. It is also a first in terms of the cooperative effort carried out by six Central American countries to develop and use standardized data collection procedures for better coordinated research in MPTS. ] [ AN University Microfilms Order Number ADG88-19418. AU MORRIS, ANDREW HUNTER. IN Texas Tech University Ph.D. 1988, 237 pages. TI SUPPORTING ENVIRONMENTAL SCANNING AND ORGANIZATIONAL COMMUNICATION WITH THE PROCESSING OF TEXT: THE USE OF COMPUTER-GENERATED ABSTRACTS. DE Business Administration, General. Information Science.
AB This research proposes a model text-based decision support system designed to support the activities of environmental scanning and organizational communication by actively filtering and condensing text. To filter text-based information requires the use of automatic routing schemes; to condense text requires the use of computer-generated abstracts or extracts. A key element in the model system is the ability of the computer to condense text by generating short abstracts of documents. Two approaches to condensing text have been proposed: (1) using natural language processing techniques to construct a knowledge base of the document contents, from which to write an abstract, and (2) employing algorithm-based extracting systems to generate extracts of important sentences and phrases. Systems using natural language techniques are still being researched; most are successful only in limited domains. Systems using extracting algorithms have been researched, but have not been applied to the problem of information overload in an organizational decision-making context. These two approaches were tested in a laboratory setting with student subjects. An algorithm for generating extracts was developed based on the combined work of previous researchers, and tested against an expertly written abstract such as might be constructed by a non-domain specific artificial intelligence system if one is developed in the future. Results of the study indicate that there was no difference in comprehension of the documents when the information was presented with the full text, by extract, or by abstract. These results demonstrate that an algorithm for computer-generated extracts can be successfully applied to text, reducing reading time and document length without significantly reducing comprehension of the information contained in the original text. [ AN University Microfilms Order Number ADG88-23547. AU LEITHEISER, ROBERT LEO. IN University of Minnesota Ph.D. 1988, 490 pages. TI AN EXAMINATION OF THE EFFECTS OF ALTERNATIVE SCHEMA DESCRIPTIONS ON THE UNDERSTANDING OF DATABASE STRUCTURE AND THE USE OF A QUERY LANGUAGE. DE Business Administration, Management. Information Science. AB Business organizations are increasingly relying on data resources that are stored in computerized databases. Utilization of these resources requires knowing (1) the logical organization of data in the database and (2) a language for retrieving data from that organization. The logical organization of a database is described by a database representation. The purpose of this research is to investigate the effects of different features of database representations on the learning and use of database systems by business end users. Three features are examined: (1) type of concept used, (2) type of symbol used, and (3) use of explicit representations of associations between major database objects. These features define different approaches to representing database organizations. A two stage experiment was performed that compared four representation approaches. Each approach used a combination of the three representation features listed above. In the first stage, subjects (MBA students) learned one of the representation approaches and then applied what they had learned to three non-language database tasks. In the second stage they learned the SQL query language, and performed three database tasks with the approach and the language. 
The principal findings were that the approach that used semantic concepts took less time to learn than any of the storage concept approaches and led to higher performance on one of the "without language" performance tasks. The same approach resulted in longer times for learning the SQL language and poorer performance on "with language" tasks. No major differences were found in learning and performance that were due to the type of symbols used or the use of explicit association representations. These findings were taken as evidence that the approach that used semantic concepts has some advantages over the storage concept approaches. Unfortunately, these advantages are lost when the popular SQL query language is learned and used. The study calls for further research to develop more appropriate languages for semantic representations and to further explore the effects caused by specific representation features. ] [ AN University Microfilms Order Number ADG88-19327. AU PYO, DONGJIN. IN University of Maine Ph.D. 1987, 168 pages. TI SELECTIVE DATA REDUCTION IN INFRARED SPECTROSCOPY. DE Chemistry, Analytical. AB The coupling of chromatographs to various types of spectrometers has led to the development of a number of extremely powerful instrument systems for the analysis of complex mixtures. Gas chromatography-mass spectrometry (GC/MS), and more recently, GC-infrared spectrometry (GC/IR) are probably the two most widely used examples. As GC/IR becomes routinely available, methods must be developed to deal with the large amount of data produced. We demonstrate computer methods that quickly search through a large data file, locating those spectra that display a spectral feature of interest. Based on a modified library search routine, these selective data reduction methods retrieve all or nearly all of the compounds of interest, while rejecting the vast majority of unrelated compounds. A greater degree of selectivity is observed than with chemigram-type routines. ] [ AN University Microfilms Order Number ADG88-19930. AU CHEN, HUNG-PIN. IN The Louisiana State University and Agricultural and Mechanical Col. Ph.D. 1988, 173 pages. TI QUERY PROCESSING ON THE ENTITY-RELATIONSHIP GRAPH BASED RELATIONAL DATABASE SYSTEMS. DE Computer Science. AB An ERG (Entity-Relationship Graph) can be used to provide a semantic structure to a relational database system. An ERG is defined by local regions. A local region contains two nodes of entity types and a node of relationship type. The semantic constraints of the database represented by the ERG (Entity-Relationship Graph) can be used to enforce the global integrity of the database system. A query is mapped onto the ERG to obtain an ERQG (Entity-Relationship Query Graph). This mapping can be specified by the user by navigating the database or automatically allocated by the system via a universal relation interface. The ERQG representation of a query can be semantically decomposed into a sequence of Local Regions. These Local Regions can then be processed according to their order in the query. The ER-semijoin operation is introduced to process this sequence of Local Regions. Using this approach, architectures of database systems are proposed--two-phase interface and one-phase interface. An implementation of a user interface is also discussed. ] [ AN University Microfilms Order Number ADG88-22961. AU CHI, SHAN. IN Northwestern University Ph.D. 1988, 81 pages. TI A THREE-PHASE QUERY PROCESSING TECHNIQUE FOR INDEFINITE DATABASES. DE Computer Science. 
AB A new method, called the compile-access-prove (CAP) algorithm, is proposed for query processing in indefinite databases. A database is logically represented as a set of clauses among which the non-Horn clauses represent indefinite information. Physically the database intension, containing view definitions, is compiled into access rules and the database extension, containing elementary facts, is stored as relations on disks. Each access rule is a procedure consisting of relational operations. In general, the indefinite elementary facts need to be processed with a theorem prover. By storing all elementary facts (including indefinite ones) into relations, it is possible to replace the theorem proving steps with more efficient relational operations. However, this process changes the semantics of the database. At query time, the related indefinite elementary facts are collected and sent to a theorem prover to recover the original semantics. The CAP algorithm has the following advantages: (a) it is capable of answering queries for recursive indefinite databases, (b) the theorem prover involves only the indefinite facts related to the query, (c) updating the database extension does not require the recompilation of the database, and (d) the techniques developed for Horn databases can be used in the algorithm. ] [ AN University Microfilms Order Number ADG88-18780. AU EPSTEIN, RICHARD GARY. IN Temple University Ph.D. 1988, 631 pages. TI INFORMATICS CALCULUS: A GRAPHICAL, FUNCTIONAL QUERY LANGUAGE FOR INFORMATION RESOURCE SYSTEMS. (VOLUMES I AND II). DE Computer Science. AB This dissertation presents the functional information resource model, a data model for multi-media data bases which is an extension of the functional data model. The significant contribution of this dissertation is in the presentation of a graphical, functional query language for this data model. This query language is called the informatics calculus. The data model can be viewed as a data model for hypertext systems. Viewed in this light, the informatics calculus can be viewed as a proposal for an interface for hypertext systems which will enable such systems to capture the computational power of traditional databases. The informatics calculus is a data base language in which the user works in a workspace of objects. An object has a type and a state. The type of an object determines its functionality. The emphasis in this dissertation is upon query objects, objects which extract information or generate applications from the information resource. All informatics calculus objects consist of mosaics. A mosaic consists of tiles denoting entity classes, subclasses, conditions, functions, functors, functionals and other types of operations provided by the model. The spatial arrangement of these tiles determines a unique functional expression. For example, horizontal juxtaposition is used to denote the functional combinator of Cartesian product and vertical juxtaposition is used to denote functional composition. Each informatics calculus expression denotes a particular arrangement of information. The final chapter of the dissertation discusses extensions of the informatics calculus which include an update language and a language for querying the system catalogue. None of the proposed extensions to the informatics calculus departs from the functional framework or the graphical framework of programming by constructing mosaics. The amenability of the informatics calculus for parallel execution is also discussed.
] [ AN University Microfilms Order Number ADG88-13974. AU LYNCH, CLIFFORD ALAN. IN University of California, Berkeley Ph.D. 1987, 247 pages. TI EXTENDING RELATIONAL DATABASE MANAGEMENT SYSTEMS FOR INFORMATION RETRIEVAL APPLICATIONS. DE Computer Science. Information Science. Library Science. AB This thesis studies the use of relational database systems to construct large, high performance information retrieval systems such as online library catalogs or citation retrieval applications. The major problem areas in relational implementations are query execution costs, poor space utilization, and functionality deficiencies both in query processing and in query languages such as SQL. Analytic and simulation methods are applied to quantify these problems. Proposals extending earlier work on user-defined operators for relational query languages and accompanying secondary index support allow both efficient query formulation and the definition of space-efficient relational bibliographic databases. When column values follow distributions typical of bibliographic databases (Zipf distributions), a key performance problem is inaccurate selectivity estimation. A framework for incorporating user-defined selectivity estimators into a relational query optimizer is established, and methods are given to construct highly accurate selectivity estimators for bibliographic databases. Relational query optimizer extensions are specified which incorporate query execution plans that use TID list manipulation algorithms for evaluating single-relation queries into the optimizer's vocabulary. With these extensions a relational system can outperform an inverted file retrieval system on bibliographic databases. Also explored are query planner extensions to implement nonmaterialized relations (allowing both partially deferred evaluation of queries and inexpensive iterative query construction) and preexecution identification of queries that will be costly to evaluate or will produce very large results. Both of these features are important for public access information retrieval applications. Finally, the thesis examines difficulties that arise in using a relational query language to support advanced information retrieval techniques such as ranking and weighted retrieval, and develops query language extensions that would significantly improve the performance of such searching techniques in a relational setting. ] [ AN University Microfilms Order Number ADG88-15220. AU REDDY, MARY ANN. IN University of Pittsburgh Ph.D. 1988, 149 pages. TI SEARCH STRATEGY SKILLS: A TWO METHOD COMPARISON OF TEACHING CD-ROM BIBLIOGRAPHIC SEARCHING TECHNIQUES. DE Education, Technology. Library Science. AB This study was designed to test the mastery of CD-ROM online bibliographic searching skills through the comparison of two methods of instruction: CAI and the traditional lecture method. One hundred and two ninth grade students were the subjects of the study: each group comprised fifty-one students. Both groups received instruction in online technology, the Reader's Guide, CD-ROM, and search strategy skills. For the online and Reader's Guide portion of the study, the CAI group, using nine Apple IIe computers, received its instruction from three Combase, Inc. computer software programs: Online Retrieval I; Online Retrieval II; and the Reader's Guide to Periodical Literature, Level II. The classroom group received the identical lessons through the traditional lecture method using the overhead projector as an aid.
The testing instrument used in this study was WILSONDISC, a CD-ROM database that contains the Reader's Guide to Periodical Literature and utilizes an IBM PC/XT computer and CD-ROM player. The study began on August 31, 1987, and was completed on November 20, 1987. The overall design of the study was a Pretest-Posttest Control-Group design. Two t-tests and two chi-square tests were used to measure the results of the study. At the .05 level, the statistics yielded no significant differences in learning CD-ROM online technology regardless of the method of instruction. However, several interesting factors emerged from the data collected. First, the Attitude Survey revealed that the classroom students were far more enthusiastic about CD-ROM than the computer group. Second, the computer group did not excel as predicted by statistics from other studies, perhaps because 68% of the computer group was female and because, owing to class scheduling, the initial computer lessons were presented during an interrupted time frame, which might have impeded the concentration of the group. ] [ AN University Microfilms Order Number ADG88-16874. AU PROBERT, JOHN ELLWOOD. IN United States International University Ed.D. 1988, 250 pages. TI A SURVEY TO DETERMINE REASONS FOR LOW LEVEL COMPUTER USE BY LAW ENFORCEMENT INVESTIGATORS. DE Education, Vocational. Education, Adult and Continuing. Sociology, Criminology and Penology. AB The problem. Many investigators were not using the computerized inquiry system to its fullest potential. The purpose of this study was to investigate the relationship between the use of computerized centralized criminal history files and the following variables: age, attitude, and the amount of computer training the investigators possess. Method. A correlational study was conducted. Fifty-two investigators from seven San Diego County area law enforcement agencies, eight departmental administrators, and eight computer trainers were given questionnaires designed to determine why the computerized inquiry system was not being used to its fullest potential. Results. The first hypothesis, which predicted that age inversely correlates with the use of computers for information retrieval, could not be supported. The second hypothesis, which predicted that adequate terminal and computer program training according to the needs of each investigator would have a positive relationship with the use of the computerized information retrieval system, could not be supported. The third hypothesis, which predicted a positive correlation between favorable attitude towards the computer and frequency of use of the computerized information retrieval system, could not be supported. Investigation revealed, first, that the experience of using the computerized criminal history files, and not the training or the administration's attitude, created a highly favorable attitude towards its use. The investigators, frequently overloaded with assigned cases to investigate, went to manual files for data because this procedure was quicker. Second, two messages were emanating from the departmental administrations. The first was encouragement to use the computerized inquiry system, and the second message was to cut costs. The investigators perceived the messages to mean that they should reduce the use of the computerized inquiry system because of the high cost of using it. ] [ AN This item is not available from University Microfilms International ADG05-63684. AU EZIGBALIKE, INNOCENT F. CHUKWUDOZIE. IN The University of New Brunswick (Canada) Ph.D. 1988.
TI LAND INFORMATION SYSTEMS DEVELOPMENT: SOFTWARE AND MANAGEMENT CONSIDERATIONS. DE Engineering, Civil. AB The development and management of information systems have been studied in various fields of research including database management, distributed processing, software engineering, management information systems, and information resource management. By identifying the similarities between land information processing and these fields, the techniques and procedures developed and proved for other information processing applications can be adapted for the land information system (LIS). This thesis examines the activities that process parcel-level information in New Brunswick, and proposes a conceptual structure for an LIS to deliver information to the processes, and strategies for developing an LIS and managing a land information environment in a provincial jurisdiction. The thesis recommends that a query management strategy be adopted to integrate the various departmental systems into one logical distributed system, that an independent management function be established to manage the LIS as a corporate resource of the provincial government, rather than as departmental property, and that the system be developed from existing systems by a phased prototyping approach. ] [ AN University Microfilms Order Number ADG88-09547. AU BOUAZZA, ABDELMAJID. IN University of Pittsburgh Ph.D. 1986, 154 pages. TI USE OF INFORMATION SOURCES BY PHYSICAL SCIENTISTS, SOCIAL SCIENTISTS, AND HUMANITIES SCHOLARS AT CARNEGIE-MELLON UNIVERSITY. DE Information Science. AB This study investigated the frequency of use of information sources in general and for research and teaching purposes in particular by physical scientists, social scientists, and humanities scholars at Carnegie-Mellon University. Out of 390 subjects, 240 answered the questionnaire, making the response rate 61.53 percent. Data were collected by means of a questionnaire and analyzed using descriptive (means, standard deviations, and proportions) and inferential (one-way ANOVA, two-way ANOVA, and the Scheffe test) statistics. The three null hypotheses of the study were tested at the .05 level of significance. The results obtained in this study showed that the three hypotheses were partially supported. It was found that physical scientists, social scientists, and humanists differed only in their use of informal sources of information in general, in the data collection phase, and when developing a new course. No difference was registered in their use of formal sources of information for the same purposes. The impact of the variables tenure and experience on the use of information sources by the subjects was investigated as an auxiliary factor and found to be nonsignificant. The findings of this study pointed to the importance of exhibitions, concerts, performances, A.V. materials, and the library resources to humanists. The same information sources were found to be of negligible importance to both physical scientists and social scientists when conducting a research project. The importance of using personal files by the three groups was observed. It was found that journals were especially important to physical scientists and social scientists. Also, it was found that the use of information sources by respondents varied from one phase of a research project to another.
Thus, physical scientists, social scientists, and humanists tended to rely heavily on personal contact in the proposal phase and data analysis and interpretation phase, whereas this reliance appeared to decline in the data collection phase. Other findings were: the importance to respondents of personal contact and personal files as a stimulus for ideas in research; physical scientists and social scientists rated the use of journals for obtaining new ideas in research higher than that of textbooks; similarly, physical scientists and social scientists rated the use of textbooks as sources of new ideas in teaching higher than that of journals. ] [ AN University Microfilms Order Number ADGDX-82159. AU DANIELS, PENNY JANE. IN The City University (London) (United Kingdom) Ph.D 1987, 167 pages. TI DEVELOPING THE USER MODELLING FUNCTION OF AN INTELLIGENT INTERFACE FOR DOCUMENT RETRIEVAL SYSTEMS. DE Information Science. AB Available from UMI in association with The British Library. This research forms part of a larger project, the eventual aim of which is the design and implementation of an intelligent interface for document retrieval systems. A number of functions which must be performed by the human intermediary in order to successfully interact with the user have been identified. The research presented here is concerned with one function in particular: the user modelling function, which aims to describe and model various aspects of the user's background, personal characteristics, goals and knowledge. An assumption underlying this research is that an intelligent interface should simulate the functional behaviour of a competent human intermediary. Therefore the ways in which human intermediaries carry out user modelling and employ these models, have been investigated. The primary method was to make audiorecordings of seven human user/human intermediary interviews in online search service settings, and to subject the transcripts to detailed functional discourse analysis. This analysis produced a specification for the User Model, and identified its components and the knowledge resources that are needed by the intermediary, whether human or automatic, to carry out the function of user modelling. This analysis was supplemented by the examination of a number of users' problem statements, together with their accompanying recordings, which had been collected for another project, and by interviews with three intermediaries. The discourse analysis revealed that the User Model interacts with the other interface functions, and this interaction was also investigated. The results showed that the User Model comprises a number of subfunctions, requires extensive knowledge resources, and interacts with the other functions, in particular providing information necessary for the other functions' own processing. A formalism for representing the User Model in a computer system is suggested, and an attempt is made to validate the User Model by applying it to a new dialogue. The results of the validation suggested that the User Model is independent of the data on which it is based, and that the formalism can adequately handle a new interaction. The implications of these findings for the design and implementation of the user modelling function in an intelligent interface, and for the design and implementation of the interface as a whole, are outlined. ] [ AN University Microfilms Order Number ADGD--82394. AU EPISKOPOU, DIANE M. IN University of East Anglia (United Kingdom) Ph.D. 1987, 460 pages. 
TI THE THEORY AND PRACTICE OF INFORMATION SYSTEMS METHODOLOGIES: A GROUNDED THEORY OF METHODOLOGICAL EVOLUTION. DE Information Science. AB Available from UMI in association with The British Library. Requires signed TDF. An in-depth study of forty computer-related companies and sixty user organizations over a three-year period (1983-86), investigating the practice of systems development methodologies, focuses on organisational, technical and personal aspects. A grounded theory research approach is used to develop a theory derived directly from the experiences of the participants, examined using phenomenological, case study and survey methods. The product of the research is a theory of information systems methodology evolution, which explains what constitutes a methodology and how it behaves in an organizational context. It includes categories concerning methodology nature, methodology constraints, formalisation of methods, historical influences, context inseparability, communication between developer and client, power and influence in the system development process and methodology evolution. The theory challenges and augments the contingency view of methodology selection and shows how methodologies evolve over time, affected by the people and circumstances surrounding them. The results and implications of the research tackle the issues of the integration of methodologies into an organisational environment and the development of methodologies in context, including the need to develop and maintain methodologies and control evolutionary phenomena of the drifting and dragging of procedures. Guidelines are offered to systems and methodology developers concerning the development and use of suitable methodologies for the future challenges of information systems development. ] [ AN University Microfilms Order Number ADG88-17895. AU GRIFFITHS, JOHN BARRIE. IN University of Pittsburgh Ph.D. 1988, 127 pages. TI INFORMATION NEEDS FOR NETWORK MANAGEMENT IN THE CAMPUS OF THE FUTURE: A MODEL OF THE RELATIVE IMPORTANCE OF USER REQUIREMENTS IN NETWORK PERFORMANCE MEASUREMENT. DE Information Science. AB Network management information needs encompass a variety of choices. Since 1973, research has led to the common recognition of network management concerns, objectives, functions and activities. The changing telecommunications environment has affected network management, which is now an important segment of data processing management. During the recent general growth in network use by many types of organizations, research has been conducted into network management aspects of centralization, decentralization, standards, network performance measurement at technical and user levels, user organization characteristics, and network monitoring and control techniques. Network design estimates and assumptions play a part in determining how performance measures are interpreted. Network simulation and modelling are common sources of information for network management; most approaches provide statistical views of network performance measures based upon network designs. The relationship between network performance measures and user requirements had not, until this research, been investigated. Network user requirements arise from their organization environments, which, for example, in the case of libraries, are in turn affected by the use of networks. The use of networks in academic organizations, too, can be expected to affect the information needs of their network managers.
The characteristics of eight categories of academic networks are described, and seven key areas of network performance measurement are identified. A symbolic model of the relative importance of user requirements in network performance measurement is shown to describe priorities in network management information needs during the development of a campus-of-the-future network. ] [ AN University Microfilms Order Number ADGDX-82189. AU REYNOLDS, JAMES E. F. IN The City University (London) (United Kingdom) Ph.D. 1987, 360 pages. TI THE DEVELOPMENT AND EVALUATION OF A FULL-TEXT DRUGS DATABASE: MARTINDALE ONLINE. DE Information Science. AB Available from UMI in association with The British Library. Martindale Online is a full-text database on drugs produced from a structured neutral database that is also used to produce a print product. Special characteristics of the database include a hierarchical record structure and a facility for linking records within the same hierarchy. The development of this database is described. Investigation at the development stage indicated a need to index the database, and this was carried out using descriptors from a specially designed thesaurus. To evaluate the effect of this indexing, three information pharmacists selected 98 queries for an assessment of retrieval effectiveness; they and the author formulated sets of search statements that were used to search the file in several different ways. It was found that searching the indexed database via descriptors and free text (when appropriate) produced significantly better results, as judged by scores that incorporated precision and recall, than searching either the indexed or the unindexed database solely in a free-text manner. As there was evidence that searchers were slow to make use of the descriptors, highly structured search statements were created for each query using all the details from the relevant sections of the thesaurus and these statements were tested on the unindexed database. While this test produced some conflicting results, it did suggest that, as far as major relevance was concerned, such a method of searching might be effective with Martindale Online and is worth exploring further, especially with a view to producing a front-end system. Detailed failure analysis was carried out on the searches performed in the recommended manner. With the information pharmacists' search statements, the database was operating at a recall ratio of 60.2 for all relevant records (69.3 for records of major relevance); with the author's statements the recall ratio was 65.4 (73.2 for major relevance). Corresponding precision ratios were 63.5 (58.3 for major relevance) for the information pharmacists and 67.5 (59.6) for the author. The largest cause of both recall and precision failure was limitations of the search statements, whether produced by the information pharmacists, who had varied experience of Martindale Online, or by the author, who had a detailed knowledge of the system and the contents. Limitations in the indexing also accounted for both types of failure; account has already been taken of these limitations and modifications have been made to some of the indexing guidelines. ] [ AN University Microfilms Order Number ADG88-14221. AU CHEN, TSUNG-TENG. IN The University of Arizona Ph.D. 1988, 279 pages. TI INFORMATION MANAGEMENT IN INTEGRATED INFORMATION SYSTEM DEVELOPMENT ENVIRONMENTS. DE Information Science. Business Administration, General. Computer Science.
AB Information system development involves various activities; the process of developing information systems is considered to be the production of a series of documents. The information derived from the activities of the life cycle needs to be stored in a way that will facilitate the carrying out of subsequent activities. That is, information must be stored with a consistent, semantically rich, flexible, and efficient structure that will make it accessible for use by various tools employed in carrying out the development process. In this research, a knowledge base management system (KBMS) to manage the information created by the information system development process was designed and implemented. Several contemporary popular knowledge representation schemes can be managed conveniently by this KBMS, which utilized efficient database techniques to facilitate fast retrieval and traversal of the underlying semantic inheritance net and frame knowledge structure. Inference and logic deduction capability was made a part of the static knowledge structure to further extend the functionality of the KBMS. Furthermore, a specially designed relational database management system was implemented and interfaced with the KBMS to alleviate the possibility of a storage saturation problem and to facilitate the storage of detailed exclusive information of terms defined in the knowledge base. Models that are applicable to various information system development activities were identified and stored in the knowledge base. The aggregation of those models is, in fact, a conceptual non-procedural language that provides a concise descriptive framework to help the user gather and manage information derived from various activities during the information system development process. The knowledge base, the language, and several knowledge-base related tools were used by more than seventy graduate students in a case study for a system analysis and design course. An information system methodology specifically tailored for this knowledge base supported environment was proposed and applied in a simplified case to illustrate the process of how a database-centered information system can be derived from the initial strategic planning phase. The methodology explored and made use of the storage structure of the closely coupled knowledge base and database. Finally, future research directions were identified. ] [ AN University Microfilms Order Number ADG88-09947. AU SMITH, TIMOTHY WILLIAM. IN The University of Arizona Ph.D. 1988, 378 pages. TI ASSESSING THE USABILITY OF USER INTERFACES: GUIDANCE AND ONLINE HELP FEATURES. DE Information Science. Business Administration, General. Computer Science. AB The purpose of this research was to provide evidence to support specific features of a software user interface implementation. A 3 x 2 x 2 full factorial, between-subjects design was employed in a laboratory experiment systematically varying the existence or non-existence of a user interface and the medium of help documentation (either online or written), while blocking for varying levels of user experience. Subjects completed a set of tasks using a computer, so the experimenters could collect and evaluate various performance and attitudinal measures. Several attitudinal measures were developed and validated as part of this research. Consistent with previous findings, this research found that a user's previous level of experience in using a computer had a significant impact on their performance measures.
Specifically, increased levels of user experience were associated with reduced time to complete the tasks, fewer characters typed, fewer references to help documentation, and fewer requests for human assistance. In addition, increased levels of user experience were generally associated with higher levels of attitudinal measures (general attitude toward computers and satisfaction with their experiment performance). The existence of a user interface had a positive impact on task performance across all levels of user experience. Although experienced users were not more satisfied with the user interface than without it, their performance was better. This contrasts with at least some previous findings that suggest experienced users are more efficient without a menu-driven user interface. The use of online documentation, as opposed to written, had a significant negative impact on task performance. Specifically, users required more time, made more references to the help documentation, and required more human assistance. However, these users generally reported attitudinal measures (satisfaction) that were as high with online as with written documentation. There was a strong interaction between the user interface and online documentation for the task performance measures. This research concludes that a set of tasks can be performed in significantly less time when online documentation is facilitated by the presence of a user interface. Written documentation users seemed to perform equivalently with or without the user interface. With online documentation, the user interface became crucial to task performance. Research implications are presented for practitioners, designers and researchers. ] [ AN University Microfilms Order Number ADG88-16314. AU HELTNE, MARI MONTRI. IN The University of Arizona Ph.D. 1988, 177 pages. TI KNOWLEDGE-BASED SUPPORT FOR MANAGEMENT OF END USER COMPUTING RESOURCES: ISSUES IN KNOWLEDGE ELICITATION AND FLEXIBLE DESIGN. DE Information Science. Business Administration, Management. AB Effective resource management requires tools and decision aids to help determine users' needs and appropriate assignment. The goal of this research was to design, implement, and test technological tools that, even in a dynamic environment, effectively support the matching of users and resources. The context of the investigation is the Information Center, the structure used to manage and control the computing resources demanded by end users. The major contributions of the research lie in two areas: (1) the development and use of a knowledge acquisition tool called Resource Attribute Charts (RAC), which allows for the structured definition of the resources managed by the IC, and (2) the design, implementation, validation, and verification of the transportability of Information Center Expert, a system that supports the activities of the IC personnel. Prototyping, the system development methodology commonly used in software engineering, was used to design the general architecture of the knowledge acquisition tools, the knowledge maintenance tool, and the expert system itself. The knowledge acquisition tools, RAC, were used to build the knowledge base of ICE (Information Center Expert). ICE was installed at two corporate sites, its software recommendations were validated, and its transportability from one location to another was verified experimentally.
The viability of a rule-based consultation system as a mechanism for bringing together knowledge about users, problems, and resources for the purpose of effective resource management was demonstrated. ] [ AN University Microfilms Order Number ADG88-20760. AU SWARTZMEYER, ELMER GORDON. IN Georgia State University - College of Business Administration PH.D. 1987, 330 pages. TI AN EMPIRICAL INVESTIGATION OF INFORMATION VALUE: A UTILITY APPROACH TO BENEFIT ASSESSMENT. DE Information Science. Business Administration, Management. Economics, Commerce, Business. AB The advent of database management systems (DBMS) has had a major impact on the organizational use of computer information systems. The use of DBMS has provided system users with the flexibility to query a data base and extract information necessary to deal with a particular situation. This flexibility implies that the value of information extracted by one query may differ from that of another. This research applies a utility valuation methodology to data resident in an organization's data base so that the expected value of the information produced by a query may be calculated. The value of information will vary with the user's perception of the information worth. The statistical technique of conjoint measurement is used in this research to provide a way of quantifying perceptual value. Subjects are asked to rank hypothetical descriptions as to their belief that these descriptions describe a data element within the context of typical business scenarios. Profiles are derived by varying levels of experimentally determined characteristics of information. Characteristics used are relevancy, accuracy, timeliness, content, and reliability. A composite utility function is then developed from this ranking and applied to a set of reports composed of randomly derived combinations of data elements. The predicted ranking of these reports is compared with subject ranking of the same reports. Results of this research indicate that the data element can be used as the object of a utility assessment. The "value" of each data element was calculated using MONANOVA. This program also provides quantitative measures of each attribute used in the utility assessment. These measures permit the comparison of one data element's "value" to another and also the determination, in terms of attribute level, of why one data element is valued differently from another. The Kendall's Tau algorithm is used to compare the predicted versus actual report ranking. Research results indicate that the "value" of output information can be predicted in advance of its use based on the "value" assigned to data elements from which the information is composed. ] [ AN University Microfilms Order Number ADG88-15698. AU SHERBY, LOUISE SHARON. IN Columbia University D.L.S. 1988, 371 pages. TI THE DESIGN AND IMPLEMENTATION OF AN ONLINE PUBLIC ACCESS CATALOG IN A LARGE, MULTI-UNIT LIBRARY; A CASE STUDY. DE Library Science. AB This study is a management case study of the design and implementation of the online public access catalog (OPAC) at the Columbia University Libraries. The Libraries, using the structure of a Decision Team, selected the Biblio-Techniques Library & Information System (BLIS) as their online catalog in January 1983. Once the decision to go ahead had been made, the implementation process was under the aegis of the BLIS (later CLIO) Steering Committee. 
The Columbia University Libraries relied on a committee structure composed primarily of staff from the Libraries and the Columbia University Center for Computing Activities (CUCCA) to implement the online catalog. This committee structure proved to be an effective way to implement such a system, although a costly one in terms of staff effort. A technological model designed by John Corbin in Managing the Library Automation Project was used to structure the data for the study in terms of the nine phases he describes. The study follows the design and implementation process from the work of the Decision Team in selecting the system to the Fall of 1987, when an evaluation review of the BLIS system was conducted. The study looks at the process from the point of view of the Libraries' management structure. Due to unique factors that affected the Columbia University Libraries, additional variables were identified beyond those considered in the Corbin model. These variables are discussed at length and provide additional areas for further research. ] [ AN University Microfilms Order Number ADG88-18869. AU WASHINGTON, MARYANN SHOWERS. IN Temple University Ed.D. 1988, 134 pages. TI INFORMATION RETRIEVAL FROM ONLINE DATABASES BY STUDENTS IN SECONDARY SCHOOL LIBRARY MEDIA CENTERS. DE Library Science. Information Science. Education, Technology. AB Eighty-five secondary school library media specialists in Pennsylvania were surveyed about information retrieval from online databases by students in secondary school library media centers. The library media specialists surveyed managed sites funded for student searching of online databases by the state under the LIN-TEL (Linking Information Needs-Technology, Education, Libraries) project. The study included search activities for 1985-1986 and investigated what databases were used, how the databases were utilized in curriculum areas, what training programs were in place, and the problems and potential that library media specialists attached to online database searching by students. A profile of search sites collected data about staff, equipment, searching, and LIN-TEL membership. The LIN-TEL project was devised to train library media specialists about online searching and provide limited funding for searching by students. The issues of database selection, retrievability of documents, and demands on staff time were identified by respondents as interrelated issues affecting student searching. Library media specialists reported that the major hindrance to online database searching at their sites was lack of staff time. When the financial support of the LIN-TEL project is withdrawn, respondents anticipated that the major hindrance to searching would become funding. Acknowledged in the study is the impact of the introduction of CD-ROM format databases on information retrieval in the school library media center. ] [ AN University Microfilms Order Number ADG88-13022. AU WHITNEY, GRETCHEN. IN The University of Michigan Ph.D. 1988, 376 pages. TI THE LANGUAGE DISTRIBUTION OF BIBLIOGRAPHIC RECORDS IN SELECTED ONLINE DATABASES. DE Library Science. Information Science. AB This study explores the language distribution of materials included in on-line bibliographic databases between 1970 and 1984. Eight databases (BIOSIS, Chemical Abstracts, GeoRef, MEDLINE, Criminal Justice, Oceanic Abstracts, PAIS, PsycInfo) on DIALOG were chosen for their world-wide coverage of literature in their respective fields. Trends are accounted for by examining database provider policies and practices.
The data are compared with book and serial production statistics, to begin to assess the possible relationship between the databases and the actual availability of literature. The results describe the degree to which English has increased, decreased, or remained stable in relation to other languages, as reflected in the availability of bibliographic records in these databases. ]

***************************************************************
Continued in Volume VI Number 7, Issue 7
***************************************************************

IRLIST Digest is distributed from the University of California, Division of Library Automation, 300 Lakeside Drive, Oakland, CA. 94612-3550.

Send subscription requests to: LISTSERV@UCCVMA.BITNET
Send submissions to IRLIST to: IR-L@UCCVMA.BITNET

Editorial Staff:
  Clifford Lynch   lynch@postgres.berkeley.edu   calur@uccmvsa.bitnet
  Mary Engle       engle@cmsa.berkeley.edu       meeur@uccmvsa.bitnet
  Nancy Gusack     ncgur@uccmvsa.bitnet

The IRLIST Archives will be set up for anonymous FTP, and the address will be announced in future issues. These files are not to be sold or used for commercial purposes. Contact Mary Engle or Nancy Gusack for more information on IRLIST.

The opinions expressed in IRLIST do not represent those of the editors or the University of California. Authors assume full responsibility for the contents of their submissions to IRLIST.