Information Retrieval List Digest 467 (August 23, 1999)
URL = http://hegel.lib.ncsu.edu/stacks/serials/irld/irld-467.txt

IRLIST Digest
ISSN 1064-6965
August 23, 1999
Volume XVI, Number 31
Issue 467

******************************************************************

I. QUERIES
1. Web Usability Study
2. Management of Plug-Ins for MultiMedia E-Journals
3. Reply to I.2. Management of Plug-Ins for MultiMedia E-Journals
4. Cataloging of Multimedia E-Journals
5. Reply to I.4. Cataloging of Multimedia E-Journals
6. Reply to I.5. Cataloging of Multimedia E-Journals
7. Reply to I.4. Cataloging of Multimedia E-Journals
8. Reply to I.7. Cataloging of Multimedia E-Journals
9. Reply to I.8. Cataloging of Multimedia E-Journals
10. Reply to Cataloging of Multimedia E-Journals

II. JOBS
1. Rutgers U.: Faculty Position: Information Science
2. Thomas Jefferson U.: Scott Memorial Library: Collection Management Librarian

III. NOTICES
A. Publications
1. Journal Special Issue on Information Agents: CFPapers
2. Special Issue of Instance Selection for DMKD Journal: Last CFPapers
3. JASIS Table of Contents. Vol 50, # 12
4. Electronic Journals: A Selected Resource Guide (updated)
B. Meetings
1. ANLP/NAACL 2000: CFPapers
2. CoopIS'99: Programme
C. Miscellaneous
1. Oregon Health Sciences U.: Distance Learning for Medical Informatics

******************************************************************

I. QUERIES

I.1. Fr: Ruth Wilson
Re: Web Usability Study

I am a postgraduate student in the Department of Information Science, University of Strathclyde, writing a dissertation on the usability of scientific textbooks on the Web. I am seeking participants for a study which involves visiting a Web site, performing some tasks and answering several questions, and which should not take more than 15 minutes to complete. If you would like to participate, all the relevant details can be found at the following URL:

http://www.dis.strath.ac.uk/students/ruth/formb.html

Thank you for your time,
Ruth Wilson

**********

I.2. Fr: Gerry McKiernan
Re: Management of Plug-Ins for MultiMedia E-Journals

_Management of Plug-Ins_

As part of my overview/review of the Ramifications of Multimedia in Electronic Journals, I am greatly interested in learning how academic and research libraries/institutions manage the installation of the plug-ins / helper applications that support the multimedia in these journals. I am interested in the management of plug-ins / helper applications for any and all types of multimedia found in these e-journals, notably audio, video, animation, and VRML, as well as in the management of plug-ins in general.

For examples, see my M-Bed(sm) registry of embedded multimedia electronic journals at:

http://www.public.iastate.edu/~CYBERSTACKS/M-Bed.htm

Are the plug-ins installed and/or maintained from a central server, or are they installed machine-by-machine? Or in some other way? In general, what are the issues faced in installing and maintaining such plug-ins? The associated costs? Policy issues?

As Always, Any and All contributions, comments, questions, or critiques are Most Welcome!

Regards,

/Gerry McKiernan
Theoretical Librarian
Iowa State University
Ames IA 50011
gerrymck@iastate.edu

**********

I.3. Fr: Thea Bergere
Re: Reply to I.2. Management of Plug-Ins for MultiMedia E-Journals

Dear Gerry,

There are many kinds of files that web browsers cannot display, such as animation, sound, and video - for these you need helper applications and plug-ins.
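For instance, an article page might embed a sound or movie file with markup along the following lines (an illustrative fragment only; the file names, dimensions, and media types are hypothetical):

  <A HREF="figure3-animation.mov">Figure 3 (QuickTime animation)</A>

  <EMBED SRC="experiment-audio.wav" WIDTH="144" HEIGHT="60">
  <NOEMBED>
    This browser cannot play the sound inline; download
    <A HREF="experiment-audio.wav">experiment-audio.wav</A> instead.
  </NOEMBED>

When the browser cannot handle the referenced media type itself, it hands the file to whatever plug-in or helper application has been registered for that type.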
You must configure your browser to launch these helper applications and plug-ins whenever you click on an object that needs them in order to be viewed, such as a sound or animation file that the browser can't run or play.

A web browser displays information on your computer by interpreting the HTML that is used to build the home pages on the Web. Home pages usually display graphics, sound, multimedia files, links, files that can be downloaded, and other internet resources. The coding in the HTML files tells your browser how to display the text, graphics, links, multimedia files, etc. on the home page. The HTML that your browser loads to display the home page doesn't actually have the graphics, sound, multimedia files and other resources on it. Instead, it contains HTML references to those graphics and files. Your browser uses those references to find the files on the server and then display them on the home page. The web browser also interprets HTML tags as links to other Web sites or resources, such as graphics, multimedia files, newsgroups, or files to download. Depending on the link, it will perform different functions.

Good luck with the project.

Sincerely,
Thea Bergere

**********

I.4. Fr: Gerry McKiernan
Re: Cataloging of Multimedia E-Journals

_Cataloging of Multimedia E-Journals_

I recently searched the OCLC database for bibliographic records for each of the titles in my registry of multimedia electronic journals, M-Bed(sm). M-Bed(sm) is available at:

http://www.public.iastate.edu/~CYBERSTACKS/M-Bed.htm

Of the 41 titles currently listed, 34 had records. Of these, only five (5) -- less than 15% of those with records -- had a mention of the availability of a multimedia component in the record!! To say the least, I was surprised and quite perplexed that a major component of such journals has been completely ignored in a vast majority of cases. I am further perplexed that, while a number of records make note of the need for the Acrobat plug-in, there is no mention of the other plug-ins required for using the associated multimedia.

From this brief survey, I've concluded that catalogers in general are not aware of the multimedia dimensions of such journals and that journals such as these would be difficult to identify due to the lack of standard and uniform description of the multimedia. This raises several issues, namely the current status of standard terminology within the cataloging community for such multimedia, the appropriate location within the catalog record (the five records which mentioned multimedia did so in the 516, 520, or 538 MARC fields), and the appropriate General Material Designation (GMD) for such 'publications'. In this case, all had 'Computer file' as the GMD. Would it be more appropriate to use 'Interactive Media' as the GMD for multimedia e-journals? [This approach would be an extension of the _Guidelines for Bibliographic Description of Interactive Multimedia_ published by the American Library Association in 1994 and authored by the Interactive Multimedia Guidelines Review Task Force chaired by Laurel Jizba, now of Portland State University]

I'd very much appreciate my colleagues' thoughts and reactions to my observations and conclusions regarding the cataloging of multimedia e-journals (or any other issues that this posting may inspire regarding multimedia e-journals).

As Always, Any and All comments, contributions, questions, queries, critiques, etc. etc. are Most Welcome!
/Gerry McKiernan
Theoretical Librarian
Iowa State University
Ames IA 50011
gerrymck@iastate.edu

**********

I.5. Fr: Bernhard Eversberg
Re: Reply to I.4. Cataloging of Multimedia E-Journals

First of all, one has to *become* aware of the new dimensions an e-journal has taken on. Nobody can go to every e-journal site every day and monitor the changes that occur and the new services implemented overnight. (As well as the changes in URLs, which goes without saying - a big enough headache.)

You are all invited to look at the e-journal listings we have set up and revise at least once a week, at http://www.biblio.tu-bs.de/CoOL

You get a table of some 45 subject areas, like "PY Physik" or "ME Medizin" (headlines in German). On each of the subject pages, there's a link to "E-Zeitschriften". On each of those pages, you can search for "multimedia". All titles on Gerry's list are included; in fact, we took the trouble to make sure we have them all.

Further, we have a database, free for downloading, of some 15,000 titles, not all with URLs but containing all titles in the ISI and OCLC FirstSearch services as well. This is at http://www.alcarta.com/elcarta.htm

Among other downloadables, you find the EJO database. It requires Windows 95, 98 or NT.

Good luck,
B.E.

Bernhard Eversberg
Universitaetsbibliothek, Postf. 3329, D-38023 Braunschweig, Germany
Tel. +49 531 391-5026, -5011, FAX -5836
e-mail B.Eversberg@tu-bs.de

**********

I.6. Fr: Gerry McKiernan
Re: Reply to I.5. Cataloging of Multimedia E-Journals

Hi Bernhard,

Yes! My hope is that more and more of my colleagues will become aware of the increasing use of multimedia in general and its growing use in e-journals. My observations here relate to the original cataloging of e-journals that are established with a multimedia component from the first issue. [With changes, one would hope that the various link checkers might be of some value here (but perhaps not) [?]].

I am *so* pleased to know that you have informed others of this resource!

> You get a table of some 45 subject areas, like "PY Physik" or "ME Medizin"
> (headlines in German). On each of the subject pages, there's a link to
> "E-Zeitschriften". On each of those pages, you can search for "multimedia".

This is a *very* nice feature! Thanks for your interest in my work! [:-)]

Thanks again for your response!

Regards,

/Gerry McKiernan
Science and Technology Librarian and Bibliographer
Iowa State University Library
Ames IA 50011
gerrymck@iastate.edu

**********

I.7. Fr: Mark Leggott
Re: Reply to I.4. Cataloging of Multimedia E-Journals

I'm not sure noting the exact nature of the content is wise. Since any given journal/author may choose to incorporate a wide range of multimedia types (QTVR, MOV, Flash, etc.) in an individual article or an "issue" of articles, you may be trying to pinpoint a moving target. You may have 2 different media types in one issue, 7 in the next, and 1 after that. How do you deal with that in a note? I suspect many mention Acrobat since the articles themselves are stored in this format, and any additional multimedia components would be incorporated into this "metafile".

How would you describe Java-based multimedia components? Do you need to? What about journals that use the SMIL standard and are able to provide more intelligent media handling? Maybe a statement referring to the existence of multimedia content would suffice? Most current browsers are able to recognize standard media types and point to the appropriate plugin for downloading at viewing time.
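For example, a browser's media-type mappings typically look something like the following (illustrative pairings only, not a fixed standard):

  video/quicktime                 ->  QuickTime plug-in
  application/x-shockwave-flash   ->  Shockwave Flash player
  model/vrml (or x-world/x-vrml)  ->  a VRML viewer such as Cosmo Player
  audio/x-pn-realaudio            ->  RealPlayer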
I suspect many workstations used for creating these records would be incapable of viewing/testing the multimedia content even if they knew it was there. I also suspect that many cataloguers (and librarians in general) are unaware of some of these advanced media extensions because of the general lack of appropriate hardware and software and time to play...

I think the change in GMD would be a useful approach at this point, as it deals with the need for some identification, but avoids the pitfalls of a higher level of specificity as per the above comments. I think it also points out the shortcomings of the MARC record as a descriptor for digital resources, and the need for movement on integrating emerging metadata standards into library systems (and building new library systems). The ultimate goal should be for the bib record to describe the "container" (e.g., the journal) and its location(s), the "data" (e.g., the journal article) to describe itself, and the "viewer" (e.g., a web-based OPAC interface) to use this to render the display. That way the bib record does not have to be everything to everyone. In order to do this effectively we need to step out of the MARC black box we have sealed ourselves in and jump into the Web/Metadata sandbox, or we will fail to deliver these new resources to our users.

Mark Leggott - University Librarian
University of Winnipeg
m.leggott@uwinnipeg.ca
204-786-9801 783-8910 (FAX)

**********

I.8. Fr: Gerry McKiernan
Re: Reply to I.7. Cataloging of Multimedia E-Journals

Hi Mark,

You raise an interesting point here. However, from my superficial review of the _M-Bed(sm)_ multimedia journals, it seems that at present individual journals limit the multimedia that can be used. You raise a host of related issues and possible solutions. Would a link from the bibliographic record to the page of a given e-journal that describes its multimedia types and required plug-ins be possible? Would it be unreasonable, impractical, or unrealistic to lobby publishers to create such a standard page? It seems to me that it would help to promote a fuller use of their publication.

Acrobat is *one* of two (or three) options in many cases in these journals. HTML and HTML with augmented multimedia are typically the others, although PostScript seems to be common as well. My personal conclusion about the choice of noting Acrobat is that we still have a 'print/paper' mind set and unconsciously see only the paper analog version [Is this an unreasonable conclusion?]

*The* basic issue: What is to be described and how should we describe it? It also raises the issue of the multiple roles that a bibliographic/surrogate record has taken on, e.g., information about preservation. Some of the bib records for the multimedia e-journals note, for example, that the journal exists in HTML format and that it is available via the World Wide Web. Is this necessary? Is it necessary to note the Java requirement? I would say 'Yes' to both questions.

I just became aware of SMIL ('smile') two weeks ago and am glad you mention it. To me, SMIL -- the Synchronized Multimedia Integration Language -- holds great potential for multimedia presentation [Readers may wish to review the W3C site for details about SMIL at http://www.w3.org/AudioVideo/]

> Maybe a statement referring to the existence of multimedia content
> would suffice?

A statement referring to the existence of multimedia content is *the* question! *Is* it sufficient? Here again, it depends on what role the record should play.
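To make the options concrete, a record for a multimedia e-journal might carry notes along the following lines (a purely illustrative sketch; the title, URL, and wording are hypothetical and not a prescribed practice):

  245 00 $a Journal of Hypothetical Multimedia Research $h [computer file]
  516 ## $a Electronic journal; articles in HTML and PDF with embedded audio, video, and VRML models.
  538 ## $a Mode of access: World Wide Web. System requirements: Web browser with Acrobat Reader, QuickTime, and VRML plug-ins.
  856 40 $u http://www.example.org/jhmr/

Whether such notes should name each plug-in or simply flag the presence of multimedia is exactly the question at issue.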
Does / Can / Should a library allow its users to download and install any and all plug-ins? Or should the library have a policy? The IUPUI University Library Policy on the Deployment of Internet Plug-ins for Library Scholar's Workstations could serve as a model for other libraries.

Yes, indeed, if the cataloger's workstation is not fully configured, they will be unable to view or test the multimedia content. This is a significant issue that will certainly need to be addressed. As librarians, I believe we have the responsibility to be aware of these developments and to provide leadership and opportunities for our users. This *is* the Information Age, isn't it? And we are Information Specialists, aren't we? Why should we limit our professional knowledge to text-based phenomena?

The actual recommendation is the GMD 'interactive multimedia' [Please excuse the typo]. Your point about the shortcomings of MARC records and the need for integrating metadata standards does raise other issues, e.g., how do we integrate two different systems in one OPAC?

I like your last point here and its implications: let the hardware/system handle the multimedia seamlessly. Another point of view would be that the bib record becomes more than just a describer and takes on other roles, e.g., a link to the full text [e.g., the 856 field]. We do need to re-think MARC in light of the multimedia issues and the like.

Thanks again for your interest, your thoughtful observations and remarks, and your time.

Regards,

/Gerry McKiernan
Science and Technology Librarian and Bibliographer
Iowa State University Library
Ames IA 50011
gerrymck@iastate.edu

**********

I.9. Fr: Tony Barry
Re: Reply to I.8. Cataloging of Multimedia E-Journals

Commercial print journals with electronic equivalents, certainly. I'm not so sure that this applies to electronic-only "journals". If you can link to the page for the journal, where any responsible web site will indicate what is needed, why clutter the bibliographical record? The needed information will normally only be one or two clicks away.

The entire idea of a "journal" with "issues" is a print mind set. If the journal is multimedia it _can't_ have a print equivalent. It seems to me that the bibliographic/surrogate record was in the catalogue in such detail so that a decision could be made as to whether it was worthwhile to go and find the item on the shelves. If the item is a click away and then on the screen there is no need for a detailed surrogate (if you don't have to pay for access). If you do have to pay then you will want more information. I am convinced, however, that the future for "journals" which charge end users and libraries for access is bleak. I'm with Harnad on this one. Authors and their organisations or professional societies are more likely to carry the funding in an electronic world.

You need a pretty old browser to find one that does not support Java. I often turn Java off, as few sites providing useful information require it, and I find helpful messages as to the functionality I am missing and sometimes a link to where I can update. If you are going to say anything about Java being needed in the record, you need to indicate which version and which implementation, e.g., an "enhanced" Microsoft version which breaks Netscape. You would also need to put in requirements for JavaScript, JScript, etc. Why bother to add this to the record when a link to the site provides the information?
And XML, MathML, ChemML, with and without RDF, which may or may not encapsulate Dublin Core, which may specify AACR2 for names or something else, MeSH or LC for subjects or something else, etc.

My view is that records need hold far less than they did before, as they can link to where the information can be obtained. For instance, do you need a full bibliographical record if one click gives you the actual document? We can expect users to have their own equipment and not fiddle with the library's. On the other hand, the library should install any helper or plug-in which might be needed.

I increasingly wonder if it isn't the complexity of the MARC record that is holding us back. That's why the browser was invented: to pull together the protocols (ftp, gopher, http and telnet; wais and z39.50 are there too but have never got off the ground, other than wais as a helper application) and the formats, through MIME tagging. Once the 856 link is there, you potentially have not just the full text but all the details that might be in the bib record.

It's not just MARC, it's also the catalogue. As more and more material is on the "shelves" of the internet, should we catalogue it at considerable expense, or find it through another form of database and consider the bibliographical entries in the catalogue as just legacy data which we access like any other database on the internet?

Tony
phone +61 2 6241 7659
mailto:me@Tony-Barry.emu.id.au
http://purl.oclc.org/NET/Tony.Barry

**********

I.10. Fr: Gerry McKiernan
Re: Reply to Cataloging of Multimedia E-Journals

This is a response to my recent posting on the "Cataloging of Multimedia E-Journals". It raises a number of related issues that were included in my post and reports on a significant study that I believe will be of interest to other lists and their members. The response below has been re-posted with permission from Deborah Woodyard, PADI / Digital Preservation, National Library of Australia.

/Gerry McKiernan
Theoretical Librarian
Iowa State University
Ames IA 50011
gerrymck@iastate.edu

Gerry and list members,

We conducted a similar survey in 1996, but from a different angle: we didn't have the titles we wanted information on, but wanted to find material in our collection that contained computer disk components (see 5.2.2 in "Physical format electronic publications in the National Library of Australia: report on a preservation survey", http://www.nla.gov.au./nla/staffpaper/cwebb6.html).

Your reaction to your survey results sounds very familiar to me. I was surprised at the difficulty we had obtaining detailed information from the catalogue records about the electronic components. The collation field in the ILMS record for 400 items was checked for the size and number of disks included in a publication - basic information required for preservation management. Only 238 gave complete details. And this did not include checking the system requirements recorded. This information was not required under existing cataloguing guidelines for disks accompanying print materials, but a few local practices have now been modified and the results would now be improved. This has highlighted the gap that may exist between information needed for current bibliographic access and that needed for long-term management, raising questions about how and where the latter should be recorded. I am pleased to see the cataloguing rules are being updated gradually.
See: Task Force on the Harmonization of ISBD(ER) and AACR2, Final Report (Penultimate Draft): Executive Summary, revised 14 June 1999: http://www.library.yale.edu/cataloging/aacrer/tf-harm21.htm. More current Internet cataloguing guidelines are available, linked from the PADI web site at: http://www.nla.gov.au/padi/internet.html#cat

Please excuse my possible ignorance of matters obvious to librarians, but Gerry's message inspired me to share my experience.

Deborah

Deborah Woodyard
PADI / Digital Preservation
National Library of Australia
Canberra ACT 2600 AUSTRALIA
mailto:dwoodyar@nla.gov.au
ph: +61 2 6262 1366
PADI: http://www.nla.gov.au/padi/

******************************************************************

II. JOBS

II.1. Fr: Nicholas J. Belkin
Re: Rutgers U.: Faculty Position: Information Science

RUTGERS UNIVERSITY
Department of Library and Information Science
ASSISTANT PROFESSOR (TENURE TRACK)
Information Science

The Department of Library and Information Science seeks applications for a tenure track position to begin Fall 2000. Applicants will be expected to have teaching and research expertise in one or more of the following general areas:

* information retrieval
* digital libraries
* human-computer interaction and interfaces in information systems
* information behavior in distributed electronic environments

The Department of Library and Information Science seeks a dynamic scholar to build upon its internationally recognized program of research and teaching in information science, one of three areas targeted for strategic expansion at Rutgers. The Rutgers commitment has already made possible the establishment of the Rutgers Distributed Laboratory for Digital Libraries (RDLDL), which supports interdisciplinary research and Ph.D.-level study in the basic sciences and technologies relevant to digital libraries.

Qualifications for this position include a doctorate in library and information science, or in computer science or another area related to this position description, with a strong interest in and knowledge of information science and information science education. Applicants are expected to have evidence of experience in teaching and research; knowledge of state-of-the-art technologies for information access, retrieval, visualization, organization and dissemination; a commitment to the development of technological systems in support of human needs; and the desire and ability to work within an interdisciplinary environment. Applicants should be prepared to participate in and direct substantial research projects in areas of relevance to this position.

The School of Communication, Information and Library Studies is a multi-disciplinary professional school comprising three departments: Communication, Journalism/Mass Media, and Library and Information Science. Faculty in the LIS Department participate in the Masters Program in Library Service (MLS), the Masters Program in Communication and Information Studies (MCIS), and the school-wide interdisciplinary Ph.D. Program in Communication, Information and Library Studies. Undergraduate offerings are in the planning stage. The successful applicant will have the opportunity to participate in all of these programs. For further information about the Department's and School's research and teaching programs, see our web site at http://www.scils.rutgers.edu.

Applications, consisting of a cover letter, curriculum vitae, and names and contact addresses of three references, should be sent to: Nicholas J.
Belkin Chair, Information Science Search Department of Library and Information Science School of Communication, Information & Library Studies Rutgers University 4 Huntington Street New Brunswick NJ 08901-1071 USA or by email at nick@belkin.rutgers.edu (as ASCII, or attached pdf or Word rtf file). Applications will be accepted up to 15 January 2000, or until the position is filled. Rutgers, the State University of New Jersey is an affirmative action, equal opportunity employer with a strong commitment to the diversity of faculty, staff, and student body. ********** II.2. Fr: Diana Zinnato Re: Thomas Jefferson U.: Scott Memorial Library: Collection Management Librarian COLLECTION MANAGEMENT LIBRARIAN (search re-opened) The Scott Memorial Library at Thomas Jefferson University seeks energetic and creative applicants for the position of Collection Management Librarian. Working in a highly automated environment under the direction of the Director of Collection Management, this individual will play a leadership role in all aspects of serials and interlibrary loan management. The incumbent will manage all serials-related work for a collection of 2,200 titles including ordering and receiving using the SIRSI serials control module, binding and in-house preservation activities and the supervision of 2 technicians in this area. The CML will lead the Library's initiative to greatly increase its number of electronic journal subscriptions next year. Additionally, they will oversee collection evaluation, space monitoring, journal holdings reporting and inventories. The CML is also responsible for the coordination of all interlibrary loan (ILL) activities supervising 3 technicians in this area. Responsibilities include evaluating ILL policies and procedures to effect efficient and cost-effective operations and overseeing the implementation of various ILL technologies including OCLC, DOCLINE, and Ariel. Some reference desk service is also required. Requirements for this position are an accredited MLS, a minimum of 3 years professional experience in serials work, preferably in an academic library, supervisory experience, a working knowledge of an ILS serials module, the use of OCLC or other automated systems for ILL. Salary minimum is $34,000. Knowledge of the current trends in serials control, especially electronic journal publishing is highly desirable. The successful candidate must thrive in a dynamic environment, have effective written and oral communication and problem-solving skills and be able to collaborate on team projects. Thomas Jefferson University is an academic health center located in center-city Philadelphia. Scott Memorial Library is a department of Academic Information Services and Research (AISR) which is comprised of the Library, Medical Media Services and the Office of Academic Computing. AISR has a staff of 64 FTE employees and an annual operating budget of $4 million. You are invited to visit JEFFLINE, AISR's information management resource at http://jeffline.tju.edu/ for more information. The University offers excellent flexible benefits including tuition reimbursement. Qualified candidates should send a letter of application, their resume, and the names and phone numbers of three references to: Doug Block, Business Manager, Scott Memorial Library, Thomas Jefferson University, 1020 Walnut St., Philadelphia, PA 19107-5587. Screening of applications will begin on September 7, 1999. ****************************************************************** III. NOTICES III.A.1. 
Fr: Matthias Klusch
Re: Journal Special Issue on Information Agents: CFPapers

CALL FOR PAPERS

Special Issue of the International Journal on Cooperative Information Systems
INTELLIGENT INFORMATION AGENTS: THEORY AND APPLICATIONS

Guest Editor: Matthias Klusch
Deduction and Multi-Agent Systems Lab
German Research Center for Artificial Intelligence Ltd., Germany
http://www.dfki.de/~klusch/JCISspecial.html

IMPORTANT DATES
- Submission of Manuscripts: NOVEMBER 25, 1999
- Notification of Acceptance: MARCH 30, 2000
- Publication of Special Issue due: End of the year 2000

SCOPE & TOPICS

This special issue of the International Journal on Cooperative Information Systems is devoted to advances in the theory and applications of intelligent information agents. Roughly speaking, an information agent is a computational software entity that has access to one or multiple, heterogeneous and geographically distributed information sources; it pro-actively searches for and maintains relevant information on behalf of users or other agents, preferably in a just-in-time fashion. Such an agent is supposed to satisfy one or more of the following requirements:

* Information acquisition and management, i.e., it may monitor, update, and provide transparent access to one or many different information sources, and retrieve, extract, analyze and filter data (including semi-structured or even unstructured data).

* Information synthesis and presentation, that is, it is able to integrate heterogeneous data and to provide unified (and multi-dimensional) views on data.

* Intelligent user assistance, that is, being able, for example, to adapt dynamically to user preferences and to any kind of change in the information and network environment. It may provide convenient, individual, interactive assistance for everyday business on the Internet, for example as a life-like character, recommend sources and future work steps, etc. In other words, the agent helps to manage and overcome the difficulties associated with information overload.

In part, there are many approaches and implemented solutions available from advanced database, knowledge-base and distributed information systems technology to meet some of these demands. Effective and efficient access to information on the Internet and Web has become a critical research area. Information agent technology emerged as part of the more general intelligent software agent technology around seven years ago, mainly as a response to the increasing challenges of cyberspace from both the technological and the human user perspective. It is an inherently interdisciplinary technology encompassing approaches, methods and tools from different research disciplines such as Artificial Intelligence (AI), Advanced Database and Knowledge Base Systems, Distributed Information Systems, Information Retrieval, Cognitive Sciences and Human Computer Interaction (HCI). Today, it can be seen as one of the key technologies for the current and future Internet and World Wide Web.

Topics include, but are not limited to:

* Architectures of (Systems of) Information Agents
General and specific architectures of information agents in different settings and environments. Approaches for communication and collaboration between (systems of) information agents. Service matchmaking and brokering. Inter-Agent Communication languages.

* Advanced Database and Knowledge-Base Technology
Interoperability in large-scale and uncertain information environments.
Application of Techniques for Data Mining and Knowledge Discovery in open, distributed and dynamically changing environments. * Methods of Adaptation and Learning for Systems of Information Agents Methods for automated uncertain reasoning for information agents. Computation and action under uncertainty and limited resources. Performance and measurement of adaptation of single agent or multiagent systems in uncertain information environments. * Mobility and Issues of Security in the Internet Architectures, Environments and Languages for Mobile and Secure Information Agents and Servers. Secure agent execution and protection of data servers from malicious agents. Cooperating Information Agents in wearable computers, hand-held and/or satellite-based control devices. * Rational Information Agents and Electronic Commerce Agent-Based Marketplaces, Coalition Formation, Auctions, Negotiations. Economic models of cooperative problem solving among rational information agents in open information environments. Methods for prevention and detection of lying rational information agents. Electronic Commerce with incomplete and uncertain informations. Standards for privacy of communication, security, and jurisdiction for agent-mediated deals. * Human-Agent Interaction Synthetic Agents, believable avatars, and 3-D multimedia-based representation of user information spaces in the Internet. Models and Implementation of Advanced Interfaces for conversation and dialogue among Information Agents and Users. * Systems and Applications Systems and Applications of multiple collaborating Information Agents on the Internet. PREPARATION OF MANUSCRIPT The length of the contribution should not exceed 22 pages. For guidelines on manuscript preparation see the Web site of the International Journal on Cooperative Information Systems at: http://www.wspc.com.sg/journals/ijcis/ijcis.html SUBMISSION Manuscripts are to be submitted by (electronic) mail to the Guest Editor (see below). Authors may suggest the appropriate persons to review/referee their paper, however, the Editor need not necessarily take up the suggestion. Authors may request that their identity be kept unknown to the referee. Camera-ready manuscripts are to be prepared according to the instructions provided, preferably using LATEX or TEX. Please submit your manuscript by E-Mail (printable POSTSCRIPT - A4 format- AND the original text file) to klusch@dfki.de XOR Mail (5 Hard Copies) to Matthias Klusch DFKI GmbH Stuhlsatzenhausweg 3 66123 Saarbruecken, Germany. Dr. Matthias Klusch DFKI German AI Research Center Ltd. Stuhlsatzenhausweg 3 66123 Saarbruecken, Germany Phone: +49-681-302 5297 Fax: +49-681-302 2235 http://www.dfki.de/~klusch/ ********** III.A.2. Fr: Hiroshi Motoda Re: Special Issue of Instance Selection for DMKD Journal: Last CFPapers Last Call for Papers - INSTANCE SELECTION A Special Issue of the Data Mining and Knowledge Discovery Journal http://www.comp.nus.edu.sg/~liuh/dmkd.html Due date: 18 Sept 1999, electronic submission INTRODUCTION Knowledge discovery and data mining (KDD) is growing rapidly as computer technologies advance. However, no matter how powerful computers are now or will be in the future, KDD researchers and practitioners must consider how to manage ever-growing data which is, ironically, due to the extensive use of computers and ease of data collection with computers. Many different approaches have been used to address the data explosion issue. Algorithm scale-up is one and data reduction is another. 
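To make the idea concrete, the simplest form of such data reduction is plain random sampling of instances. A minimal sketch in Python follows (an illustration only; the data, sizes, and function name are hypothetical, and the instance selection methods described below replace the random choice with a guided search):

  import random

  def select_instances(instances, sample_size, seed=None):
      # Return a random subset of the instances. If the requested sample
      # is at least as large as the data set, return a copy of the data.
      rng = random.Random(seed)
      if sample_size >= len(instances):
          return list(instances)
      return rng.sample(instances, sample_size)

  # Hypothetical data: 100,000 (feature, label) pairs reduced to 1,000.
  full_data = [(x, x % 3) for x in range(100000)]
  reduced = select_instances(full_data, 1000, seed=42)
  print(len(reduced), "of", len(full_data), "instances retained")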
Instance, example, or tuple selection is about algorithms that select or search for a representative portion of data that can fulfill a KDD task as if the whole data is used. Instance selection is directly related to data reduction and becomes increasingly important in many KDD applications due to the need for processing efficiency and/or storage efficiency. One of the major means of instance selection is sampling whereby a sample is selected for testing and analysis, and randomness is a key element in the process. Instance selection also covers other methods that require search. Examples can be found in density estimation - finding the representative instances (data points) for each cluster, and boundary hunting - finding the critical instances to form boundaries to differentiate data points of different classes. Other important issues related to instance selection extend to unwanted precision, focusing, concept drifts, noise/outlier removal, data smoothing, etc. OBJECTIVES This special issue on instance selection brings researchers and practitioners together to report new developments and applications, share hard-learned experiences to avoid similar pitfalls, and shed light on the future development of instance selection. Several critical questions are interesting to practitioners in KDD, and practically useful in real-life applications: * What are the existing methods? * Are they the same or just different names coined by researchers in different fields? * Are they application dependent or stand-alone? * Are new methods needed? * If there is no generic selection algorithm, are these algorithms specific to tasks such as classification, clustering, association, parallelization? * Are there common and reusable components in instance selection methods? * How can we reconfigure some components of instance selection for a particular task/application? * What are the new challenging issues of instance selection in the context of KDD? Sensible answers to these questions can greatly advance the field of KDD in handling large databases. This special issue hopes to answer these questions and to provide an easy reference point for further research and development. COVERAGE All aspects of instance selection will be considered: theories, methodologies, algorithms, and applications. Also studied are issues such as costs of selection, the gains and losses due to the selection, how to balance the gains and losses, and when to use what. Researchers and practitioners in KDD-related fields (Statistics, Databases, Machine Learning, etc.) are encouraged to submit their work to this special issue to share and exchange ideas and problems in any forms: survey, research manuscript, experimental comparison, theoretical study, or report on applications. IMPORTANT DATES 18 September, 1999 - Submissions due 15 November, 1999 - Reviews due (mainly peer review and the guest editors will review all the submissions) 22 Janurary, 2000 - Revised papers due 13 February, 2000 - To Editor-in-Chief FORMAT and PAGE LIMIT Each submission should be no more than 25 pages, have a line spacing of 1.5, use no smaller than a 12pt font, and have at least a 1 inch margin on each side. CONTACT INFORMATION Please direct any enquiries to the guest editors: Huan Liu, liuh@comp.nus.edu.sg, National University of Singapore Hiroshi Motoda, motoda@sanken.osaka-u.ac.jp, Osaka University, Japan. Please submit your work electronically (postscript file) to either guest editor. 
If you have to submit it in hard copy, please discuss it with the guest editors first. INFORMATION about the JOURNAL Data Mining and Knowledge Discovery, Kluwer Academic Publishers. http://www.wkap.nl/journalhome.htm/1384-5810 Editors-in-Chief: Usama Fayyad, Gregory Piatetsky-Shapiro, Heikki Mannila. ********** III.A.3. Fr: Richard Hill Re: JASIS Table of Contents. Vol 50, # 12 Journal of the American Society for Information Science JASIS VOLUME 50, NUMBER 12 [Note: below are URLs for viewing contents of JASIS from past issues.] Special Topic Issue: the 50th Anniversary of the Journal of the American Society for Information Science Part 2: Paradigms, Models, and Methods of Information Science Guest Editor: M. J. Bates CONTENTS The Invisible Substrate of Information Science Marcia J. Bates 1043 Information Science Tefko Saracevic 1051 Industrial Roots of Information Science Donald A. Windsor 1064 Historial Note: The Start of a Stop List at Biological Abstracts Barbara J. Flood 1066 Interaction in Information Retrieval: Trends Over Time Pamela A. Savage-Knepshield and Nicholas J. Belkin 1067 Museum Informatics and Collaborative Technologies: The Emerging Socio-Technological Dimension of Information Science in Museum Environments Paul F. Marty 1083 Mapping the Dimensions of a Dynamic Field Caroline Haythornthwaite, Geoffrey Bowker, Christine Jenkins, and W. Boyd Rayward 1092 Information Science and Information Systems: Conjunct Subjects Disjunct Disciplines David Ellis, David Allen, and Tom Wilson 1095 Comparing Information Access Approaches Matthew Chalmers 1108 The Rise of Ontologies or the Reinvention of Classification Dagobert Soergel 1119 From Retrieval to Communication: The Development, Use, and Consequences of Digital Documentary Systems Rob Kling and Holly Crawford 1121 More Research Needed: Informal Information-Seeking Behavior of Youth on the Internet Eliza T. Dresang 1123 An Information View of History Julian Warner 1125 The Control and Direction of Professional Education Bill Crowley 1127 Informing Information Science: The Case for Activity Theory Mark A. Spasser 1136 Aligning Studies of Information Seeking and Use with Domain Analysis Carole L. Palmer 1139 The Growth of Understanding in Information Science: Towards a Developmental Model Nigel Ford 1141 Information Science in 2010: A Loughborough University View Ron Summers, Charles Oppenheim, Jack Meadows, Cliff McKnight, and Margaret Kinnell 1153 The ASIS home page contains the Table of Contents and brief abstracts as above from January 1993 (Volume 44) to date. The John Wiley Interscience site includes issues from 1986 (Volume 37) to date. Guests have access only to tables of contents and abstracts. Registered users of the interscience site have access to the full text of these issues. We are still working on restoring access for ASIS members as "registered users." Richard Hill American Society for Information Science 8720 Georgia Avenue, Suite 501 Silver Spring, MD 20910 (301) 495-0900 FAX: (301) 495-0810 http://www.asis.org ********** III.A.4. Fr: Kathy Klemperer Re: Electronic Journals: A Selected Resource Guide (updated) "Electronic Journals; A Selected Resource Guide" has just been revised and updated on the HARRASSOWITZ website. This guide is an overview and summary of issues relating to electronic journals, covering such topics as the history of e-journals, technical standards, legal and business issues, scholarly publishing issues, preservation, and archiving. 
Each section of the Guide consists of an introductory discussion and a selected, annotated bibliography of resources, most of which are available on the WWW. In addition, there are pointers for maintaining current awareness in this area. The revised version includes 28 new sites and articles, and a new section called "The Cutting Edge." HARRASSOWITZ is committed to maintaining this resource guide, and relevant new resources and discussion topics will be added regularly. The resource guide can be viewed at: http://www.harrassowitz.de/ms/ejresguide.html Katharina Klemperer Library and Information Systems Consulting 37 Minuteman Rd, Acton MA 01720 klempjo@tiac.net http://www.tiac.net/users/klempjo 978-266-1776 fax: 978-266-2977 (*call ahead*) ********** III.B.1. Fr: Priscilla Rasmussen Re: ANLP/NAACL 2000: CFPapers Language Technology Joint Conference Applied Natural Language Processing and the North American Chapter of the Association for Computational Linguistics General Conference Chair: Marie Meteer, BBN Technologies CALL FOR PAPERS Contents: 1. Overview 2. ANLP Call for Papers 3. NAACL Call for Papers 4. Format for Submissions 5. Deadlines 1. Overview The Association for Computational Linguistics (ACL) is pleased to announce that the 2000 Applied Natural Language Processing (ANLP) conference and the first conference of the new North American Chapter of the ACL (NAACL) will be held jointly 29 April to 3 May 2000 in Seattle, Washington. The joint conferences will offer a unique opportunity to bring industry and researchers together to explore the full spectrum of computational linguistics and natural language processing, from theory and methodology to their application in commercial software. For the general sessions, substantial, original, and unpublished contributions to computational linguistics are solicited. (See the separate Call for Student Papers to be announced soon for requirements for submissions to the student sessions.) Submissions are due by 17 November 1999. See submission details below. The ANLP program committee invites papers describing natural language processing systems -- their development, integration, adaptation and standardization; tools, techniques, and resources contributing to the development of complete end-to-end applications of NLP; evaluation of system performance and related issues. In particular, submissions should be directed to one of the following subject areas: * Monolingual text processing systems * Multilingual text processing systems * Spoken language and multimodal systems * Integrated NLP systems * Tools and resources for developing NLP systems * Evaluation of performance of complete NLP systems The NAACL program committee invites papers on methodology, approaches, algorithms, models, analyses and experiments in computational linguistics. Program subcommittees will be organized around eight main areas: * Discourse, Dialogue, and Pragmatics * Semantics and the Lexicon * Syntax, Morphology, and Phonology * Generation and Summarization * Spoken Language * Corpus-Based and Statistical Natural Language Processing * Cognitive Modeling and Human-Computer Interaction * Multilingual Natural Language Processing There is some inevitable overlap between the topic areas for NAACL and ANLP. In deciding whether to submit their papers to NAACL or ANLP, authors should consider whether their paper focuses more on the methodology or the end application of that methodology to solve a particular problem. 
A paper accepted for presentation at either meeting must not be or have been presented at any other meeting with publicly available proceedings. A paper may not be submitted to both NAACL 2000 and ANLP 2000, but may be submitted to other conferences provided that, if accepted, it is withdrawn from all but one. Submission to other conferences should be indicated on the paper. Papers will not be exchanged between the two program committees. However, in the final program, papers may be grouped or juxtaposed in related sessions to highlight similarities and downplay artificial distinctions. We also appreciate that it can be advantageous to view the same work from both a theoretical/methodological perspective and an applied perspective; we welcome paired submissions to NAACL and ANLP, though each submission needs to make a significant contribution on its own. Please acknowledge the related submissions and include their abstracts with your submission, though decisions will be made independently and acceptance of one does not guarantee acceptance of the other. Original papers that do not easily fall within one of the suggested areas are also invited. The submission should be directed to the chair of the respective program committee, with the topic area slot in the submission template empty. 2. ANLP Call for Papers ANLP Call for Papers Sixth Applied Natural Language Processing Conference 29 April to 3 May 2000 Seattle, Washington Program Committee Chair: Sergei Nirenburg, New Mexico State University The ANLP program committee invites papers describing natural language processing systems -- their development, integration, adaptation and standardization; tools and resources contributing to the development of complete end applications of NLP; evaluation of system performance and related issues. In particular, submissions should be directed to one of the following subject areas: Monolingual Text Processing Systems. Area Chair: Oliviero Stock, IRST, Trento Italy Systems devoted to information retrieval, text data mining, information extraction, text summarization and related applications. Multilingual Text Processing Systems. Area Chair: Richard Kittredge, University of Montreal, Canada Systems devoted to machine translation, human-aided machine translation, machine-aided human translation, cross-lingual information retrieval, multi-document multilingual information extraction and summarization, text data mining and related applications. Spoken Language and Multimodal Systems. Area Chair: Susann Luperfoy, IET Inc. and Georgetown University, USA Text and dialog processing on telephony, workstation, and PDA platforms. Integrated NLP Systems. Area Chair: Eduard Hovy, University of Southern California, Information Sciences Institute, USA Combinations of multiple NLP applications; multimodal and multimedia systems; adaptation and standardization of existing NLP systems, embedded NLP systems and integration of legacy systems. Tools and Resources for Developing NLP Systems. Area Chair: Lynn Carlson, Department of Defense, USA Development and content of descriptive resources, such as grammars and lexicons of particular languages or sets of languages, ontologies, processed corpora and others; the acquisition and quick ramp-up tools for NLP systems; and methodologies for development and knowledge acquisition for NLP systems and environments and tools for training developers of NLP systems. Evaluation of Performance of Complete NLP Systems. 
Area Chair: John White, Lytton/PRC, USA Methodologies, case studies and tools. 3. NAACL Call for Papers NAACL Call for Papers 1st Conference of the North American Chapter of the Association for Computational Linguistics 29 April to 3 May 2000 Seattle, Washington Program Committee Chair: Janyce Wiebe, New Mexico State University For the general sessions, papers are invited on substantial, original, and unpublished research contributions on all aspects of computational linguistics methodology, enabling technologies, approaches, algorithms, models, analyses, and experiments. See the separate Call for Student Papers (to be announced) for requirements for submissions to the student sessions. Program subcommittees will be organized around eight main areas, as follows. Discourse, Dialogue, and Pragmatics. Area Chair: Diane Litman, AT&T Research. Empirical and knowledge-based approaches to discourse and dialogue; Dialogue management in spoken dialogue systems; Discourse segmentation; Anaphora resolution; Discourse parsing; Narrative understanding; Design, evaluation, and use of discourse annotation schemes; Topic detection and tracking; Intentional and relational discourse analysis; Robust discourse processing; Methods for evaluating dialogue/discourse systems and their components; Integration with other levels of linguistic processing. Semantics and the Lexicon. Area Chair: Graeme Hirst, University of Toronto. Semantic formalisms; Ontologies; Word-sense disambiguation; Event recognition and categorization; Logics for natural language; Extracting information from on-line dictionaries; Refining sense inventories; Computational lexicography; Lexical resource development. Syntax, Morphology, and Phonology. Area Chair: Michael Collins, AT&T Research. Grammar formalisms; Theoretical and empirical studies of parsing algorithms; Finite-state methods; Representation of syntactic, morphological, and phonological aspects of the lexicon; Robust and shallow parsing; Syntax annotation schemes; Grammar induction; Formal properties of symbolic and weighted/stochastic grammars. Generation and Summarization. Area Chair: Nancy Green, University of North Carolina at Greensboro. Strategic generation for text and dialogue (text planning, argumentation strategies, etc.); Tactical generation (sentence aggregation, lexical choice, etc.); Multimodal and multimedia generation; Knowledge acquisition and resources for generation and summarization; User-customized generation and summarization; Evaluation methodologies for generation and summarization; Application of generation, information extraction, and information retrieval techniques to summarization. Spoken Language. Area Chair: Andreas Stolcke, SRI International. Language modeling; Prosody; Speech annotation; Speech synthesis; Modeling of spontaneous speech phenomena (disfluencies, discourse markers, etc.); Comparative analyses of spoken and written language; Robust NLP for speech recognition output; Higher-level knowledge sources (e.g., dialogue) for speech recognition; Automatic segmentation of speech into sentences, topics, discourse units, etc.; Integration of speech with other modalities such as text and gesture; Methods for speech-to-speech translation. Corpus-Based and Statistical Natural Language Processing. Area Chair: Dekang Lin, University of Manitoba. 
Annotation, including automatic and semi-automatic methods, mapping between schemes, analyzing and improving agreement, minimizing costs; Induction of patterns and structures such as selectional frames and concept hierarchies; Extraction of terms and collocations; Text mining and knowledge discovery from text; Distributional similarity; Learning applied to NLP, including bootstrapping, smoothing, and multi-strategy learning. Cognitive Modeling and Human-Computer Interaction. Area Chair: Philip Resnik, University of Maryland. Computational psycholinguistics; Models of human sentence processing, language understanding, language generation, and language acquisition; Use of natural language in human-computer interaction; Evaluation of interfaces that use natural language (including multimodal and multimedia interfaces), by field studies, laboratory experimentation, or analytical methods. Multilingual Natural Language Processing. Area Chair: Kevin Knight, USC/Information Sciences Institute. Methods addressing the research challenges of multilingual environments, including cross-language divergences, producing fluent text, and dealing with non-literal translation equivalents; Methods for machine translation (direct, transfer, example-based, knowledge-based, interlingual, statistical, etc.); Design of interlinguas; Multilingual lexicons; Lexical acquisition for machine translation and cross-language information retrieval; Machine-assisted translation; Multilingual generation; Alignment of multilingual texts; Methods for exploiting parallel or comparable corpora for natural language processing tasks. Authors will be asked to identify the area or areas to which their submission corresponds. Relevant papers not fitting precisely into any of these areas are also welcome. All papers will be reviewed by at least three experts. There is some inevitable overlap between the topic areas for NAACL and ANLP. In deciding whether to submit their papers to NAACL or ANLP, authors should consider whether their paper focuses more on the methodology or the end application of that methodology to solve a particular problem. 4. Format for Submissions Submissions must use the ACL latex style aclsub.sty or Microsoft Word style ACL-submission.doc (both available from the conference web page) and may be no more than 3,200 words in total length, exclusive of title page and references. If you cannot use the ACL-standard styles directly, a description of the required format will be available on the conference web page. If you cannot access the conference web page, send email to anlp-naacl2000@bbn.com with subject SUBSTYLE. Reviewing will be blind. Thus, separate identification and title pages are required. The identification page should include the following. It should be sent in a separate e-mail message from the body of the paper itself. * Title * Paper ID Code: see below * Authors' names, affiliations, and e-mail addresses * Topic Area: 1 or 2 areas most closely matching the submission * Keywords: Up to 5 keywords specifying subject area * Conference the paper is being submitted to (NAACL or ANLP) * Word Count, excluding title page and references * Under consideration for other conferences? 
If yes, please list * Abstract: Short (no more than 5 lines) summary The title page should include: * Title * Paper ID Code: see below * Topic Area: 1 or 2 areas most closely matching the submission * Keywords: Up to 5 keywords specifying subject area * Conference the paper is being submitted to (NAACL or ANLP) * Word Count, excluding title page and references * Under consideration for other conferences? If yes, please list * Abstract: Short (no more than 5 lines) summary Authors' names and affiliations should be omitted from the paper itself. Furthermore, self-references that reveal the author's identity (e.g., "We previously showed (Smith, 1991) ... ") should be avoided. Instead, use citations such as "Smith previously showed (Smith, 1991)....". Papers that do not conform to these requirements are subject to being rejected without review. SUBMISSION QUESTIONS NAACL submission questions should be sent to: naacl2000-program@nmsu.edu Program Chair, NAACL 2000 Computing Research Laboratory BOX 30001/Dept 3CRL Las Cruces, NM 88003-8001 ANLP submission questions should be sent to: anlp2000-program@nmsu.edu Program Chair, ANLP 2000 Computing Research Laboratory BOX 30001/Dept 3CRL Las Cruces, NM 88003-8001 The calls for papers, style files, and information about tutorials, workshops, and the student session will be available on the conference web site. The conference web site will be reachable from the ACL Home Page, www.aclweb.org, in the near future. SUBMISSION PROCEDURE 1) Submission notification: You must submit a notification of submission by filling out a form on the conference web page at least one week before the submission deadline. This will return to you an email with an ID number that should be included on the identification page, the title page and the header of every page of the paper. Also, please use it on all correspondence with the program committee chair. The form will be available on the web after October 1. 2) Electronic submission: send the postscript or MS Word form of your submission to: naacl2000-program@nmsu.edu or anlp2000-program@nmsu.edu The Subject line should contain conference.submission_id.format, e.g., "naacl.100.ps" or "anlp.100.pdf" or "naacl.100.doc". Please submit the identification page in a separate email. Late submissions will not be accepted. Notification of receipt will be e-mailed to the first author shortly after receipt. In extreme cases, an author unable to comply with the above submission procedure should contact the program chair sufficiently before the submission deadline so alternative arrangements can be made. 5. Deadlines Submission notification deadline: 10-Nov-99 Paper submission deadline: 17-Nov-99 Notification of acceptance for papers: 01-Feb-00 Camera ready papers due: 12-Mar-00 Regular sessions begin: 01-May-00A [Signed copyright release statement will be needed along with the final version.] ********** III.B.2. Fr: John A. Keane Re: CoopIS'99: Programme FINAL CALL FOR PARTICIPATION Fourth IFCIS Conference on Cooperative Information Systems (CoopIS'99) (In cooperation with VLDB'99) Edinburgh University Edinburgh, Scotland September 2-4, 1999 Conference Web Page: http://lsdis.cs.uga.edu/events/coopis99.html or our mirror site: http://www.co.umist.ac.uk/coopis99 CoopIS is the premier conference in the area of large-scale distributed collaborative information systems and draws on several inter-disciplinary research activities. The conference has grown in stature over the years and regularly draws a number of well-known researchers. 
**********

III.B.2. Fr: John A. Keane
Re: CoopIS'99: Programme

FINAL CALL FOR PARTICIPATION

Fourth IFCIS Conference on Cooperative Information Systems (CoopIS'99)
(In cooperation with VLDB'99)
Edinburgh University
Edinburgh, Scotland
September 2-4, 1999

Conference Web Page: http://lsdis.cs.uga.edu/events/coopis99.html
or our mirror site: http://www.co.umist.ac.uk/coopis99

CoopIS is the premier conference in the area of large-scale distributed collaborative information systems and draws on several inter-disciplinary research activities. The conference has grown in stature over the years and regularly draws a number of well-known researchers. This is the sixth conference in the series and the fourth organized by the International Foundation on Cooperative Information Systems (IFCIS). The conference includes two keynote addresses from well-known researchers, two panels, one industrial session, one tutorial, and 29 outstanding research papers. Edinburgh is an exciting and vibrant city, and you will be visiting it at the peak of its festival season. It offers an outstanding range of museums, art exhibits, concerts, and other tourist attractions, all combined with a wealth of unique culture.

PROGRAM

DAY 1 (Sept 2)

8:45-9:00 OPENING
Chair: Mike Papazoglou

9:00-10:00 SESSION 1: Invited Talk
Chair: Mike Papazoglou
"Negotiating Agents for Corporate-Wide Business Process Management", Nick Jennings

10:15-12:15 SESSION 2: Integration & Interoperability
Chair: Stuart Madnick
"Detection of heterogeneities in a multiple text database environment", W. Meng, C. Yu, K. Liu
"A unified graph-based framework for deriving nominal interscheme properties, type conflicts and object cluster similarities", L. Palopoli, D. Sacca', G. Terracina, D. Ursino
"Access Keys Warehouse: a new approach to the development of cooperative information systems", F. Arcieri, E. Cappadozzi, P. Naggar, E. Nardelli, M. Talamo
"Discovering view expressions", Z. Kedad, M. Bouzeghoub

13:45-15:15 SESSION 3: Collaboration
Chair: Arnie Rosenthal
"Event Composition in Time-Dependent Distributed Systems", C. Liebig, M. Cilia, A. Buchmann
"Providing Customized Process and Situation Awareness in the Collaboration Management Infrastructure", D. Baker, D. Georgakopoulos, H. Schuster, A.R. Cassandra, A. Cichocki
"Modeling Interactions based on Consistent Patterns", S. Srinivas, M. Spiliopoulou

15:30-17:00 SESSION 4: Workflow Exceptions & Versioning
Chair: Dimitrios Georgakopoulos
"Dynamic Workflow Schema Evolution based on Workflow Type Versioning and Workflow Migration", M. Kradolfer, A. Geppert
"Generic Workflow Models: How to handle dynamic change and capture management information", W.M.P. van der Aalst
"Modeling and Managing Exceptional Behaviors in Commercial Workflow Management Systems", F. Casati, G. Pozzi

17:15-18:15 SESSION 5: Panel
Chairs: Arnon Rosenthal, Scott Renner
"Annotations: Digital Post-Its as an Information Model?"

DAY 2 (Sept 3)

9:00-10:00 SESSION 6: Tutorial
Chair: Erich Neuhold
"Web-based Information Access", T. Catarci

10:15-12:15 SESSION 7: Web Information Systems
Chair: Maurizio Lenzerini
"Constructing a personal web map with anytime control of web robots", S. Yamada and N. Nagino
"Looking at the Web through XML glasses", A. Sahuguet, F. Azavant
"Learning Response Times for WebSources: A Comparison of a Web-based prediction tool (WebPT) and a neural network", L. Bright, L. Raschid, V. Zadorozhny, T. Zhan
"A Comprehensive Framework for Querying and Integrating WWW data and services", O. Shmueli, D. Konopnicki

13:45-15:15 SESSION 8: E-Commerce
Chair: Marek Rusinkiewicz
"E-Commerce Bargain-Hunting with an unBun Model", R. Yahalom, S.E. Madnick
"A Formal Yet Practical Approach to Electronic Commerce", L. Leiba, O. Shmueli, Y. Sagiv, D. Konopnicki
"A Distributed OLAP Infrastructure for E-Commerce", Q. Chen, U. Dayal, M. Hsu

15:30-17:00 SESSION 9: Workflow Modelling
Chair: Asuman Dogac
"A case for increasing flexibility in workflow systems: modeling and implementation", J. Tang
"Conceptual Workflow Schemas", K. Meyer-Wegener, M. Boehm
"Cooperative Support for Office Work in the Insurance Business", A. Margelisch, U. Reimer, M. Staudt, T. Vetterli

17:15-18:15 SESSION 10: Panel
Chair: Yannis Vassiliou
"Data Warehouse Quality Issues"

DAY 3 (Sept 4)

9:00-10:00 SESSION 11: Invited Talk
Chair: Umeshwar Dayal
"Models and Tools for Designing Data-Intensive WEB Applications", Stefano Ceri

10:15-12:15 SESSION 12: Mediators & Query Processing
Chair: Felix Saltor
"Selectively materializing data in mediators by analyzing user queries", N. Ashish, C.A. Knoblock, C. Shahabi
"Using Fagin's Algorithm for Merging Ranked Results in distributed multimedia information system", E.L. Wimmers, L.M. Haas, M. Tork Roth, C. Braendli
"Conflict Tolerant Queries in AURORA", L. Ling Yan, T. Ozsu
"Optimizing queries in distributed and composable mediators", V. Josifovski, T. Katchaounov, T. Risch

13:45-15:15 SESSION 13: Agents
Chair: Laura Haas
"The identification of missing resource information agents", M. Minock, M. Rusinkiewicz, B. Perry
"Information Aggregation and Agent Interaction Patterns in InfoSleuth", B. Perry, M. Taylor, A. Unruh
"ROPE: Role Oriented Programming Environment for Multiagent Systems", M. Becht, T. Gurzki, J. Klarmann, M. Muscholl

15:30-16:30 SESSION 14: Workflow Transactions
Chair: Qiming Chen
"Modelling Extensions for Concurrent Workflow Coordination", A.P. Barros, A.H.M. ter Hofstede
"Semantics and Architecture of Global Transaction Support in Workflow Environments", P. Grefen, J. Vonk, E. Boertjes, P. Apers

**********

III.C.1. Fr: William Hersh
Re: Oregon Health Sciences U.: Distance Learning for Medical Informatics

A Program in Distance Learning for Medical Informatics at Oregon Health Sciences University

The Division of Medical Informatics & Outcomes Research (DMIOR) at Oregon Health Sciences University (OHSU) is seeking individuals to participate in a pilot distance learning offering of its course, Introduction to Medical Informatics, in the fall of 1999. This course is the first offering in a medical informatics distance learning program under development at OHSU. Current plans call for one three-hour course to be offered in each of the three academic quarters of the 1999-2000 academic year. A certification program (either a master's degree or a certificate) will be initiated in the 2000-2001 academic year.

The first course to be offered will be Medical Informatics (MINF) 510, Introduction to Medical Informatics, a graduate-level survey of medical informatics. The existing on-campus course is taken by graduate students in the OHSU medical informatics program as well as those of other departments, such as public health and graduate nursing. The distance learning version of MINF 510 will proceed in parallel with the on-campus version of the course, running 11 weeks from Sept. 22 to Dec. 8. The course software will be Web-based (using WebCT or CourseInfo) and targeted to individuals with any type of Internet connection, including modem.
Learning activities will consist of weekly topics using the following formats:
- Lectures using streaming audio plus PowerPoint, delivered via RealPlayer software
- Textbook reading assignments
- On-line homework assignments
- Asynchronous class discussion
- Synchronous "office hours"

The tentative list of topics to be covered includes:
- Medical data and its use
- Medical computing
- Medical decision-making and evidence-based medicine
- The electronic medical record
- Information retrieval and digital libraries
- Computer networks and the Internet
- Imaging and telemedicine
- Artificial intelligence and decision support
- Standards, security and confidentiality
- Nursing and consumer health informatics

A course term paper and an open-book final examination will also be required. We anticipate a commitment typical of a three-hour graduate course, about 6-12 hours per week. Credit for the course will be through OHSU. A transcript will be issued for those seeking to transfer credit to other institutions. Credit for this course will be applicable to those who later enroll in degree programs at OHSU. Standard in-state and out-of-state tuition rates will be charged.

Present plans include the offering of MINF 514, Information Retrieval and Digital Libraries, in the winter term. Our plans for the spring term are not set, but we are considering offering a repeat of MINF 510. Details of the certification program will be announced in the fall of 1999.

OHSU is a national leader in medical informatics education. Current on-campus programs include a Master of Science (M.S.) degree program and a postdoctoral fellowship. The latter is funded by the National Library of Medicine and the Department of Veterans Affairs. Since its inception three years ago, 30 individuals have enrolled and 10 have graduated from the M.S. program. Graduates have gone on to take a variety of positions in medical centers, industry, and academia. More information on the program can be found at: http://www.ohsu.edu/bicc-informatics/ms/

Enrollment in the fall quarter pilot course will be limited to approximately twelve students. Participants must be eligible to take graduate courses at OHSU, which means they must have a bachelor's degree. Acceptance into this pilot program does not guarantee later acceptance into any degree programs at OHSU.

If you are interested in taking the fall course, please contact the program administrator as soon as possible:
Kelly Brougham
Administrator, Master of Science in Medical Informatics Program
Oregon Health Sciences University
Email (preferred): informat@ohsu.edu
Phone: 503-494-4563
Fax: 503-494-4551

You may also send email if you are interested in being on a mailing list for (occasional) updates about the program. More general questions should be directed to:
William Hersh
Associate Professor and Chief
Division of Medical Informatics & Outcomes Research
Oregon Health Sciences University
Email (preferred): hersh@ohsu.edu
Phone: 503-494-4563
Fax: 503-494-4551

******************************************************************

IRLIST Digest is distributed from the University of California, California Digital Library, 1111 Franklin Street, Oakland, CA 94607-5200. Send subscription requests and submissions to: nancy.gusack@ucop.edu

Editorial Staff:
Nancy Gusack  nancy.gusack@ucop.edu
Cliff Lynch (emeritus)  cliff@cni.org

The IRLIST archives are set up for anonymous FTP via the host hibiscus.ucop.edu; the files will be found in the directory /data/ftp/pub/irl, stored in subdirectories by year (e.g., /data/ftp/pub/irl/1993). Search or browse archived IR-L Digest issues on the Web at: http://www.dcs.gla.ac.uk/idom/irlist/
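For readers who prefer to script retrieval from the archive, a minimal sketch using Python's ftplib follows; the host and directory layout come from the notice above, but the file name shown is only a guess based on this issue's own naming and may differ on the FTP server.

    # Sketch only: list and fetch archived issues over anonymous FTP.
    # Host and directory are from the notice above; the file name is a guess.
    from ftplib import FTP

    with FTP("hibiscus.ucop.edu") as ftp:
        ftp.login()                            # anonymous login
        ftp.cwd("/data/ftp/pub/irl/1999")      # issues are filed by year
        print(ftp.nlst())                      # list that year's files
        with open("irld-467.txt", "wb") as f:  # hypothetical file name
            ftp.retrbinary("RETR irld-467.txt", f.write)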
These files are not to be sold or used for commercial purposes. Contact Nancy Gusack for more information on IRLIST.

THE OPINIONS EXPRESSED IN IRLIST DO NOT REPRESENT THOSE OF THE EDITORS OR THE UNIVERSITY OF CALIFORNIA. AUTHORS ASSUME FULL RESPONSIBILITY FOR THEIR MATERIAL.