004 Data Processing; Computer Science
The amount of information on the Web is constantly increasing, and a wide variety of information is available, such as news, encyclopedia articles, statistics, survey data, stock information, events, and bibliographies. This information is characterized by heterogeneity in aspects such as information type, modality, structure, granularity, and quality, and by its distributed nature. The two primary techniques by which users look for information on the Web are (1) using Web search engines and (2) browsing the links between pieces of information. The dominant mode of information presentation is largely static, in the form of text, images, and graphics. Interactive visualizations offer a number of advantages for the presentation and exploration of heterogeneous information on the Web: (1) they provide different representations for different, very large, and complex types of information, and (2) large amounts of data can be explored interactively through their attributes, which supports and extends the user's cognitive process. So far, however, interactive visualizations are not an integral part of the Web search process. The technical standards and interaction paradigms needed to make interactive visualization usable by a broad audience are being introduced only slowly by standardization organizations. This work examines how interactive visualizations can be used for linking and searching heterogeneous information on the Web. Based on principles from information retrieval (IR), information visualization, and information processing, a model is created that extends existing structural models of information visualization with two new processes: (1) linking of information in visualizations and (2) searching, browsing, and filtering based on glyphs. The Vizgr toolkit implements the developed model in a web application.
In four different application scenarios, aspects of the model are instantiated and either evaluated in user tests or examined by example.
The search for scientific literature in scientific information systems is a discipline at the intersection of information retrieval and digital libraries. Recent user studies reveal two typical weaknesses of the classical IR model: the ranking of retrieved, possibly relevant documents, and the language problem during the query formulation phase. At the same time, traditional retrieval systems that rely primarily on textual document and query features have been stagnating for years, as can be observed in IR evaluation campaigns such as TREC or CLEF. Alternative approaches are therefore needed to overcome these two problem areas. This work presents two search support systems and evaluates them in a lab evaluation using the IR test collections GIRT and iSearch with 150 and 65 topics, respectively. The two systems are (1) a query expansion based on the analysis of co-occurrences of document attributes and (2) a ranking mechanism that applies an informetric analysis of the productivity of information producers in the information production process. Both systems were compared to a baseline system built on the Solr search engine. Both methods showed positive effects when additional document attributes such as author names, ISSN codes, and controlled terms were used. The query expansion improved precision (bpref +12%) and recall (R +22%).
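The idea of expanding a query with attributes that frequently co-occur with its terms can be illustrated with a small sketch. The toy corpus, the attribute names, and the `expand_query` function below are illustrative assumptions, not the dissertation's implementation:

```python
from collections import Counter
from itertools import product

# Toy corpus: each document carries free-text terms and controlled
# vocabulary terms from its metadata record.
docs = [
    {"terms": {"unemployment", "labor"}, "controlled": {"labor market", "employment policy"}},
    {"terms": {"unemployment", "youth"}, "controlled": {"labor market", "youth research"}},
    {"terms": {"migration", "labor"},    "controlled": {"migration", "labor market"}},
]

def expand_query(query_terms, docs, k=2):
    """Add the k controlled terms that co-occur most often with the
    query terms across the collection."""
    cooc = Counter()
    for doc in docs:
        for _, ctrl in product(query_terms & doc["terms"], doc["controlled"]):
            cooc[ctrl] += 1
    return set(query_terms) | {t for t, _ in cooc.most_common(k)}

expanded = expand_query({"unemployment"}, docs)   # adds "labor market"
```

The same counting scheme works for any metadata field (authors, ISSNs, thesaurus terms); only the `"controlled"` key would change.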
The alternative ranking methods were able to compete with the baseline for author names and ISSN codes, and beat the baseline when using controlled terms (MAP +14%). A clearly negative influence was observed for entities such as publishers or locations. Both methods generated a substantially different ordering of the result set, measured using Kendall's tau. Thus, in addition to the improved relevance of the result list, the user gains a new and different view of the document set. Query expansion using author names, ISSN codes, and thesaurus terms showed the great potential that lies in the rich metadata of digital library systems. The proposed ranking methods could outperform standard relevance ranking only after the attributes were filtered for the existence of a so-called power law. This shows that the proposed ranking methods cannot be applied universally but require specific frequency distributions in the metadata. A connection to the underlying informetric laws of Bradford, Lotka, and Zipf is made clear. The evaluated methods were implemented as interactive search support systems that can be used in an interactive prototype and in the social science digital library Sowiport. In addition, the methods can be adapted to other systems and environments via a free software framework and a web API.
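The power-law precondition can be checked with a simple log-log fit of the productivity distribution, in the spirit of Lotka's law. The following is a minimal stdlib sketch with made-up counts, not the dissertation's actual filtering procedure:

```python
import math
from collections import Counter

def loglog_slope(author_counts):
    """Least-squares slope of log(number of authors with n papers)
    against log(n). Lotka's law predicts a slope near -2; a roughly
    linear log-log relation signals the power-law precondition."""
    freq = Counter(author_counts.values())   # productivity n -> number of authors
    xs = [math.log(n) for n in freq]
    ys = [math.log(c) for c in freq.values()]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)

# Hypothetical productivity data: 4 authors with 1 paper, 2 with 2, 1 with 4.
papers = {"a": 1, "b": 1, "c": 1, "d": 1, "e": 2, "f": 2, "g": 4}
slope = loglog_slope(papers)   # exactly -1 for this toy distribution
```

A goodness-of-fit test on the log-log points would decide whether the informetric ranking is applicable to a given metadata field.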
This dissertation investigates the use of theorem provers in automated question answering (QA). QA systems attempt to compute correct answers to questions phrased in natural language. They commonly utilize a multitude of methods from computational linguistics and knowledge representation to process the questions and to obtain the answers from extensive knowledge bases. These methods are often syntax-based, and they cannot derive implicit knowledge. Automated theorem provers (ATP), on the other hand, can compute logical derivations with millions of inference steps. By integrating a prover into a QA system, this reasoning strength could be harnessed to deduce new knowledge from the facts in the knowledge base and thereby improve the QA capabilities. This poses challenges, since the contrasting approaches of QA and automated reasoning must be combined: QA methods normally aim for speed and robustness to obtain useful results even from incomplete or faulty data, whereas ATP systems employ logical calculi to derive unambiguous and rigorous proofs. The latter approach is difficult to reconcile with the quantity and quality of the knowledge bases used in QA. The dissertation describes modifications to ATP systems that overcome these obstacles. The central example is the theorem prover E-KRHyper, which was developed by the author at the Universität Koblenz-Landau. As part of the research work for this dissertation, E-KRHyper was embedded into a framework of components for natural language processing, information retrieval, and knowledge representation, which together form the QA system LogAnswer.
Also presented are extensions to the prover implementation and the underlying calculi that go beyond enhancing the reasoning strength of QA systems, for example by giving the prover access to external knowledge sources such as web services. These allow the prover to fill gaps in its knowledge during a derivation, or to use external ontologies in other ways, for example for abductive reasoning. While the modifications and extensions detailed in the dissertation are a direct result of adapting an ATP system to QA, some of them are useful for automated reasoning in general. Evaluation results from experiments and competition participations demonstrate the effectiveness of the methods under discussion.
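The deduction of implicit knowledge from explicit facts, as described above, can be sketched with naive forward chaining over ground Horn rules. This is a generic illustration with a hypothetical knowledge base, not E-KRHyper's hyper tableaux calculus:

```python
def forward_chain(facts, rules):
    """Naive forward chaining to a fixpoint: a rule
    (premises, conclusion) fires once all its premises are derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in derived and set(premises) <= derived:
                derived.add(conclusion)
                changed = True
    return derived

# Hypothetical knowledge base: one explicit fact plus two rules that
# make implicit knowledge explicit.
facts = {"capital(berlin, germany)"}
rules = [
    (["capital(berlin, germany)"], "city(berlin)"),
    (["city(berlin)"], "located_in(berlin, germany)"),
]
derived = forward_chain(facts, rules)   # contains all three atoms
```

A QA system could then answer "Is Berlin a city?" from the derived set, even though that fact was never stated explicitly.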
Tagging systems are intriguing dynamic systems in which users collaboratively index resources with so-called tags. To leverage the full potential of tagging systems, it is important to understand the relationship between the micro-level behavior of individual users and the macro-level properties of the whole tagging system. In this thesis, we present the Epistemic Dynamic Model, which aims to bridge this gap between micro-level behavior and macro-level properties by developing a theory of tagging systems. The model is based on the assumption that the combined influence of the users' shared background knowledge and the imitation of tag recommendations is sufficient to explain the emergence of the tag frequency distribution and the vocabulary growth in tagging systems. Both macro-level properties are closely related to the emergence of a shared community vocabulary.

With the help of the Epistemic Dynamic Model, we show that the general shape of the tag frequency distribution and of the vocabulary growth has its origin in the shared background knowledge of the users. Tag recommendations can then be used to selectively influence this general shape. In this thesis, we concentrate in particular on the influence of recommending a set of popular tags. Recommending popular tags adds a feedback mechanism between the vocabularies of individual users that increases the inter-indexer consistency of the tag assignments. How does this influence the indexing quality in a tagging system? To answer this question, we investigate a methodology for measuring the inter-resource consistency of tag assignments. The inter-resource consistency is an indicator of indexing quality that correlates positively with the precision and recall of query results. It measures the degree to which the tag vectors of indexed resources reflect how users perceive the similarity between resources.
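The interplay of background knowledge and imitation can be sketched with a simple urn-style simulation. The parameters and the copying rule below are illustrative assumptions, not the Epistemic Dynamic Model itself:

```python
import random
from collections import Counter

def simulate_tagging(n_posts=5000, imitation=0.7, vocab=1000, seed=42):
    """Urn-style sketch: with probability `imitation` a new tag
    assignment copies an earlier one (imitation of recommendations);
    otherwise a tag is drawn from the users' background vocabulary."""
    rng = random.Random(seed)
    history = []
    for _ in range(n_posts):
        if history and rng.random() < imitation:
            history.append(rng.choice(history))           # imitate
        else:
            history.append(f"tag{rng.randrange(vocab)}")  # background knowledge
    return Counter(history)

freqs = simulate_tagging()
# A few tags dominate (heavy-tailed frequency distribution), while the
# vocabulary grows sublinearly in the number of posts.
```

Varying the `imitation` parameter changes the skew of the resulting distribution, which is the kind of selective influence that tag recommendations exert in the model.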
Based on our model, and confirmed by a user experiment, we show that recommending popular tags decreases the inter-resource consistency in a tagging system. Furthermore, we show that recommending users their own previously used tags helps to increase the inter-resource consistency. Our measure of inter-resource consistency complements existing measures for the evaluation and comparison of tag recommendation algorithms, shifting the focus to their influence on indexing quality.
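Comparing the tag vectors of resources requires a similarity measure; a common choice is cosine similarity. The following is a minimal stdlib sketch with invented tag vectors, and the thesis's exact consistency measure may be defined differently:

```python
import math
from collections import Counter

def cosine(u, v):
    """Cosine similarity between two sparse tag-frequency vectors."""
    dot = sum(u[t] * v[t] for t in u.keys() & v.keys())
    norm = math.sqrt(sum(c * c for c in u.values())) \
         * math.sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

# Tag vectors of two resources: tag -> number of users who assigned it.
a = Counter({"python": 5, "programming": 3, "tutorial": 1})
b = Counter({"python": 4, "programming": 2, "snake": 1})
similarity = cosine(a, b)   # about 0.96
```

High similarity between the tag vectors of resources that users also judge to be similar would indicate high inter-resource consistency.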