004 Data processing; Computer science
We introduce linear expressions for unrestricted dags (directed acyclic graphs) and finite deterministic and nondeterministic automata operating on them. These dag automata are a conservative extension of the Tu,u-automata of Courcelle on unranked, unordered trees and forests. Several examples of dag languages that are acceptable and not acceptable by dag automata are given, along with some closure properties.
Probability propagation nets
(2007)
A class of high-level Petri nets, called "probability propagation nets", is introduced which is particularly useful for modeling probability and evidence propagation. These nets themselves are well suited to represent probabilistic Horn abduction, whereas specific foldings of them will be used for representing the flows of probabilities and likelihoods in Bayesian networks.
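As a toy reference point only (the paper's formalism is Petri-net based, which is not shown here): the probability and likelihood flows in a two-node Bayesian network A -> B can be computed directly in Python. All numbers below are invented, not taken from the paper.

```python
# Toy sketch: forward probability flow and backward likelihood flow in a
# two-node Bayesian network A -> B, computed directly. Invented numbers.

p_a = {True: 0.3, False: 0.7}                    # prior P(A)
p_b_given_a = {True: {True: 0.9, False: 0.1},    # P(B | A)
               False: {True: 0.2, False: 0.8}}

# Forward ("probability") flow: the marginal P(B).
p_b = {b: sum(p_a[a] * p_b_given_a[a][b] for a in (True, False))
       for b in (True, False)}

# Backward ("likelihood") flow: observe B = True, update to P(A | B = True).
unnorm = {a: p_b_given_a[a][True] * p_a[a] for a in (True, False)}
z = sum(unnorm.values())
posterior = {a: unnorm[a] / z for a in (True, False)}

print(p_b)        # ≈ {True: 0.41, False: 0.59}
print(posterior)  # P(A=True | B=True) = 0.27/0.41 ≈ 0.659
```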
SOA-Security
(2007)
This paper is part of the ASG project (Adaptive Services Grid) and addresses some IT-security issues of service-oriented architectures. It defines a service-oriented security concept, explores the SOA security challenge, describes the existing WS-Security standard, and takes a first step towards a survey of best-practice examples. In particular, the ASG middleware platform technology (JBossWS) is analyzed with respect to its ability to handle security functions.
Generalized methods for automated theorem proving can be used to compute formula transformations such as projection elimination and knowledge compilation. We present a framework based on clausal tableaux suited for such tasks. These tableaux are characterized independently of particular construction methods, but important features of empirically successful methods are taken into account, especially dependency-directed backjumping and branch-local operation. As an instance of that framework, an adaptation of DPLL is described. We show that knowledge compilation methods can be substantially improved by weaving projection elimination partially into the compilation phase.
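For orientation, projection elimination ("forgetting" a variable) can be stated on plain clause sets independently of the tableau framework. The sketch below uses textbook Davis-Putnam resolution, i.e. the baseline operation the paper's method refines, not the paper's own algorithm:

```python
# Propositional projection elimination ("forgetting") of a variable from a
# clause set via Davis-Putnam resolution on that variable.

def forget(clauses, v):
    """Eliminate v from a CNF given as a set of frozensets of int literals."""
    pos = [c for c in clauses if v in c]
    neg = [c for c in clauses if -v in c]
    rest = {c for c in clauses if v not in c and -v not in c}
    for p in pos:
        for n in neg:
            resolvent = (p - {v}) | (n - {-v})
            if not any(-l in resolvent for l in resolvent):  # drop tautologies
                rest.add(frozenset(resolvent))
    return rest

# Forgetting 2 from {1 v 2, -2 v 3} leaves the projection {1 v 3}.
print(forget({frozenset({1, 2}), frozenset({-2, 3})}, 2))
```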
Knowledge compilation is a common technique for propositional logic knowledge bases. The idea is to transform a given knowledge base into a special normal form ([MR03], [DH05]) for which queries can be answered efficiently. This precompilation step is very expensive, but it only has to be performed once. We propose to apply this technique to knowledge bases defined in Description Logics. For this, we introduce a normal form, called linkless concept descriptions, for ALC concepts. Furthermore, we present an algorithm, based on path dissolution, which can be used to transform a given concept description into an equivalent linkless concept description. Finally, we discuss a linear satisfiability test as well as a subsumption test for linkless concept descriptions.
UML models and OWL ontologies constitute modeling approaches with different strengths and weaknesses that make them appropriate for specifying different aspects of software systems. In particular, OWL ontologies are well suited to specifying classes using an expressive logical language with highly flexible, dynamic, and polymorphic class membership, while UML diagrams are much more suitable for specifying not only static models including classes and associations, but also dynamic behavior. Though MOF-based metamodels and UML profiles for OWL have been proposed in the past, an integrated use of both modeling approaches in a coherent framework has been lacking so far. We present such a framework, TwoUse, for developing integrated models, combining the benefits of UML models and OWL ontologies.
This paper shows how multiagent systems can be modeled by a combination of UML statecharts and hybrid automata. This allows formal system specification on different levels of abstraction on the one hand, and expressing real-time system behavior with continuous variables on the other. It is shown not only how multi-robot systems can be modeled by a combination of hybrid automata and hierarchical state machines, but also how model-checking techniques for hybrid automata can be applied. An enhanced synchronization concept is introduced that allows synchronizations that take time and avoids state explosion to a certain extent.
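As a rough, invented illustration of the hybrid-automata side (none of this is from the paper): a hybrid automaton couples discrete modes with continuous flows and guarded mode switches, and can be simulated by fixed-step integration:

```python
# Toy hybrid automaton: a robot accelerates (dv/dt = 2) until a guard on its
# speed fires, then switches to a "cruise" mode with zero flow.

dt = 0.01
modes = {
    "accelerate": {"flow": lambda v: 2.0,   # dv/dt = 2
                   "guard": lambda v: "cruise" if v >= 1.0 - 1e-9 else None},
    "cruise":     {"flow": lambda v: 0.0,   # hold speed
                   "guard": lambda v: None},
}

mode, v, t = "accelerate", 0.0, 0.0
while t < 1.0:
    v += modes[mode]["flow"](v) * dt        # continuous evolution
    nxt = modes[mode]["guard"](v)           # discrete jump if guard holds
    if nxt:
        mode = nxt
    t += dt

print(mode, round(v, 2))  # cruise 1.0
```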
In this paper we describe a series of projects on location-based and personalized information systems. We start with a basic research project and show how, with the help of two further, more application-oriented projects, we arrived at a product. This product is developed by a consortium of enterprises and is already in use in the city of Koblenz.
This paper offers an informal overview and discussion of first-order predicate logic reasoning systems, together with a description of applications carried out in the Artificial Intelligence Research Group of the University of Koblenz. Furthermore, the technique of knowledge compilation is briefly introduced.
The term "Augmented Reality (AR)" denotes the superposition of additional virtual objects and supplementary information over real images. The joint project Enhanced Reality (ER)1 aims at a generic AR-system. The ER-project is a cooperation of six different research groups of the Department of Computer Science at the University of Koblenz-Landau. According to Ronald Azuma an AR-system combines real and virtual environments, where the real and virtual objects are registered in 3-D, and it provides interactivity in real time [Azu97]. Enhanced Reality extends Augmented Reality by requiring the virtual objects to be seamlessly embedded into the real world as photo-realistic objects according to the exact lighting conditions. Furthermore, additional information supplying value-added services may be displayed and interaction of the user may even be immersive. The short-term goal of the ER-project is the exploration of ER-fundamentals using some specific research scenarios; the long-term goal is the development of a component-based ER-framework for the creation of ER-applications for arbitrary application areas. ER-applications are developed as single-user applications for users who are moving in a real environment and are wearing some kind of visual output device like see-through glasses and some mobile end device. By these devices the user is able to see reality as it is, but he can also see the virtual objects and the additional information about some value-added service. Furthermore he might have additional devices whereby he can interact with the available virtual objects. The development of a generic framework for ER-applications requires the definition of generic components which are customizable and composable to build concrete applications and it requires a homogeneous data model which supports all components equally well. The workgroup "Software Technology"2 is responsible for this subproject. This report gives some preliminary results concerning the derivation of a component-based view of ER. There are several augmented reality frameworks like ARVIKA, AMIRE, DWARF, MORGAN, Studierstube and others which offer some support for the development of AR-applications. All of them ease the use of existing subsystems like AR-Toolkit, OpenGL and others and leverage the generation process for realistic systems by making efficient use of those subsystems. Consequently, they highly rely on them.
This paper describes the development of security requirements for non-political Internet voting. The practical background is our experience with Internet voting within the Gesellschaft für Informatik (GI - Informatics Society) in 2004 and 2005. The theoretical background is the international state of the art of requirements for electronic voting, especially in the US and in Europe. A focus of this paper is the user-community-driven standardization of security requirements by means of a Protection Profile of the international Common Criteria standard.
Semantic descriptions of non-textual media available on the web can be used to facilitate retrieval and presentation of media assets and documents containing them. While technologies for multimedia semantic descriptions already exist, there is as yet no formal description of a high-quality multimedia ontology that is compatible with existing (semantic) web technologies. We explain the complexity of the problem using an annotation scenario. We then derive a number of requirements for specifying a formal multimedia ontology, including compatibility with MPEG-7, embedding in foundational ontologies, and modularisation including separation of document structure from domain knowledge. Finally, we present the developed ontology and discuss it with respect to our requirements.
Networked RDF graphs
(2007)
Networked graphs are defined in this paper as a small syntactic extension of named graphs in RDF. They allow for the definition of a graph by explicitly listing triples as well as by SPARQL queries on one or more other graphs. This extension makes it possible to define a graph that includes a view onto other graphs, and to define the meaning of a set of graphs by the way they reference each other. The semantics of networked graphs is defined by their mapping into logic programs. The expressiveness and computational complexity of networked graphs, varying with the set of constraints imposed on the underlying SPARQL queries, are investigated. We demonstrate the capabilities of networked graphs by a simple use case.
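The full semantics via logic programs, including mutually recursive graph references, is beyond a short example, but the non-recursive flavor of the idea, a graph defined by explicitly listed triples plus a SPARQL view onto another graph, can be sketched with rdflib. All URIs and data below are invented:

```python
# A graph defined partly by an explicit triple and partly by a SPARQL
# CONSTRUCT view over another graph (here: the inverse of ex:knows).
from rdflib import Graph, Literal, Namespace

EX = Namespace("http://example.org/")

source = Graph()
source.add((EX.alice, EX.knows, EX.bob))

networked = Graph()
networked.add((EX.alice, EX.name, Literal("Alice")))  # explicitly listed triple

q = """
CONSTRUCT { ?b <http://example.org/knows> ?a }
WHERE     { ?a <http://example.org/knows> ?b }
"""
for triple in source.query(q):   # the SPARQL-defined part of the graph
    networked.add(triple)

print(list(networked))           # explicit triple plus the imported view
```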
Numerous studies show that human movements carry information about the actor. Observers are therefore able to recognize attributes such as personality, gender, and emotional state from human movements alone. On the way towards believable and realistic virtual characters, it is mainly the visual appearance of characters that has improved in recent years. Thanks to modern techniques and the rapid development of computer hardware, visually highly realistic characters can today be rendered in real-time virtual environments. Despite their visual quality, however, they are often perceived as mechanical in interactive environments. This break in the illusion of facing a living, human-like being is caused by the virtual character's lack of human behavior. Expressive movements that convey an emotional state of the character can therefore help to realize more human-like and thus more believable characters. This Diplomarbeit investigates the feasibility of a system for the automatic generation of emotionally expressive character animations. Common animation techniques are very laborious and time-consuming, so such approaches are not an option for producing all possible variations of movement in an interactive environment. To enable interactive characters that are able to express their feelings, this problem is therefore addressed in the course of this thesis. Relevant literature from research fields concerned with emotions and movement is reviewed. Properties by which humans recognize emotions in movements are implemented in an animation system in order to generate emotional movements from neutral animations. Finally, the resulting animations are evaluated in tests with respect to the recognizability of the emotions and the quality of the results.
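A minimal sketch of the core idea only, not of the thesis' actual system: an "emotional" variant of a neutral animation channel can be derived by scaling movement amplitude and tempo, two cues commonly linked to emotion perception. Keyframes and scale factors below are invented:

```python
# Derive emotional variants of a neutral keyframe channel by scaling
# amplitude (around the rest pose) and tempo. Invented example data.

neutral = [(0.0, 10.0), (0.5, 30.0), (1.0, 10.0)]  # (time s, joint angle deg)

def stylize(keys, amplitude=1.0, tempo=1.0):
    rest = keys[0][1]
    return [(t / tempo, rest + amplitude * (a - rest)) for t, a in keys]

print(stylize(neutral, amplitude=1.4, tempo=1.2))  # e.g. "joyful": bigger, faster
print(stylize(neutral, amplitude=0.6, tempo=0.8))  # e.g. "sad": smaller, slower
```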
The high infrastructure costs make testing theories about large computer networks a difficult and expensive task. One possible approach to this problem is to use virtual instead of physical infrastructure. OPNET's IT Guru is a program designed for simulating large networks and presenting the relevant information. It makes it possible to test large-scale changes or to verify theories without the expense of a physical infrastructure.
Currently more than 850 biological databases exist. The majority of biological knowledge is not contained in these databases, however, but as free text in the scientific literature. For systems biology tasks it is often necessary to integrate and extract data from heterogeneous databases and free text, as well as to analyse the information in the context of experimental data. ONDEX is an integration framework which aims to address these challenges by combining features of database integration, text mining, and sequence analysis with methods for graph-based data analysis and visualisation. The main topics of this diploma thesis are the redesign of the ONDEX backend, the development of a data exchange format, the development of a query environment, and the provision of Web services for data integration, data exchange, and queries. These Web services allow backend workflow control from both local and remote workstations.
Interactive video retrieval
(2006)
The goal of this thesis is to develop a video retrieval system that supports relevance feedback. One research question of the thesis is whether a combination of implicit and explicit relevance feedback returns better retrieval results than a system using explicit feedback only. Another is to identify a model for weighting the existing feature categories. For this purpose, a state-of-the-art analysis is presented and two systems are implemented, which run under the conditions of the international TRECVID workshop. The system will serve as a basis for further research in the field of interactive video retrieval. Amongst other things, it shall participate in the 2006 search task of the mentioned workshop.
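One standard explicit-feedback baseline such a system might build on is Rocchio reweighting, sketched below. The feature categories, vectors, and parameter values are invented, not taken from the thesis:

```python
# Rocchio-style relevance feedback: move the query vector toward shots judged
# relevant and away from those judged non-relevant. Invented example data.
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    rel = np.mean(relevant, axis=0) if len(relevant) else 0.0
    non = np.mean(nonrelevant, axis=0) if len(nonrelevant) else 0.0
    return alpha * query + beta * rel - gamma * non

query = np.array([0.2, 0.5, 0.1])        # e.g. color / texture / motion scores
relevant = np.array([[0.8, 0.4, 0.2]])   # features of shots judged relevant
nonrelevant = np.array([[0.1, 0.9, 0.0]])
print(rocchio(query, relevant, nonrelevant))  # ≈ [0.785 0.665 0.25]
```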
We aim to demonstrate that automated deduction techniques, in particular those following the model computation paradigm, are very well suited for database schema/query reasoning. Specifically, we present an approach to compute completed paths for database or XPath queries. The database schema and a query are transformed to disjunctive logic programs with default negation, using a description logic as an intermediate language. Our underlying deduction system, KRHyper, then detects whether a query is satisfiable or not. In the case of a satisfiable query, all completed paths -- those that fulfill all given constraints -- are returned as part of the computed models. The purpose of our approach is to dramatically reduce the workload on the query processor. Without the path completion, a usual XML query processor would search the database for solutions to the query. In the paper we describe the transformation in detail and explain how to extract the solution to the original task from the computed models. We understand this paper as a first step that covers a basic schema/query reasoning task by model-based deduction. Due to the underlying expressive logic formalism, we expect our approach to adapt easily to more sophisticated problem settings, like type hierarchies as they evolve within the XML world.
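Stripped of the deduction machinery, path completion over a schema can be pictured as plain search. The toy sketch below enumerates completed paths for a descendant-style query over an invented, acyclic schema; KRHyper's model-based approach is not shown:

```python
# Enumerate completed root-to-target paths in a schema given as an
# element -> children map (assumed acyclic). Invented schema and query.

def completed_paths(schema, root, target, prefix=()):
    """Yield every path from `root` to `target` in the child map `schema`."""
    path = prefix + (root,)
    if root == target:
        yield path
    for child in schema.get(root, []):
        yield from completed_paths(schema, child, target, path)

schema = {"order": ["item"], "item": ["price", "qty"]}
print(list(completed_paths(schema, "order", "price")))
# [('order', 'item', 'price')] -- the completed path for a query like //price
```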
The model evolution calculus
(2004)
The DPLL procedure is the basis of some of the most successful propositional satisfiability solvers to date. Although originally devised as a proof procedure for first-order logic, it has been used almost exclusively for propositional logic so far because of its highly inefficient treatment of quantifiers, based on instantiation into ground formulas. The recent FDPLL calculus by Baumgartner was the first successful attempt to lift the procedure to the first-order level without resorting to ground instantiations. FDPLL lifts to the first-order case the core of the DPLL procedure, the splitting rule, but ignores other aspects of the procedure that, although not necessary for completeness, are crucial for its effectiveness in practice. In this paper, we present a new calculus loosely based on FDPLL that lifts these aspects as well. In addition to being a more faithful lifting of the DPLL procedure, the new calculus contains a more systematic treatment of universal literals, one of FDPLL's optimizations, and so has the potential of leading to much faster implementations.
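For reference, the propositional procedure being lifted can be stated in a few lines. The sketch below is textbook DPLL with splitting and unit propagation, not the model evolution calculus itself:

```python
# Textbook propositional DPLL: clauses are frozensets of integer literals;
# the assignment is the tuple of literals currently set to true.

def dpll(clauses, assignment=()):
    clauses = {c for c in clauses if not any(l in assignment for l in c)}
    clauses = {frozenset(l for l in c if -l not in assignment) for c in clauses}
    if not clauses:
        return assignment                 # all clauses satisfied: a model
    if frozenset() in clauses:
        return None                       # empty clause: conflict
    unit = next((next(iter(c)) for c in clauses if len(c) == 1), None)
    lit = unit if unit is not None else next(iter(next(iter(clauses))))
    return (dpll(clauses, assignment + (lit,))          # split: lit true
            or (None if unit else                       # unit literals are forced
                dpll(clauses, assignment + (-lit,))))   # split: lit false

print(dpll({frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2, 3})}))
# prints a model, e.g. (2, 3), depending on set iteration order
```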
The Living Book is a system for the management of personalized and scenario-specific teaching material. The main goal of the system is to support active, explorative, and self-determined learning in lectures, tutorials, and self-study. The Living Book includes a course on 'logic for computer scientists' with uniform access to various tools like theorem provers and an interactive tableau editor. It is routinely used within undergraduate courses at our university. This paper describes the Living Book and the use of theorem-proving technology as a core component in the knowledge management system (KMS) of the Living Book. The KMS provides a scenario management component where teachers may describe those parts of given documents that are relevant in order to achieve a certain learning goal. The task of the KMS is to assemble new documents from a database of elementary units called 'slices' (definitions, theorems, and so on) in a scenario-based way (like 'I want to prepare for an exam and need to learn about resolution'). The computation of such assemblies is carried out by a model-generating theorem prover for first-order logic with a default negation principle. Its input consists of metadata that describe the dependencies between different slices, and logic-programming-style rules that describe the scenario-specific composition of slices. Additionally, a user model is taken into account that contains information about topics and slices that are known or unknown to a student. A model computed by the system for such input then directly specifies the document to be assembled. This paper introduces the e-learning context we are faced with, motivates our choice of logic, and presents the newly developed calculus used in the KMS.
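The dependency-closure core of such an assembly can be approximated without a prover. The sketch below mirrors only that part; slice names, dependencies, and the user model are invented, and the actual KMS uses model generation with default negation rather than this traversal:

```python
# Assemble the unknown slices needed for a learning goal, prerequisites
# first, skipping slices the user model marks as known. Invented data.

deps = {"resolution": ["clause", "unification"],
        "unification": ["substitution"],
        "clause": [], "substitution": []}
known = {"clause"}                       # user model: already mastered

def assemble(goal):
    order, seen = [], set()
    def visit(s):
        if s in seen or s in known:
            return
        seen.add(s)
        for d in deps.get(s, []):
            visit(d)                      # prerequisites first
        order.append(s)
    visit(goal)
    return order

print(assemble("resolution"))  # ['substitution', 'unification', 'resolution']
```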