004 Data processing; Computer science
This thesis has shown that it is possible to build on existing studies on perceived external reputation and on employee participation in the implementation of service innovations. It was even possible to establish how the two topics depend on each other. This was achieved with the help of further influencing factors, such as the increase in standing that employees expect from innovative behaviour and the pride employees take in being associated with their own company.
In this thesis, the methods of a feasibility study are applied to analyze whether the foundation of an academia-based startup focusing on IT consulting is possible. For this purpose, the concept of consulting, the demand for consulting services, and the relevant market are analyzed. Furthermore, empirical research through face-to-face interviews with IT companies located in the region of Koblenz is used to gain further insight into the feasibility of said business venture. The result of the research is presented as a concrete recommendation for further action.
The goal of this bachelor thesis was to develop a modern variant of the outdoor game "scavenger hunt". It should be playable on as many current smartphones running the Android operating system as possible. The playing area is limited to the Koblenz university campus, so the game also serves to familiarize players with the campus.
Users of the Campusjagd are offered a mobile application that guides them across the entire campus by means of hints and riddles until they finally reach a destination where a "treasure" is hidden. Instead of hiding paper snippets with hints around the grounds, as is customary, the Campusjagd marks the route with QR codes posted around campus. Irrelevant decoy codes are placed as well. The codes must be visited in the correct order, i.e. each code gives the player the hint to the next one. In addition, a single QR code may reveal several hints for subsequent stations.
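The hunt mechanic described in this abstract, where each code reveals hints to one or more next stations and decoy codes lead nowhere, amounts to a directed graph of stations. A minimal sketch in Python; all station names are invented for illustration:

```python
# Hypothetical station graph of the QR-code hunt: each scanned code reveals
# hints pointing to one or more next stations; decoy codes reveal nothing.
stations = {
    "start":        ["library", "cafeteria"],  # one code may hint at several stations
    "library":      ["lecture_hall"],
    "cafeteria":    ["lecture_hall"],
    "lecture_hall": ["treasure"],
    "decoy_1":      [],                        # irrelevant code: no onward hint
}

def next_hints(scanned_code):
    """Return the hints revealed by a scanned QR code (empty for decoys)."""
    return stations.get(scanned_code, [])

def is_valid_path(path):
    """Check that a sequence of scans follows the hunt in the right order."""
    return all(b in next_hints(a) for a, b in zip(path, path[1:]))

print(is_valid_path(["start", "library", "lecture_hall", "treasure"]))  # True
print(is_valid_path(["start", "treasure"]))  # False
```

Modeling the hunt as a graph also makes branching routes (several hints from one code) fall out naturally.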
Existing algorithms using a beaconless strategy for geographic routing in Unit-Disk Graphs offer an approach for improving non-beaconless routing algorithms for Quasi-Unit-Disk Graphs. The majority of the aforementioned non-beaconless routing algorithms for Quasi-Unit-Disk Graphs are based on collecting 2-hop neighbourhood information. As shown by the Beaconless Clustering Algorithm developed in this thesis, a beaconless strategy can be used to enhance an existing non-beaconless algorithm by reducing message overhead and power consumption. The Beaconless Clustering Algorithm is based on geographic clustering and constructs a sparse graph with a constant number of vertices per unit area. The thesis at hand contains a detailed description of the algorithm, a proof of correctness, and a simulation presenting the achievable improvements.
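The grid-based geographic clustering that such sparse-topology constructions rely on can be sketched as follows. This is a generic illustration of keeping one representative node per unit cell, which yields a constant number of vertices per unit area; it is not the thesis's actual Beaconless Clustering Algorithm:

```python
import math

def cell_of(x, y, r):
    """Map a position to the index of its square grid cell of side length r."""
    return (math.floor(x / r), math.floor(y / r))

def cluster(nodes, r):
    """Geographic clustering sketch: keep one representative node per cell
    (here simply the first node seen in each cell)."""
    reps = {}
    for node in nodes:
        c = cell_of(node[0], node[1], r)
        reps.setdefault(c, node)
    return list(reps.values())

nodes = [(0.1, 0.2), (0.3, 0.1), (1.5, 0.4), (1.6, 0.6), (2.9, 2.9)]
print(cluster(nodes, 1.0))  # one representative per occupied cell
```

Since each cell keeps at most one vertex, the density of the resulting graph is bounded regardless of how many nodes the original network packs into an area.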
This bachelor thesis deals with the design and implementation of face recognition software that is able to detect and log streams of people. Starting from the specific requirements of image processing, the resulting software architecture and its implementation are presented. In addition to the implementation, a web interface is developed that is intended to simplify the administration of the data. Subsequently, further face recognition methods are presented and compared against the method employed. Finally, the implemented software is evaluated.
Extending the Design and Implementation of a Screening Application for Mobile Devices
(2014)
In this bachelor thesis, an existing generic concept and an existing prototype for a smartphone application to record, monitor, and document physical symptoms or observations of the human body are extended. The existing functionality is complemented based on an analysis of the previous prototype. The concept and its function modules, which are implemented in the existing prototype for the mobile platform Android, are extended based on the weaknesses identified in the analysis. The resulting prototype and generic concept are evaluated, and optimizations and extensions are collected for further projects.
This master thesis is about the possibilities of supporting local corporate sales with the help of current mobile applications. The internet has already made the trading market more dynamic, and the conditions for long-term local corporate sales have become more challenging. Because of leaner cost structures, online retailers offer prices with which local points of sale can hardly keep up. Another point is that more customers decide to order online because the service in e-shops has improved; digital transactions have therefore become more attractive for consumers. Today, smartphones and tablets have taken digitalization to a whole new level. With the possibility of the mobile web, the effects that the internet already showed us in the past have been intensified. The question that arises here is: in which way do the conditions of competition for local corporate sales change? This thesis compiles various mobile services, their functions, and their practical usage, as well as the process of integrating them successfully into marketing. With that, one should be able to find out whether the mobile web can be seen as an advantage for local corporate sales.
In the age of Web 2.0, the internet as a platform to provide services or sell goods has gained more attention in companies. Customer forums in particular can be a useful tool for customer support: by providing them, companies can reduce their support costs dramatically. Nevertheless, it is often difficult for companies to measure the success of customer forums, and the determinants of this success have seldom been studied in the research literature. The purpose of this bachelor thesis is to fill this research gap by applying the model of Lin and Lee for measuring the success of online communities to customer forums. In addition, metrics for measuring success within this model are discussed. The discussion leads to the conclusion that the model of Lin and Lee is largely applicable to customer forums and provides a useful approach to measuring success both in theory and in practice, as demonstrated by the example of the 1&1 customer forum. The metrics found in the theoretical part proved quite relevant in practice. Nevertheless, future research should focus more on monetary indicators to make the success of customer forums financially assessable as well.
In this research project and M.Sc. study, a guideline for the examination of business models is developed, focusing on young, innovative enterprises ("start-ups"). Start-ups often begin operating under uncertainty, and forecasting the success of such an enterprise by means of quantitative data is therefore hardly possible today. The evaluation of innovative business models ("Business Model Check") remains a gap in business administration and management studies.
The goal of this thesis is the development of methods for augmented image synthesis using 3D photo collections. 3D photo collections are representations of real scenes automatically generated from single photos and describe a scene as a set of images with known camera poses as well as a sparse point-based model of the scene geometry. The main goal is to perform a photo-realistic augmented image synthesis of real and virtual parts, where the real scene is provided as a 3D photo collection. Therefore, three main problems are addressed.
Since the photos may be represented in different device-specific RGB color spaces, a color characterization of the 3D photo collections is necessary to gain correct color information that is consistent with human perception. The proposed novel method automatically transforms all images into a common RGB color space and thereby simplifies color characterization of 3D photo collections.
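The common step behind such a characterization is a linear 3x3 matrix transform between RGB color spaces. As a hedged illustration, the sketch below applies the standard linear sRGB to CIE XYZ (D65) matrix; the thesis's actual per-device characterization method is not reproduced here:

```python
# Standard 3x3 matrix mapping linear sRGB to CIE XYZ (D65 white point).
SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def apply_matrix(m, rgb):
    """Multiply a 3x3 color matrix with a linear RGB triple."""
    return tuple(sum(m[i][j] * rgb[j] for j in range(3)) for i in range(3))

# Linear white (1, 1, 1) maps to the D65 white point (~0.9505, 1.0, 1.0890).
print(apply_matrix(SRGB_TO_XYZ, (1.0, 1.0, 1.0)))
```

Transforming every image through its device matrix into one such common space is what makes colors from different cameras comparable.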
As a main problem for augmented image synthesis, all environmental lighting has to be known in order to apply illumination to virtual parts that is consistent with the real portions shown in the photos. To solve this problem, two novel methods were developed to reconstruct the lighting from 3D photo collections.
In order to perform image synthesis for arbitrary views of the scene, an image-based approach was developed that generates new views in 3D photo collections by making direct use of their point cloud. The novel method creates new views in real time and allows free navigation.
In conclusion, the proposed novel methods show that 3D photo collections are a useful representation for real scenes in Augmented Reality and they can be used to perform a realistic image synthesis of real and virtual portions.
The availability of digital cameras and the possibility to take photos at no cost have led to an increasing amount of digital photos online and on private computers. The sheer amount of data makes approaches that support users in the administration of their photos necessary. As the automatic understanding of photo content is still an unsolved task, metadata is needed for supporting administrative tasks like search or photo work such as the generation of photo books. Such meta-information textually describes the depicted scene or consists of information on how good or interesting a photo is.
In this thesis, an approach for creating meta-information without additional effort for the user is investigated. Eye tracking data is used to measure the human visual attention. This attention is analyzed with the objective of information creation in the form of metadata. The gaze paths of users working with photos are recorded, for example, while they are searching for photos or while they are just viewing photo collections.
Eye tracking hardware has been developing fast over the last years. Because of falling prices for sensor hardware such as cameras and increasing competition on the eye tracker market, eye trackers are becoming cheaper and their usability is increasing. It can be assumed that eye tracking technology will soon be usable in everyday devices such as laptops or mobile phones. The exploitation of data recorded in the background while the user is performing daily tasks with photos has great potential to generate information without additional effort for the users.
The first part of this work deals with the labeling of image regions by means of gaze data for describing the depicted scenes in detail. Labeling takes place by assigning object names to specific photo regions. In total, three experiments were conducted for investigating the quality of these assignments in different contexts. In the first experiment, users decided whether a given object could be seen in a photo by pressing a button. In the second study, participants searched for specific photos in an image search application. In the third experiment, gaze data was collected from users playing a game with the task of classifying photos with regard to given categories. The results of the experiments showed that gaze-based region labeling outperforms baseline approaches in various contexts. In the second part, the most important photos in a collection are identified by means of visual attention for the creation of individual photo selections. Users freely viewed photos of a collection without any specific instruction on what to fixate, while their gaze paths were recorded. By comparing gaze-based and baseline photo selections to manually created selections, the worth of eye tracking data in the identification of important photos is shown. In the analysis, the characteristics of gaze data, for example its inaccuracy and ambiguity, have to be considered. The aggregation of gaze data collected from several users is one suggested approach for dealing with this kind of data.
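The aggregation idea behind gaze-based region labeling can be sketched as counting fixations per region and assigning the label to the most-fixated region. The region boxes and fixation points below are invented for illustration, not data from the experiments:

```python
from collections import Counter

def label_region(fixations, regions):
    """fixations: (x, y) points aggregated over users;
    regions: name -> (x0, y0, x1, y1) bounding boxes.
    Returns the name of the region collecting the most fixations."""
    counts = Counter()
    for x, y in fixations:
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                counts[name] += 1
    return counts.most_common(1)[0][0] if counts else None

regions = {"dog": (0, 0, 50, 50), "tree": (60, 0, 100, 80)}
fixations = [(10, 10), (20, 30), (70, 40), (15, 25)]
print(label_region(fixations, regions))  # most fixations fall inside "dog"
```

Aggregating fixations from several users before counting, as the abstract suggests, smooths over the inaccuracy and ambiguity of individual gaze paths.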
The results of the performed experiments show the value of gaze data as a source of information. It allows one to benefit from human abilities where algorithms still have problems performing satisfyingly.
Tiny waves driven by wind, long shallow-water waves, overlapping seas: all of these waves occur in every ocean and even in small lakes. The surface of water is one of the most versatile phenomena in nature. Not only the movement of waves but also the reflection of sky, sun, and coastline makes the surface of water unique. Exactly this complexity brings its own challenges to the simulation of water surfaces, which is why the simulation of water has occupied mathematicians for nearly 400 years.
In the last fifty years, this challenge has increasingly shifted to computer science, and computer graphics researchers have tried to visualize water realistically for decades. Approaches in this field range from simple noise filters to mathematically complex solutions such as the Fourier transform.
In the following work, the historical background of today's wave theories as well as their mathematical fundamentals are presented. The focus of this work is the implementation of these methods in OpenGL 3.3.
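The simplest of these wave models, a sum of sine waves, can be sketched as a height function of position and time. The amplitudes, wavelengths, speeds, and directions below are illustrative values; the Fourier-based approach mentioned above generalizes this to a whole spectrum of such waves:

```python
import math

# Each wave: (amplitude, wavelength, speed, direction (dx, dy)).
WAVES = [
    (0.30, 8.0, 1.0, (1.0, 0.0)),
    (0.15, 3.0, 1.5, (0.7, 0.7)),
]

def height(x, y, t):
    """Water surface height at position (x, y) and time t,
    summed over all component sine waves."""
    h = 0.0
    for a, wavelength, speed, (dx, dy) in WAVES:
        k = 2.0 * math.pi / wavelength            # wavenumber
        phase = k * (dx * x + dy * y) - speed * k * t
        h += a * math.sin(phase)
    return h

print(round(height(0.0, 0.0, 0.0), 6))  # 0.0 at the origin at t = 0
```

In an OpenGL implementation, the same function would typically be evaluated per vertex in a shader to displace a flat grid mesh.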
Community platforms on the internet use code-based governance to manage their high volume of user contributions. This includes all kinds of functionality with which the community can directly or indirectly assess user contributions in some form. This thesis first explains the meaning of code-based governance and the various functionalities that can be used for it. Subsequently, the 50 most successful community platforms are examined for code-based governance. The result reveals the interrelations between the structure of a platform, the nature of its user contributions, and the code-based governance that can be exercised on them.
This bachelor thesis addresses the question of whether a jump'n'run game with sensor control for Android devices is useful. To this end, a game was developed that is controlled both with and without sensors at different levels. In a second version, the game is controlled entirely by means of sensors, so that the two control schemes can later be compared. It is explained how the game was planned, designed, and examined, and it is also checked whether games with sensor control already exist. The engine used to develop the game is introduced as well. Finally, an evaluation based on a user test of the game's playability in terms of its controls is carried out.
In recent years, touchscreen devices have become increasingly widespread. Their operation differs fundamentally from that with mouse and keyboard. Because input is given with gestures or several fingers, it can be difficult to follow another person's actions. Problems arise from the input hand occluding the screen: if only the screen content is visible, for example in a video transmission, information about the input is lost.
In this thesis, a system is developed that is intended to improve collaborative work on touchscreen devices located at a distance from one another. To this end, a graphical representation of the input hand is created from the depth data of a Kinect sensor. By overlaying this visualization, it should become easier for a user to follow another user's input. Interaction concepts such as gestures should thereby be conveyed better, and it should become possible to exchange information about a shared problem more efficiently. For this purpose, a test system with two workstations was developed, in which one user takes the role of the instructor and guides a second user, the executor, through various test scenarios. In some of the tasks the visualization of the hand is available to the instructor, while in other tasks he can only communicate verbally with his counterpart.
In an evaluation, the system is examined with regard to its efficiency for operating touchscreen systems. Furthermore, it is investigated to what extent the graphical quality meets the stated requirements in order to provide added value for the application.
The market for mobile devices is developing rapidly, and children come into contact with such technologies from a very early age. It is therefore important to introduce children to these devices properly. Integrating smartphones and tablets into the classroom would benefit the learning process. This thesis therefore deals with the concept of a learning-game app that can be configured by educators. The evaluation is intended to provide insight into the children's motivation and to determine how open the educators are towards new media.
German politicians have identified a need for greater citizen involvement in decision-making than in the past, as confirmed by a recent study of German parliamentarians ("DEUPAS"). As in other forms of social interaction, the internet provides significant potential to serve as the digital interface between citizens and decision-makers: in the recent past, dedicated electronic participation ("e-participation") platforms, e.g. dedicated websites, have been provided by politicians and governments in an attempt to gather citizens' feedback and comments on a particular issue or subject. Some of these have been successful, but a large proportion of them are grossly under-used; often only small numbers of citizens use them. Over the same period, society's enthusiasm for social networks has grown, and their use is now commonplace. Many citizens use social networks such as Facebook and Twitter for all kinds of purposes, and in some cases to discuss political issues.
Social networks are therefore obviously attractive to politicians: from local government to federal agencies, politicians have integrated social media into their daily work. However, there is a significant challenge regarding the usefulness of social networks: the continuous increase in digital information. Social networks contain vast amounts of information, and it is impossible for a human to manually filter the relevant information from the irrelevant (so-called "information overload"). Even using the search tools provided by social networks, it is still a huge task for a human to determine meanings and themes from the multitude of search results. New technologies and concepts have been proposed that provide summaries of masses of information through lexical analysis of social media messages and therefore promise an easy and quick overview of the information.
This thesis examines the relevance of the results of these analyses for use in everyday political life, with the emphasis on the social networks Facebook and Twitter as data sources. We make use of the WeGov Toolbox and its analysis components, which were developed during the EU project WeGov. The assessment was performed in consultation with actual policy-makers from different levels of German government: policy-makers from the German Federal Parliament, the State Parliament of North Rhine-Westphalia, the State Chancellery of the Saarland, and the cities of Cologne and Kempten all took part in the study. Our method was to execute the analyses on data collected from Facebook and Twitter and to present the results to the policy-makers, who then evaluated them using a mixture of qualitative methods.
The responses of the participants have provided us with some useful conclusions:
1) None of the participants believe that e-participation is possible in this way, but they confirm that "citizen-friendliness" can be supported by this approach.
2) The most likely users of the summarisation tools are those who have experience with social networks but are not "power users". The reason is that "power users" already know the relevant information provided by analysis tools, while without any experience of social networks it is hard to interpret the analysis results correctly.
3) The evaluation considered geographical aspects, relating them, for example, to a politician's constituency as a local area of social networks. Comparing rural to urban areas shows that the amount of relevant political information in rural areas is low: while the proportion of publicly available information in urban areas is relatively high, it is much lower in rural areas.
The findings that result from the engagement with policy-makers will be systematically surveyed and validated within this thesis.
The diploma thesis "Entwicklung eines Telemedizinregister-Anforderungskatalog" deals with the creation of a requirements catalogue for the development of a register applicable in the telemedical domain to support billing processes. In the German healthcare system, these processes are carried out between telemedical service providers and payers in the context of integrated care in order to settle the financing of telemedical treatments. The telemedicine register serves as a data store that holds copies of treatment data from telemedical service providers and logs their processing within the register. The payers involved are granted access to this register so that they can verify the validity of the therapy data submitted to them for analysis by telemedical service providers. The thesis describes the theoretical foundations of data protection and telemedicine, from which requirements lists and a target model of a telemedicine register are derived. This model consists of data models and process descriptions and is verified by means of a practical example of a telemedical treatment. A further part of the design of the telemedicine register is the integration of various standards that can be used in data exchange processes, for which possible fields of application for extending the functionality are described.
Web 2.0 provides technologies for online collaboration of users as well as the creation, publication, and sharing of user-generated content in an interactive way. Twitter, CNET, and CiteSeerX are examples of Web 2.0 platforms that facilitate these activities and are viewed as rich sources of information. On such platforms, users can participate in discussions, comment on others, provide feedback on various issues, publish articles, and write blogs, thereby producing a high volume of unstructured data which at the same time leads to an information overload. Satisfying the various types of human information needs arising from the purpose and nature of these platforms requires methods for appropriate aggregation and automatic analysis of this unstructured data. In this thesis, we propose methods which attempt to overcome the problem of information overload and help in satisfying user information needs in three scenarios.
To this end, we first look at two of the main challenges in Twitter, sparsity and content quality, and how these challenges can influence standard retrieval models. We analyze and identify Twitter content features that reflect high-quality information. Based on this analysis, we introduce the concept of "interestingness" as a static quality measure. We empirically show that our proposed measure helps in retrieving and filtering high-quality information in Twitter. Our second contribution relates to the content diversification problem in a collaborative social environment, where the motive of the end user is to gain a comprehensive overview of the pros and cons of a discussion track which results from social collaboration. For this purpose, we develop the FREuD approach, which aims at solving the content diversification problem by combining latent semantic analysis with sentiment estimation approaches. Our evaluation results show that the FREuD approach provides a representative overview of sub-topics and aspects of discussions, characteristic user sentiments under different aspects, and reasons expressed by different opponents. Our third contribution presents a novel probabilistic Author-Topic-Time model, which aims at mining topical trends and user interests from social media. Our approach solves this problem by means of Bayesian modeling of relations between authors, latent topics, and temporal information. We present results of applying the model to scientific publication datasets from CiteSeerX, showing improved semantically cohesive topic detection and capturing shifts in authors' interests in relation to topic evolution.
This thesis deals with quality assurance of model-based SRS, in particular SRS-Models and SRS-Diagrams. The interesting thing about model-based SRS is that they are generated by a documentation generator from the following input data: the SRS-Model, SRS-Diagrams, and texts external to the model. To assure the quality of the documentation, the quality of these four factors must therefore be assured: the SRS-Model, the SRS-Diagrams, the external texts, and the documentation generator. The goal of this thesis is to define a notion of quality for SRS-Models and -Diagrams and to show an approach for automatically realizing quality testing, measurement, and assessment for the modelling tool Innovator.
Diffusion imaging captures the movement of water molecules in tissue by applying varying gradient fields in a magnetic resonance imaging (MRI) setting. It makes a crucial contribution to in vivo examinations of neuronal connections: the local diffusion profile enables inference of the position and orientation of fiber pathways. Diffusion imaging is a significant technique for fundamental neuroscience, in which pathways connecting cortical activation zones are examined, and for neurosurgical planning, where fiber reconstructions are considered as intervention-related risk structures.
Diffusion tensor imaging (DTI) is currently applied in clinical environments in order to model the MRI signal due to its fast acquisition and reconstruction time. However, the inability of DTI to model complex intra-voxel diffusion distributions gave rise to an advanced reconstruction scheme which is known as high angular resolution diffusion imaging (HARDI). HARDI received increasing interest in neuroscience due to its potential to provide a more accurate view of pathway configurations in the human brain.
In order to fully exploit the advantages of HARDI over DTI, advanced fiber reconstructions and visualizations are required. This work presents novel approaches contributing to current research in the field of diffusion image processing and visualization. Diffusion classification, tractography, and visualization approaches were designed to enable a meaningful exploration of neuronal connections as well as their constitution. Furthermore, an interactive neurosurgical planning tool that takes neuronal pathways into consideration was developed.
The research results in this work provide an enhanced and task-related insight into neuronal connections for neuroscientists as well as neurosurgeons and contribute to the implementation of HARDI in clinical environments.
The way information is presented to users in online community platforms influences the way users create new information. This is the case, for instance, in question-answering forums, crowdsourcing platforms, and other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this model, we demonstrate its ability to evoke macro phenomena comparable to those observed in real-world data.
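A toy instance of such a generative simulation, not the paper's actual model, shows how a presentation policy alone can produce macro effects: ranking items by current popularity concentrates votes on a few items (a rich-get-richer effect), while random presentation spreads them evenly. All parameters are illustrative:

```python
import random

def simulate(policy, n_items=10, n_users=2000, seed=7):
    """Simulate users voting on items presented under a given policy."""
    random.seed(seed)
    votes = [1] * n_items                          # smoothing prior
    for _ in range(n_users):
        if policy == "ranked":
            order = sorted(range(n_items), key=lambda i: -votes[i])
        else:
            order = random.sample(range(n_items), n_items)
        for i in order:                            # inspect items top-down
            if random.random() < 0.3:              # attention decays quickly
                votes[i] += 1
                break
    return votes

ranked = simulate("ranked")
uniform = simulate("random")
# Concentration: share of votes captured by the single most popular item.
print(max(ranked) / sum(ranked), max(uniform) / sum(uniform))
```

Even this crude behaviour model reproduces the qualitative macro phenomenon: under the ranked policy the top item captures a far larger vote share than under random presentation.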
This thesis describes the integration of a business intelligence solution into an existing social software platform. First, the terms business intelligence and social software, their structure, and their components are explained. This is followed by an analysis of the current situation of the target group through interviews, whose results are transformed into a requirements list in the target design. Finally, the identified requirements are checked and tested against the final installation to determine whether the target group's expectations and their ideas of business intelligence can be realized.
The result of this work is an installed business intelligence solution within a social software platform. It is intended to give an overview of what is already possible with the latest version of the software and to point out critically where strengths and weaknesses lie that should be considered in future versions.
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there has been no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decisions. We found significant differences between the modeling strategies, which range from reusing popular vocabularies and minimizing the number of vocabularies to staying within one domain vocabulary. A very interesting insight is that popularity in the sense of how frequently a vocabulary is used in a data source is more important than how often individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding the strategies by which data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
This bachelor thesis deals with merging the already existing angle reconstruction and simulation components and extends them with functions that enable systematic tests. For this purpose, images are passed from the simulation component to the angle reconstruction component. Furthermore, a GUI for test-run control and parameter passing is added, as well as a database connection for storing the settings used and the data generated. The analysis of the generated data shows a sufficient average precision of 0.15° and a maximum deviation of the individual angles of 0.6°. The largest total error in the test runs amounts to 0.8°. The influence of erroneous parameters differs from variable to variable: an error in the height amplifies the measurement error many times more than an error in the length of the drawbar.
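The reported figures are aggregate statistics over test runs; a minimal sketch of computing them from reconstructed versus true angles (the sample values below are invented, while the thesis reports about 0.15° mean and 0.6° maximum deviation):

```python
def angle_errors(true_angles, reconstructed):
    """Return (mean absolute error, maximum deviation) in degrees."""
    errors = [abs(t - r) for t, r in zip(true_angles, reconstructed)]
    return sum(errors) / len(errors), max(errors)

true_angles   = [10.0, 25.0, 40.0, 55.0]   # ground-truth angles (invented)
reconstructed = [10.1, 24.9, 40.3, 54.8]   # reconstructed angles (invented)
mean_err, max_err = angle_errors(true_angles, reconstructed)
print(round(mean_err, 3), round(max_err, 3))  # 0.175 0.3
```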
Object recognition is a well-investigated area in image-based computer vision, and several methods have been developed. Approaches based on Implicit Shape Models have recently become popular for recognizing objects in 2D images; they separate objects into fundamental visual parts and the spatial relationships between the individual parts, and this knowledge is then used to identify unknown object instances. However, since the emergence of affordable depth cameras like the Microsoft Kinect, recognizing unknown objects in 3D point clouds has become an increasingly important task. In the context of indoor robot vision, an algorithm is developed that extends existing Implicit Shape Model approaches to the task of 3D object recognition.
Incentive factors of knowledge utilization for universities and small and medium-sized enterprises
(2014)
This scientific paper identifies and describes the incentives for the utilization of knowledge for universities and small and medium-sized companies. In addition, different models, for example the Knott/Wildavsky model, are continuously adapted, created and expanded, which leads to a new integrative model. The main problem is that companies have to integrate knowledge from external sources into their operations. According to the literature, this model of open innovation is considered inevitable in order to remain competitive; this is especially the case for small and medium-sized companies. The reasons for this are illustrated, as are the possibilities of a successful collaboration. In international comparison, Germany shows a relatively high involvement in knowledge and technology transfer. Nevertheless, a number of companies assume their institution won't benefit from knowledge utilization.
The literature review revealed that there is no existing model that combines the stages of knowledge utilization with the incentives of universities and companies. This paper closes the identified gap with the created integrative model. The formulated incentive factors can help universities and companies recognize whether cooperative research is beneficial or not.
At the beginning, the basic theoretical foundations are defined on the basis of the literature review, followed by a description of the incentive factors of knowledge exploitation. On the one hand, a distinction is made between tangible and intangible incentives; on the other hand, extrinsic and intrinsic motivation are separated. Both are important with regard to the motivation of employees and scientists. In the end, a knowledge utilization model is presented and adapted to the present case, before the model is extended with an additional perspective and the incentive factors are added.
Through the described procedure, an integrative model is created. It can be useful to all affected parties: universities and their scientists as well as small and medium-sized companies and their employees.
This thesis covers the mathematical background of ray casting as well as an exemplary implementation on graphics processing units using a modern programming interface. The implementation is embedded in an editor that enables the user to activate optimizations of the algorithm. Techniques like transfer functions and local illumination are available for a more realistic visualization of materials. Moreover, the user interface gives access to features like importing volumes, defining a custom transfer function, adjusting rendering parameters, and activating further techniques that are also discussed in this thesis. The benefit of every presented technique is measured, be it visual or in terms of performance.
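A transfer function of the kind mentioned above can be sketched as a simple lookup that maps a scalar density to color and opacity. The control points below are invented for illustration and are not taken from the thesis:

```python
import numpy as np

# Invented control points: (density, RGBA); values between them are interpolated.
control_points = [
    (0.0, (0.0, 0.0, 0.0, 0.0)),   # air: fully transparent
    (0.3, (0.8, 0.5, 0.3, 0.1)),   # soft tissue: faint, mostly transparent
    (1.0, (1.0, 1.0, 1.0, 0.9)),   # bone: bright and nearly opaque
]
densities = np.array([p[0] for p in control_points])
colors = np.array([p[1] for p in control_points])

def transfer(density):
    """Map a scalar density in [0, 1] to an (R, G, B, A) tuple."""
    return tuple(float(np.interp(density, densities, colors[:, c]))
                 for c in range(4))
```

During ray casting, such a function is evaluated at every sample along the ray and the resulting colors are composited front to back.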
This diploma thesis describes the concept and implementation of a software router for policy-based Internet regulation. It is based on the ontology InFO described by Kasten and Scherp, which is designed for a system-independent description of regulation mechanisms. In addition, InFO enables transparent regulation by linking background information to the regulation mechanisms. The InFO extension RFCO adds router-specific entities to the ontology. A software router is developed that implements RFCO at the IP level. The regulation is designed to be transparent in that the router informs affected users about the regulation measures. The router implementation is tested exemplarily in a virtual network environment.
Remote rendering services offer the possibility to stream high-quality images to lower-powered devices. Because the data has to be transmitted, the interactivity of such applications suffers from a delay. One method to reduce the delay of camera manipulation on the client is 3D warping, which, however, causes artifacts. This thesis presents different approaches to remote rendering setups, describes the artifacts and improvements of the warping method, and implements and analyzes methods to reduce these artifacts.
This paper explains convolution reverb, a method that enables users to add realistic-sounding reverberation to audio material that was recorded in neutral-sounding rooms. In particular, the possibility of computing the effect on the GPU using OpenCL is discussed in order to exploit the high concurrency of the problem. The paper aims at the development of a VST plugin that utilizes the GPU-accelerated convolution algorithm, so that it can be used in audio software solutions.
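The thesis targets a GPU implementation via OpenCL; as a minimal CPU-side sketch of the underlying operation, convolving a dry signal with a room impulse response can be done in the frequency domain (multiply the spectra, transform back):

```python
import numpy as np

def convolve_reverb(dry, impulse_response):
    """Frequency-domain convolution: multiply the spectra, then transform back.
    The result has length len(dry) + len(impulse_response) - 1, matching
    direct (time-domain) convolution."""
    n = len(dry) + len(impulse_response) - 1
    size = 1 << (n - 1).bit_length()       # zero-pad to the next power of two
    spectrum = np.fft.rfft(dry, size) * np.fft.rfft(impulse_response, size)
    return np.fft.irfft(spectrum, size)[:n]

dry = np.array([1.0, 0.5, -0.25])          # a short "dry" test signal
unit_impulse = np.array([1.0])             # a "room" that adds no reverb at all
wet = convolve_reverb(dry, unit_impulse)   # identical to the dry signal
```

Real impulse responses are tens of thousands of samples long, which is what makes the FFT-based (and, in the thesis, GPU-parallelized) formulation attractive over direct convolution.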
Because of an industry-wide need to escape head-on competition, Kim and Mauborgne developed the Blue Ocean Strategy for opening up new markets, a strategy they describe as unique. Since other strategies for opening up new markets exist as well, the goal of this thesis is to determine by which characteristics the Blue Ocean Strategy can actually be regarded as unique.
The strategy of Kim and Mauborgne is therefore compared with Schumpeter's creative destruction, Ansoff's diversification strategy, Porter's niche strategy, and Drucker's innovation strategies. The comparison draws on the characteristics by which Kim and Mauborgne judge the Blue Ocean Strategy to be unique. On the basis of these criteria, a meta-model is developed with whose help the analysis is carried out.
The comparison shows that the concepts of Schumpeter, Ansoff, Porter, and Drucker resemble the Blue Ocean Strategy in some criteria, but none of the strategies behaves like the concept of Kim and Mauborgne in every respect. While the Blue Ocean Strategy pursues differentiation and cost reduction at the same time, most concepts aim at either differentiation or cost reduction. The entry into a new market is also interpreted differently: while the Blue Ocean Strategy targets a market that is unexplored and therefore free of competition, the other strategies often interpret existing markets, in which the company has simply not operated so far, as new; the prior existence of such markets is not ruled out.
On the basis of the insights gained from the comparison, the Blue Ocean Strategy can thus be called unique.
Data Mining in Soccer
(2014)
The term Data Mining describes applications that extract useful information from large datasets. Since the 2011/2012 season of the German soccer league, extensive data from the first and second Bundesliga have been recorded and stored, with up to 2,000 events recorded per game.
The question arises whether Data Mining can be used to extract patterns from this extensive data that could be useful to soccer clubs.
In this thesis, Data Mining is applied to data from the first Bundesliga to measure the value of individual soccer players for their club. For this purpose, the state of the art and the available data are described. Furthermore, classification, regression analysis and clustering are applied to the available data. The thesis focuses on qualitative characteristics of soccer players, such as nomination for the national squad or the marks players receive for their playing performance. Additionally, it considers the playing style of the available players and examines whether predictions for upcoming seasons are possible. The value of individual players is determined using regression analysis and a combination of cluster analysis and regression analysis.
Even though not all applications achieve sufficient results, this thesis shows that Data Mining has the potential to be applied to soccer data. The value of a player can be measured with the two approaches, allowing a simple visualization of a player's importance for his club.
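The regression-based approach can be sketched as fitting a linear model that relates per-match event statistics to a performance score. The features, players and numbers below are invented for illustration and do not come from the Bundesliga dataset:

```python
import numpy as np

# Invented per-match averages per player: [passes completed, duels won, shots].
X = np.array([[40.0, 10.0, 1.0],
              [55.0, 14.0, 2.0],
              [30.0,  8.0, 0.0],
              [60.0, 16.0, 3.0]])
# Invented target: a performance score, higher = more valuable.
y = np.array([2.9, 3.8, 2.2, 4.3])

# Ordinary least squares with an intercept column appended to the features.
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def player_value(features):
    """Predict a score for a new player from the fitted linear model."""
    return float(np.append(features, 1.0) @ coef)
```

The fitted coefficients then indicate how strongly each event type contributes to the score, which is the kind of interpretation the thesis draws from its regression models.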
The initial problem that motivated this thesis is the lack of a way for students of the research group BAS to present finished theses. Many finished theses are only available in a printed version; some students created their own websites, but these are not uniform.
The first step towards solving this problem is to create an overall research design. The research design of this thesis is based on the construction-oriented approach of design science research by Hevner [2007]. The initial problem is solved by creating a Web 2.0 website using the open source content management system Drupal. For the implementation of the target system, a set of requirements is collected using various methods such as mock-ups, interviews, collaboration scenarios and personas. To meet the collected requirements, a set of additional modules is added to the core version of Drupal. This extended version of Drupal is then scenario and user tested. One result of this work is a deployable prototype with which various theses can be presented; a further result is a set of user guides that describe the operation of the prototype. The thesis finishes with a conclusion and an outlook on the further use of the prototype.
As part of this bachelor thesis, an IT-supported prototype (an Excel application) was developed that supports complex decision-making on the basis of utility analysis. It is suitable for evaluating all kinds of business application systems and, beyond that, for other business decisions, since the underlying utility analysis is universally applicable. The prototype covers and identifies 13 groups of characteristics with a total of 100 characteristics for groupware. An additional 20-minute tutorial video explains its use and functionality step by step. All groups and characteristics were weighted by an external expert who was interviewed for this purpose. With the help of the resulting extensive catalogue, groupware products can be compared more efficiently and more meaningfully in the future. The tool is a further development in the field of utility analysis and helps to create characteristics and groups intuitively and comparatively and to carry out a utility analysis. The result is a benchmark with versatile filter options that allows both tabular and graphical evaluation.
However, the expert interview and the review of the literature also made clear that utility analysis must not be the only argument or instrument in decision-making. Zangemeister, a systems engineer and expert in multidimensional evaluation and decision-making, notes: "Utility models must not be regarded as a substitute but, first of all, as an important complement to the other models that can serve to systematically reduce the decision problem when selecting project alternatives" [Zangemeister 1976, p. 7]. All in all, by structuring the evaluation process into sub-aspects, utility analysis provides a qualitatively better overview of the problem to be evaluated and yields an expressive compilation and evaluation with detailed information about the objects under assessment.
Next word prediction is the task of suggesting the most probable word a user will type next. Current approaches are based on the empirical analysis of corpora (large text files) resulting in probability distributions over the different sequences that occur in the corpus. The resulting language models are then used for predicting the most likely next word. State-of-the-art language models are based on n-grams and use smoothing algorithms like modified Kneser-Ney smoothing in order to reduce the data sparsity by adjusting the probability distribution of unseen sequences. Previous research has shown that building word pairs with different distances by inserting wildcard words into the sequences can result in better predictions by further reducing data sparsity. The aim of this thesis is to formalize this novel approach and implement it by also including modified Kneser-Ney smoothing.
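As a minimal sketch of the underlying idea, a maximum-likelihood bigram predictor can be written as follows; the smoothing the thesis actually employs (modified Kneser-Ney) and the wildcard-word extension are deliberately omitted here for brevity:

```python
from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count continuations: counts[w1][w2] = number of times w2 follows w1."""
    counts = defaultdict(Counter)
    for w1, w2 in zip(tokens, tokens[1:]):
        counts[w1][w2] += 1
    return counts

def predict_next(counts, word):
    """Return the most frequent continuation of `word`, or None if unseen."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept".split()
model = train_bigrams(corpus)
next_word = predict_next(model, "the")   # "cat" follows "the" twice, "mat" once
```

Smoothing addresses exactly the `None` case above: instead of failing on unseen sequences, probability mass is redistributed to them.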
In human-machine interaction, the tracking and identification of individuals plays an important role. In this work, a framework was created for the service robot Lisa of the Active Vision Group that combines different methods for the detection, tracking and identification of individuals. First, leg detection on a 2D laser scan is performed to establish hypotheses for people; each hypothesis then needs to be confirmed by an analysis of the Kinect point cloud. After successful confirmation, online boosting on RGB data is performed for identification. The leg data is also fed into a linear Kalman filter to estimate the movement of people. The combination of the Kalman filter with leg detection and online boosting is intended to enable people tracking and to prevent tracked persons from being confused with one another due to brief occlusion or faulty association of legs.
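The linear Kalman filter used for motion estimation can be sketched in one dimension with a constant-velocity model; the noise parameters and the measurement setup below are illustrative assumptions, not the configuration used in the thesis:

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=1e-3, r=0.05):
    """One predict/update cycle of a linear Kalman filter with a
    constant-velocity model. State x = [position, velocity]."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # only the position is measured
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance
    x = F @ x                               # predict state
    P = F @ P @ F.T + Q                     # predict covariance
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + K @ y                           # corrected state
    P = (np.eye(2) - K @ H) @ P             # corrected covariance
    return x, P

# Feed the filter positions of a target moving at 1 unit/s; the velocity is
# never measured directly but emerges in the state estimate.
x, P = np.array([0.0, 0.0]), np.eye(2)
for k in range(1, 51):
    x, P = kalman_step(x, P, np.array([k * 0.1]))
```

In the framework, the measurements come from the leg detector, and the predicted state bridges moments in which a person is briefly occluded.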
The architecture of decentralized digital transaction systems with a public transaction history provides no transaction monitoring to prevent unwanted transactions or to identify the sender and receiver of such transactions. With the introduction of a public list of unwanted addresses, it becomes possible to isolate these addresses by general exclusion and thereby to prevent unwanted transactions, as well as to identify the owners of unwanted addresses. The list can be managed by multiple decentralized instances using a trust network, so that the decentralized nature of the systems is maintained.
This work presents an application that simulates objects which can change their aggregate state between solid and liquid using a temperature system. The focal points are the simulation of fluids with a particle system, the generation of a surface, and the visualization of metal. The application is interactive and meets real-time conditions. Different types of shaders are used for the parallelized computations on the GPU. Further options for using the application and possible improvements are also presented.
Systems that simulate crowd behavior are used, for example, to simulate the evacuation of a crowd in case of an emergency. These systems are limited to the movement patterns of a crowd and generally do not consider psychological and/or physical conditions; behavior changes within the crowd (e.g. caused by a person falling down) are ignored.
For that reason, this thesis examines the psychological behavior and the physical impact of a crowd member on the crowd. To do so, it develops a real-time simulation of a crowd of people, adapted from a system for video games, which contains a behavior AI for agents. To show the physical interaction between the agents and their environment as well as their movements, the physical representation of each agent is realized with rigid bodies from a physics engine. The movement of the agents is additionally supported by a navigation mesh and an algorithm for collision avoidance.
The behavior AI gives each agent a physical and psychological state, comprising a psychological stress level as well as a physical condition. The developed simulation is able to show physical effects such as the crowding and crushing of agents, the interaction of agents with their environment, and stress factors.
By evaluating several test runs of the simulation, this thesis examines whether the combination of physical and psychological effects can be implemented successfully. If so, the thesis can give indications of agent behavior in dangerous and/or stressful situations as well as an assessment of the complex physical representation.
The goal of this paper is to rebuild, in working form, the seesaw experiment as set up in the AG Echtzeitsysteme (real-time systems group) headed by Professor Dr. Dieter Zöbel using a LEGO Mindstorms NXT Education kit, and to document the procedure. The resulting program code is to be prepared didactically, and building instructions are to be provided. This is meant to ensure that school students can experience the seesaw experiment in the classroom as easily as possible, even without direct access to a university or similar institution.
Communication behavior has changed in recent years as a result of smartphone use. Users often communicate only through electronic channels; personal communication outside the smartphone is declining, and the immediate surroundings are increasingly ignored. In this thesis, several game concepts are developed that are intended to increase communication. The approach is realized in a prototypical city-guide app based on the game concepts of "Scotland Yard" and scavenger hunts. While playing, the players have to solve various tasks. An evaluation analyzes which game concept is best suited to fostering communication.
The following thesis analyses the functionality and programming capabilities of compute shaders. For this purpose, chapter 2 gives an introduction to compute shaders by showing how they work and how they can be programmed. In addition, the interaction of compute shaders and OpenGL 4.3 is shown through two introductory examples. Chapter 3 describes an N-body simulation that has been implemented in order to show the computational power of compute shaders and the use of shared memory. Then it is shown in chapter 4 how compute shaders can be used for physical simulations and where problems may arise. In chapter 5 a specially conceived and implemented algorithm for detecting lines in images is described and then compared with the Hough transform. Lastly, a final conclusion is drawn in chapter 6.
This paper introduces path tracing for rendering images with global illumination. Because the rendering equation is evaluated by means of random experiments, the method is physically plausible. Sampling is decisive for the quality of the results, and the focus of the paper is the examination of different sampling strategies: the results of different probability density functions are compared and the methods are assessed. In addition, effects such as depth of field are visualized by means of sampling.
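The effect of the choice of density function can be illustrated on a one-dimensional toy integral rather than the full rendering equation; the integrand and densities below are illustrative assumptions. A density that roughly follows the integrand (importance sampling) yields a lower-variance estimate than uniform sampling:

```python
import math
import random

random.seed(42)

def f(x):
    return x * x    # toy integrand; its exact integral over [0, 1] is 1/3

n = 100_000

# Strategy 1: uniform sampling, density p(x) = 1 on [0, 1].
uniform_est = sum(f(random.random()) for _ in range(n)) / n

# Strategy 2: importance sampling with density p(x) = 2x, which roughly
# follows the integrand. Samples are drawn via the inverse CDF x = sqrt(u),
# and each sample is weighted by f(x) / p(x).
total = 0.0
for _ in range(n):
    x = math.sqrt(1.0 - random.random())    # u in (0, 1], so x is never 0
    total += f(x) / (2.0 * x)
importance_est = total / n
```

In a path tracer the same principle applies with densities over directions, e.g. cosine-weighted or BRDF-proportional sampling of the hemisphere.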
The amount of information on the Web is constantly increasing, and a wide variety of information is available, such as news, encyclopedia articles, statistics, survey data, stock information, events, and bibliographies. The information is characterized by heterogeneity in aspects such as information type, modality, structure, granularity and quality, and by its distributed nature. The two primary techniques by which users on the Web look for information are (1) using Web search engines and (2) browsing the links between information. The dominant mode of information presentation is mainly static, in the form of text, images and graphics. Interactive visualizations offer a number of advantages for the presentation and exploration of heterogeneous information on the Web: (1) they provide different representations for different, very large and complex types of information, and (2) large amounts of data can be explored interactively using their attributes, which can support and expand the cognition process of the user. So far, interactive visualizations are still not an integral part of the search process on the Web; the technical standards and interaction paradigms to make interactive visualization usable by the masses are introduced only slowly through standardization organizations. This work examines how interactive visualizations can be used for the linking and search process of heterogeneous information on the Web. Based on principles in the areas of information retrieval (IR), information visualization and information processing, a model is created that extends the existing structural models of information visualization with two new processes: (1) linking of information in visualizations and (2) searching, browsing and filtering based on glyphs. The Vizgr toolkit implements the developed model in a web application.
In four different application scenarios, aspects of the model are instantiated and evaluated in user tests or examined by examples.
Within the "design thinking" process, different variants of creativity techniques are used. Owing to increasing globalization, collaborations in which the project participants are located at distributed sites are becoming more and more common, so digitizing the design process is a worthwhile goal. The aim of the present study is therefore to create an evaluation scheme that measures the suitability of digital creativity techniques for "entrepreneurial design thinking". In addition, it examines to what extent the use of e-learning systems in combination with digital creativity techniques is suitable, using the e-learning software "WebCT" as a concrete example. This leads to the following research question: which digital creativity techniques are suitable for use in "entrepreneurial design thinking" in connection with the e-learning platform "WebCT"? First, a literature review is carried out on "entrepreneurial design thinking", on classical and digital creativity techniques, and on working in groups, including content management, e-learning systems and the "WebCT" platform. A qualitative study follows: on the basis of the existing literature, an evaluation scheme is created that measures which of the digital creativity techniques considered are best suited for use in "entrepreneurial design thinking". Building on this, the digitized "design thinking" process is linked to the e-learning platform "WebCT". Finally, it is discussed to what extent this combination can be considered useful.
This paper presents a method for the evolution of SHI ABoxes which is based on a compilation technique of the knowledge base. For this, the ABox is regarded as an interpretation of the TBox that is close to a model. It is shown that the ABox can be used for a semantically guided transformation resulting in an equisatisfiable knowledge base. We use the result of this transformation to efficiently delete assertions from the ABox. Furthermore, insertion of assertions as well as repair of inconsistent ABoxes is addressed. For the computation of the necessary actions for deletion, insertion and repair, the E-KRHyper theorem prover is used.
This thesis deals with the economic analysis of labor in virtual worlds, its core being an analysis of the labor market in massively multiplayer online role-playing games (MMORPGs). The starting points were the factor labor in the real world on the one hand and additional particularities of MMORPGs on the other, which together yielded an overall picture of the virtual labor market from which relevant indicators could be derived. Beyond the basic finding that a virtual labor market exists, similarities to the real labor market became apparent: it was possible to calculate virtual hourly wages, to demonstrate company-like structures in player groups and, starting from human capital theory, to derive a modified theory ("avatar capital") for virtual worlds. There were also differences, however: the complexity of production processes in the examined MMORPGs is usually far lower than in reality. A comparison of motivational factors in both working worlds further revealed both commonalities and differences. In addition, it was shown that the currently debated topic of the minimum wage can also be found in the virtual labor markets of MMORPGs, where it has been implemented as a game mechanic to sustain motivation through continuous activity. Beyond these parallels, an analysis of goods and money transactions between virtuality and reality (real-money trading) demonstrated a connection between the two worlds that affects both labor markets alike. Besides the theoretical analysis, a further goal was to let the author's own observations and approaches flow into the results.
Especially in the concluding empirical study it was thus possible to discover further factors that could not be derived sufficiently from theory alone; above all, additional insights into productivity measurement in virtual worlds could flow from practice back into theory. Ultimately, however, it also became clear that research on labor markets in virtual worlds is still at an early stage and that numerous research objects exist in this area that will certainly lead to new insights in economics.
The Microsoft Kinect is currently popular in many application areas because of its low price and good precision. Controlling a cursor with it, however, is impractical due to jitter in the skeleton data. My approach tries to stabilize the cursor position with common techniques from image processing, using the Kinect color camera as input. A final position is calculated from the different positions delivered by the tracking techniques. For controlling the cursor, the right hand is tracked, and a simple click gesture is also developed. The evaluation shows whether this approach was successful.
ERP market analysis
(2013)
The current ERP market is dominated by the five largest vendors: SAP, Oracle, Microsoft, Infor and Sage. Since the market and the offered solutions are diverse, a well-founded analysis of the systems is needed. Drawing on selected literature and key figures of the various companies, this thesis examines the theoretical side of the solutions offered by the five large ERP vendors. In addition, the practical use of the systems is analyzed on the basis of interviews with six users, and the systems are compared with one another.
The goal of the thesis is to answer the research questions and to make clear to the reader which ERP system is best suited for which industry and company size.
Furthermore, the thesis sheds light on which trends can be expected for ERP systems in the future and which challenges these pose for companies.
This study thesis lists and compares various routing-lookup algorithms with which a routing table can be built and maintained; only dynamic methods are considered. The general operation of a routing table is explained, and three methods and algorithms are analyzed and assessed. The algorithms are illustrated with examples and contrasted in a concluding chapter, listing the advantages and disadvantages of each method.
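As an illustration of what a routing-table lookup involves (the specific algorithms compared in the thesis are not reproduced here), a minimal binary-trie longest-prefix match might look as follows; the route entries are invented examples:

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]   # one branch per address bit
        self.next_hop = None           # set if a route terminates here

def insert(root, prefix_bits, next_hop):
    """Insert a route given its prefix as a bit string, e.g. '1100'."""
    node = root
    for bit in prefix_bits:
        i = int(bit)
        if node.children[i] is None:
            node.children[i] = TrieNode()
        node = node.children[i]
    node.next_hop = next_hop

def lookup(root, addr_bits):
    """Walk the trie along the address, remembering the last matching route."""
    node, best = root, root.next_hop
    for bit in addr_bits:
        node = node.children[int(bit)]
        if node is None:
            break
        if node.next_hop is not None:
            best = node.next_hop
    return best

table = TrieNode()
insert(table, "1", "gateway-A")        # coarse route, prefix length 1
insert(table, "1100", "gateway-B")     # more specific route, prefix length 4
```

The "remember the last match" step is what makes the lookup return the most specific matching prefix, the central requirement for any routing-table structure.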
Polsearchine: Implementation of a policy-based search engine for regulating information flows
(2013)
Many search engines regulate Internet communication in some way. It is often difficult for end users to notice such regulation or to obtain background information about it; additionally, the regulation can usually be circumvented easily. This bachelor thesis presents the prototypical metasearch engine "Polsearchine", which addresses these weaknesses. Its regulation is based on InFO, a model for regulating information flows developed by Kasten and Scherp; more precisely, its extension for regulating search engines, SEFCO, is used. For retrieving search results, Polsearchine relies on an external search engine API, which can be interchanged easily to keep the metasearch engine independent of any specific API.
Its worldwide accessibility and extensive use make the Internet an efficient and popular instrument for information, communication and sales. More and more people and organizations try to exploit these advantages for their own purposes with a website of their own. In recent years, the use of web analytics software has proven to be a helpful means of optimizing web presences. With such software, website operators can collect and measure information about the visitors of their website and their usage behavior. The intended result is optimization decisions based on data instead of assumptions, together with effective testing possibilities.
For the field of e-commerce, numerous scientific and field-tested aids for web analytics projects exist to date; information websites, by contrast, are addressed only sporadically despite their importance. To counter this deficit, Hausmann (2012) developed the Framework for Web Analytics, which offers the user a helpful reference model for web analytics projects. The goal of this thesis is to advance this approach further: by means of a literature analysis and a case study, the framework is validated and supplemented, and further recommendations for action are identified. As a result, the most important findings of this research are summarized and recorded for future use.
Large amounts of qualitative data make the use of computer-assisted methods for their analysis inevitable. In this thesis, text mining as an interdisciplinary approach is introduced, together with the methods established in the empirical social sciences for analyzing written utterances. On this basis, a process of extracting concept networks from texts is outlined, and the possibilities of utilizing natural language processing methods within it are highlighted. The core of this process is text processing, for whose execution software solutions supporting manual as well as automated work are necessary. The requirements these solutions have to meet, against the background of the initiating project GLODERS, which is devoted to investigating extortion racket systems as part of the global financial system, are presented, and their fulfilment by the two most preeminent candidates is reviewed. The gap between theory and practical application is closed by a prototypical application of the method to a data set of the research project using the two given software solutions.
In this thesis we discuss the increasingly important topic of route aggregation and its consequences for avoiding routing loops. As the basis for implementation and evaluation, the RMTI protocol is used, which was developed at the University of Koblenz as an evolution of the Routing Information Protocol version 2 specified in RFC 2453. The virtual network environment Virtual Network User Mode Linux (VNUML) serves as the test environment; with VNUML it is possible to operate and evaluate real network scenarios in a virtual environment. The RMTI has already proven its ability to detect topological loops and thereby prevent the formation of routing loops. This thesis describes how the RMTI works and then discusses under which circumstances route aggregation can be used without causing routing anomalies. Since implementing these changes requires a deeper understanding of the structure of routing tables, their construction is explained with examples. There follows a description of what has to be changed in the RMTI in order to avoid loops despite aggregation. Finally, we evaluate the effect route aggregation has on the reorganization ability of the virtual network.
This thesis describes the implementation of a path-planning algorithm for multi-axle vehicles using machine learning algorithms. For that purpose, a general overview of genetic algorithms is given and alternative machine learning algorithms are briefly explained. The software developed for this purpose is based on the EZSystem simulation software developed by the AG Echtzeitsysteme at the University of Koblenz-Landau and a path correction algorithm developed by Christian Schwarz, which is also detailed in this paper. This also includes a description of the vehicle used in the simulations. Genetic algorithms as a solution for path planning in complex scenarios are then evaluated based on the results of the developed simulation software and compared to alternative, non-machine-learning solutions, which are also briefly presented.
Forwarding loops
(2013)
Today smartphones can be found everywhere. This situation has created a hype around Augmented Reality (AR) and AR apps. The big question is: do these applications provide real added value? To make AR practical, it is important to combine the computational power of a computer with the advantages of AR. An easy and fast way of interaction is essential.
A poker assistance software is an ideal test area for an AR application with real added value. The estimation of winning probabilities and fast automated tracking of the playing cards make it a perfect field of investigation.
In this discussion it is interesting to evaluate the added value of AR applications in general.
Recipients' YouTube comments on the five most successful songs of 2011 and 2012 are tested for nostalgic content. The nostalgia-relevant comments are analyzed by content and finally interpreted. The aim is to find out whether nostalgic music content is a factor for success. Using the uses-and-gratifications theory, the recipients' purpose in consuming nostalgia-evoking music is identified. Music is a clearly stronger trigger for evoking nostalgia than the music video, whereby nostalgia triggers positive and/or negative affect. Furthermore, personal nostalgia is much more evident than historical nostalgia. Moreover, the lyrics have a considerably higher potential to elicit nostalgia than any other song unit. Persons and momentous events are the most frequent objects of personal nostalgic reverie. The purpose of consuming nostalgic music is the intended evocation of positive and/or negative affect. Hence nostalgia in music seems to satisfy certain needs, and it can be assumed that nostalgia is a factor of success in the music industry.
Infinite worlds
(2013)
This work is concerned with creating a 2D action-adventure with role-play elements. It provides an overview of the various tasks of the implementation. First, the game idea and the game mechanics used are examined and a definition of requirements is created. After introducing the framework used, the software engineering concept for the realization is presented. The implementation of control components, game editor, sound and graphics is shown. The graphical implementation pays special attention to the abstraction of light and shadow in the 2D game world.
Due to the increasing pervasiveness of the mobile web, it is possible to send and receive mails with mobile devices. The content of digital communication should be encrypted to prevent eavesdropping and manipulation. The corresponding procedures use cryptographic keys, which have to be exchanged beforehand. It has to be ensured that a cryptographic key really belongs to the person to whom it is supposedly assigned. Within the scope of this thesis a concept for a smartphone application to exchange cryptographic keys was designed. The concept consists of a specification of a component-based framework, which can be used to securely exchange data in general. This framework was extended and used as the basis for a smartphone application. The application allows creating, managing and exchanging cryptographic keys. Near Field Communication (NFC) is used for the exchange. Implemented security measures prevent eavesdropping and targeted manipulation. In the future the concept and the application can be extended and adjusted for use in other contexts.
We present the conceptual and technological foundations of a distributed natural language interface employing a graph-based parsing approach. The parsing model developed in this thesis generates a semantic representation of a natural language query in a three-stage, transition-based process using probabilistic patterns. The semantic representation of a natural language query is modeled as a graph, which represents entities as nodes connected by edges representing relations between entities. The presented system architecture provides the concept of a natural language interface that is independent both of the vocabularies included for parsing the syntax and semantics of the input query and of the knowledge sources consulted for retrieving search results. This functionality is achieved by modularizing the system's components, addressing external data sources via flexible modules which can be modified at runtime. We evaluate the system's performance by testing the accuracy of the syntactic parser, the precision of the retrieved search results, and the speed of the prototype.
Business process management (BPM) is considered one of the most important success factors in today's corporate development and is perceived as such by modern companies [cf. IDS Scheer 2008]. As early as 1993, business processes were for Hammer and Champy the central key to the reorganization of companies [cf. Hammer, Champy 1993, p. 35]. The paradigm shift from the structural to the process-oriented organization, and ultimately to the established "process organization", was first described by Gaitanides as early as 1983 [cf. Gaitanides 2007].
Despite broad and deep treatment of the topic of business process management in the scientific literature, it is difficult to gain a quick overview of approaches for introducing business process management. This is mainly due to the fact that the literature treats business process management in different scientific fields, e.g. organization theory [cf. e.g. Vahs 2009; Schulte-Zurhausen 2005], business administration [cf. e.g. Helbig 2003; Schmidt 2012] or computer science and information systems [cf. e.g. Schmelzer, Sesselmann 2008; Schwickert, Fischer 1996], and describes the establishment of BPM with different thematic focuses. The search for literature on business process management specifically for small and medium-sized enterprises (SMEs), and on methods for introducing BPM in SMEs, proves particularly difficult. The combination "approaches for introducing business process management in SMEs" cannot be found in the scientific literature. The present thesis is intended as a first step towards closing this gap. It aims to analyze and compare the characteristic properties of a selection of approaches for introducing business process management. In addition, the applicability of the individual approaches to small and medium-sized enterprises is assessed on the basis of previously gathered requirements for BPM and its introduction that are important for SMEs.
Based on the evaluation criteria underlying this thesis, the approach according to Schulte-Zurhausen achieves the best overall result. Nevertheless, each of the examined approaches exhibits strengths and weaknesses regarding its suitability for an SME. As a consequence, when introducing business process management, each of the examined approaches requires adaptation to the situation of an SME. For this reason the author recommends that an SME select one approach as the fundamental procedure for the introduction (in this case the approach according to Schulte-Zurhausen) and enrich or complete it with suitable aspects of the other approaches.
Augmented Reality (AR) is becoming more and more popular. The main use of AR technology is to augment information into the user's field of vision using HMDs, e.g. a car's windshield, glasses, or the display of a smartphone or tablet. To augment correctly, it is necessary to determine the position and orientation (pose) of the camera in space.
Nowadays, this is solved with artificial markers. These known markers are placed in the room and the system is trained on this setup. The next step is to get rid of these artificial markers. If we calculate the pose without such markers, we speak of marker-less tracking. Instead of artificial markers, natural objects in the real world are used as reference points to calculate the pose. Thus, this approach can be used flexibly and dynamically. We are no longer dependent on artificial markers, but we need much more knowledge about the scene to find the pose. This is compensated by technical aids and/or the user himself. However, both solutions are neither comfortable nor efficient for the usage of such a system. This is why marker-less 3D tracking is still a big field of research.
This sets the starting point for this bachelor thesis. In this thesis an approach is proposed that needs only a set of 2D features from a given camera image and a set of 3D features of an object to find the initial pose. With this approach, no technical aids or user assistance are required. The 2D and 3D features can be detected with any method desired.
The main idea of this approach is to build six correspondences between these sets. With those we are able to estimate the pose. Each 3D feature is mapped with the estimated pose onto image coordinates, whereby the estimated pose can be evaluated. The distance between each mapped 3D feature and the associated 2D feature is measured. Each correspondence is evaluated and the results are summed up to evaluate the whole pose. The lower this summed value, the better the pose. A value around ten pixels has been shown to indicate a correct pose.
Due to the large number of possibilities for building six correspondences between the sets, it is necessary to optimize the building process. For this optimization we use a genetic algorithm.
During the test case the system worked quite reliably. The hit rate was around 90% with a runtime of approximately twelve minutes. Without optimization it could easily take years.
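The reprojection-based pose evaluation described above can be sketched as follows (a simplified pinhole model; the focal length, principal point and sample data are illustrative assumptions, not the thesis' values):

```python
import math

def project(p3d, pose, focal=800.0, cx=320.0, cy=240.0):
    """Map a 3D feature to pixel coordinates with a rigid pose
    (rotation R, translation t) and a simple pinhole camera."""
    R, t = pose
    x = sum(R[0][i] * p3d[i] for i in range(3)) + t[0]
    y = sum(R[1][i] * p3d[i] for i in range(3)) + t[1]
    z = sum(R[2][i] * p3d[i] for i in range(3)) + t[2]
    return (focal * x / z + cx, focal * y / z + cy)

def pose_error(pose, correspondences):
    """Sum of reprojection distances over all 2D-3D correspondences;
    the lower the sum, the better the pose (a value around ten pixels
    indicating a correct pose, as reported in the thesis)."""
    total = 0.0
    for p3d, p2d in correspondences:
        u, v = project(p3d, pose)
        total += math.hypot(u - p2d[0], v - p2d[1])
    return total

# Identity pose, object five units in front of the camera:
identity = ([[1, 0, 0], [0, 1, 0], [0, 0, 1]], (0.0, 0.0, 0.0))
corr = [((0.0, 0.0, 5.0), (320.0, 240.0)),
        ((1.0, 0.0, 5.0), (480.0, 240.0))]
```

A genetic algorithm then searches the space of six-element correspondence sets for the one minimizing this error.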
This bachelor thesis deals with the development of a program intended to support dentists through AR during treatment of patients. To establish an adequate theoretical foundation, the current state of the art relevant to this project is explained first. Subsequently, possible future technologies, which form the hypothetical basis of this work, are presented. The following subchapter explains the selection of the systems used for this project. The main part first covers the procedure in the preparation and planning phase and then presents the program flow of the application step by step. The problems that arose during programming are also discussed. In the reflective evaluation part, suggestions for improvement and additional functions for the written program are presented.
This master thesis essentially deals with the design and implementation of a path planning system based on rapidly exploring search trees for general n-trailers. This is a probabilistic method that is characterized by fast and uniform exploration. The method is well established but has to date been applied only to vehicles with simple kinematics. General n-trailers represent a particular challenge, as their controllability is limited. For this reason the focus of this thesis lies on the application of the mentioned procedure to general n-trailers. In this context, systematic correlations between the characteristics of general n-trailers and the possibilities for realizing and applying the method are analyzed.
This thesis deals with the development and evaluation of a concept for novel interaction with ubiquitous user interfaces. To evaluate this interaction concept, a prototype was implemented using an existing head-mounted display solution and an Android smartphone.
Furthermore, in the course of this thesis, a concrete use case for this prototype (the navigation through a city block with the aid of an electronic map) was developed and built as an executable application to help evaluate the quality of the interaction concept. In doing so, fundamental research results were achieved.
This bachelor thesis deals with the concept of a smartphone application for emergencies. It describes the basic problem and provides a conceptual approach.
The core content of this thesis is a requirements analysis for the newly designed emergency application. Furthermore, the functional and non-functional requirements such as usability are specified to give insights for the concept of the application. In addition, individual sub-functions of the mHealth applications of the University of Koblenz, which exist or are still under development, can be integrated into the future emergency application. Based on the catalog of requirements, a market analysis of the strengths and weaknesses of existing emergency application systems is carried out. In the to-be concept the findings are summarized and possible architectural sketches for future emergency applications are given. Furthermore, one conclusion of dealing with this topic is that a design alone is not sufficient to guarantee a well-working app. That is why the requirements for the thesis were expanded to include the connection to and integration of rescue control centers in the architecture of the emergency app.
At the end of the thesis, the reader receives a comprehensive overview of the provision of emergency data to the rescue control centers via different transmission channels. Furthermore, system requirements are presented together with possible scenarios for the architecture of the whole emergency application system. The generic and modular approach guarantees that the system is open for future development and the integration of functions of other applications.
This thesis deals with the development of a simulation environment for displaying objects in space and their gravitational interaction with one another.
Chapter 1 explains the motivation and objectives of the thesis and names the tools used. Chapter 2 describes the necessary astronomical fundamentals in the form of definitions and a presentation of the physical laws underlying this work.
Chapter 3 deals with the structure of the individual classes. Particular attention is paid to the computation of the positions and velocities of the simulated celestial bodies, and to the structure and functioning of the elements of the graphics engine Ogre3D that are used.
Chapter 4 explains the use of the tool 3ds Max for creating the geometry objects and materials.
Finally, Chapter 5 draws a conclusion and considers possible future extensions.
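The pairwise gravitational interaction underlying such a simulation can be sketched as follows (a minimal explicit-Euler integrator in Python rather than the thesis' Ogre3D setup; real simulations use smaller time steps or better integrators):

```python
import math

G = 6.674e-11  # gravitational constant (SI units)

def step(bodies, dt):
    """One explicit-Euler step of pairwise Newtonian gravity in 2D.
    Each body is a dict with mass m, position (x, y), velocity (vx, vy)."""
    for b in bodies:
        ax = ay = 0.0
        for other in bodies:
            if other is b:
                continue
            dx = other["x"] - b["x"]
            dy = other["y"] - b["y"]
            r = math.hypot(dx, dy)
            a = G * other["m"] / (r * r)  # |a| = G * M / r^2
            ax += a * dx / r
            ay += a * dy / r
        b["vx"] += ax * dt
        b["vy"] += ay * dt
    for b in bodies:
        b["x"] += b["vx"] * dt
        b["y"] += b["vy"] * dt

# Two equal masses at rest begin to accelerate toward each other:
bodies = [{"m": 1e20, "x": 0.0, "y": 0.0, "vx": 0.0, "vy": 0.0},
          {"m": 1e20, "x": 1e5, "y": 0.0, "vx": 0.0, "vy": 0.0}]
step(bodies, 1.0)
```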
This master thesis provides a comprehensive overview of the variety of security models by describing, classifying and comparing selected security models.
Security models describe in an abstract way the security-relevant components and interrelations of a system. With security models, complex situations can be visualized and analyzed.
Since security models address different security aspects, this thesis develops a classification scheme which describes the structural and conceptual characteristics of the models with respect to the underlying security aspects. Within the classification scheme, three fundamental model classes are formed: access control models, information flow models, and transaction models.
The security models are compared both directly and indirectly. In the latter case, they are assigned to one or more model classes of the classification scheme. This classification allows statements about the security aspects considered and the structural and conceptual characteristics of a security model relative to the other security models.
In the direct comparison, the properties and aspects of the security models are considered orthogonally to the model classes, based on the selected criteria.
A tool for quickly creating individual typefaces for one's immediate needs would be a helpful instrument for graphic designers and typographers. Such an instrument can hardly be required to produce good typefaces, as that lies in the hands of the designer, but it should give anyone who wants to engage with the topic an easy entry into type design. This thesis therefore attempts to provide a solution as simple as possible for the complex subject of type design.
Human detection is a key element of human-robot interaction. More and more robots are used in human environments and are expected to react to the behavior of people. Before a robot can interact with a person, it must first be able to detect that person. This thesis presents a system for the detection of humans and their hands using an RGB-D camera. First, model-based hypotheses for possible positions of humans are created. From the visible upper parts of the body, new features based on the relief and width of a person's head and shoulders are extracted. The hypotheses are verified by classifying the features with a support vector machine (SVM). The system is able to detect people in different poses: both sitting and standing humans are found using the visible upper parts of the person. Moreover, the system is able to recognize whether a human is facing or averting the sensor. If the human is facing the sensor, the color information and the distance between hand and body are used to detect the positions of the person's hands. This information is useful for gesture recognition and can thus further enhance human-robot interaction.
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for an iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSG), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e. g., when different parties provide different parts of a common graph, and allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
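A much-simplified sketch of the iterative signing idea (illustrative only: HMAC stands in for a real public-key signature, and proper graph canonicalization must also handle blank nodes, which this sketch omits):

```python
import hashlib
import hmac

def graph_hash(triples):
    """Order-independent digest: hash each triple, then hash the
    sorted list of digests (a stand-in for real graph canonicalization)."""
    digests = sorted(hashlib.sha256(" ".join(t).encode()).hexdigest()
                     for t in triples)
    return hashlib.sha256("".join(digests).encode()).hexdigest()

def sign_graph(triples, key):
    """Attach the signature to the graph as an additional triple, so the
    result can itself be signed again (iterative signing)."""
    sig = hmac.new(key, graph_hash(triples).encode(), hashlib.sha256).hexdigest()
    return triples + [("_:sig", "ex:signatureValue", sig)]

g = [("ex:s", "ex:p", "ex:o")]
once = sign_graph(g, b"party-a-key")
twice = sign_graph(once, b"party-b-key")  # second party signs graph + first signature
```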
The search for scientific literature in scientific information systems is a discipline at the intersection between information retrieval and digital libraries. Recent user studies show two typical weaknesses of the classical IR model: the ranking of retrieved, possibly relevant documents and the language problem during the query formulation phase. At the same time, traditional retrieval systems that rely primarily on textual document and query features have been stagnating for years, as can be observed in IR evaluation campaigns such as TREC or CLEF. Therefore alternative approaches to overcome these two problem fields are needed. Two different search support systems are presented in this work and evaluated in a lab evaluation using the IR test collections GIRT and iSearch with 150 and 65 topics, respectively. These two systems are (1) a query expansion that is based on the analysis of co-occurrences of document attributes and (2) a ranking mechanism that applies informetric analysis of the productivity of information producers in the information production process. Both systems were compared to a baseline system using the Solr search engine. Both methods showed positive effects when applying additional document attributes like author names, ISSN codes and controlled terms. The query expansion showed an improvement in precision (bpref +12%) and in recall (R +22%).
The alternative ranking methods were able to compete with the baseline for author names and ISSN codes and were able to beat the baseline by using controlled terms (MAP +14%). A clear negative influence was seen when using entities like publishers or locations. Both methods were able to generate a substantially different sorting of the result set, measured using Kendall's tau. So, in addition to the improved relevance of the result list, the user can get a new and different view on the document set. Query expansion using author names, ISSN codes and thesaurus terms showed the great potential that lies within the rich metadata sets of digital library systems. The proposed ranking methods could outperform standard relevance ranking methods after they were filtered by the existence of a so-called power law. This showed that the proposed ranking methods cannot be used universally in every case but require specific frequency distributions in the metadata. A connection to the underlying informetric laws of Bradford, Lotka and Zipf is made clear. The evaluated methods were implemented as interactive search support systems that can be used in an interactive prototype and in the social science digital library system Sowiport. Besides that, the methods are adaptable to other systems and environments using a free software framework and a web API.
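The power-law filtering step can be sketched as a simple heuristic (an illustrative stand-in for the check described above, not the thesis' statistical test): a metadata field qualifies when its rank-frequency distribution is approximately linear on a log-log scale.

```python
import math

def looks_power_law(frequencies, threshold=0.95):
    """Heuristic power-law check: log(rank) vs. log(frequency) should be
    roughly linear, i.e. |Pearson correlation| >= threshold."""
    freqs = sorted(frequencies, reverse=True)
    xs = [math.log(r) for r in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    if sx == 0 or sy == 0:  # degenerate: no spread at all
        return False
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return abs(cov / (sx * sy)) >= threshold

# Zipf-like author productivity passes; a uniform distribution does not:
zipf_like = [1000 // r for r in range(1, 50)]
uniform = [10] * 49
```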
Autonomous systems such as robots are already part of our daily life. In contrast to these machines, humans can react appropriately to their counterparts. People can hear and interpret human speech and interpret the facial expressions of other people.
This thesis presents a system for automatic facial expression recognition with emotion mapping. The system is image-based and employs feature-based extraction. The thesis analyzes the common steps of an emotion recognition system and presents state-of-the-art methods. The approach presented is based on 2D features which are detected in the face; no neutral face is needed as a reference. The system extracts two types of facial parameters: the first type consists of distances between the feature points, the second type comprises angles between lines connecting the feature points. Both types of parameters are implemented and tested. The parameters which provide the best results for expression recognition are used to compare the system with state-of-the-art approaches. A multiclass support vector machine (SVM) classifies the parameters.
The results are codes of Action Units of the Facial Action Coding System. These codes are mapped to a facial emotion. This thesis addresses the six basic emotions (happy, surprised, sad, fearful, angry, and disgusted) plus the neutral facial expression. The system presented is implemented in C++ and is provided with an interface to the Robot Operating System (ROS).
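The two parameter types can be sketched as follows (the landmark points below are illustrative, not the thesis' feature set):

```python
import math

def distances(points):
    """Type-1 parameters: pairwise distances between facial feature points."""
    out = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            out.append(math.dist(points[i], points[j]))
    return out

def angle(a, b, c):
    """Type-2 parameter: angle at point b (in degrees) between the
    lines connecting b to a and b to c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    return math.degrees(math.acos(dot / norm))

# Illustrative landmarks (hypothetical coordinates):
left_eye, nose_tip, right_eye = (0.0, 0.0), (2.0, 1.0), (4.0, 0.0)
feature_vector = distances([left_eye, nose_tip, right_eye])
nose_angle = angle(left_eye, nose_tip, right_eye)
```

Vectors of such distances and angles are then fed to the multiclass SVM for classification.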
The goal of this bachelor thesis was to program an existing six-legged robot to be able to explore an arbitrary environment and create a map of it autonomously. A laser scanner is integrated for perception of this environment. To build the map and localize the robot, a suitable SLAM (Simultaneous Localization and Mapping) technique is applied to the sensor data. The map serves as the robot's basis for path planning and obstacle avoidance, which are also developed within the scope of this bachelor thesis. To this end, both GMapping and Hector SLAM are implemented and tested.
An exploration algorithm for exploring the robot's environment is also described in this bachelor thesis. The implementation on the robot takes place within the ROS (Robot Operating System) framework on a "Raspberry Pi" miniature PC.
A Kinect device has the ability to record color and depth images simultaneously. This thesis is an attempt to use the depth image to manipulate lighting information and material properties in the color image. The presented method of lighting and material manipulation needs a light simulation of the lighting conditions at the time of recording the image. It is used to transform information from a new light simulation directly back into the color image. Since the simulations are performed on a three-dimensional model, a way is sought to generate a model out of a single depth image. The thesis also addresses the problems of the depth data acquisition of the Kinect sensor. An editor is designed to make lighting and material manipulations possible. To generate a light simulation, some simple, real-time-capable rendering methods and lighting models are proposed. They are used to insert new illumination, shadows and reflections into the scene. Simple environments with well-defined lighting conditions are manipulated in experiments to show the limits and possibilities of the device and the techniques being used.
This thesis describes the conception, implementation and evaluation of a collaborative multiplayer game for preschoolers for mobile devices.
The main objective of this thesis is to find out whether mobile devices like smartphones and tablet computers are suitable for interaction between children. In order to develop this kind of game, relevant aspects were researched. On this basis a game was designed which was finally tested by preschoolers.
This dissertation investigates the usage of theorem provers in automated question answering (QA). QA systems attempt to compute correct answers for questions phrased in a natural language. Commonly they utilize a multitude of methods from computational linguistics and knowledge representation to process the questions and to obtain the answers from extensive knowledge bases. These methods are often syntax-based, and they cannot derive implicit knowledge. Automated theorem provers (ATP), on the other hand, can compute logical derivations with millions of inference steps. By integrating a prover into a QA system, this reasoning strength could be harnessed to deduce new knowledge from the facts in the knowledge base and thereby improve the QA capabilities. This involves challenges in that the contrary approaches of QA and automated reasoning must be combined: QA methods normally aim for speed and robustness to obtain useful results even from incomplete or faulty data, whereas ATP systems employ logical calculi to derive unambiguous and rigorous proofs. The latter approach is difficult to reconcile with the quantity and the quality of the knowledge bases in QA. The dissertation describes modifications to ATP systems in order to overcome these obstacles. The central example is the theorem prover E-KRHyper, which was developed by the author at the Universität Koblenz-Landau. As part of the research work for this dissertation, E-KRHyper was embedded into a framework of components for natural language processing, information retrieval and knowledge representation, together forming the QA system LogAnswer.
Also presented are additional extensions to the prover implementation and the underlying calculi, which go beyond enhancing the reasoning strength of QA systems, for example by giving access to external knowledge sources like web services. These allow the prover to fill gaps in the knowledge during the derivation, or to use external ontologies in other ways, for example for abductive reasoning. While the modifications and extensions detailed in the dissertation are a direct result of adapting an ATP system to QA, some of them can be useful for automated reasoning in general. Evaluation results from experiments and competition participations demonstrate the effectiveness of the methods under discussion.
Customization is a phenomenon which was introduced quite early in information systems literature. As the need for customized information technology is rising, different types of customization have emerged. In this study, customization processes in information systems are analyzed from a perspective based on the concept of open innovation. The objective is to identify how customization of information systems can be performed in an open innovation context. The concept of open innovation distinguishes three processes: Outside-in process, inside-out process and coupled process. After categorizing the selected journals into three core processes, the findings of this analysis indicated that there is a major concentration on outside-in processes. Further research on customization in coupled and inside-out processes is recommended. In addition, the establishment of an extensive up-to-date definition of customization in information systems is suggested.
This paper observes existing first aid applications for smartphones and compares them to a first aid application called "Defi Now!" developed by the University of Koblenz. The main focus lies on examining "Defi Now!" with respect to its usability based on the dialogue principles, i.e. the seven software-ergonomic principles of the ISO 9241-110 standard. These are known as suitability for learning, controllability, error tolerance, self-descriptiveness, conformity with user expectations, suitability for the task, and suitability for individualization.
For this purpose a usability study was conducted with 74 participants. A questionnaire was developed, which was filled out by the test participants anonymously. The test results were used for an optimization of the app regarding its usability.
Various best practices and principles guide an ontology engineer when modeling Linked Data. The choice of appropriate vocabularies is one essential aspect in the guidelines, as it leads to better interpretation, querying, and consumption of the data by Linked Data applications and users.
In this paper, we present the various types of support features for an ontology engineer to model a Linked Data dataset, discuss existing tools and services with respect to these support features, and propose LOVER: a novel approach to support the ontology engineer in modeling a Linked Data dataset. We demonstrate that none of the existing tools and services incorporate all types of supporting features and illustrate the concept of LOVER, which supports the engineer by recommending appropriate classes and properties from existing and actively used vocabularies. Hereby, the recommendations are made on the basis of an iterative multimodal search. LOVER uses different, orthogonal information sources for finding terms, e.g. based on a best string match or schema information on other datasets published in the Linked Open Data cloud. We describe LOVER's recommendation mechanism in general and illustrate it along a real-life example from the social sciences domain.
Concept for a Knowledge Base on ICT for Governance and Policy Modelling regarding eGovPoliNet
(2013)
The EU project eGovPoliNet is engaged in research and development in the field of information and communication technologies (ICT) for governance and policy modelling. Numerous communities pursue similar goals in this field of IT-based strategic decision making and simulation of social problem areas. However, the existing research approaches and results are so far quite fragmented. The aim of eGovPoliNet is to overcome this fragmentation across disciplines and to establish an international, open dialogue by fostering cooperation between research and practice. This dialogue will advance the discussion and development of various problem areas with the help of researchers from different disciplines, who share knowledge, expertise and best practices supporting policy analysis, modelling and governance. To support this dialogue, eGovPoliNet will provide a knowledge base, whose conceptual development is the subject of this thesis. The knowledge base is to be filled with content from the area of ICT for strategic decision making and social simulation, such as publications, ICT solutions and project descriptions. This content needs to be structured, organised and managed in such a way that it generates added value and the knowledge base is used as a source of accumulated knowledge, consolidating the previously fragmented research and development results in a central location.
The aim of this thesis is the development of a concept for a knowledge base, which provides the structure and the necessary functionalities to gather and process knowledge concerning ICT solutions for governance and policy modelling. This knowledge needs to be made available to users and thereby motivate them to contribute to the development and maintenance of the knowledge base.
This bachelor thesis deals with the topic of user-friendly design of applications (apps) on mobile devices, a subdomain of software ergonomics. In the process, two applications are analyzed with the aim of developing a solution for how help on a mobile device should be provided. This study focuses primarily on appropriate gestures for invoking the help function on a mobile device. The study results show that the test persons request a customized help function but reject an extensive help description, as this seems to be overwhelming for the user.
The purpose of this bachelor's thesis is to teach Lisa, a robot developed by the AGAS department of the University of Koblenz for participation in the @home league of the RoboCup, to draw. This requires extending the robbie software framework and operating the robot's hardware components. With a possible entry in the Open Challenge of the @home RoboCup in mind, the goals are to detect a sheet of paper using Lisa's visual sensor, a Microsoft Kinect, and to draw on it using her Neuronics Katana robot arm. In addition, a pen mounting for the arm's gripper has to be constructed.
This thesis outlines the procedures used to convert an image template into movements of the robot arm, which in turn lead to the pen attached to the arm drawing a picture on a piece of paper detected by the visual sensor through image processing. The parsing and drawing of an object made up of an arbitrary number of straight lines from an SVG file onto a white sheet of paper was achieved, with the paper detected on a slightly darker surface and surrounded by various background objects and textures.
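The first step of such a pipeline, parsing straight-line segments from an SVG template and mapping them onto the detected sheet, can be sketched as follows. The origin and scale values are hypothetical stand-ins for the paper pose the visual sensor would deliver; the actual thesis implementation in the robbie framework may differ in detail.

```python
import xml.etree.ElementTree as ET

SVG = """<svg xmlns="http://www.w3.org/2000/svg">
  <line x1="0" y1="0" x2="10" y2="0"/>
  <line x1="10" y1="0" x2="10" y2="10"/>
</svg>"""

def parse_lines(svg_text):
    """Extract straight-line segments from an SVG template as pairs of
    (x, y) points, a form a drawing routine can map to arm trajectories."""
    ns = "{http://www.w3.org/2000/svg}"
    root = ET.fromstring(svg_text)
    segments = []
    for line in root.iter(ns + "line"):
        p1 = (float(line.get("x1")), float(line.get("y1")))
        p2 = (float(line.get("x2")), float(line.get("y2")))
        segments.append((p1, p2))
    return segments

def to_paper_frame(segments, origin, scale):
    """Map template coordinates onto the detected sheet of paper
    (origin and scale are assumed outputs of the paper detection)."""
    ox, oy = origin
    return [((ox + x1 * scale, oy + y1 * scale),
             (ox + x2 * scale, oy + y2 * scale))
            for (x1, y1), (x2, y2) in segments]

segments = parse_lines(SVG)
waypoints = to_paper_frame(segments, origin=(0.2, 0.1), scale=0.01)
```

Each mapped segment would then be handed to the arm controller as a pen-down stroke between two Cartesian waypoints.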
Pedestrian detection in digital images is a task of great importance for the development of automatic systems and for improving the interaction of computer systems with their environment. The challenges such a system has to overcome are the high variance among the pedestrians to be recognized and the unstructured environment. For this thesis, a complete pedestrian detection system was implemented according to a state-of-the-art technique. A novel insight about precomputing the Color Self-Similarity accelerates the computation by a factor of four. The complete detection system is described and evaluated, and was published under an open source license.
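The idea behind such a precomputation can be illustrated with a minimal sketch: Color Self-Similarity compares color histograms of block pairs, so computing one histogram per block once per frame, rather than once per detection window, removes redundant work. The sketch below uses a single grayscale channel and histogram intersection for brevity; the exact histogram layout and speedup scheme of the thesis are assumptions here.

```python
def block_histograms(image, block, bins=4):
    """Precompute one coarse intensity histogram per non-overlapping block.
    Sharing these histograms across all detection windows that cover a
    block is the kind of reuse that yields a constant-factor speedup."""
    h, w = len(image), len(image[0])
    hists = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            hist = [0] * bins
            for y in range(by, by + block):
                for x in range(bx, bx + block):
                    hist[image[y][x] * bins // 256] += 1
            hists[(by, bx)] = hist
    return hists

def similarity(h1, h2):
    """Histogram intersection: the per-pair measure behind self-similarity."""
    return sum(min(a, b) for a, b in zip(h1, h2))

# Toy 8x8 image: similarity between any two uniform blocks is maximal.
image = [[0] * 8 for _ in range(8)]
hists = block_histograms(image, block=4)
```

With the histograms cached, the self-similarity feature of a window reduces to table lookups plus the pairwise intersection, instead of recomputing histograms per window.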
Young Adults' Trust in Political Content from Broadcast, Print, and Digital Media
(2013)
The central question of this bachelor's thesis is whether trust in media affects political attitudes and whether media use influences this direction of effect. Both media categories and individual media formats are considered separately. Political attitude is operationalized via the attitude dimensions of governmental effectiveness, governmental legitimacy, political efficacy, responsiveness of political actors, and integrity of political actors. The focus is placed on young adults, who are widely said to be disenchanted with politics.
To test the relationship between trust in media and political attitudes, a quantitative online survey of students of the University of Koblenz (N = 496) is conducted. Regression analyses and ANOVA are used for data analysis. The results do not point to a generally negative basic political attitude among young adults. Moreover, the results indicate that trust in media has a significant effect on political attitudes (p ≤ .05). Media use, by contrast, has insufficient explanatory power. Future studies should also examine trust in media as the central independent variable; a comparison of generations across different educational backgrounds would be advisable.
In this thesis, a first prototype of a mobile instruction device with mixed reality (MR) functionality is developed. The system is intended to support training on the job through interaction with the work item. The concept corresponds to a didactic approach presented by Martens-Parree that combines constructivism with situated learning. As an application example, the training of glider pilots checking out on a new aircraft type was chosen. Whether the MR device could increase competence or facilitate the completion of certain tasks was examined in a survey with fifteen testers. The results of the study show that, in general, the didactic approach of Martens-Parree is valid. While an increase in factual knowledge was observed, it was not (yet) possible to demonstrate an increase in skills with respect to the work tasks.
This study investigates crowdfunding, a new form of financing projects. In recent years, more and more crowdfunding platforms have emerged. The main question is whether crowdfunding is able to compete with the traditional ways of financing social projects. The history and development of crowdfunding are presented, the different crowdfunding models are explained, and an overview of German crowdfunding platforms is given. Based on successful social crowdfunding projects, a list of key success factors is compiled and described. In a case study, a concept for financing a social project through crowdfunding is developed on the basis of the preceding analysis.
In a software reengineering task, legacy systems are adapted, computer-aided, to new requirements. This requires an efficient representation of all data and information. TGraphs are a suitable representation because all vertices and edges are typed and may have attributes. Furthermore, there is a global sequence of all graph elements, and each vertex has a sequence of all its incidences. In this thesis, the Extractor Description Language (EDL) was developed. It can be used to generate an extractor from a syntax description that is extended by semantic actions. The generated extractor can be used to create a TGraph representation of the input data. In contrast to classical parser generators, EDL supports ambiguous grammars, modularization, symbol table stacks, and island grammars. These features simplify the creation of the syntax description. The collected requirements for EDL are used to determine an existing parser generator that is suitable for realizing them.
After that, the syntax and semantics of EDL are described and implemented using that parser generator. Subsequently, two extractors, one for XML and one for Java, are created with the help of EDL. Finally, the time they need to process some input data is measured.
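The TGraph properties named above, typed and attributed vertices and edges, a global element sequence, and a per-vertex incidence sequence, can be sketched as a minimal data structure together with a toy extractor. This illustrates only the target data model; the class layout, type names, and the key=value input format are assumptions, not the EDL implementation.

```python
class TGraph:
    """Minimal TGraph-like structure: every element is typed and may carry
    attributes; `elements` is the global sequence of vertices and edges."""

    def __init__(self):
        self.elements = []

    def create_vertex(self, vtype, **attrs):
        v = {"kind": "vertex", "type": vtype, "attrs": attrs,
             "incidences": []}          # ordered incidence sequence
        self.elements.append(v)
        return v

    def create_edge(self, etype, alpha, omega, **attrs):
        e = {"kind": "edge", "type": etype, "attrs": attrs,
             "alpha": alpha, "omega": omega}
        self.elements.append(e)
        alpha["incidences"].append(e)
        omega["incidences"].append(e)
        return e

def extract(text):
    """Toy semantic action: while 'parsing' key=value lines, build the
    TGraph representation of the input data."""
    g = TGraph()
    root = g.create_vertex("File")
    for line in text.splitlines():
        key, _, value = line.partition("=")
        entry = g.create_vertex("Entry", key=key.strip(), value=value.strip())
        g.create_edge("HasEntry", root, entry)
    return g

g = extract("a = 1\nb = 2")
```

An EDL-generated extractor plays the role of `extract` here: the grammar drives the parse, and the semantic actions attached to productions issue the vertex and edge creations.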
This thesis deals with problems that occur when rendering stereoscopic content. These problems are elaborated, simulated with the help of a program developed in this thesis, and evaluated by a group of volunteers. The aim is to determine whether the errors are noticeable and how much they influence the 3D effect of the stereoscopic images. Each error is simulated using different camera assemblies and evaluated depending on the chosen assembly.
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the number of these facts grows quadratically in the number of distinct constants, the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem and impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing native handling for inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The resulting calculus is shown to be sound and complete and is implemented in the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to demonstrate the benefits of the new calculus.
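The size argument can be made concrete with a small sketch: with n distinct constants, the explicit encoding needs n(n-1)/2 disequality facts, while under the unique name assumption two distinct object identifiers are unequal exactly when they are syntactically different, a constant-time check. The function and constant names below are illustrative, not E-KRHyper's internal representation.

```python
def explicit_inequality_facts(constants):
    """The quadratic encoding: one disequality fact per unordered pair
    of constants. This is what blows up the knowledge base."""
    return [(a, b) for i, a in enumerate(constants)
                   for b in constants[i + 1:]]

def distinct_under_una(a, b, distinct_ids):
    """Native handling: under the unique name assumption, two distinct
    object identifiers are unequal iff they differ syntactically."""
    return a in distinct_ids and b in distinct_ids and a != b

# 100 distinct object identifiers (hypothetical names).
ids = {f"obj{i}" for i in range(100)}
facts = explicit_inequality_facts(sorted(ids))
print(len(facts))  # 4950 explicit facts vs. one constant-time check per pair
```

Restricting the assumption to a declared subset of constants, as the calculus does, keeps ordinary constants free to be equated by the equality reasoning.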
This work deals with the migration of software systems towards the use of the character set defined in the Unicode standard. The work is performed as a case study on the document management system PROXESS. A conversion process is designed that defines the working steps of the migration for the entire system as well as an arbitrary decomposition of the system into individual modules. The working steps for each module can, to a great extent, be performed chronologically independently of each other. For the conversion of the implementation, an approach based on the automatic recognition of usage patterns is applied. The approach searches the abstract syntax tree for sequences of program instructions that can be assigned to a certain usage pattern. The usage pattern defines another sequence of instructions that acts as a sample solution for that pattern and demonstrates the Unicode-based handling of strings. By applying a transformation rule, the original sequence of instructions is transferred to the sequence of instructions given by the sample solution of the matching usage pattern. This mechanism is a starting point for the development of tools that perform the transformation automatically.
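The match-and-rewrite step on the abstract syntax tree can be illustrated with Python's `ast` module as an analogue; PROXESS itself is not implemented in Python, and the helper names `ansi_concat`/`unicode_concat` are purely hypothetical stand-ins for a byte-string usage pattern and its Unicode-aware sample solution.

```python
import ast

class UsagePatternRewriter(ast.NodeTransformer):
    """Matches one usage pattern (a call to a hypothetical byte-string
    helper) in the syntax tree and applies the transformation rule that
    substitutes the Unicode-aware sample solution."""

    def visit_Call(self, node):
        self.generic_visit(node)
        if isinstance(node.func, ast.Name) and node.func.id == "ansi_concat":
            # Transformation rule: swap in the sample solution's call.
            node.func = ast.Name(id="unicode_concat", ctx=ast.Load())
        return node

source = "title = ansi_concat(prefix, name)"
tree = ast.parse(source)
new_tree = ast.fix_missing_locations(UsagePatternRewriter().visit(tree))
print(ast.unparse(new_tree))  # title = unicode_concat(prefix, name)
```

Real usage patterns span sequences of instructions rather than single calls, so a production tool would match subtree sequences against the pattern before applying the rule, but the pipeline shape, parse, match, substitute, unparse, is the same.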
The annotation of digital media is not a new area of research; on the contrary, it is widely investigated. There are many innovative ideas for designing the annotation process, and the most extensive segment of related work concerns semi-automatic annotation. One characteristic, however, is common to the related work: none of it puts the user in focus. To build an interface that supports and satisfies the user, a user evaluation has to be done first. Within this thesis we analyze which features an interface should or should not have to meet these requirements of support, user satisfaction, and intuitiveness. After collecting many ideas and discussing them with a team of experts, we selected a few of them. Different combinations of these selected variables form the interfaces investigated in our usability study. The results of the usability study suggest that autocompletion and suggestion features support the user. Furthermore, coloring tags to group them into categories does not disturb the user and even tends to be supportive. The same tendency emerges for an interface consisting of two user interface elements. An example is also given of how definitions of what counts as intuitive can differ. This thesis leads to the conclusion that, for reasons of user satisfaction and support, it is permissible to deviate from classical annotation interface features, and that further usability studies on annotation interfaces should be conducted.
The importance of social software (SSW) is growing, and not only in many people's private lives. Companies, too, have recognized the potential of these systems and increasingly deploy systems based on Web 2.0 technologies in a corporate context. A 2009 study by the Association for Information and Image Management (AIIM) found that over 50% of respondents regarded Enterprise 2.0 (E2.0), i.e., the use of SSW within the company, as a critical factor for corporate success. Partly driven by this trend, according to a study by the consulting firm IDC, the amount of digitally available information grew by a factor of ten within a span of five years (2006-2011). Where the maxim used to be "the more information, the better", today managing this sheer flood of information causes many companies problems (e.g., regarding the findability of information). With new functions such as social bookmarking, wikis, or tags, SSW offers the potential to structure and organize information better through user participation. Using the example of the research group Betriebliche Anwendungssysteme (FG BAS), this thesis shows how existing information structures can be captured and analyzed, and how recommendations for the use of SSW can be derived from them. The framework for this procedure is a model for conducting an information audit developed by Henczel (2000). Notable results of this work are, on the one hand, the capture model for information and processes (information matrix) and, on the other hand, the visualization model for the captured data.
Tagging systems are intriguing dynamic systems, in which users collaboratively index resources with the so-called tags. In order to leverage the full potential of tagging systems, it is important to understand the relationship between the micro-level behavior of the individual users and the macro-level properties of the whole tagging system. In this thesis, we present the Epistemic Dynamic Model, which tries to bridge this gap between the micro-level behavior and the macro-level properties by developing a theory of tagging systems. The model is based on the assumption that the combined influence of the shared background knowledge of the users and the imitation of tag recommendations are sufficient for explaining the emergence of the tag frequency distribution and the vocabulary growth in tagging systems. Both macro-level properties of tagging systems are closely related to the emergence of the shared community vocabulary.

With the help of the Epistemic Dynamic Model, we show that the general shape of the tag frequency distribution and of the vocabulary growth have their origin in the shared background knowledge of the users. Tag recommendations can then be used for selectively influencing this general shape. In this thesis, we especially concentrate on studying the influence of recommending a set of popular tags. Recommending popular tags adds a feedback mechanism between the vocabularies of individual users that increases the inter-indexer consistency of the tag assignments. How does this influence the indexing quality in a tagging system? For this purpose, we investigate a methodology for measuring the inter-resource consistency of tag assignments. The inter-resource consistency is an indicator of the indexing quality, which positively correlates with the precision and recall of query results. It measures the degree to which the tag vectors of indexed resources reflect how the users perceive the similarity between resources.
We argue with our model, and confirm it in a user experiment, that recommending popular tags decreases the inter-resource consistency in a tagging system. Furthermore, we show that recommending users their own previously used tags helps to increase the inter-resource consistency. Our measure of inter-resource consistency complements existing measures for the evaluation and comparison of tag recommendation algorithms, moving the focus to evaluating their influence on the indexing quality.
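The qualitative micro-to-macro mechanism discussed above can be sketched as a toy imitation process: with some probability a user copies an already assigned tag (imitation of recommendations), otherwise a fresh tag is drawn from background knowledge. This reproduces the heavy-tailed frequency distribution and sublinear vocabulary growth only qualitatively; the parameter value and the uniform copying rule are simplifying assumptions, not the Epistemic Dynamic Model itself.

```python
import random

def simulate_tagging(steps, p_imitate=0.6, seed=42):
    """Toy tagging dynamic: imitation reinforces frequent tags ('rich get
    richer'), background knowledge introduces fresh vocabulary. Returns
    the sorted tag frequencies and the final vocabulary size."""
    random.seed(seed)
    assignments = []
    next_tag = 0
    for _ in range(steps):
        if assignments and random.random() < p_imitate:
            assignments.append(random.choice(assignments))  # imitate
        else:
            assignments.append(next_tag)                    # fresh tag
            next_tag += 1
    counts = {}
    for tag in assignments:
        counts[tag] = counts.get(tag, 0) + 1
    return sorted(counts.values(), reverse=True), next_tag

freqs, vocab_size = simulate_tagging(5000)
```

Varying `p_imitate` plays the role of changing the strength of the recommendation feedback: higher imitation concentrates assignments on a few popular tags and slows vocabulary growth, which is the lever the thesis studies with respect to indexing quality.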