Fachbereich 4
Document Type
- Part of Periodical (104)
- Bachelor Thesis (66)
- Diploma Thesis (47)
- Master's Thesis (32)
- Study Thesis (9)
- Conference Proceedings (5)
Keywords
- Simulation (5)
- Bluetooth (4)
- ontology (4)
- Android <Systemplattform> (3)
- Augmented Reality (3)
- Customer Relationship Management (3)
- Enterprise 2.0 (3)
- Informatik (3)
- Knowledge Compilation (3)
- Mikrocontroller AVR (3)
Institute
- Fachbereich 4 (263)
- Institut für Informatik (27)
- Institut für Wirtschafts- und Verwaltungsinformatik (26)
- Institute for Web Science and Technologies (13)
- Institut für Computervisualistik (9)
- Institut für Management (9)
- Institut für Softwaretechnik (3)
- Institut für Integrierte Naturwissenschaften (1)
Information systems research has recently started to use crowdsourcing platforms such as Amazon Mechanical Turk (MTurk) for scientific research. In particular, MTurk provides a scalable, cheap workforce that can also be used as a pool of potential respondents for online survey research. In light of the increasing use of crowdsourcing platforms for survey research, the authors aim to contribute to the understanding of their appropriate usage. Therefore, they assess whether samples drawn from MTurk deviate from those drawn via conventional online surveys (COS) in terms of answers to relevant e-commerce variables, and they test the data in a nomological network to assess differences in effects.
The authors compare responses from 138 MTurk workers with those of 150 German shoppers recruited via COS. The findings indicate, inter alia, that MTurk workers tend to exhibit more positive word-of-mouth, perceived risk, customer orientation and commitment to the focal company. The authors discuss the study's results, point to limitations, and provide avenues for further research.
Fachbereich 4 (Computer Science) consists of twenty-five working groups led by professors who collaborate on research and teaching in six institutes.
In each annual report, the working groups present themselves according to a uniform pattern: their staff composition, the projects falling within the reporting period, and the scientific output achieved. The following chapters list individual parameters that describe the department in quantitative terms with respect to third-party funding, teaching coverage, graduates, and publications.
The aim of this paper is to identify and understand the risks and issues companies are experiencing from the business use of social media and to develop a framework for describing and categorising those social media risks. The goal is to contribute to the evolving theorisation of social media risk and to provide a foundation for the further development of social media risk management strategies and processes. The study findings identify thirty risk types organised into five categories (technical, human, content, compliance and reputational). A risk-chain is used to illustrate the complex interrelated, multi-stakeholder nature of these risks and directions for future work are identified.
Over the past years, the typical set of critical success factors for companies has changed, and the factor knowledge has gained ever greater importance. Today, knowledge can therefore be regarded as a fourth production factor, superseding labour, capital and land as a company's most important factors (cf. Keller & Yeaple 2013, p. 2; Kogut & Zander 1993, p. 631). The reason is that active measures to support knowledge transfer within companies lead to higher profits and market shares as well as better survivability compared to competitors without such measures (cf. Argote 1999, p. 28; Szulanski 1996, p. 27; Osterloh & Frey 2000, p. 538). The main advantage of knowledge-based developments lies in their sustainability: because of their intangible structure (cf. Inkpen & Dinur 1998, p. 456; Spender 1996a, p. 65 f.; Spender 1996b, p. 49; Nelson & Winter 1982, p. 76 ff.), imitation by competitors is made difficult (cf. Wernerfelt 1984, p. 173; Barney 1991, p. 102).
The way information is presented to users in online community platforms has an influence on the way the users create new information. This is the case, for instance, in question-answering fora, crowdsourcing platforms or other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this user behaviour we demonstrate the ability of the model to evoke macro phenomena comparable to the ones observed on real world data.
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. To date, there has been no study investigating the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies of LOD modeling. Participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decision. We found significant differences between the modeling strategies, which range from reusing popular vocabularies, minimizing the number of vocabularies, and staying within one domain vocabulary. A very interesting insight is that popularity in the sense of how frequently a vocabulary is used in a data source is more important than how often individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding the strategies by which data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
Remote rendering services offer the possibility to stream high-quality images to lower-powered devices. Because the data has to be transmitted, the interactivity of such applications suffers from delay. One method to reduce the delay of camera manipulation on the client is 3D warping; this method, however, causes artifacts. In this thesis, different approaches to remote rendering setups are presented, the artifacts and improvements of the warping method are described, and methods to reduce the artifacts are implemented and analyzed.
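To make the warping idea above concrete, the following is a minimal sketch of forward 3D warping: a pixel of the server frame is unprojected using its depth value and reprojected into a new client view. It assumes a simple pinhole camera model; the intrinsic matrix, poses and depth value are invented example values, not taken from the thesis.

    # Minimal sketch of forward 3D warping: a pixel of the source (server) frame is
    # unprojected with its depth and reprojected into a new target (client) view.
    # K and all poses are hypothetical example values.
    import numpy as np

    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])          # pinhole intrinsics

    def warp_pixel(u, v, depth, R_src, t_src, R_dst, t_dst):
        """Map pixel (u, v) with known depth from the source view into the target view."""
        p_cam = depth * np.linalg.inv(K) @ np.array([u, v, 1.0])   # unproject
        p_world = R_src.T @ (p_cam - t_src)                        # camera -> world
        p_dst = R_dst @ p_world + t_dst                            # world -> target camera
        uvw = K @ p_dst
        return uvw[:2] / uvw[2]                                    # perspective divide

    # Example: same orientation, target camera shifted 0.1 units to the right.
    I = np.eye(3)
    print(warp_pixel(320, 240, 2.0, I, np.zeros(3), I, np.array([0.1, 0.0, 0.0])))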
Due to the industry-wide need to escape head-on competition, Kim and Mauborgne developed the Blue Ocean Strategy as a way of opening up new markets, which they describe as unique. Since other strategies for opening up new markets exist, the goal of this thesis is to determine by which characteristics the Blue Ocean Strategy can be regarded as unique.
The strategy of Kim and Mauborgne is therefore compared with Schumpeter's creative destruction, Ansoff's diversification strategy, Porter's niche strategy and Drucker's innovation strategies. The comparison is based on the characteristics by which Kim and Mauborgne judge the Blue Ocean Strategy to be unique. From these criteria, a meta-model is developed with whose help the analysis is carried out.
The comparison shows that the concepts of Schumpeter, Ansoff, Porter and Drucker resemble the Blue Ocean Strategy in some criteria, but none of the strategies behaves like the concept of Kim and Mauborgne in every respect. While the Blue Ocean Strategy strives for differentiation and cost reduction at the same time, most of the other concepts pursue either differentiation or cost reduction. Entering the new market is also interpreted differently: the Blue Ocean Strategy targets a market that is untapped and therefore has no competition, whereas the other strategies often treat existing markets in which the company has not yet operated as new, which does not rule out that these markets existed before.
Based on the insights gained from this comparison, the Blue Ocean Strategy can thus be regarded as unique.
Data Mining im Fußball
(2014)
The term Data Mining describes applications that can be used to extract useful information from large datasets. Since the 2011/2012 season of the German soccer league, extensive data from the first and second Bundesliga have been recorded and stored. Up to 2,000 events are recorded for each game.
The question arises, whether it is possible to use Data Mining to extract patterns from this extensive data which could be useful to soccer clubs.
In this thesis, Data Mining is applied to the data of the first Bundesliga to measure the value of individual soccer players for their club. For this purpose, the state of the art and the available data are described. Furthermore, classification, regression analysis and clustering are applied to the available data. This thesis focuses on qualitative characteristics of soccer players like the nomination for the national squad or the marks players get for their playing performance. Additionally this thesis considers the playing style of the available players and examines if it is possible to make predictions for upcoming seasons. The value of individual players is determined by using regression analysis and a combination of cluster analysis and regression analysis.
Even though not all applications can achieve sufficient results, this thesis shows that Data Mining has the potential to be applied to soccer data. The value of a player can be measured with the help of the two approaches, allowing simple visualization of the importance of a player for his club.
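As an illustration of the regression-based player valuation described above, the following is a small sketch in Python using scikit-learn. The feature set (goals, assists, pass accuracy, minutes played) and all numbers are invented placeholders, not the Bundesliga data used in the thesis.

    # Sketch of estimating a player's match mark from per-season statistics via
    # linear regression. Feature names and numbers are invented placeholders.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # rows: players; columns: goals, assists, pass accuracy, minutes played
    X = np.array([[12, 7, 0.84, 2700],
                  [3, 10, 0.88, 2400],
                  [0, 1, 0.79, 900],
                  [8, 4, 0.81, 2100]])
    y = np.array([2.1, 2.4, 4.0, 2.8])       # average match mark (lower is better)

    model = LinearRegression().fit(X, y)
    new_player = np.array([[5, 6, 0.86, 1800]])
    print("predicted mark:", model.predict(new_player)[0])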
Systems that simulate crowd behavior are used, for example, to simulate the evacuation of a crowd in case of an emergency. These systems are limited to the movement patterns of a crowd and generally do not consider psychological and/or physical conditions; changing behaviors within the crowd (e.g. caused by a person falling down) are not taken into account.
For that reason, this thesis examines the psychological behavior and the physical impact of a crowd member on the crowd. To do so, the study develops a real-time simulation of a crowd of people, adapted from a system for video games. The system contains a behavior AI for the agents. In order to show physical interaction between the agents and their environment as well as their movements, the physical representation of each agent is realized with rigid bodies from a physics engine. Agent movement additionally uses a navigation mesh and a collision-avoidance algorithm.
The behavior AI gives each agent a physical and psychological state, consisting of a psychological stress level and a physical condition. The developed simulation is able to show physical effects such as crowding and crushing of agents, interaction of agents with their environment, as well as stress factors.
By evaluating several test runs of the simulation, this thesis examines whether the combination of physical and psychological effects can be implemented successfully. If so, the thesis can give indications of agent behavior in dangerous and/or stressful situations as well as an assessment of the complex physical representation.
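The following toy sketch illustrates how a per-agent update might couple a psychological stress level with a physical condition, in the spirit of the behavior AI described above. All thresholds and rates are invented for illustration and do not reflect the thesis' actual model.

    # Toy sketch of a per-frame agent update combining a physical condition with a
    # psychological stress level. Constants are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        condition: float = 1.0   # 1.0 = unharmed, 0.0 = incapacitated
        stress: float = 0.0      # 0.0 = calm, 1.0 = panic

        def update(self, local_density: float, dt: float) -> None:
            # dense surroundings raise stress, free space lets it decay
            self.stress = min(1.0, max(0.0, self.stress + (local_density - 0.5) * 0.2 * dt))
            # crushing: very dense crowds slowly damage the agent
            if local_density > 0.9:
                self.condition = max(0.0, self.condition - 0.1 * dt)

        def speed(self, base_speed: float) -> float:
            # panicking agents try to move faster, injured agents slow down
            return base_speed * (1.0 + 0.5 * self.stress) * self.condition

    agent = Agent()
    for _ in range(100):
        agent.update(local_density=0.95, dt=0.1)
    print(agent.stress, agent.condition, agent.speed(1.4))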
The goal of this paper is to rebuild the seesaw ("Wippe") experiment, as set up in the real-time systems working group led by Professor Dr. Dieter Zöbel, in working order using a LEGO Mindstorms NXT Education kit and to document the procedure. The resulting program code is to be prepared didactically, and a building manual is to be provided. This is to ensure that pupils can experience the seesaw experimental setup in the classroom as easily as possible, even without direct access to a university or similar institution.
This thesis presents path tracing for rendering images with global illumination. Because the rendering equation is evaluated by means of random experiments, the method is physically plausible. Sampling is decisive for the quality of the results. The focus of the thesis is the examination of different sampling strategies: the results of different probability density functions are compared and the methods are assessed. In addition, effects such as depth of field are visualised by means of sampling.
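As an example of one of the sampling strategies compared in such a thesis, the following sketch shows cosine-weighted hemisphere sampling, a common importance-sampling density for diffuse surfaces; its pdf is cos(theta)/pi, so the cosine term of the rendering equation cancels in the Monte Carlo estimator. This is a generic textbook construction, not code from the thesis.

    # Sketch of cosine-weighted hemisphere sampling. pdf(w) = cos(theta) / pi, so
    # for a diffuse BRDF the cosine term cancels in the Monte Carlo estimator.
    import math, random

    def sample_cosine_hemisphere():
        u1, u2 = random.random(), random.random()
        r = math.sqrt(u1)
        phi = 2.0 * math.pi * u2
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = math.sqrt(max(0.0, 1.0 - u1))          # z is aligned with the surface normal
        pdf = z / math.pi                          # cos(theta) / pi
        return (x, y, z), pdf

    direction, pdf = sample_cosine_hemisphere()
    print(direction, pdf)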
Within the "design thinking" process, different variants of creativity techniques are used. Owing to increasing globalisation, collaborations in which the project members are located at distributed sites are becoming more frequent, so digitising the design process is worthwhile. The goal of this study is therefore to create an evaluation scheme that measures the suitability of digital creativity techniques for "entrepreneurial design thinking". Furthermore, it is examined to what extent e-learning systems can usefully be combined with digital creativity techniques, made concrete using the e-learning software "WebCT" as an example. This leads to the following research question: Which digital creativity techniques are suitable for use in "entrepreneurial design thinking" on the e-learning platform "WebCT"? First, a literature analysis is conducted on "entrepreneurial design thinking", classical and digital creativity techniques, and working in groups, including content management, e-learning systems and the "WebCT" platform. A qualitative study follows. Based on the existing literature, an evaluation scheme is created that measures which of the digital creativity techniques considered are best suited for use in "entrepreneurial design thinking". Building on this, the digitised "design thinking" process is linked to the e-learning platform "WebCT". Finally, it is discussed to what extent this combination can be considered useful.
This paper presents a method for the evolution of SHI ABoxes which is based on a compilation technique for the knowledge base. For this, the ABox is regarded as an interpretation of the TBox which is close to a model. It is shown that the ABox can be used for a semantically guided transformation resulting in an equisatisfiable knowledge base. We use the result of this transformation to efficiently delete assertions from the ABox. Furthermore, insertion of assertions as well as repair of inconsistent ABoxes is addressed. For the computation of the actions necessary for deletion, insertion and repair, the E-KRHyper theorem prover is used.
This thesis provides an economic analysis of labour in virtual worlds, with the labour market in "massively multiplayer online role-playing games" (MMORPGs) as its core subject. Starting from the factor labour in the real world and taking the specific characteristics of MMORPGs into account, an overall picture of the virtual labour market was drawn, from which relevant indicators could be derived. Besides the basic finding that a virtual labour market exists, similarities to the real labour market became apparent: virtual hourly wages could be calculated, company-like structures could be identified in player groups, and, starting from human capital theory, a modified theory ("avatar capital") for virtual worlds could be formulated. Differences also emerged; for instance, the complexity of production processes in the examined MMORPGs is usually far lower than in reality. A comparison of motivational factors in both working worlds revealed further commonalities as well as differences. In addition, it was shown that the currently debated topic of a minimum wage is also present in the virtual labour markets of MMORPGs, implemented as a game mechanic to keep players motivated through continuous engagement. Beyond these parallels, an analysis of trade in goods and money between the virtual and the real world (real-money trading) demonstrated a connection between both worlds that affects both labour markets alike. In addition to the theoretical analysis, the thesis incorporates the author's own observations and approaches. The concluding empirical study in particular made it possible to identify further factors that could not be derived sufficiently from theory, above all findings on measuring productivity in virtual worlds. Finally, it also became clear that research on labour markets in virtual worlds is still at an early stage and that numerous research topics remain in this area, which will certainly contribute to further insights in economics.
The Microsoft Kinect is currently popular in many application areas because of its low price and good precision. Controlling a cursor with it, however, is impractical due to jitter in the skeleton data. My approach tries to stabilize the cursor position with common techniques from image processing, using the Kinect color camera as input. A final position is calculated from the different positions reported by the tracking techniques. For controlling the cursor, the right hand is tracked, and a simple click gesture is also developed. The evaluation shows whether this approach was successful.
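One common way to reduce such jitter, sketched below, is an exponential moving average over the measured hand positions; this only illustrates the general idea, and the smoothing factor is an invented value rather than the parameterization used in the thesis.

    # Sketch of jitter reduction for a tracked hand position using an exponential
    # moving average; alpha is an invented example value.
    def make_smoother(alpha=0.3):
        state = {"pos": None}
        def smooth(measurement):          # measurement = (x, y) in pixels
            if state["pos"] is None:
                state["pos"] = measurement
            else:
                state["pos"] = tuple(alpha * m + (1 - alpha) * p
                                     for m, p in zip(measurement, state["pos"]))
            return state["pos"]
        return smooth

    smooth = make_smoother()
    for raw in [(100, 100), (103, 98), (99, 101), (140, 97)]:   # last sample is an outlier
        print(smooth(raw))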
ERP market analysis
(2013)
The current ERP market is dominated by the five largest vendors: SAP, Oracle, Microsoft, Infor and Sage. Since the market and the offered solutions are diverse, a well-founded analysis of the systems is required. Based on selected literature and key figures of the different companies, this thesis examines the theoretical side of the solutions offered by the five large ERP vendors. In addition, the use of the systems in practice is analysed on the basis of a survey of six users, and the systems are compared with each other.
The goal of the thesis is to answer the research questions and to make clear to the reader which ERP system is best suited for which industry and company size.
Furthermore, the thesis provides insight into which trends can be expected for ERP systems in the future and which challenges these pose for companies.
Polsearchine: Implementation of a policy-based search engine for regulating information flows
(2013)
Many search engines regulate Internet communication in some way. It is often difficult for end users to notice such regulation or to obtain background information about it, and the regulation can usually be circumvented easily. This bachelor thesis presents the prototypical metasearch engine "Polsearchine" for addressing these weaknesses. Its regulation is established through InFO, a model for regulating information flows developed by Kasten and Scherp; more precisely, its extension for regulating search engines, SEFCO, is used. For retrieving search results, Polsearchine uses an external search engine API. The API can be interchanged easily to make the metasearch engine independent of any specific API.
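The interchangeable-API design mentioned above could look roughly like the following sketch: the metasearch engine talks to any backend through a small adapter interface and applies its regulation step to the returned results. Class and method names are illustrative and not Polsearchine's actual code.

    # Sketch of an interchangeable search backend behind a common interface, so
    # the metasearch engine does not depend on one specific external API.
    # Names are illustrative and not taken from Polsearchine.
    from abc import ABC, abstractmethod

    class SearchBackend(ABC):
        @abstractmethod
        def search(self, query: str) -> list[dict]:
            """Return a list of results, each with at least 'title' and 'url'."""

    class DummyBackend(SearchBackend):
        def search(self, query: str) -> list[dict]:
            return [{"title": f"Result for {query}", "url": "http://example.org"}]

    def regulated_search(backend: SearchBackend, query: str, blocked: set[str]) -> list[dict]:
        # regulation step: drop results whose URL is covered by a flow-control rule
        return [r for r in backend.search(query) if r["url"] not in blocked]

    print(regulated_search(DummyBackend(), "test", blocked=set()))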
The worldwide accessibility and extensive use of the Internet make this medium an efficient and popular instrument for information, communication and sales. More and more people and organisations try to exploit these advantages for their purposes with a website of their own. In recent years, web analytics software has proven to be a helpful means of optimising web presences: it enables website operators to collect and measure information about their visitors and their usage behaviour. The intended result is optimisation decisions based on data instead of assumptions, together with effective testing possibilities.
For e-commerce, numerous scientific and field-tested guidelines for web analytics projects already exist; information websites, in contrast, are addressed only sporadically despite their importance. To counter this deficit, Hausmann developed the Framework for Web Analytics in 2012, which offers users a helpful reference model for their web analytics projects. The goal of this thesis is to advance this approach further: by means of a literature analysis and a case study, the framework is validated and extended, and further recommendations for action are identified. As a result, the most important findings of this research are summarised and recorded for future use.
Large amounts of qualitative data make the use of computer-assisted methods for their analysis inevitable. In this thesis, Text Mining as an interdisciplinary approach, as well as the methods established in the empirical social sciences for analyzing written utterances, are introduced. On this basis, a process of extracting concept networks from texts is outlined and the possibilities of utilizing natural language processing methods within it are highlighted. The core of this process is text processing, for whose execution software solutions supporting manual as well as automated work are necessary. The requirements to be met by these solutions, against the background of the initiating project GLODERS, which is devoted to investigating extortion racket systems as part of the global financial system, are presented, and their fulfilment by the two most preeminent candidates is reviewed. The gap between theory and practical application is closed by a prototypical application of the method to a data set of the research project using the two given software solutions.
In this thesis we discuss the increasingly important topic of route aggregation and its consequences for avoiding routing loops. As the basis for implementation and evaluation, the RMTI protocol developed at the University of Koblenz is used, an evolution of the Routing Information Protocol version 2 specified in RFC 2453. The virtual network environment Virtual Network User Mode Linux (VNUML) is used within this thesis; with VNUML it is possible to operate and evaluate realistic network scenarios in a virtual environment. RMTI has already proven its ability to detect topological loops and thereby prevent the formation of routing loops. We describe how RMTI works and then discuss under which circumstances route aggregation can be used without resulting in routing anomalies. In order to implement these changes it is essential to have a deeper understanding of the structure of routing tables, so their construction is explained with reference to examples. There follows a description of which parts of RMTI have to be changed in order to avoid loops despite aggregation. Finally, we evaluate the effect route aggregation has on the reorganization ability of the virtual network.
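For readers unfamiliar with route aggregation, the sketch below shows the basic operation: adjacent prefixes with the same next hop are collapsed into a single announcement (here with Python's ipaddress module). The loop-safety conditions that the thesis investigates are deliberately not modelled.

    # Sketch of route aggregation: adjacent prefixes with the same next hop are
    # collapsed into one announcement. Loop-safety checks are not modelled here.
    import ipaddress

    routes = {
        ipaddress.ip_network("10.0.0.0/25"): "192.168.1.1",
        ipaddress.ip_network("10.0.0.128/25"): "192.168.1.1",
        ipaddress.ip_network("10.0.1.0/24"): "192.168.1.2",
    }

    def aggregate(routes):
        by_next_hop = {}
        for prefix, nh in routes.items():
            by_next_hop.setdefault(nh, []).append(prefix)
        aggregated = {}
        for nh, prefixes in by_next_hop.items():
            for supernet in ipaddress.collapse_addresses(prefixes):
                aggregated[supernet] = nh
        return aggregated

    print(aggregate(routes))   # 10.0.0.0/25 + 10.0.0.128/25 -> 10.0.0.0/24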
This thesis describes the implementation of a path-planning algorithm for multi-axle vehicles using machine learning algorithms. For that purpose, a general overview of genetic algorithms is given and alternative machine learning algorithms are briefly explained. The software developed for this purpose is based on the EZSystem simulation software developed by the AG Echtzeitsysteme at the University of Koblenz-Landau and on a path correction algorithm developed by Christian Schwarz, which is also detailed in this paper; this includes a description of the vehicle used in the simulations. Genetic algorithms as a solution for path planning in complex scenarios are then evaluated based on the results of the developed simulation software and compared to alternative, non-machine-learning solutions, which are also presented briefly.
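A generic genetic-algorithm loop for path planning, with selection, crossover and mutation over paths encoded as fixed-length lists of waypoints, is sketched below. The encoding and the purely length-based fitness are simplified stand-ins, not the EZSystem setup or the vehicle model used in the thesis.

    # Generic genetic-algorithm loop: a path is a fixed-length list of 2D waypoints,
    # fitness rewards short paths. Encoding and fitness are simplified stand-ins.
    import random, math

    START, GOAL, N_WAYPOINTS = (0.0, 0.0), (10.0, 10.0), 5

    def random_path():
        return [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(N_WAYPOINTS)]

    def fitness(path):
        pts = [START] + path + [GOAL]
        length = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
        return -length                      # shorter paths score higher

    def crossover(a, b):
        cut = random.randrange(1, N_WAYPOINTS)
        return a[:cut] + b[cut:]

    def mutate(path, rate=0.2):
        return [(x + random.gauss(0, 0.5), y + random.gauss(0, 0.5))
                if random.random() < rate else (x, y) for x, y in path]

    population = [random_path() for _ in range(50)]
    for _ in range(100):
        population.sort(key=fitness, reverse=True)
        parents = population[:10]                       # truncation selection
        population = parents + [mutate(crossover(random.choice(parents),
                                                 random.choice(parents)))
                                for _ in range(40)]
    print("best path length:", -fitness(max(population, key=fitness)))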
Forwarding loops
(2013)
Today smartphones can be found everywhere. This situation has created a hype around Augmented Reality (AR) and AR apps. The big question is: do these applications provide real added value? To make AR practical, it is important to add the computational power of a computer to the advantages of AR; an easy and fast way of interaction is essential.
Poker assistance software is an ideal test bed for an AR application with real added value: estimating the winning probability and quickly and automatically tracking the playing cards is a perfect field of investigation.
In this context it is also interesting to evaluate the added value of AR applications in general.
Recipients' YouTube comments on the five most successful songs of 2011 and 2012 are examined for nostalgic content. The nostalgia-relevant comments are analyzed by content and finally interpreted. The aim is to find out whether nostalgic music content is a factor for success. Using the uses-and-gratifications theory, the recipients' purpose in consuming nostalgia-evoking music is identified. Music is a clearly stronger trigger for evoking nostalgia than the music video, whereby nostalgia triggers positive and/or negative affect. Furthermore, personal nostalgia is much more evident than historical nostalgia, and the lyrics have a considerably higher potential to elicit nostalgia than any other song unit. Persons and momentous events are the most frequent objects in personal nostalgic reverie. The purpose of consuming nostalgic music is the intended evocation of positive and/or negative affect. Hence, nostalgia in music seems to satisfy certain needs, and it can be assumed that nostalgia is a factor of success in the music industry.
Infinite worlds
(2013)
This work is concerned with creating a 2D action-adventure game with role-playing elements. It provides an overview of the various tasks of the implementation. First, the game idea and the game mechanics used are described and a definition of requirements is created. After introducing the framework used, the software engineering concept for the realization is presented. The implementation of control components, game editor, sound and graphics is shown. The graphical implementation pays special attention to the abstraction of light and shadow in the 2D game world.
Due to the increasing pervasiveness of the mobile web, it is possible to send and receive mails with mobile devices. The content of digital communication should be encrypted to prevent eavesdropping and manipulation. The corresponding procedures use cryptographic keys, which have to be exchanged beforehand, and it has to be ensured that a cryptographic key really belongs to the person to whom it is supposedly assigned. Within the scope of this thesis, a concept for a smartphone application to exchange cryptographic keys was designed. The concept consists of a specification of a component-based framework which can be used to securely exchange data in general. This framework was extended and used as the basis for a smartphone application that allows creating, managing and exchanging cryptographic keys. Near Field Communication is used for the exchange, and the implemented security measures prevent eavesdropping and targeted manipulation. In the future, the concept and the application can be extended and adjusted for use in other contexts.
We present the conceptual and technological foundations of a distributed natural language interface employing a graph-based parsing approach. The parsing model developed in this thesis generates a semantic representation of a natural language query in a three-stage, transition-based process using probabilistic patterns. The semantic representation of a natural language query is modeled as a graph, which represents entities as nodes connected by edges representing relations between entities. The presented system architecture provides the concept of a natural language interface that is independent both of the vocabularies included for parsing the syntax and semantics of the input query and of the knowledge sources consulted for retrieving search results. This is achieved by modularizing the system's components and addressing external data sources through flexible modules which can be modified at runtime. We evaluate the system's performance by testing the accuracy of the syntactic parser, the precision of the retrieved search results, and the speed of the prototype.
Business process management (BPM) is regarded as one of the most important success factors in today's corporate development and is perceived as such by modern companies [cf. IDS Scheer 2008]. As early as 1993, business processes were for Hammer and Champy the central key to reorganising companies [cf. Hammer, Champy 1993, p. 35]. The paradigm shift from the structural to the process-oriented organisation and finally to the established "process organisation" was first described by Gaitanides in 1983 [cf. Gaitanides 2007].
Despite a broad and deep treatment of business process management in the scientific literature, it is difficult to gain a quick overview of approaches for introducing it. This is mainly because the literature treats business process management in different scientific fields, such as organisation theory [cf. e.g. Vahs 2009; Schulte-Zurhausen 2005], business administration [cf. e.g. Helbig 2003; Schmidt 2012], or computer science and information systems [cf. e.g. Schmelzer, Sesselmann 2008; Schwickert, Fischer 1996], and describes the establishment of BPM from different thematic angles. Literature on business process management specifically for small and medium-sized enterprises (SMEs) and on methods for introducing BPM in SMEs is particularly hard to find; the combination "approaches for introducing business process management in SMEs" cannot be found in the scientific literature. This thesis is a first attempt to close this gap. It aims to analyse and compare the characteristic properties of a selection of approaches for introducing business process management. In addition, the applicability of the individual approaches to small and medium-sized enterprises is assessed against previously collected requirements that are important for SMEs regarding BPM and its introduction.
Based on the evaluation criteria underlying this thesis, the approach according to Schulte-Zurhausen performs best overall. Nevertheless, each of the examined approaches has strengths and weaknesses regarding its suitability for an SME. As a consequence, when introducing business process management, each of the examined approaches needs to be adapted to the situation of the SME. For this reason the author recommends that an SME choose one approach as the basic procedure for the introduction (in this case the approach according to Schulte-Zurhausen) and enrich or complete it with suitable aspects of the other approaches.
Augmented Reality (AR) is getting more and more popular. Augmenting information into the user's field of vision via HMDs, e.g. a car's windshield, glasses, or the display of a smartphone or tablet, is the main use of AR technology. To augment correctly, it is necessary to determine the position and orientation (pose) of the camera in space.
Nowadays, this is solved with artificial markers: known markers are placed in the room and the system is trained on this setup. The next step is to get rid of these artificial markers. If the pose is calculated without such markers, we speak of marker-less tracking. Instead of artificial markers, natural objects in the real world serve as reference points to calculate the pose. This approach can be used flexibly and dynamically: we are no longer dependent on artificial markers, but we need much more knowledge about the scene to find the pose. This is usually compensated by technical measures and/or by the user himself; however, both solutions are neither comfortable nor efficient. This is why marker-less 3D tracking is still a big field of research.
This sets the starting point for the bachelor thesis. In this thesis, an approach is proposed that needs only a set of 2D features from a given camera image and a set of 3D features of an object to find the initial pose. With this approach, no technical or user assistance is required; the 2D and 3D features can be detected in any way desired.
The main idea of the approach is to build six correspondences between these sets, from which the pose can be estimated. Each 3D feature is projected onto image coordinates with the estimated pose so that the pose can be evaluated: the distance between each projected 3D feature and its associated 2D feature is measured, and the per-correspondence errors are summed up to score the whole pose. The lower this sum, the better the pose; a value around ten pixels has been shown to indicate a correct pose.
Because there are many possible ways to build six correspondences between the two sets, the selection process has to be optimized. A genetic algorithm is used for this optimization.
In the test cases the system worked quite reliably: the hit rate was around 90% with a runtime of approximately twelve minutes, whereas without the optimization an exhaustive search could easily take years.
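The pose scoring described above can be summarized in a few lines: each 3D feature is projected with a candidate pose and the pixel distances to the associated 2D features are summed. The sketch below follows this description; the intrinsic matrix and the feature data are invented example values.

    # Sketch of the pose evaluation described above: project each 3D feature with a
    # candidate pose (R, t) and sum the pixel distances to the associated 2D features.
    # Intrinsics and data are invented example values.
    import numpy as np

    K = np.array([[500.0, 0.0, 320.0],
                  [0.0, 500.0, 240.0],
                  [0.0, 0.0, 1.0]])

    def reprojection_error(R, t, points_3d, points_2d):
        total = 0.0
        for X, x in zip(points_3d, points_2d):
            p = K @ (R @ X + t)            # project into the image
            uv = p[:2] / p[2]
            total += np.linalg.norm(uv - x)
        return total                       # a sum around ten pixels indicated a correct pose

    R, t = np.eye(3), np.array([0.0, 0.0, 0.0])
    pts3d = [np.array([0.0, 0.0, 2.0]), np.array([0.5, 0.2, 2.5])]
    pts2d = [np.array([320.0, 240.0]), np.array([420.0, 280.0])]
    print(reprojection_error(R, t, pts3d, pts2d))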
This bachelor thesis deals with the development of a program that is intended to support dentists during patient treatment by means of AR. To provide an adequate theoretical basis, the current state of the art relevant to this project is explained first. Possible future technologies, which form the hypothetical basis of this work, are then presented. The following subsection explains the selection of the systems used for this project. The main part first describes the procedure in the preparation and planning phase and then presents the program flow of the application step by step, including the problems that arose during implementation. The concluding evaluation presents suggestions for improvement and additional functions for the program.
This master thesis deals with the design and implementation of a path planning system based on rapidly exploring search trees for general n-trailers. This is a probabilistic method characterized by fast and uniform exploration. The method is well established but has so far been applied only to vehicles with simple kinematics. General n-trailers represent a particular challenge, as their controllability is limited. For this reason, the focus of this thesis is on applying the mentioned procedure to general n-trailers. In this context, systematic relationships between the characteristics of general n-trailers and the possibilities for realizing and applying the method are analyzed.
This thesis deals with the development and evaluation of a concept for novel interaction with ubiquitous user interfaces. To evaluate this interaction concept, a prototype was implemented using an existing head-mounted display solution and an Android smartphone.
Furthermore, in the course of this thesis, a concrete use case for this prototype, the navigation through a city block with the aid of an electronic map, was developed and built as an executable application to help evaluate the quality of the interaction concept. In this way, fundamental research results were achieved.
This bachelor thesis deals with the concept of a smartphone application for emergencies. It describes the basic problem and provides a conceptual approach.
The core of this thesis is a requirements analysis for the emergency application to be designed. Furthermore, the functional and non-functional requirements, such as usability, are specified to give insights for the concept of the application. In addition, individual sub-functions of the mHealth applications of the University of Koblenz that already exist or are still under development can be integrated into the future emergency application. Based on the catalogue of requirements, a market analysis of the strengths and weaknesses of existing emergency applications is carried out. In the to-be concept, the findings are summarized and possible architectural sketches for future emergency applications are given. Furthermore, one conclusion of dealing with this topic is that a design alone is not sufficient to guarantee a well-working app; therefore, the requirements of the thesis were expanded to include the connection to and integration of rescue control centers in the architecture of the emergency app.
At the end of the thesis, the reader receives a comprehensive overview of the provision of emergency data to the rescue control centers via different transmission channels. Furthermore, system requirements are presented, as well as possible scenarios for the architecture of the whole emergency application system. The generic and modular approach guarantees that the system remains open for future development and for the integration of functions of other applications.
This master thesis provides a comprehensive overview of the variety of security models by describing, classifying and comparing selected models.
Security models describe the security-relevant components and relationships of a system in an abstract way; with their help, complex situations can be illustrated and analysed.
Since security models address different security aspects, this thesis develops a classification scheme that describes the structural and conceptual characteristics of the models with respect to the underlying security aspects. Within the classification scheme, three fundamental model classes are formed: access control models, information flow models and transaction models.
The security models are compared both directly and indirectly. In the indirect comparison they are assigned to one or more model classes of the classification scheme; this classification allows statements about the security aspects considered and about the structural and conceptual characteristics of a security model in relation to the other security models. In the direct comparison, the properties and aspects of the security models are examined orthogonally to the model classes on the basis of selected criteria.
Human detection is a key element for human-robot interaction. More and more robots are used in human environments and are expected to react to the behavior of people. Before a robot can interact with a person, it must be able to detect that person first. This thesis presents a system for the detection of humans and their hands using an RGB-D camera. First, model-based hypotheses for possible positions of humans are created. From the visible upper parts of the body, new features based on the relief and width of a person's head and shoulders are extracted. The hypotheses are verified by classifying the features with a support vector machine (SVM). The system is able to detect people in different poses; both sitting and standing humans are found using the visible upper body. Moreover, the system is able to recognize whether a human is facing or averting the sensor. If the human is facing the sensor, the color information and the distance between hand and body are used to detect the positions of the person's hands. This information is useful for gesture recognition and can thus further enhance human-robot interaction.
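The verification step described above, classifying geometric head-and-shoulder features of a hypothesis with an SVM, could be sketched as follows. The feature layout and values are invented placeholders, not the descriptors used in the thesis.

    # Sketch of the verification step: geometric head/shoulder features of a
    # hypothesis are classified as "human" / "not human" with an SVM.
    # The feature values are invented placeholders.
    import numpy as np
    from sklearn.svm import SVC

    # each row: [head width, shoulder width, head-to-shoulder height, relief score]
    X_train = np.array([[0.18, 0.45, 0.25, 0.8],
                        [0.20, 0.50, 0.28, 0.7],
                        [0.40, 0.42, 0.05, 0.1],
                        [0.10, 0.15, 0.60, 0.2]])
    y_train = np.array([1, 1, 0, 0])          # 1 = human, 0 = background

    clf = SVC(kernel="rbf").fit(X_train, y_train)
    hypothesis = np.array([[0.19, 0.48, 0.26, 0.75]])
    print("human" if clf.predict(hypothesis)[0] == 1 else "not human")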
Iterative Signing of RDF(S) Graphs, Named Graphs, and OWL Graphs: Formalization and Application
(2013)
When publishing graph data on the web, such as vocabularies using RDF(S) or OWL, one has only limited means to verify the authenticity and integrity of the graph data. Today's approaches require a high signature overhead and do not allow for an iterative signing of graph data. This paper presents a formally defined framework for signing arbitrary graph data provided in RDF(S), Named Graphs, or OWL. Our framework supports signing graph data at different levels of granularity: minimum self-contained graphs (MSG), sets of MSGs, and entire graphs. It supports iterative signing of graph data, e.g., when different parties provide different parts of a common graph, and it allows for signing multiple graphs. Both can be done with a constant, low overhead for the signature graph, even when iteratively signing graph data.
Autonomous systems such as robots are already part of our daily life. In contrast to these machines, humans can react appropriately to their counterparts: people can hear and interpret human speech and read the facial expressions of other people.
This thesis presents a system for automatic facial expression recognition with emotion mapping. The system is image-based and employs feature-based extraction. The thesis analyzes the common steps of an emotion recognition system and presents state-of-the-art methods. The presented approach is based on 2D features detected in the face; no neutral face is needed as a reference. The system extracts two types of facial parameters: the first consists of distances between the feature points, the second comprises angles between lines connecting the feature points. Both types of parameters are implemented and tested, and the parameters that provide the best results for expression recognition are used to compare the system with state-of-the-art approaches. A multiclass Support Vector Machine classifies the parameters.
The results are codes of Action Units of the Facial Action Coding System. These codes are mapped to a facial emotion. This thesis addresses the six basic emotions (happy, surprised, sad, fearful, angry, and disgusted) plus the neutral facial expression. The presented system is implemented in C++ and provides an interface to the Robot Operating System (ROS).
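A small sketch of the two parameter types described above, distances between 2D feature points and angles between the lines connecting them, is given below; the landmark coordinates are invented example values, and in the thesis such feature vectors are classified by a multiclass SVM.

    # Sketch of the two facial parameter types: distances between 2D feature points
    # and angles between the lines connecting them. Landmark coordinates are invented.
    import math

    landmarks = {
        "mouth_left": (120, 200), "mouth_right": (180, 200),
        "mouth_top": (150, 190), "mouth_bottom": (150, 215),
    }

    def distance(a, b):
        return math.dist(landmarks[a], landmarks[b])

    def angle(a, b, c):
        """Angle at point b (in degrees) between the lines b->a and b->c."""
        (ax, ay), (bx, by), (cx, cy) = landmarks[a], landmarks[b], landmarks[c]
        v1, v2 = (ax - bx, ay - by), (cx - bx, cy - by)
        cos = (v1[0] * v2[0] + v1[1] * v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

    features = [distance("mouth_left", "mouth_right"),
                distance("mouth_top", "mouth_bottom"),
                angle("mouth_top", "mouth_left", "mouth_bottom")]
    print(features)   # such vectors would be fed to a multiclass SVM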
The goal of this bachelor thesis was to program an existing six-legged robot so that it can explore an arbitrary environment and create a map of it autonomously. A laser scanner is integrated for perceiving the environment. To build the map and localize the robot, a suitable SLAM (Simultaneous Localization and Mapping) technique is applied to the sensor data. The map serves as the robot's basis for path planning and obstacle avoidance, which are also developed within the scope of this thesis. For this purpose, both GMapping and Hector SLAM are implemented and tested.
An exploration algorithm for exploring the robot's environment is also described. The implementation on the robot is based on the ROS (Robot Operating System) framework running on a Raspberry Pi miniature PC.
A Kinect device can record color and depth images simultaneously. This thesis attempts to use the depth image to manipulate lighting information and material properties in the color image. The presented method of lighting and material manipulation needs a light simulation of the lighting conditions at the time the image was recorded; it is used to transfer information from a new light simulation directly back into the color image. Since the simulations are performed on a three-dimensional model, a way is sought to generate such a model from a single depth image. The thesis also addresses the problems of depth data acquisition with the Kinect sensor. An editor is designed to make lighting and material manipulations possible. To generate a light simulation, some simple real-time capable rendering methods and lighting models are proposed; they are used to insert new illumination, shadows and reflections into the scene. Simple environments with well-defined lighting conditions are manipulated in experiments to show the limits and possibilities of the device and the techniques used.
This thesis describes the conception, implementation and evaluation of a collaborative multiplayer game for preschoolers for mobile devices.
The main objective of this thesis is to find out whether mobile devices like smartphones and tablet computers are suitable for interaction between children. In order to develop this kind of game, the relevant aspects were researched. On this basis a game was designed, which was finally tested by preschoolers.
From September 4 to 11, 1992, a first meeting between Ukrainian and German scientists interested in mathematical and computer modeling of social processes was held at Vorzel' near Kiev. The meeting had been planned for nearly three years by Igor V. Chernenko and Mikhail V. Kuz'min, then members of the research group on mathematical modeling in sociology at the Institute of Sociology of the Academy of Science of the Ukrainian Republic, and had to be postponed twice due to the political development in the former Soviet Union, but thanks to the organizers' perseverance (and in spite of a strike of the airport personnel at Kiev Borispol Airport on the eve of the conference) the conference could at last be realized. The main purpose of the conference was to discuss a synergetic interpretation of large-scale destructive social processes as catastrophic phenomena in self-organized systems.
This paper originates from the FP6 project "Emergence in the Loop (EMIL)" which explores the emergence of norms in artificial societies. Part of work package 3 of this project is a simulator that allows for simulation experiments in different scenarios, one of which is collaborative writing. The agents in this still prototypical implementation are able to perform certain actions, such as writing short texts, submitting them to a central collection of texts (the "encyclopaedia") or adding their texts to texts formerly prepared by other agents. At the same time they are able to comment upon others' texts, for instance checking for correct spelling, for double entries in the encyclopaedia or for plagiarisms. Findings of this kind lead to reproaching the original authors of blamable texts. Under certain conditions blamable activities are no longer performed after some time.
Customization is a phenomenon which was introduced quite early in information systems literature. As the need for customized information technology is rising, different types of customization have emerged. In this study, customization processes in information systems are analyzed from a perspective based on the concept of open innovation. The objective is to identify how customization of information systems can be performed in an open innovation context. The concept of open innovation distinguishes three processes: Outside-in process, inside-out process and coupled process. After categorizing the selected journals into three core processes, the findings of this analysis indicated that there is a major concentration on outside-in processes. Further research on customization in coupled and inside-out processes is recommended. In addition, the establishment of an extensive up-to-date definition of customization in information systems is suggested.
This paper examines existing first aid applications for smartphones and compares them to a first aid application developed by the University of Koblenz called "Defi Now!". The main focus lies on examining "Defi Now!" with respect to its usability based on the dialogue principles, i.e. the seven software-ergonomic principles of the ISO 9241-110 standard: suitability for learning, controllability, error tolerance, self-descriptiveness, conformity with user expectations, suitability for the task, and suitability for individualization.
For this purpose, a usability study was conducted with 74 participants. A questionnaire was developed, which was filled out by the test participants anonymously. The test results were used to optimize the app with respect to its usability.
Various best practices and principles guide an ontology engineer when modeling Linked Data. The choice of appropriate vocabularies is one essential aspect in the guidelines, as it leads to better interpretation, querying, and consumption of the data by Linked Data applications and users.
In this paper, we present the various types of support features for an ontology engineer to model a Linked Data dataset, discuss existing tools and services with respect to these support features, and propose LOVER: a novel approach to support the ontology engineer in modeling a Linked Data dataset. We demonstrate that none of the existing tools and services incorporates all types of support features and illustrate the concept of LOVER, which supports the engineer by recommending appropriate classes and properties from existing and actively used vocabularies. The recommendations are made on the basis of an iterative multimodal search. LOVER uses different, orthogonal information sources for finding terms, e.g. based on a best string match or schema information on other datasets published in the Linked Open Data cloud. We describe LOVER's recommendation mechanism in general and illustrate it along a real-life example from the social sciences domain.
Concept for a Knowledge Base on ICT for Governance and Policy Modelling regarding eGovPoliNet
(2013)
The EU project eGovPoliNet is engaged in research and development in the field of information and communication technologies (ICT) for governance and policy modelling. Numerous communities pursue similar goals in this field of IT-based strategic decision making and simulation of social problem areas, yet the existing research approaches and results are so far quite fragmented. The aim of eGovPoliNet is to overcome this fragmentation across disciplines and to establish an international, open dialogue by fostering the cooperation between research and practice. This dialogue will advance the discussion and development of various problem areas with the help of researchers from different disciplines who share knowledge, expertise and best practice supporting policy analysis, modelling and governance. To support this dialogue, eGovPoliNet will provide a knowledge base, whose conceptual development is the subject of this thesis. The knowledge base is to be filled with content from the area of ICT for strategic decision making and social simulation, such as publications, ICT solutions and project descriptions. This content needs to be structured, organised and managed in a way that generates added value, so that the knowledge base is used as a source of accumulated knowledge which consolidates the previously fragmented research and development results in a central location.
The aim of this thesis is the development of a concept for a knowledge base which provides the structure and the necessary functionalities to gather and process knowledge concerning ICT solutions for governance and policy modelling. This knowledge needs to be made available to users, thereby motivating them to contribute to the development and maintenance of the knowledge base.
This bachelor thesis deals with the user-friendly design of applications (apps) on mobile devices, a subdomain of software ergonomics. Two applications are analyzed with the aim of developing a proposal for how help functionality on a mobile device should be provided. The study focuses primarily on appropriate gestures for invoking the help function on a mobile device. The results show that the test persons request a customized help function but reject an extensive help description, as this seems to be overwhelming for the user.
The purpose of this bachelor thesis is to teach Lisa, a robot of the University of Koblenz's AGAS department developed for participation in the @home league of the RoboCup, to draw. This requires the expansion of the robbie software framework and the operation of the robot's hardware components. With a possible entry in the Open Challenge of the @home RoboCup in mind, the goals are to detect a sheet of paper using Lisa's visual sensor, a Microsoft Kinect, and to draw on it using her Neuronics Katana robot arm. In addition, a pen mounting for the arm's gripper has to be constructed.
This thesis outlines the procedures used to convert an image template into movements of the robot arm, which in turn lead to a drawing being produced by the pen attached to the arm on a piece of paper detected by the visual sensor through image processing. The result is the parsing and drawing of an object made up of an arbitrary number of straight lines from an SVG file onto a white sheet of paper, detected on a slightly darker surface and surrounded by various background objects and textures.
Pedestrian detection in digital images is a task of huge importance for the development of automatic systems and for improving the interaction of computer systems with their environment. The challenges such a system has to overcome are the high variance of the pedestrians to be recognized and the unstructured environment. For this thesis, a complete system for pedestrian detection was implemented according to a state-of-the-art technique. A novel insight about precomputing the Color Self-Similarity accelerates the computations by a factor of four. The complete detection system is described and evaluated, and was published under an open source license.
Das Vertrauen von jungen Erwachsenen in politische Beiträge aus Rundfunk, Print- und Digitalmedien
(2013)
The central question of this bachelor thesis is whether trust in media affects political attitudes and whether media use influences this relationship. Both media categories and individual media formats are considered separately. Political attitude is operationalised via the attitude dimensions effectiveness of the government, legitimacy of the government, perceived political efficacy, responsiveness of political actors, and integrity of political actors. The focus is on young adults, who are commonly said to be politically disenchanted.
To test the relationship between media trust and political attitude, a quantitative online survey of students at the University of Koblenz (N = 496) was conducted. Regression analyses and ANOVA were used for data analysis. The results do not indicate a generally negative basic political attitude among young adults. Moreover, they indicate that trust in media has a significant effect on political attitude (p ≤ .05), whereas media use has insufficient explanatory power. Future studies should likewise examine media trust as the central independent variable; a comparison of generations across different educational backgrounds would be advisable.
In this thesis, a first prototype of a mobile instruction device with mixed reality (MR) functionality is developed. The system shall be capable of supporting on-the-job training through interaction with the work item. The concept corresponds to a didactic approach presented by Martens-Parree that combines constructivism with situated learning. As an application example, the training of glider pilots being checked out on a new type was chosen. Whether the MR device could increase competence or facilitate the completion of certain tasks was examined in a survey with fifteen testers. The results of the study show that, in general, the didactic approach of Martens-Parree is valid. While an increase in factual knowledge has been observed, it was not (yet) possible to demonstrate an increase in skills with respect to the work tasks.
This study investigates crowdfunding, a new form of financing projects. In the past years, more and more crowdfunding platforms have emerged. The main question is whether crowdfunding is able to compete with the traditional types of financing social projects. The history and development of crowdfunding are presented, the different crowdfunding models are explained, and an overview of German crowdfunding platforms is given. Based on successful social crowdfunding projects, key success factors are identified and described. In a case study, a concept for financing a social project through crowdfunding is developed based on the preceding analysis.
In a software reengineering task, legacy systems are adapted computer-aided to new requirements. This requires an efficient representation of all data and information. TGraphs are a suitable representation because all vertices and edges are typed and may have attributes. Furthermore, there exists a global sequence of all graph elements, and for each vertex there exists a sequence of all its incidences. In this thesis the Extractor Description Language (EDL) was developed. It can be used to generate an extractor from a syntax description that is extended by semantic actions. The generated extractor can be used to create a TGraph representation of the input data. In contrast to classical parser generators, EDL supports ambiguous grammars, modularization, symbol table stacks and island grammars. These features simplify the creation of the syntax description. The collected requirements for EDL are used to determine an existing parser generator that is suitable for realizing them.
After that, the syntax and semantics of EDL are described and implemented using the chosen parser generator. Subsequently, two extractors, one for XML and one for Java, are created with the help of EDL. Finally, the time they need to process sample input data is measured.
This thesis deals with problems that occur when rendering stereoscopic content. These problems are elaborated, simulated with the help of a program developed in this thesis, and evaluated by a group of volunteers. The aim is to determine whether the errors are noticeable and how much they influence the 3D effect of the stereoscopic images. Each error is simulated using different camera assemblies and evaluated depending on the chosen assembly.
E-KRHyper is a versatile theorem prover and model generator for first-order logic that natively supports equality. Inequality of constants, however, has to be given by explicitly adding facts. As the number of these facts grows quadratically in the number of distinct constants, the knowledge base is blown up. This makes it harder for a human reader to focus on the actual problem, and impairs the reasoning process. We extend the E-hyper tableau calculus underlying E-KRHyper to avoid this blow-up by implementing a native handling for inequality of constants. This is done by introducing the unique name assumption for a subset of the constants (the so-called distinct object identifiers). The obtained calculus is shown to be sound and complete and is implemented in the E-KRHyper system. Synthetic benchmarks, situated in the theory of arrays, are used to back up the benefits of the new calculus.
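The blow-up that the native handling avoids is easy to quantify: making n constants pairwise distinct with explicit facts requires n(n-1)/2 inequality axioms. A short illustrative Python snippet (the constant names are made up) makes the quadratic growth visible:

# Illustration of the blow-up: n pairwise-distinct constants need n*(n-1)/2 facts.
from itertools import combinations

constants = [f"c{i}" for i in range(1, 101)]          # 100 distinct object identifiers
inequality_facts = [f"{a} != {b}" for a, b in combinations(constants, 2)]
print(len(inequality_facts))                           # 4950 facts, i.e. n*(n-1)/2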
This work deals with the migration of software systems towards the use of the character set defined in the Unicode standard. The work is performed as a case study on the document management system PROXESS. A conversion process is designed that defines the working steps of the migration for the entire system as well as an arbitrary decomposition of the system into individual modules. The working steps for each module can, to a great extent, be performed chronologically independently of each other. For the conversion of the implementation, an approach based on the automatic recognition of usage patterns is applied. The approach searches the abstract syntax tree for sequences of program instructions that can be assigned to a certain usage pattern. The usage pattern defines another sequence of instructions that acts as a sample solution for that pattern and demonstrates the Unicode-based handling of strings. By applying a transformation rule, the original sequence of instructions is transferred to the sequence of instructions given by the sample solution of the related usage pattern. This mechanism is a starting point for the development of tools that perform this transformation automatically.
The annotation of digital media is not a new area of research; it has been investigated widely, and there are many innovative ideas for shaping the annotation process. The most extensive segment of related work concerns semi-automatic annotation. One characteristic is common to the related work: none of it puts the user in focus. If you want to build an interface that supports and satisfies the user, you have to do a user evaluation first. Within this thesis we analyze which features an interface should or should not have to meet these requirements of support, user satisfaction and intuitiveness. After collecting many ideas and discussing them with a team of experts, we selected a few of them. Different combinations of these variables form the interfaces investigated in our usability study. The results of the study suggest that autocompletion and suggestion features support the user. Furthermore, coloring tags to group them into categories does not disturb the user and even tends to be supportive. The same tendency emerges for an interface consisting of two user interface elements. An example is also given of how definitions of 'intuitive' can differ. This thesis leads to the conclusion that, for reasons of user satisfaction and support, it is worthwhile to depart from classical annotation interface features and to conduct further usability studies in the area of annotation interfaces.
The importance of social software (SSW) is growing not only in many people's private lives. Companies, too, have recognized the potential of these systems and increasingly deploy systems based on Web 2.0 technologies in a corporate context. A 2009 study by the Association for Information and Image Management (AIIM) found that more than 50% of respondents regarded Enterprise 2.0 (E2.0), i.e. the use of SSW in companies, as a critical factor for corporate success. Partly driven by this trend, the amount of digitally available information increased tenfold within five years (2006-2011), according to a study by the consulting firm IDC. Where the maxim used to be "the more information, the better", managing this sheer flood of information now causes many companies problems (e.g. with respect to the findability of information). With new features such as social bookmarking, wikis or tags, SSW offers the potential to structure and organize information better through user participation. Using the Forschungsgruppe für Betriebliche Anwendungssysteme (FG BAS) as an example, this thesis shows how existing information structures can be captured and analyzed, and how recommendations for the use of SSW can be derived from them. The framework for this approach is a model for conducting an information audit developed by Henczel (2000). Notable results of the work are, on the one hand, the model for capturing information and processes (information matrix) and, on the other hand, the visualization model for the captured data.
Development of an Android Application for the Recognition and Translation of Words in Camera Scenes
(2012)
This bachelor thesis describes the conception and implementation of a translation software for the Android platform. The specific feature of the software is independent text recognition based on the camera view. This approach aims to enhance and accelerate the process of translation in certain situations. After an introduction into text recognition, the underlying technologies, the operating system Android and useful applications are described. Then the concept of the software is developed and the implementation examined. Finally, an evaluation is conducted to identify strengths and weaknesses of the software.
Dualizing marked Petri nets results in tokens for transitions (t-tokens). A marked transition cannot be enabled, even if there are sufficient "enabling" tokens (p-tokens) on its input places. On the other hand, t-tokens can be moved by the firing of places. This permits flows of t-tokens which describe sequences of non-events. Their benefit to simulation is the possibility of modelling (and observing) causes and effects of non-events, e.g. when something has broken down.
In this paper, we demonstrate by means of two examples how to work with probability propagation nets (PPNs). The first, which comes from the book by Peng and Reggia [1], is a small example of medical diagnosis. The second one comes from [2]. It is an example of operational risk and shows how the evidence flow in PPNs gives hints for reducing high losses. In terms of Bayesian networks, both examples contain cycles, which are resolved by the conditioning technique [3].
The paper gives a focused introduction to probability propagation nets. Starting from dependency nets (which in a way can be considered the maximum information that follows from the directed graph structure of Bayesian networks), probability propagation nets are constructed by joining a dependency net and (a slightly adapted version of) its dual net. Probability propagation nets are the Petri net version of Bayesian networks. In contrast to Bayesian networks, Petri nets are transparent and easy to operate. The high degree of transparency is due to the fact that every state in a process is visible as a marking of the Petri net. The convenient operability consists in the fact that no algorithm is needed apart from the firing rule of Petri net transitions. Besides the structural importance of the Petri net duality there is also a semantic aspect: common sense in the form of probabilities and evidence-based likelihoods are dual to each other.
Standards are widely used in computer science and the IT industry. Different organizations such as the International Organization for Standardization (ISO) are involved in the development of computer-related standards. An important domain of standardization is the specification of data formats enabling the exchange of information between different applications. Such formats can be expressed in a variety of schema languages, thereby defining sets of conformant documents. Often the use of multiple schema languages is required due to their varying expressive power and different kinds of validation requirements. This also holds for the Common Cartridge specification, which is maintained by the IMS Global Learning Consortium. The specification defines valid zip packages that can be used to aggregate different learning objects. These learning objects are represented by a set of files which are part of the package and can be imported into a learning management system. The specification makes use of other specifications to constrain the contents of valid documents. Such documents are expressed in the eXtensible Markup Language and may contain references to other files that are also part of the package. The specification itself is a so-called domain profile. A domain profile allows the modification of one or more specifications to meet the needs of a particular community. Test rules can be used to determine a set of tasks in order to validate a concrete package. The execution is done by a test system which, as we will show, can be created automatically. Hence this method may apply to other package-based data formats that are defined as part of a specification.
This work examines the applicability of this generic test method to the data formats introduced by the so-called Virtual Company Dossier. These formats are used in processes related to public e-procurement. They allow the packaging of evidences that are needed to prove the fulfilment of criteria related to a public tender. The work first examines the requirements that are common to both specifications. This introduces a new view on the requirements at a higher level of abstraction. The identified requirements are then used to create different domain profiles, each capturing the requirements of one package-based data format. The process is normally guided by supporting tools that ease the capturing of a domain profile and the creation of test systems. These tools are adapted to support the new requirements. Furthermore, the generic test system, which is used as a basis when a concrete test system is created, is modified.
Finally, the author comes to a positive conclusion. Common requirements have been identified and captured. The involved systems have been adapted, allowing the capturing of further types of requirements that had not been supported before. Furthermore, the backgrounds of the two specifications differ considerably, which indicates that the use of domain profiles and generic test technologies may be suitable in a wide variety of other contexts.
With the re-accreditation of the degree programs in the department of computer science at the University of Koblenz-Landau, new trendsetting degree programs will be offered. For further planning and design of the individual degree programs, the opinion of the students is an important indicator. Information about the new degree programs is not available during the accreditation process, yet students have an interest in knowing about the new degree programs and the new examination regulations, so involving them in the decision process is desirable. The concept of e-participation is an opportunity to satisfy this need: it offers the possibility to discuss topics of the accreditation and to bring own ideas and opinions into the decision process. This bachelor thesis describes an e-participation project at the University of Koblenz-Landau about the accreditation of the degree programs of the faculty of computer science. The project is carried out using the reference framework of Scherer and Wimmer (2011). Furthermore, the accreditation process is modeled to obtain a better understanding of the whole process and to identify where e-participation can be integrated. The results of the project are gathered through an online survey about the e-participation platform. Based on the survey results and the experience gained during the project, recommendations for further e-participation projects are given. Moreover, the reference framework of Scherer and Wimmer (2011) is analyzed critically.
Parallel manipulators based on the Stewart mechanism allow tasks to be carried out precisely within a limited workspace. Six degrees of freedom provide highly flexible positioning, and the robust construction yields a very good weight-to-payload ratio.
This bachelor thesis deals with the development of a flexible software solution for controlling a Stewart platform, including a model of the platform used for testing. First, the mathematical foundations of inverse kinematics are derived, building on a previously defined motion model. This is followed by the development of a generic architecture for transmitting and evaluating control commands from a PC. The implementation is written in C and split into modules, each covering one area of position control or hardware communication. A graphical user interface is also presented with which the position of the platform can be changed manually. Automatic control is described in the subsequent application example, in which the platform is fed periodically sampled acceleration values from a roller coaster simulation.
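The inverse kinematics step can be summarized in a few lines (the thesis implements this in C; the following Python/numpy sketch is illustrative and the anchor geometry is a placeholder): for a desired platform pose, each leg length is simply the distance between its base anchor and the transformed platform anchor.

# Minimal sketch of Stewart platform inverse kinematics (illustrative only).
import numpy as np

def rotation_matrix(roll, pitch, yaw):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return rz @ ry @ rx

def leg_lengths(base_anchors, platform_anchors, translation, rpy):
    """base_anchors, platform_anchors: 6x3 arrays; returns the six leg lengths."""
    r = rotation_matrix(*rpy)
    world_platform = (r @ platform_anchors.T).T + translation
    return np.linalg.norm(world_platform - base_anchors, axis=1)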
Despite its growing relevance, measuring the productivity of services is still a poorly explored field of research. The main reasons lie in the particular characteristics of services, namely their intangibility and the integration of the customer. A typical service in the B2B software industry is the adaptation of systems to customer needs, so-called customizing. Exploiting the customizing potential of standard software and products and involving customers more strongly in innovation processes is, however, hampered by the fact that the productivity of this service appears to be insufficiently measurable and thus hard to assess.
This paper describes the development of a model for measuring the productivity of services, based on several preliminary studies conducted within the CustomB2B project at the University of Koblenz-Landau.
Virtual Goods + ODRL 2012
(2012)
This is the 10th international workshop on technical, economic, and legal aspects of business models for virtual goods, incorporating the 8th ODRL community group meeting. This year we did not call for completed research results, but invited PhD students to present and discuss their ongoing research work. In the traditional international group of virtual goods and ODRL researchers we discussed PhD research from Belgium, Brazil, and Germany. The topics focused on research questions about rights management in the Internet and e-business stimulation. At the center of rights management stands the conception of a formal policy expression that can be used for human-readable policy transparency as well as for machine-readable support of policy-conformant system behavior, up to automatic policy enforcement. ODRL has proven to be an ideal basis for policy expressions, not only for digital copyrights, but also for the more general "Policy Awareness in the World of Virtual Goods". In this sense, policies support the communication of virtual goods, and they are themselves a virtualization of rule-governed behavior.
Aspect-orientation in PHP
(2012)
This diploma thesis addresses the lack of support for cross-cutting concerns (CCCs) in PHP. It is based on a set of requirements, to be defined, for an AOP realization in the PHP environment. The thesis analyses whether and how related languages and paradigms support CCCs. In addition, the possibility of realizing AOP in PHP without a PHP extension is discussed, and existing approaches to implementing AOP in PHP are examined qualitatively. The goal of this work is to present an own AOP solution for PHP that does not share the weaknesses of the existing solutions.
Regarding the rapidly growing amount of data produced every year and the increasing acceptance of Enterprise 2.0, enterprises have to care more and more about the management of their data. Content created and stored in an uncoordinated manner can lead to data silos (Williams & Hardy 2011, p. 57), which result in long search times, inaccessible data and, in consequence, monetary losses. The "expanding digital universe" forces enterprises to develop new archiving solutions and records management policies (Gantz et al. 2007, p. 13). Enterprise Content Management (ECM) is the research field that deals with these challenges; it is placed in the scientific context of Enterprise Information Management. This thesis aims to find out to what extent current Enterprise Content Management Systems (ECMS) support these new requirements, especially concerning the archiving of Enterprise 2.0 data. For this purpose, three scenarios were created to evaluate two different kinds of ECMS (one open source and one proprietary system) chosen on the basis of a short market research. The application of the scenarios reveals that the system vendors do address the industry's concerns: both tools provide functionality for the archiving of data arising from online collaboration as well as business records management capabilities, but the integration of the two topics is not, or only inconsistently, solved. At this point new questions arise, such as "Which data generated in an Enterprise 2.0 is worth being a record?", and should be examined in future research.
Procedural content generation, the generation of video game content using pseudo-random algorithms, is a field of increasing business and academic interest due to its suitability for reducing development time and cost as well as the possibility of creating interesting, unique game spaces. Although many contemporary games feature procedurally generated content, the author perceived a lack of games using this approach to create realistic outer-space game environments, and the feasibility of employing procedural content generation in such a game was examined. Using current scientific models, a real-time astronomical simulation was developed in Python which procedurally generates star and planet objects in a fictional galaxy to serve as the game space of a simple 2D space exploration game in which the player has to search for intelligent life.
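The core trick of such a generator can be sketched in a few lines of Python (illustrative only, with made-up attributes; not the thesis implementation): a seeded pseudo-random generator makes the content deterministic, so the same fictional galaxy can be recreated on demand instead of being stored.

# Illustrative sketch of deterministic procedural generation.
import random

def generate_stars(seed, count=1000, size=10000.0):
    rng = random.Random(seed)
    stars = []
    for i in range(count):
        stars.append({
            "id": i,
            "x": rng.uniform(-size, size),
            "y": rng.uniform(-size, size),
            "spectral_class": rng.choice("OBAFGKM"),
            "planets": rng.randint(0, 8),
        })
    return stars

galaxy = generate_stars(seed=42)      # identical on every run with the same seed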
Schema information about resources in the Linked Open Data (LOD) cloud can be provided in a twofold way: it can be explicitly defined by attaching RDF types to the resources, or it is provided implicitly via the definition of the resources' properties.
In this paper, we analyze the correlation between the two sources of schema information. To this end, we have extracted schema information regarding the types and properties defined in two datasets of different size. One dataset is a LOD crawl starting from TimBL's FOAF profile (11 million triples) and the second is an extract from the Billion Triples Challenge 2011 dataset (500 million triples). We have conducted an in-depth analysis and have computed various entropy measures as well as the mutual information encoded in these two manifestations of schema information.
Our analysis provides insights into the information encoded in the different schema characteristics. It shows that a schema based on either types or properties alone will capture only about 75% of the information contained in the data. From these observations, we derive conclusions about the design of future schemas for LOD.
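The entropy and mutual-information measures referred to can be illustrated with a small Python sketch (the observations below are made up; the actual computation in the paper operates on the schema elements extracted from the two datasets): each resource contributes a pair of type and property signatures, and the mutual information of the two marginals quantifies how much one source of schema information tells us about the other.

# Sketch of the measures used: entropies and mutual information from joint counts.
import math
from collections import Counter

# each observation: (type signature, property signature) of one resource
observations = [("Person", "name|knows"), ("Person", "name"), ("Document", "title")]

def entropy(counter, total):
    return -sum(c / total * math.log2(c / total) for c in counter.values())

n = len(observations)
joint = Counter(observations)
types = Counter(t for t, _ in observations)
props = Counter(p for _, p in observations)

h_types, h_props, h_joint = entropy(types, n), entropy(props, n), entropy(joint, n)
mutual_information = h_types + h_props - h_joint   # I(T;P) = H(T) + H(P) - H(T,P)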
Quadrocopters are helicopters with four rotors arranged in one plane. Small unmanned models, which often generate a thrust of only a few newtons, are popular in the toy and model-making sector, but are also used by the military and police as drones for reconnaissance and surveillance tasks. This diploma thesis deals with the theoretical foundations of controlling a quadrocopter and, building on them, develops a low-cost control board for a model quadrocopter.
The theoretical part contains an analysis of the dynamics of a freely flying quadrocopter, in which the equations of motion are derived and compared with the results presented in "Design and control of quadrotors with application to autonomous flying" ([Bou07]). Furthermore, the operating principles of various sensors suitable for determining the current spatial orientation are described, and methods for estimating the orientation from the measurements of these sensors are discussed. In addition, the skew field of quaternions is introduced, in which three-dimensional rotations can be represented compactly and concatenated efficiently.
Subsequently, the development of a simple control board is described that enables both autonomous hovering and remote control. The board was developed and tested on an X-Ufo quadrocopter by Silverlit, which is therefore also presented. The components used and their interplay are discussed; the WiiMotionPlus deserves particular mention, as it is used as a low-cost gyro sensor module. Various aspects of the control software are also explained: the evaluation of the sensor data, the state estimation using the explicit complementary filter by Mahony et al. ([MHP08]), the implementation of the attitude controller, and the generation of the control signals for the motors. Both the control software and the schematic and board layout of the control board are included on a CD accompanying this thesis; the schematic and board layout are additionally reproduced in the appendix.
In this master thesis, some new helpful features are added to the Spanning Tree Simulator. This simulator was created by Andreas Sebastian Janke in his bachelor thesis [Jan10b] in 2010. It can visualize networks which are defined in a configuration file, each of which is an XML representation of a network consisting of switches and hosts. After loading such a file into the program it is possible to run the Spanning Tree Algorithm IEEE 802.1D. In contrast to the previous version, only the switches are implemented as threads. When the algorithm has finished, a spanning tree has been built, which means that messages can no longer run into loops. This is important because loops can cause a total breakdown of the communication in a network if the routing protocols in use cannot handle them.
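What the simulated switches compute can be summarized by the following simplified Python sketch (a sketch of the 802.1D idea, not the simulator's threaded implementation; the real protocol's tie-breaking rules are omitted): the switch with the lowest bridge ID becomes the root, and every other switch keeps only its cheapest port towards the root, so the retained links form a loop-free tree.

# Simplified sketch of what an IEEE 802.1D spanning tree computation yields.
import heapq

def spanning_tree(switches, links):
    """switches: list of bridge IDs; links: dict {(a, b): cost} for undirected links."""
    root = min(switches)                       # lowest bridge ID wins the root election
    neighbors = {s: [] for s in switches}
    for (a, b), cost in links.items():
        neighbors[a].append((b, cost))
        neighbors[b].append((a, cost))
    best_cost = {root: 0}
    root_port = {}                             # switch -> upstream neighbor towards the root
    queue = [(0, root, None)]
    while queue:
        cost, node, parent = heapq.heappop(queue)
        if node in root_port or (node == root and parent is not None):
            continue                           # already settled with a cheaper path
        if parent is not None:
            root_port[node] = parent
        for nxt, c in neighbors[node]:
            if nxt not in best_cost or cost + c < best_cost[nxt]:
                best_cost[nxt] = cost + c
                heapq.heappush(queue, (cost + c, nxt, node))
    return root, root_port                     # the root-port edges form the spanning tree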
Magnetic resonance (MR) tomography is an imaging method that is used to expose the structure and function of tissues and organs in the human body for medical diagnosis. Diffusion-weighted (DW) imaging is a specific MR imaging technique which enables us to gain insight into the connectivity of white matter pathways noninvasively and in vivo, and allows for making predictions about the structure and integrity of those connections. In clinical routine this modality finds application in the planning phase of neurosurgical operations, such as tumor resections. This is especially helpful if the lesion is deeply seated in a functionally important area where there is a risk of damage. This work reviews the concepts of MR imaging and DW imaging. At the current resolution of diffusion-weighted data, single white matter axons cannot be resolved; the captured signal rather describes whole fiber bundles. Besides this, different complex fiber configurations often occur in a single voxel, such as crossings, splittings and fannings. For this reason, the main goal is to assist tractography algorithms, which are often confounded in such complex regions. Tractography is a method which uses local information to reconstruct global connectivities, i.e. fiber tracts. In the course of this thesis, existing reconstruction methods such as diffusion tensor imaging (DTI) and q-ball imaging (QBI) are evaluated on synthetically generated data and real human brain data, and the amount of valuable information provided by the individual reconstruction methods and their corresponding limitations are investigated. The output of QBI is the orientation distribution function (ODF), whose local maxima coincide with the underlying fiber architecture. We determine those local maxima. Furthermore, we propose a new voxel-based classification scheme based on diffusion tensor metrics. The main contribution of this work is the combination of voxel-based classification, local maxima from the ODF and global information from a voxel's neighborhood, which leads to the development of a global classifier. This classifier validates the detected ODF maxima and enhances them with neighborhood information; hence, specific asymmetric fibrous architectures can be determined. The outcome of the global classifier is a set of potential tracking directions. Subsequently, a fiber tractography algorithm is designed that integrates along the potential tracking directions and is able to reproduce splitting fiber tracts.
Activity recognition with smartphones is possible using their internal sensors, without any external sensor. First, previous works and their techniques are reviewed, and from these works an own implementation for activity recognition is derived. Most of the previous works only use the accelerometer for the activity recognition task. For this reason, this bachelor thesis analyzes the benefit of further sensors, such as the magnetic field sensor, linear acceleration or the gyroscope. The activity recognition is performed by classification algorithms; Decision Tree, Naive Bayes and Support Vector Machines are used. Sensor data of subjects is collected and saved with a self-developed application. This data is needed as training data for the classification algorithms.
The result is a model which represents the structure of the data. To validate the model, a test dataset is used which is different from the training dataset. The results confirm previous works which indicated that the activity recognition task is possible using only the accelerometer. Orientation, gyroscope and linear acceleration cannot be used for all activity recognition problems. Apart from that, the Decision Tree appears to be the best classification algorithm if the model has no training data of the current user.
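The classification step can be illustrated with a small scikit-learn sketch (illustrative only; the thesis uses its own recorded sensor data and feature set, and the per-axis statistics chosen here are assumptions): fixed-length windows of accelerometer samples are reduced to simple statistics and fed to a Decision Tree.

# Sketch of window-based activity classification with scikit-learn (synthetic data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def window_features(window):
    """window: Nx3 array of (x, y, z) accelerometer samples -> mean and std per axis."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

# X: one feature vector per window, y: the activity label recorded for that window
rng = np.random.default_rng(0)
X = np.array([window_features(rng.normal(size=(50, 3))) for _ in range(200)])
y = rng.integers(0, 3, size=200)                     # e.g. 0=walking, 1=standing, 2=cycling

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier().fit(X_train, y_train)
print(accuracy_score(y_test, clf.predict(X_test)))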
The objective of this contribution is to conceptually analyze the potential of entrepreneurial design thinking as a rather new method for entrepreneurship education. Based on a literature review of different design thinking concepts, we carve out a generic design thinking model upon which we conceptually build a new model that considers entrepreneurial thinking as a valuable characteristic.
The results of our work show that the characteristics of entrepreneurial design thinking can enhance entrepreneurship education by supporting respective action fields of entrepreneurial learning. In addition we reveal that entrepreneurial design thinking offers beneficial guidelines for the design of entrepreneurship education programs.
Cloud computing is currently a major trend in the IT industry, and ERP systems are indispensable in today's companies. An analysis of selected literature shows that cloud computing as an operating model for ERP systems requires particular investigation, since various challenges in the interplay of these technologies still need to be resolved. Building on this, case studies of the cloud ERP solutions of three different industry partners are compiled in order to compare the theoretical literature with the practical findings in a next step.
The aim of this thesis is to use the research questions to uncover differentiated benefits of cloud ERP solutions and to explain how the theory matches the practical experience of experts. The case studies make clear that the three cloud ERP providers differ mainly in the scope of their software and in the company sizes of their target groups in the market. In addition, the analysis and conclusion show that, beyond the benefits identified in the theory, additional benefits can be generated by combining cloud ERP solutions, and that a specific target group for cloud ERP solutions already exists today. For the future it remains to be seen how the cloud ERP market will develop and which further functionalities can be moved to the cloud, so that on-demand ERP systems may become a real competitor to on-premise solutions.
Within the scope of this bachelor thesis, a survey targeting the alumni of the Department 4: Computer Science of the University of Koblenz-Landau was planned, realized and evaluated. The goal was to support the Task Force Bachelor Master that was in charge of the re-accreditation process of the study courses. At first, the theoretical foundation of the survey design was acquired via desk research. Moreover, the analysis of past surveys of similar character led to an impression of the requirements. Under consideration of recent changes, a survey prototype was created and improved following the insights from a pre-test. The final version was implemented using the open source tool LimeSurvey, which served as the technical basis of the survey. The recipients included members of alumni clubs as well as recent alumni from the last years.
The survey led to insights about the satisfaction of the alumni with their study course and the study situation in general. Furthermore, their opinion regarding two new master courses, E-Government and Web Science, was solicited. The feedback from four of the study courses was sufficient to give significant results; for the other five courses it was only possible to interpret the general statements. All in all, there was a high rate of satisfaction with the studies.
Additionally, it was possible to collect suggestions for improvements and criticism. The main topics were internationality, emphasis of study topics, freedom of choice/specialization and relevance to practice. As a result of the survey, a recommendation was formulated which, in combination with the detailed results, should lead to an improvement in the quality and relevance of teaching in the Department 4: Computer Science.
The Multimedia Metadata Ontology (M3O) provides a generic modeling framework for representing multimedia metadata. It has been designed based on an analysis of existing metadata standards and metadata formats. The M3O abstracts from the existing metadata standards and formats and provides generic modeling solutions for annotations, decompositions, and provenance of metadata. Being a generic modeling framework, the M3O aims at integrating the existing metadata standards and metadata formats rather than replacing them. This is particularly useful as today's multimedia applications often need to combine and use more than one existing metadata standard or metadata format at the same time. However, applying and specializing the abstract and powerful M3O modeling framework in concrete application domains and integrating it with existing metadata formats and metadata standards is not always straightforward. Thus, we have developed a step-by-step alignment method that describes how to integrate existing multimedia metadata standards and metadata formats with the M3O in order to use them in a concrete application. We demonstrate our alignment method by integrating seven different existing metadata standards and metadata formats with the M3O and describe the experience gained during the integration process.
The E-Government research area has gained in importance in Europe, and especially in Germany, in the last few years, causing the number of researchers, institutes and publications to increase rapidly. This makes it difficult for outsiders to get an overview of the relevant actors in the E-Government field. This issue can be addressed by implementing a research map for the E-Government field, where all relevant actors and objects and their information are shown on the map according to their location. In order to give a complete overview, information which was valid at a certain time in the past needs to be available on the research map. This can only be achieved if the contents of the research map are historicized: a new version of an object needs to be created and saved in the database whenever the object changes, and older versions need to be retained in the database so that the user is able to navigate the website based on temporal information. Past experience has shown that the temporal aspects of historicization should be managed and planned during the conceptual phase of the website rather than during implementation. This bachelor thesis proposes a concept for the E-Government research map which includes the modeling of the relevant temporal dimensions needed to historicize the contents of the research map.
Opinion Mining : Using Twitter as a source of opinion for the prediction of stock market prices
(2012)
In addition to the theoretical basic concepts of automated text analysis, which form the foundation of this work, an overview of the current state of research on the analysis of Twitter messages is given. Various research results from the currently available scientific literature are explained, compared and critically questioned. Their results and approaches are incorporated into our own research where they appear useful; the goal is to make the best possible use of the current state of research.
A further goal is to give the reader an overview of various machine-based data analysis methods for opinion detection. This is necessary in order to better understand, in their scientific context, the analysis methods used later in this work. Since these methods can be carried out in different ways, several analysis methods are presented and compared, thereby demonstrating the feasibility of the subsequent opinion analysis. To guarantee sufficient accuracy in the following study, an existing, already evaluated framework is used; it is available as an API and is therefore covered as well. The core of this work is dedicated to the analysis of Twitter messages using opinion mining methods.
The aim is to examine whether correlations can be found between the sentiment of Twitter messages and a company's stock price. For this purpose, the sentiment regarding Google Inc. is examined over a period of one month and the findings are compared with the company's stock price. The goal is to verify the findings of (Sprenger & Welpe, 2010) and (Taytal & Komaragiri, 2009) in this area and to answer further questions.
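The correlation step itself is straightforward and can be sketched as follows (a minimal Python sketch with made-up numbers, not the actual series used in the thesis): the daily average tweet sentiment is compared with the daily closing price via Pearson's r.

# Illustrative sketch of the sentiment/price correlation step (made-up data).
import statistics

daily_sentiment = [0.12, 0.30, 0.25, -0.05, 0.18, 0.22, 0.10]   # mean tweet polarity per day
closing_prices = [602.1, 605.4, 604.8, 598.2, 601.0, 603.3, 600.5]

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(pearson(daily_sentiment, closing_prices))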
In this bachelor thesis, a tangible augmented reality game was developed which should offer an additional benefit compared to conventional computer or augmented reality games. The main part of the thesis explains the game concept, the development and the evaluation of the game. In the evaluation, the flow experience, as a measure of the game's enjoyment, was analysed with a user test, and the developed game was compared with other smartphone games. Augmented reality, tangible user interfaces and tangible augmented reality are also introduced, their advantages and disadvantages explained, and a brief history of augmented reality given.
In this diploma thesis, a system was developed that enables navigation for pedestrians. As intended, the system was realized for use on an iPhone.
Although the new generation of maps from NAVTEQ is still under development, first impressions could be gathered of what pedestrian navigation will look like in the future. The system also works with map data from OpenStreetMap, which uses the classical representation of map data. The positioning can later be switched over once the Galileo receiver becomes available, providing position data with higher precision for navigation. The route calculation was accelerated by precomputation using a CH-like approach, while still allowing the profile to be changed without requiring a new precomputation. This differs from plain CH, where a change of the profile used to compute route costs requires a new precomputation, which is expensive compared to computing a single route. Figure 8.1 shows a demonstration of navigation with the finished system: routes are computed for three different profiles that prefer different gradients, resulting in different routes.
3D curve skeletons are often used because, compared to the representation created by the Medial Axis Transformation introduced by Harry Blum in 1967, the representation of the object surface is less complex and also needs less computing power in further processing.
This thesis aims at developing a 3D curve skeleton approximation algorithm that preserves these advantages and is also able to handle different scenarios of object surface input data.
The principles of project management are in transition due to the influences of economic conditions and technological development [Wills 1998 & Jonsson et al. 2001]. Increasing internationalization, shortened time to market, changing labor costs and the growing involvement of professionals distributed across geographical locations are drivers of the transformation of the project landscape [Evaristo/van Fenema 1999]. As a result, the use of collaborative technologies is a crucial factor for the success of a project [Romano et al. 2002]. Previous research on the use of collaborative technologies for project management purposes focuses especially on the development of model-like, universal system architectures to identify the requirements for a specially designed collaborative project management system. This thesis investigates the challenges and benefits that arise when an organisation implements business software for the purpose of collaborative project management.
This diploma thesis investigates the use and possible integration of an eye tracker in image search. Eye trackers are devices for capturing gaze. They are frequently used in design and usability studies to obtain information about how users interact with a product. For some time now, eye movements have also been used to detect user-relevant information and regions, for example in the Text 2.0 project [4]. There, gaze direction and fixation are used to enable interaction with the reader of a text in a way that is as simple, yet subtle, as possible.
This paper critically examines Google Calendar. For this purpose, the functions offered by the core product are examined with respect to privacy aspects. On the one hand, it is identified to what extent the product could infringe on users' privacy; on the other hand, the resulting risks are discussed. Furthermore, the functions are considered in terms of their use for both the service provider Google and the user. A detailed analysis demonstrates the critical aspects in which we have to decide between privacy and functionality. The identified IT security mechanisms for minimizing the discussed risks are presented, discussed and analyzed in terms of their feasibility. Afterwards, the individual solutions are summarized in a security concept and further requirements are explained. Finally, a Firefox add-on implementing the described solution concept is created to resolve the existing weaknesses as far as possible, and its functionality and technical implementation are illustrated in detail.
In this thesis the feasibility of a GPGPU (general-purpose computing on graphics processing units) approach to natural feature description on mobile phone GPUs is assessed. To this end, the SURF descriptor [4] has been implemented with OpenGL ES 2.0/GLSL ES 1.0 and evaluated across different mobile devices. The implementation is several times faster than a comparable CPU variant on the same device. The results prove the feasibility of modern mobile graphics accelerators for GPGPU tasks, especially for the detection phase in natural feature tracking used in augmented reality applications. Extensive analysis and benchmarking of this approach in comparison to state-of-the-art methods have been undertaken. Insights into the modifications necessary to adapt the SURF algorithm to the limitations of a mobile GPU are presented. Further, an outlook for a GPGPU-based tracking pipeline on a mobile device is provided.
Computers assist humans in many everyday situations. Their advancing miniaturisation broadens their fields of use and leads to an even higher significance and spread throughout society. These small and powerful machines are already widespread in everyday objects, and their spread continues to increase as the mobility aspect grows in importance. From laptops, smartphones and tablets to systems worn on the body (wearable computing) or even inside the body as cyber-implants, these systems help humans actively and context-sensitively in the accomplishment of their everyday business.
A part of the wearable computing domain is taken up by the development of head-mounted displays (HMDs). These helmets or goggles feature one or more displays enabling their users to see computer-rendered images or images of their environment enriched with computer-generated information. At the moment most of these HMDs feature LC displays, but newer systems are appearing that project the image directly onto the user's retina, and the newest breakthroughs in the field have already produced contact lenses with an integrated display. The data shown by an HMD is compiled using a multitude of sensors, such as a head tracker or a GPS receiver. Increasing computational performance and miniaturisation lead to a wide spread of HMDs in many fields. The multiple scenarios in which an HMD can be used to improve human perception and interaction led the Institut für Integrierte Naturwissenschaften of the University of Koblenz-Landau to devise an HMD on the basis of Apple's iOS devices featuring Retina displays. The high pixel density of these displays, combined with condensor lenses into an HMD, offers a highly immersive environment for stereoscopic imagery, whereas other systems only display a relatively small image projected a few feet away from the user. Furthermore, the iPhone, iPod Touch and iPad exhibit a lot of potential given their variety of sensors and computational power. While producing a similarly feature-rich HMD is very costly, using simple fourth-generation iPod Touches as the basis of an HMD results in a very inexpensive solution with high potential. The increasing popularity and spread of Apple devices would reduce the costs even more, as users of the HMD could simply integrate their own device into the system. A software designed with the specific intent to support a large variety of Apple iOS devices, and that could easily be extended to support newer devices, would allow for a universal use of such an HMD solution, as a new device could simply replace an old one. The focus of this thesis is the conception and development of an application designed for Apple's iOS 5 operating system that will be used in an HMD built around Apple iOS devices featuring Retina displays. The Rollercoaster2000 project, depicting a ride in a virtual roller coaster, will be used as the application's core. A server will synchronize the displays of the clients connected to it which are combined to form an HMD. Furthermore, the gyroscopes of the iOS devices combined into an HMD will be used to track the wearer's head movements. Another feature will be the use of the devices' cameras as a means of orientation while wearing the HMD.
The first step towards realizing software that meets these specifications is an introduction to the Objective-C programming language used to develop iOS applications. Together with the compiler and runtime environment, Objective-C forms the basis of the second step, the introduction of the iOS SDK. Armed with this knowledge of iOS app development, the last part of the thesis consists of the ascertainment of requirements and the development of software that fulfils the goals set for an application written specifically for use in an HMD.
Particle swarm optimization is an optimization technique based on simulation of the social behavior of swarms.
The goal of this thesis is to solve 6DOF local pose estimation using a modified particle swarm technique introduced by Khan et al. in 2010. Local pose estimation is achieved by using continuous depth and color data from an RGB-D sensor. Datasets are acquired from different camera poses and registered into a common model. Accuracy and computation time of the implementation are compared to state-of-the-art algorithms and evaluated in different configurations.
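For readers unfamiliar with the technique, the following is a generic particle swarm optimization sketch in Python (not the modified variant of Khan et al. and not the thesis implementation; in the actual application the cost function would measure how well the sensor data aligns under a candidate 6DOF pose, whereas a toy cost is used here):

# Generic particle swarm optimization sketch over a 6DOF pose vector.
import numpy as np

def pso(cost, dim=6, particles=30, iters=100, w=0.7, c1=1.5, c2=1.5, bound=1.0):
    rng = np.random.default_rng(0)
    pos = rng.uniform(-bound, bound, (particles, dim))
    vel = np.zeros((particles, dim))
    best_pos = pos.copy()                                  # personal bests
    best_val = np.array([cost(p) for p in pos])
    g_best = best_pos[best_val.argmin()].copy()            # global best
    for _ in range(iters):
        r1, r2 = rng.random((particles, dim)), rng.random((particles, dim))
        vel = w * vel + c1 * r1 * (best_pos - pos) + c2 * r2 * (g_best - pos)
        pos = pos + vel
        vals = np.array([cost(p) for p in pos])
        improved = vals < best_val
        best_pos[improved], best_val[improved] = pos[improved], vals[improved]
        g_best = best_pos[best_val.argmin()].copy()
    return g_best

# toy usage: recover a known pose (x, y, z, roll, pitch, yaw) by minimizing the distance to it
target = np.array([0.2, -0.1, 0.4, 0.05, -0.02, 0.1])
print(pso(lambda p: np.linalg.norm(p - target)))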
This thesis addresses the implementation of a particle simulation of an explosion. The simulation is displayed via ray tracing in near real time, and the implementation makes use of the OpenCL standard. The focus of research in this thesis is to analyse the performance of this combination of components.
The natural and the artificial environment of mankind is of enormous complexity, and our means of understanding this complex environment are restricted unless we make use of simplified (but not oversimplified) dynamical models with the help of which we can explicate and communicate what we have understood in order to discuss among ourselves how to re-shape reality according to what our simulation models make us believe to be possible. Being both a science and an art, modelling and simulation is still one of the core tools of extended thought experiments, and its use is still spreading into new application areas, particularly as the increasing availability of massive computational resources allows for simulating more and more complex target systems.
In the early summer of 2012, the 26th European Conference on Modelling and Simulation (ECMS) once again brings together the best experts and scientists in the field to present their ideas and research, and to discuss new challenges and directions for the field.
The 2012 edition of ECMS includes three new tracks, namely Simulation-Based Business Research, Policy Modelling and Social Dynamics and Collective Behaviour, and extended the classical Finance and Economics track with Social Science. It attracted more than 110 papers and 125 participants from 21 countries, with backgrounds ranging from electrical engineering to sociology.
This book was inspired by the event, and it was prepared to compile the most recent concepts, advances, challenges and ideas associated with modelling and computer simulation. It contains all papers carefully selected by the programme committee from the large number of submissions for presentation during the conference, organised according to the still growing number of tracks which shaped the event. The book is complemented by two invited pieces from other experts that discuss an emerging approach to modelling and a specialised application. We hope these proceedings will serve as a reference for researchers and practitioners in this ever growing field as well as an inspiration to newcomers to the area of modelling and computer simulation. The editors are honoured and proud to present you with this carefully compiled selection of topics and publications in the field.
The discussion about the minimum wage is perennially topical and, at the turn of the year 2011/2012, when this thesis was written, received particularly great attention from politics and business. The topicality and dynamics of this subject are evident in the fact that, when examining the German literature on the topic, many of the statements and theses no longer hold. The quotation cited at the beginning by the incumbent Federal Minister of Labour and Social Affairs, Ursula von der Leyen, expresses that there is now a consensus in politics that full-time employees must be able to secure their livelihood from their income. For the Christian Democratic governing party this represents a change of doctrine: while the CDU relied on collective bargaining in recent decades and categorically rejected a minimum wage, it is now making wage floors for all sectors a goal of its government work. This is largely due to the fact that wage dispersion on the labour market, which has traditionally been low in Germany, has developed in a strongly divergent way in recent years.
A further reason is the declining role of collective bargaining coverage in recent years. As a consequence of these developments, 1.2 million people, i.e. four percent of employees, work for hourly wages below five euros. A further 2.2 million people work for hourly wages below six euros, 3.7 million earn less than seven euros, and 5.1 million work for wages below eight euros. The question of to what extent a life in dignity is possible under these conditions occupies large parts of society, since the volume and the wage level of the low-wage sector have sunk to a level that is no longer easy to justify socially and politically. To counter this development, the economic policy instrument of the minimum wage is frequently brought into the discussion as a suitable remedy; many states have employed minimum wages in different ways in the past. The introduction of a comprehensive minimum wage in the Federal Republic is advocated above all with the following goals.
On the one hand, the minimum wage is intended to ensure that full-time employees earn an income that at least corresponds to their socio-cultural subsistence level. On the other hand, its introduction is meant to reduce the need to top up wages with unemployment benefit II (Arbeitslosengeld II) and thus to relieve public finances. Opponents of the minimum wage categorically reject the introduction of a comprehensive, generally binding minimum wage, mainly on the basis of labour market theory. They hold the view that the mechanisms of the labour market have a self-regulating effect and, complemented by collective bargaining autonomy, are sufficiently regulated. They further argue that the introduction of a minimum wage would destroy existing jobs and prevent the creation of new ones.
In addition, depending on the school of thought and position, theory can predict opposite effects on the labour market. Against the background of the current debate, this thesis examines the effects of introducing a minimum wage. In order to arrive at an objective assessment of the problem, computer-aided agent-based simulation with NetLogo is used. Using a fictitious market with fictitious actors ("agents"), an attempt is made to represent the labour market in a model. In particular, it is examined to what extent the introduction of a minimum wage, sector-specific or comprehensive, influences the employment level and the level of wages.
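The basic mechanism of such an agent-based experiment can be illustrated with a toy Python sketch (the thesis itself uses NetLogo; all numbers, distributions and the hiring rule here are made up): workers are only employed if their productivity covers the wage they must be paid, so raising a wage floor can price the least productive matches out of the market.

# Toy illustration of a minimum-wage experiment over simulated worker agents.
import random

random.seed(1)
workers = [{"productivity": random.uniform(3, 15),
            "reservation_wage": random.uniform(2, 10)} for _ in range(1000)]

def employment_rate(minimum_wage):
    employed = sum(1 for w in workers
                   if w["productivity"] >= max(w["reservation_wage"], minimum_wage))
    return employed / len(workers)

for floor in (0.0, 5.0, 7.0, 8.5):
    print(floor, round(employment_rate(floor), 3))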