004 Data processing; Computer science
Year of publication
- 2014 (70)
Document Type
- Bachelor Thesis (42)
- Master's Thesis (14)
- Doctoral Thesis (7)
- Part of Periodical (4)
- Diploma Thesis (2)
- Report (1)
Has Fulltext
- yes (70)
Keywords
- OpenGL (3)
- Android (2)
- Graphik (2)
- Smartphone (2)
- VOXEL (2)
- Wasseroberfläche (2)
- 3D-Visualisierung (1)
- Anforderungskatalog (1)
- Annotation (1)
- App (1)
Community platforms on the Internet use code-based governance to manage their large number of user contributions. This includes all kinds of functionality with which the community can directly or indirectly assess user contributions in some form. This thesis first explains the meaning of code-based governance and the various functionalities that can be used for it. Subsequently, the 50 most successful community platforms are examined with respect to code-based governance. The result reveals the relationships between a platform's structure, the nature of its user contributions, and the code-based governance that can be exercised over them.
This bachelor thesis addresses the question of whether a jump'n'run game with sensor control is useful on Android devices. To this end, a game was developed that is controlled partly with and partly without sensors at different levels. In a second version, the game is controlled entirely by sensors, so that the two control schemes can later be compared. The thesis explains how the game was planned, designed and examined, and checks whether games with sensor control already exist. The engine used to develop the game is also introduced. Finally, an elaborated user test on the playability of the game with respect to its controls is evaluated.
Recent years have seen a growing prevalence of touchscreen devices. Their operation differs fundamentally from that with mouse and keyboard. Because input is performed with gestures or multiple fingers, it can be difficult to follow another person's actions. Problems arise because the input hand occludes the screen. If only the screen content is visible, for example in a video transmission, information about the input is lost.
This thesis develops a system intended to improve collaborative work on touchscreen devices that are located at a distance from each other. To this end, a graphical representation of the input hand is created from the depth data of a Kinect sensor. Overlaying this visualization is meant to make it easier for one user to follow another user's input. Interaction concepts such as gestures should thereby be conveyed more effectively, and information about a shared problem should be exchanged more efficiently. For this purpose, a test system with two workstations was developed, in which one user takes the role of the instructor and guides a second user, the operator, through various test scenarios. For some of the tasks the visualization of the hand is available to him, while in other tasks he can only communicate verbally with his counterpart.
In an evaluation, the system is examined for its efficiency in operating touchscreen systems. Furthermore, it is investigated to what extent the graphical quality meets the stated requirements in order to provide added value for the application.
The market for mobile devices is evolving rapidly, and children come into contact with such technologies at a very early age. It is therefore important to introduce children to these devices properly. Integrating smartphones and tablets into the classroom, with respect to the learning process, would be advantageous. This thesis therefore presents the concept of an educational game app that can be configured by educators. The evaluation is intended to shed light on the children's motivation and to determine the educators' openness toward new media.
German politicians have identified a need for greater citizen involvement in decision-making than in the past, as confirmed by a recent study of German parliamentarians ("DEUPAS"). As in other forms of social interaction, the Internet provides significant potential to serve as the digital interface between citizens and decision-makers: in the recent past, dedicated electronic participation ("e-participation") platforms (e.g. dedicated websites) have been provided by politicians and governments in an attempt to gather citizens' feedback and comments on a particular issue or subject. Some of these have been successful, but a large proportion of them are grossly under-used: often only small numbers of citizens use them. Over the same time period, society's enthusiasm for social networks has increased, and their use is now commonplace. Many citizens use social networks such as Facebook and Twitter for all kinds of purposes, and in some cases to discuss political issues.
Social networks are therefore obviously attractive to politicians: from local government to federal agencies, politicians have integrated social media into their daily work. However, there is a significant challenge regarding the usefulness of social networks. The problem is the continuous increase in digital information: social networks contain vast amounts of information, and it is impossible for a human to manually filter the relevant information from the irrelevant (so-called "information overload"). Even using the search tools provided by social networks, it is still a huge task for a human to determine meanings and themes from the multitude of search results. New technologies and concepts have been proposed to provide summaries of masses of information through lexical analysis of social media messages, and they therefore promise an easy and quick overview of the information.
This thesis examines the relevance of these analyses' results for use in everyday political life, with an emphasis on the social networks Facebook and Twitter as data sources. We make use of the WeGov Toolbox and its analysis components, which were developed during the EU project WeGov. The assessment has been performed in consultation with actual policy-makers from different levels of German government: policy-makers from the German Federal Parliament, the State Parliament of North Rhine-Westphalia, the State Chancellery of the Saarland and the cities of Cologne and Kempten all took part in the study. Our method was to execute the analyses on data collected from Facebook and Twitter and present the results to the policy-makers, who would then evaluate them using a mixture of qualitative methods.
The responses of the participants have provided us with some useful conclusions:
1) None of the participants believe that e-participation is possible in this way, but they confirm that "citizen-friendliness" can be supported by this approach.
2) The most likely users of the summarisation tools are those who have experience with social networks but are not "power users". The reason is that "power users" already know the relevant information provided by the analysis tools, whereas without any experience with social networks it is hard to interpret the analysis results correctly.
3) The evaluation has considered geographical aspects and related them to, for example, a politician's constituency as a local area of social networks. Comparing rural to urban areas shows that the amount of relevant political information in rural areas is low: while the proportion of publicly available information in urban areas is relatively high, the proportion in rural areas is much lower.
The findings that result from the engagement with policy-makers will be systematically surveyed and validated within this thesis.
The diploma thesis "Entwicklung eines Telemedizinregister-Anforderungskatalog" deals with the creation of a requirements catalogue for the development of a register applicable in the field of telemedicine to support billing processes. In the German healthcare system, these processes are carried out between telemedical service providers and payers in the context of integrated care in order to settle the financing of telemedical treatments. The telemedicine register serves as a data-holding repository that receives copies of treatment data from telemedical service providers and logs their processing within the register. The participating payers are granted access to this register so that they can verify the validity of the therapy data submitted to them by telemedical service providers for analysis. The thesis describes the theoretical foundations of data protection and telemedicine, from which requirements lists and a target model of a telemedicine register are derived. This model consists of data models and process descriptions and is verified against a practical example of a telemedical treatment. A further part of the register's conception is the integration of various standards that can be used in data exchange processes, for which possible fields of application for extending the functionality are described.
Web 2.0 provides technologies for the online collaboration of users as well as the creation, publication and sharing of user-generated content in an interactive way. Twitter, CNET and CiteSeerX are examples of Web 2.0 platforms which facilitate these activities and are viewed as rich sources of information. On such platforms, users can participate in discussions, comment on others' contributions, provide feedback on various issues, publish articles and write blogs, thereby producing a high volume of unstructured data, which at the same time leads to information overload. Satisfying the various types of human information needs arising from the purpose and nature of these platforms requires methods for appropriate aggregation and automatic analysis of this unstructured data. In this thesis, we propose methods which attempt to overcome the problem of information overload and help satisfy user information needs in three scenarios.
To this end, we first look at two of the main challenges in Twitter, sparsity and content quality, and how these challenges can influence standard retrieval models. We analyze and identify Twitter content features that reflect high-quality information. Based on this analysis, we introduce the concept of "interestingness" as a static quality measure. We empirically show that our proposed measure helps in retrieving and filtering high-quality information in Twitter. Our second contribution relates to the content diversification problem in a collaborative social environment, where the motive of the end user is to gain a comprehensive overview of the pros and cons of a discussion track that results from the social collaboration of people. For this purpose, we develop the FREuD approach, which aims at solving the content diversification problem by combining latent semantic analysis with sentiment estimation approaches. Our evaluation results show that the FREuD approach provides a representative overview of sub-topics and aspects of discussions, characteristic user sentiments under different aspects, and reasons expressed by different opponents. Our third contribution presents a novel probabilistic Author-Topic-Time model, which aims at mining topical trends and user interests from social media. Our approach solves this problem by means of Bayesian modeling of the relations between authors, latent topics and temporal information. We present results of applying the model to scientific publication datasets from CiteSeerX, showing improved detection of semantically cohesive topics and capturing shifts in authors' interests in relation to topic evolution.
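As an illustration of the kind of latent semantic analysis mentioned above, the following minimal sketch projects a handful of invented discussion posts into a low-dimensional topic space using TF-IDF and truncated SVD (assuming scikit-learn is available). It is a generic example only, not the FREuD approach or its sentiment component.

    # Illustrative latent semantic analysis over short discussion posts.
    # Generic sketch (TF-IDF + truncated SVD), not the FREuD implementation;
    # the example posts and the number of topics are invented.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD

    posts = [
        "the new tram line will reduce traffic in the city centre",
        "ticket prices are too high for daily commuters",
        "bike lanes are a cheaper alternative to a new tram line",
        "commuters complain about delays and high prices",
    ]

    # Term-document matrix weighted by TF-IDF.
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(posts)

    # Project posts into a low-dimensional latent topic space.
    svd = TruncatedSVD(n_components=2, random_state=0)
    topics = svd.fit_transform(X)

    # Top terms per latent dimension give a rough sub-topic summary.
    terms = vectorizer.get_feature_names_out()
    for i, component in enumerate(svd.components_):
        top = component.argsort()[-3:][::-1]
        print(f"topic {i}:", [terms[j] for j in top])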
This thesis deals with the quality assurance of model-based SRS, in particular SRS-Models and SRS-Diagrams. The interesting thing about model-based SRS is that they are generated by a documentation generator based on the following input data: the SRS-Model, SRS-Diagrams and texts external to the model. To assure the quality of the documentation, the quality of its four factors must therefore be assured: the SRS-Model, the SRS-Diagrams, the external texts and the documentation generator. The thesis' goal is to define a quality connotation for SRS-Models and -Diagrams and to show an approach for realizing automatic quality testing, measurement and assessment for the modelling tool Innovator.
Diffusion imaging captures the movement of water molecules in tissue by applying varying gradient fields in a magnetic resonance imaging (MRI)-based setting. It makes a crucial contribution to in vivo examinations of neuronal connections: the local diffusion profile enables inference of the position and orientation of fiber pathways. Diffusion imaging is a significant technique for fundamental neuroscience, in which pathways connecting cortical activation zones are examined, and for neurosurgical planning, where fiber reconstructions are considered as intervention-related risk structures.
Diffusion tensor imaging (DTI) is currently applied in clinical environments in order to model the MRI signal due to its fast acquisition and reconstruction time. However, the inability of DTI to model complex intra-voxel diffusion distributions gave rise to an advanced reconstruction scheme which is known as high angular resolution diffusion imaging (HARDI). HARDI received increasing interest in neuroscience due to its potential to provide a more accurate view of pathway configurations in the human brain.
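For reference, the single-tensor signal model that DTI fits per voxel is the standard textbook form (not specific to this thesis):

    S(\mathbf{g}, b) = S_0 \, \exp\!\left( -b \, \mathbf{g}^{\mathsf{T}} \mathbf{D} \, \mathbf{g} \right)

where S_0 is the signal without diffusion weighting, b the diffusion weighting factor, \mathbf{g} the unit gradient direction and \mathbf{D} the symmetric 3×3 diffusion tensor. HARDI schemes replace this single tensor with richer per-voxel models in order to resolve crossing fiber configurations.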
In order to fully exploit the advantages of HARDI over DTI, advanced fiber reconstructions and visualizations are required. This work presents novel approaches contributing to current research in the field of diffusion image processing and visualization. Diffusion classification, tractography, and visualizations approaches were designed to enable a meaningful exploration of neuronal connections as well as their constitution. Furthermore, an interactive neurosurgical planning tool with consideration of neuronal pathways was developed.
The research results in this work provide an enhanced and task-related insight into neuronal connections for neuroscientists as well as neurosurgeons and contribute to the implementation of HARDI in clinical environments.
The way information is presented to users in online community platforms influences the way the users create new information. This is the case, for instance, in question-answering fora, crowdsourcing platforms or other social computation settings. To better understand the effects of presentation policies on user activity, we introduce a generative model of user behaviour in this paper. Running simulations based on this user behaviour, we demonstrate the ability of the model to evoke macro phenomena comparable to those observed in real-world data.
This thesis describes the integration of a business intelligence solution into an existing social software. First, the terms business intelligence and social software as well as their structure and components are explained. This is followed by an analysis of the target group's current situation by means of interviews, whose results are transformed into a requirements list in the target concept. Finally, the identified requirements are checked and tested against the final installation in order to determine whether the target group's expectations and their notions of business intelligence can be realized.
The result of this work is intended to be a business intelligence solution installed within a social software. It should give an overview of what is already possible with the latest version of the software and critically point out strengths and weaknesses that should be taken into account in future versions.
Modeling and publishing Linked Open Data (LOD) involves the choice of which vocabulary to use. This choice is far from trivial and poses a challenge to a Linked Data engineer. It covers the search for appropriate vocabulary terms, decisions regarding the number of vocabularies to consider in the design process, as well as the way of selecting and combining vocabularies. Until today, there is no study that investigates the different strategies of reusing vocabularies for LOD modeling and publishing. In this paper, we present the results of a survey with 79 participants that examines the most preferred vocabulary reuse strategies in LOD modeling. The participants of our survey are LOD publishers and practitioners. Their task was to assess different vocabulary reuse strategies and explain their ranking decisions. We found significant differences between the modeling strategies, which range from reusing popular vocabularies and minimizing the number of vocabularies to staying within one domain vocabulary. A very interesting insight is that popularity in the sense of how frequently a vocabulary is used across data sources is more important than how often its individual classes and properties are used in the LOD cloud. Overall, the results of this survey help in understanding the strategies by which data engineers reuse vocabularies, and they may also be used to develop future vocabulary engineering tools.
This bachelor thesis deals with merging the existing angle-reconstruction and simulation components and extends them with functions that make systematic testing possible. To this end, images can be passed from the simulation component to the angle-reconstruction component. Furthermore, a GUI for controlling test runs and passing parameters as well as a database connection for storing the settings used and the generated data are added. Analysis of the generated data shows a sufficient average precision of 0.15° and a maximum deviation of the individual angles of 0.6°. The largest overall error in the test runs amounts to 0.8°. The influence of faulty parameters differs from variable to variable: an error in the height amplifies the measurement error many times more than an error in the length of the drawbar.
Object recognition is a well-investigated area in image-based computer vision and several methods have been developed. Approaches based on Implicit Shape Models have recently become popular for recognizing objects in 2D images, which separate objects into fundamental visual object parts and spatial relationships between the individual parts. This knowledge is then used to identify unknown object instances. However, since the emergence of affordable depth cameras like Microsoft Kinect, recognizing unknown objects in 3D point clouds has become an increasingly important task. In the context of indoor robot vision, an algorithm is developed that extends existing methods based on Implicit Shape Model approaches to the task of 3D object recognition.
Anreizfaktoren der Wissensverwertung für Universitäten und kleine und mittelständische Unternehmen
(2014)
This scientific paper identifies and describes the incentives for the utilization of knowledge for universities and small and medium-sized companies. In addition, different models, for example the Knott/Wildavsky model, are adapted, created and expanded, which leads to a new integrative model. The main problem is that companies have to integrate knowledge from external sources into their operations. According to the literature, this model of open innovation is considered inevitable in order to remain competitive. This is especially the case for small and medium-sized companies. The reasons for this are illustrated, as are the possibilities of a successful collaboration. Germany has a relatively high involvement in knowledge and technology transfer in international comparison. Nevertheless, a number of companies assume their institution won't benefit from knowledge utilization.
The literature review revealed that there is no existing model which combines the stages of knowledge utilization with the incentives of universities and companies. This paper closes the identified gap through the created integrative model. The formulated incentive factors can help universities and companies recognize whether cooperative research is beneficial or not.
At the beginning, the basic theoretical foundations are defined based on the literature review, followed by a description of the incentive factors of knowledge exploitation. On the one hand, there is a distinction between tangible and intangible incentives; on the other hand, there is a segmentation into extrinsic and intrinsic motivation. Both are important with regard to the motivation of employees and scientists. Finally, a knowledge utilization model is presented and adapted to the present case, before the model is extended with an additional perspective and the incentive factors are added.
Through the described procedure, an integrative model is created. It can be useful to all affected parties: universities and their scientists as well as small and medium-sized companies and their employees.
This thesis covers the mathematical background of ray-casting as well as an exemplary implementation on graphics processing units using a modern programming interface. The implementation is embedded in an editor that enables the user to activate optimizations of the algorithm. Techniques such as transfer functions and local illumination are available for a more realistic visualization of materials. Moreover, the user interface provides features such as importing volumes, defining a custom transfer function, adjusting rendering parameters and activating further techniques, which are also discussed in this thesis. The benefit of all presented techniques is measured, whether it is visual or in terms of performance.
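As a rough illustration of the core of such a ray-caster, the following CPU sketch performs front-to-back compositing along a single ray through a scalar volume with a toy transfer function. It is only a sketch under simplified assumptions (nearest-neighbour sampling, invented transfer function and volume); the thesis' GPU implementation and editor features are not reproduced here.

    # Minimal front-to-back ray compositing through a scalar volume with a
    # simple transfer function. CPU sketch for illustration only.
    import numpy as np

    def transfer_function(density):
        """Map a scalar density in [0, 1] to RGBA (toy ramp, invented)."""
        color = np.array([density, density * 0.5, 1.0 - density])
        alpha = density * 0.1
        return color, alpha

    def cast_ray(volume, origin, direction, step=0.5, max_steps=256):
        accum_color = np.zeros(3)
        accum_alpha = 0.0
        pos = np.array(origin, dtype=float)
        d = np.array(direction, dtype=float)
        d /= np.linalg.norm(d)
        for _ in range(max_steps):
            idx = tuple(np.round(pos).astype(int))
            if any(i < 0 or i >= s for i, s in zip(idx, volume.shape)):
                break  # ray left the volume
            color, alpha = transfer_function(volume[idx])
            # Front-to-back "over" compositing.
            accum_color += (1.0 - accum_alpha) * alpha * color
            accum_alpha += (1.0 - accum_alpha) * alpha
            if accum_alpha > 0.99:
                break  # early ray termination
            pos += step * d
        return accum_color, accum_alpha

    volume = np.random.rand(32, 32, 32)  # placeholder volume data
    print(cast_ray(volume, origin=(0, 16, 16), direction=(1, 0, 0)))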
This diploma thesis describes the concept and implementation of a software router for policy-based Internet regulation. It is based on the InFO ontology described by Kasten and Scherp, which is intended for a system-independent description of regulation mechanisms. Additionally, InFO enables transparent regulation by linking background information to the regulation mechanisms. The InFO extension RFCO extends the ontology with router-specific entities. A software router is developed to implement RFCO at the IP level. The regulation is designed to be transparent by letting the router inform affected users about the regulation measures. The router implementation is tested exemplarily in a virtual network environment.
Remote rendering services offer the possibility to stream high-quality images to lower-powered devices. Due to the transmission of data, the interactivity of applications suffers from a delay. A method to reduce the delay of camera manipulation on the client is called 3D warping; this method, however, causes artifacts. In this thesis, different approaches to remote rendering setups are shown, the artifacts and improvements of the warping method are described, and methods to reduce the artifacts are implemented and analyzed.
This paper explains convolution reverb, a method that enables users to add realistic-sounding reverberation to audio material that was recorded in neutral-sounding rooms. In particular, the possibility of computing the effect on the GPU using OpenCL is discussed in order to exploit the high parallelism of the problem. The paper aims at the development of a VST plugin that utilizes the GPU-accelerated convolution algorithm so that it can be used in audio software solutions.
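A minimal CPU reference of the underlying operation, FFT-based convolution of a dry signal with an impulse response, is sketched below (assuming NumPy; the signals are synthetic). The OpenCL/GPU implementation and the VST plugin developed in the thesis are not shown.

    # Reference convolution reverb on the CPU via FFT convolution.
    # Illustration of the underlying operation only; signals are synthetic.
    import numpy as np

    def convolution_reverb(dry, impulse_response):
        """Convolve a dry signal with a room impulse response (linear convolution via FFT)."""
        n = len(dry) + len(impulse_response) - 1
        size = 1 << (n - 1).bit_length()          # next power of two for the FFT
        spectrum = np.fft.rfft(dry, size) * np.fft.rfft(impulse_response, size)
        wet = np.fft.irfft(spectrum, size)[:n]
        peak = np.max(np.abs(wet))
        return wet / peak if peak > 0 else wet     # normalize to avoid clipping

    sample_rate = 44100
    dry = np.sin(2 * np.pi * 440 * np.arange(sample_rate) / sample_rate)      # 1 s sine tone
    ir = np.exp(-np.arange(sample_rate // 2) / 5000.0) * np.random.randn(sample_rate // 2)
    print(convolution_reverb(dry, ir).shape)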
Because of the industry-wide need to escape competitive rivalry, Kim and Mauborgne developed the Blue Ocean Strategy in order to explore new markets, and they describe this strategy as unique. Since other strategies for opening up new markets exist, however, the goal of this thesis is to find out by which characterizing features the Blue Ocean Strategy can be regarded as unique.
Kim and Mauborgne's strategy is therefore compared with Schumpeter's creative destruction, Ansoff's diversification strategy, Porter's niche strategy and Drucker's innovation strategies. The comparison uses the characterizing features by which Kim and Mauborgne judge the Blue Ocean Strategy to be unique. Based on these criteria, a meta-model is developed with whose help the examination is carried out.
The comparison shows that the concepts of Schumpeter, Ansoff, Porter and Drucker resemble the Blue Ocean Strategy in some criteria. However, none of the strategies behaves like Kim and Mauborgne's concept in all respects. While the Blue Ocean Strategy strives for differentiation and cost reduction at the same time, most concepts pursue either differentiation or cost reduction. The entry into the new market is also interpreted differently: while the Blue Ocean Strategy targets a market that is unexplored and thus exhibits no competition, the other strategies often interpret existing markets on which the company has not yet operated as new. This, however, does not rule out the prior existence of those markets.
Based on the insights gained from the comparison, the Blue Ocean Strategy can thus be described as unique.
Data Mining im Fußball
(2014)
The term Data Mining describes applications that can be applied to extract useful information from large datasets. Since the 2011/2012 season of the German soccer league, extensive data from the first and second Bundesliga have been recorded and stored. Up to 2,000 events are recorded for each game.
The question arises whether it is possible to use Data Mining to extract patterns from this extensive data that could be useful to soccer clubs.
In this thesis, Data Mining is applied to the data of the first Bundesliga to measure the value of individual soccer players for their club. For this purpose, the state of the art and the available data are described. Furthermore, classification, regression analysis and clustering are applied to the available data. The thesis focuses on qualitative characteristics of soccer players, such as the nomination for the national squad or the marks players receive for their playing performance. Additionally, the thesis considers the playing style of the available players and examines whether it is possible to make predictions for upcoming seasons. The value of individual players is determined by using regression analysis and a combination of cluster analysis and regression analysis.
Even though not all applications can achieve sufficient results, this thesis shows that Data Mining has the potential to be applied to soccer data. The value of a player can be measured with the help of the two approaches, allowing simple visualization of the importance of a player for his club.
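As a toy illustration of the two techniques mentioned above, the following sketch fits a regression from invented per-player features to a performance mark and clusters the same players by playing style (assuming scikit-learn). The features, values and marks are made up; the thesis works on the actual Bundesliga event data, which is not reproduced here.

    # Toy sketch: regression on player features to predict a performance mark,
    # and clustering of playing styles. All numbers are invented.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.cluster import KMeans

    # Columns: passes per game, duels won, shots, distance covered (km).
    players = np.array([
        [62, 12, 1.1, 11.2],
        [28,  9, 3.4, 10.1],
        [45, 18, 0.4, 11.8],
        [55, 10, 2.2, 10.9],
        [30, 20, 0.2, 10.4],
    ])
    marks = np.array([2.8, 3.4, 2.5, 2.9, 3.1])   # German-style grades, lower is better

    # Regression: estimate a performance mark from the feature vector.
    model = LinearRegression().fit(players, marks)
    print("predicted mark:", model.predict([[50, 14, 1.5, 11.0]]))

    # Clustering: group players by playing style, e.g. before per-cluster regression.
    styles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(players)
    print("style clusters:", styles)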
The initial problem that motivated this thesis is the lack of a possibility to present finished theses by students of the research group BAS. Many finished theses are only available in a printed version. Some of the students created their own websites, but these are not uniform.
The first step to solve this problem is to create an overall research design. The research design of this thesis is based on the construction-oriented approach of design science research by Hevner [2007]. The initial problem will be solved by creating a Web 2.0 website. For this, the open source content management system Drupal is used. For the implementation of the target system, a set of requirements will be collected using various methods such as mock-ups, interviews, collaboration scenarios and personas. To meet the collected requirements, a set of additional modules will be added to the core version of Drupal. This extended version of Drupal will be scenario- and user-tested. A result of this work is a deployable prototype with which it is possible to present various theses. A further result is a set of user guides that describe the operation of the prototype. The thesis finishes with a conclusion and an outlook on the further use of the prototype.
As part of this bachelor thesis, an IT-supported prototype (an Excel application) was developed with which complex decisions can be made on the basis of a utility analysis (Nutzwertanalyse). It is suitable for evaluating all kinds of business application systems and, beyond that, can also be used for other business decisions, since the underlying utility analysis is universally applicable. The prototype takes into account and identifies 13 attribute groups with a total of 100 attributes for groupware. An additionally produced 20-minute tutorial video explains its use and functionality step by step. All groups and attributes were weighted by an external expert who was interviewed. With the help of the extensive catalogue developed here, groupware products can be compared more efficiently and more meaningfully in the future. The tool is a further development in the field of utility analysis and helps to create attributes and groups intuitively and comparatively and to carry out a utility analysis. It produces a benchmark with a variety of filter options that allows both tabular and graphical evaluation.
However, the expert interview conducted and the review of the literature have also made clear that the utility analysis must not be the only argument or instrument contributing to a decision. Zangemeister, a systems engineer and expert in multidimensional evaluation and decision-making, remarks: "Utility models must not be regarded as a replacement but, first of all, as an important complement to the other models that can serve the systematic reduction of the decision problem when selecting project alternatives" [Zangemeister 1976, p. 7]. All in all, because it structures the evaluation process into partial aspects, the utility analysis offers a qualitatively better overview of the problem to be evaluated and provides an informative compilation and evaluation with detailed information about the objects under evaluation.
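The core arithmetic of such a utility analysis is a weighted sum of criterion scores per alternative; a minimal sketch with invented criteria, weights and scores is given below (the prototype's 13 groups and 100 attributes are not reproduced).

    # Core arithmetic of a utility analysis (Nutzwertanalyse): each alternative's
    # utility is the sum of criterion scores multiplied by criterion weights.
    # Criteria, weights and scores are invented for illustration.
    criteria_weights = {"usability": 0.4, "integration": 0.35, "cost": 0.25}

    alternatives = {
        "Groupware A": {"usability": 8, "integration": 6, "cost": 5},
        "Groupware B": {"usability": 6, "integration": 9, "cost": 7},
    }

    def utility(scores, weights):
        return sum(weights[c] * scores[c] for c in weights)

    for name, scores in alternatives.items():
        print(f"{name}: {utility(scores, criteria_weights):.2f}")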
Next word prediction is the task of suggesting the most probable word a user will type next. Current approaches are based on the empirical analysis of corpora (large text files) resulting in probability distributions over the different sequences that occur in the corpus. The resulting language models are then used for predicting the most likely next word. State-of-the-art language models are based on n-grams and use smoothing algorithms like modified Kneser-Ney smoothing in order to reduce the data sparsity by adjusting the probability distribution of unseen sequences. Previous research has shown that building word pairs with different distances by inserting wildcard words into the sequences can result in better predictions by further reducing data sparsity. The aim of this thesis is to formalize this novel approach and implement it by also including modified Kneser-Ney smoothing.
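To make the idea concrete, the following minimal sketch counts bigrams and distance-2 word pairs (one "wildcard" word skipped) from a tiny corpus and interpolates both counts to rank next-word candidates. It uses raw maximum-likelihood counts with an invented interpolation weight; the modified Kneser-Ney smoothing developed in the thesis is not implemented here.

    # Minimal next-word prediction from corpus counts: bigram counts plus
    # distance-2 pair counts (one wildcard word skipped), interpolated.
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat lay on the sofa".split()

    bigrams = defaultdict(Counter)     # count(w_{i+1} | w_i)
    skip_pairs = defaultdict(Counter)  # count(w_{i+2} | w_i), one word skipped

    for i, word in enumerate(corpus):
        if i + 1 < len(corpus):
            bigrams[word][corpus[i + 1]] += 1
        if i + 2 < len(corpus):
            skip_pairs[word][corpus[i + 2]] += 1

    def predict_next(prev, prev2=None, skip_weight=0.3):
        """Score candidates by interpolating the bigram and the distance-2 counts."""
        scores = Counter()
        for w, c in bigrams[prev].items():
            scores[w] += c
        if prev2 is not None:
            for w, c in skip_pairs[prev2].items():
                scores[w] += skip_weight * c
        return scores.most_common(3)

    print(predict_next("the", prev2="on"))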
In human-machine interaction, the tracking and identification of individuals plays an important role. In this work, a framework for the service robot Lisa of the Active Vision Group has been created that combines different methods for the detection, tracking and identification of individuals. First, leg detection is performed on a 2D laser scan to establish hypotheses about people. Each hypothesis is then confirmed by an analysis of the Kinect point cloud. After successful confirmation, online boosting on RGB data is performed for identification. The leg data is also fed into a linear Kalman filter to estimate people's movement. The combination of the Kalman filter with leg detection and online boosting is intended to enable people tracking and, furthermore, to prevent identities from being swapped due to brief occlusion or faulty association of legs.
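A minimal sketch of the linear Kalman filter component, using a constant-velocity model for a 2D position measurement (for instance the centre of a detected leg pair), is shown below. The time step and noise parameters are invented; the actual framework additionally combines leg detection, point-cloud confirmation and online boosting.

    # Minimal linear Kalman filter with a constant-velocity model for a 2D
    # position measurement. Illustration only; all parameters are assumed.
    import numpy as np

    dt = 0.1  # time between laser scans (assumed)

    F = np.array([[1, 0, dt, 0],     # state transition for x, y, vx, vy
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)
    H = np.array([[1, 0, 0, 0],      # only the position is measured
                  [0, 1, 0, 0]], dtype=float)
    Q = np.eye(4) * 0.01             # process noise (assumed)
    R = np.eye(2) * 0.05             # measurement noise (assumed)

    x = np.zeros(4)                  # initial state
    P = np.eye(4)                    # initial covariance

    def kalman_step(x, P, z):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the position measurement z = (x, y).
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        return x, P

    for z in [np.array([0.1, 0.0]), np.array([0.2, 0.05]), np.array([0.32, 0.09])]:
        x, P = kalman_step(x, P, z)
    print("estimated position and velocity:", x)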
The architecture of decentralized digital transaction systems with a public transaction history provides no transaction monitoring to prevent unwanted transactions or to identify the sender and receiver of such transactions. With the introduction of a public list of unwanted addresses, it is possible to isolate these addresses by general exclusion and thereby to prevent unwanted transactions, as well as to identify the owners of unwanted addresses. The public list can be managed by multiple decentralized instances using a trust network, so that the decentralized nature of the system is maintained.
This work presents an application for simulating objects that can change their aggregate state between solid and liquid using a temperature system. The focal points are the simulation of fluids with a particle system, the generation of a surface and the visualization of metal. The application is intended to be interactive and to meet real-time conditions. Different types of shaders are used for the parallelized computations on the GPU. Further options for using the application and possible improvements are also presented.
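The temperature-driven change of aggregate state can be illustrated with a tiny CPU sketch in which each particle relaxes toward an ambient temperature and switches between solid and liquid around an assumed melting point. All constants are invented; the thesis' GPU/shader implementation and the fluid simulation itself are not reproduced here.

    # Tiny CPU sketch of a temperature-driven change of aggregate state:
    # particles heat toward an ambient temperature and switch between
    # "solid" and "liquid" around an assumed melting point.
    from dataclasses import dataclass

    MELTING_POINT = 1500.0   # assumed melting temperature of the metal

    @dataclass
    class Particle:
        temperature: float
        state: str = "solid"

    def update(particle, ambient, dt, rate=0.5):
        # Simple relaxation toward the ambient temperature.
        particle.temperature += (ambient - particle.temperature) * rate * dt
        particle.state = "liquid" if particle.temperature >= MELTING_POINT else "solid"

    particles = [Particle(300.0), Particle(1600.0)]
    for _ in range(50):
        for p in particles:
            update(p, ambient=1800.0, dt=0.1)
    print([(round(p.temperature), p.state) for p in particles])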
Systems that simulate crowd behavior are used to simulate the evacuation of a crowd in case of an emergency. These systems are limited to the movement patterns of a crowd and generally do not consider psychological and/or physical conditions. Changing behaviors within the crowd (e.g. caused by a person falling down) are not considered.
For that reason, this thesis examines the psychological behavior and the physical impact of a crowd member on the crowd. To do so, this study develops a real-time simulation for a crowd of people, adapted from a system for video games. The system contains a behavior AI for agents. In order to model the physical interaction between the agents and their environment as well as their movements, the physical representation of each agent is realized using rigid bodies from a physics engine. The agents' movement additionally uses a navigation mesh and an algorithm for collision avoidance.
The behavior AI gives each agent a physical and psychological state, comprising a psychological stress level as well as a physical condition. The developed simulation is able to show physical effects such as crowding and crushing of agents, interaction of agents with their environment, as well as stress factors.
By evaluating several tests of the simulation, this thesis examines whether the combination of physical and psychological effects can be implemented successfully. If so, the thesis can give indications of agent behavior in dangerous and/or stressful situations as well as an assessment of the complex physical representation.
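A minimal sketch of the combined psychological/physical agent state described above: stress grows with local crowd density, and the physical condition degrades when the pressure from neighbouring agents exceeds a threshold. All rates and thresholds are invented for illustration; the thesis realizes agents with rigid bodies, a navigation mesh and collision avoidance inside a real-time simulation.

    # Minimal sketch of a combined psychological/physical agent state.
    # All rates and thresholds are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class Agent:
        stress: float = 0.0        # 0 = calm, 1 = panic
        condition: float = 1.0     # 1 = unharmed, 0 = incapacitated

    def step(agent, local_density, pressure, dt):
        agent.stress = min(1.0, agent.stress + 0.2 * local_density * dt)
        if pressure > 0.8:                       # crushing threshold (assumed)
            agent.condition = max(0.0, agent.condition - 0.5 * dt)
        if agent.condition == 0.0:
            agent.stress = 1.0                   # a fallen agent is treated as panicking

    agent = Agent()
    for _ in range(20):
        step(agent, local_density=0.9, pressure=0.85, dt=0.1)
    print(agent)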
The goal of this work is to functionally rebuild the seesaw ("Wippe") experiment, as set up in the real-time systems group (AG Echtzeitsysteme) led by Professor Dr. Dieter Zöbel, using a LEGO Mindstorms NXT Education kit, and to document the procedure. The resulting program code is to be prepared didactically, and building instructions are to be provided. This is meant to ensure that school students can experience the seesaw experiment in the classroom as easily as possible, even without direct access to a university or similar institution.