Towards an experiment on perception of affective music generation using MetaCompose
MetaCompose is a music generator based on a hybrid evolutionary technique combining FI-2POP and multi-objective optimization. In this paper we employ the MetaCompose music generator to create music in real-time that expresses different mood-states in a game-playing environment (Checkers), and we present preliminary results of an experiment focusing on determining (i) whether differences in player experience can be observed when using affective-dynamic music compared to static music; and (ii) whether any difference is observed when the music supports the game's internal narrative/state. Participants were tasked with playing two games of Checkers while listening to two (out of three) different set-ups of game-related generated music. The possible set-ups were: static expression, consistent affective expression, and random affective expression.
MetaCompose: A Compositional Evolutionary Music Composer
This paper describes a compositional, extensible framework for music composition and a user study to systematically evaluate its core components. These components include a graph traversal-based chord sequence generator, a search-based melody generator, and a pattern-based accompaniment generator. An important contribution of this paper is the melody generator, which uses a novel evolutionary technique combining FI-2POP and multi-objective optimization. A participant-based evaluation overwhelmingly confirms that all current components of the framework combine effectively to create harmonious, pleasant and interesting compositions.
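As an illustration of the graph traversal-based approach to chord sequence generation, the sketch below performs a random walk over a small hand-built chord graph. The graph, the chord vocabulary, and the traversal rule are illustrative stand-ins, not the ones used by MetaCompose:

```python
import random

# A small hand-built graph over chord functions; edges list the chords a
# traversal may move to next. Both the graph and the walk are illustrative
# stand-ins, not MetaCompose's actual chord model.
CHORD_GRAPH = {
    "I":   ["IV", "V", "vi"],
    "ii":  ["V", "vii"],
    "IV":  ["I", "ii", "V"],
    "V":   ["I", "vi"],
    "vi":  ["ii", "IV"],
    "vii": ["I"],
}

def generate_chord_sequence(length=4, start="I", seed=None):
    """Generate a chord sequence as a random walk over the chord graph."""
    rng = random.Random(seed)
    sequence = [start]
    for _ in range(length - 1):
        sequence.append(rng.choice(CHORD_GRAPH[sequence[-1]]))
    return sequence

print(generate_chord_sequence(length=8, seed=42))
```

Because every emitted chord is a graph successor of the previous one, any hand-encoded notion of functional harmony in the graph is preserved by construction.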
Affective Music Generation and its effect on Player Experience
Procedural content generation for games (the automated creation of some type of asset) has become increasingly popular in the last decade, both with academics and game developers. This interest has been mainly motivated by how complex games have become, requiring a huge number of assets to be created to support them. While most focus has been put on generating levels, textures, and 3D models, there are also examples of games that generate music dynamically (e.g. Spore), which is the focus of this thesis. In games, unlike in traditional linear storytelling media such as novels or films, narrative events unfold in response to player input. The music composer in an interactive environment therefore needs to create music that is dynamic and non-repetitive. This thesis investigates how to express emotions and moods in music and how to apply this research to improve player experience in games. This focus on the emotional expression of procedurally generated music has also been identified by Collins as one of the missing features that currently prevent procedurally generated music from being more widely used in the game industry. The research therefore focuses on investigating the expression of moods, and the effect affective music can have on the listener during game play.
In this thesis three systems are described: the MetaCompose affective music generator, its prototype, and a system for co-evolution of improvisational modules (Primal-Improv). The characteristics of MetaCompose are: (i) expressing different affective states using a variety of AI techniques, (ii) generating such music in real-time, and (iii) reacting in real-time to external stimuli. Its architecture comprises three main components: the composition generator, the real-time affective music composer, and an archive of compositions. A novel feature of our approach is the separation of composition and affective interpretation: the system creates abstractions of music pieces (called “compositions”) and interprets these in real-time to achieve the desired affective expression, while simultaneously introducing stochastic variations. Notably, the abstraction generation component includes a graph traversal-based chord sequence generator, a search-based melody generator, and a pattern-based accompaniment generator. The melody generation uses a novel constrained multi-objective evolutionary technique combining FI-2POP and the Non-dominated Sorting Genetic Algorithm (NSGA-II). The thesis presents the results of several evaluation studies, evaluating the design of the systems and their affective expression, and exploring the effects on player experience when integrating MetaCompose with the game of Checkers.
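The FI-2POP component mentioned above maintains two populations: feasible individuals are selected for the actual objective, while infeasible individuals are selected for proximity to feasibility, with individuals migrating between the populations as constraints become satisfied or violated. Below is a minimal single-objective sketch on a toy problem; the objective (maximise the sum of genes), the constraint (adjacent genes may differ by at most 2, a crude stand-in for melodic smoothness), and all parameters are invented, and the NSGA-II half of MetaCompose's hybrid is not reproduced here:

```python
import random

def constraint_violation(genome):
    """Distance from feasibility: 0 means all adjacent gaps are <= 2."""
    return sum(max(0, abs(a - b) - 2) for a, b in zip(genome, genome[1:]))

def fitness(genome):
    """Toy objective: maximise the sum of gene values."""
    return sum(genome)

def mutate(genome, rng):
    """Nudge one gene up or down by 1, clamped to [0, 9]."""
    g = list(genome)
    i = rng.randrange(len(g))
    g[i] = max(0, min(9, g[i] + rng.choice([-1, 1])))
    return g

def fi2pop(generations=200, pop_size=20, genome_len=8, seed=0):
    rng = random.Random(seed)
    population = [[rng.randrange(10) for _ in range(genome_len)]
                  for _ in range(2 * pop_size)]
    for _ in range(generations):
        # Repartition each generation: individuals migrate between the
        # feasible and infeasible populations as their constraints change.
        feasible = [g for g in population if constraint_violation(g) == 0]
        infeasible = [g for g in population if constraint_violation(g) > 0]
        feasible.sort(key=fitness, reverse=True)          # select on objective
        infeasible.sort(key=constraint_violation)         # select on feasibility
        parents = feasible[:pop_size] + infeasible[:pop_size]
        population = parents + [mutate(rng.choice(parents), rng)
                                for _ in range(len(parents))]
    feasible = [g for g in population if constraint_violation(g) == 0]
    return max(feasible, key=fitness) if feasible else None

best = fi2pop()
```

The design point of FI-2POP is visible in the two sort keys: infeasible individuals are never discarded outright but are pushed toward the feasible region, where they then compete on the objective.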
Mood Dependent Music Generator
Music is one of the most expressive media for showing and manipulating emotions, but there have been few studies on how to generate music connected to emotions. Such studies have often been dismissed by musicians, who maintain that a machine cannot create expressive music, as it is the composer's and player's experiences and emotions that are poured into the piece. A further problem is that music is highly complicated (and subjective), and finding out which elements transmit certain emotions is not an easy task. This demo shows how the manipulation of a set of features can actually change the mood the music transmits, hopefully awakening an interest in this area of research.
Evaluating Musical Foreshadowing of Videogame Narrative Experiences
We experiment with mood-expressing, procedurally generated music for narrative foreshadowing in videogames, investigating the relationship between music and the player’s experience of narrative events in a game. We designed and conducted a user study in which the game’s music expresses true foreshadowing in some trials (e.g. foreboding music before a negative event) and false foreshadowing in others (e.g. happy music that does not lead to a positive event). We observed players playing the game, recorded analytics data, and had them complete a survey upon completion of the gameplay. Thirty undergraduate and graduate students participated in the study. Statistical analyses suggest that the use of musical cues for narrative foreshadowing induces a better perceived consistency between music and game narrative. Surprisingly, false foreshadowing was found to enhance the player’s enjoyment.
Towards Diverse Non-Player Character behaviour discovery in multi-agent environments
This paper introduces a method for developing diverse Non-Player Character (NPC) behaviour through a multi-agent genetic algorithm based on MAP-Elites. We examine the outcomes of implementing our system in a test environment, with a particular emphasis on the diversity of the evolved agents in the feature space. This research is motivated by the fact that diverse NPCs are an important factor in improving player experience. We show how our multi-agent MAP-Elites algorithm is capable of isolating the evolved NPCs in the chosen feature space. Results showed that 40% of the variation in agent fitness could be predicted from agent genomes when agents played 100 games each.
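MAP-Elites, which the method above builds on, keeps an archive with one elite per cell of a discretised feature space and accepts a new candidate only if it beats the incumbent of its cell, so the result is a diverse grid of behaviours rather than a single champion. The sketch below shows that archive mechanism on a toy real-valued genome; the fitness function, the two behaviour features, and all parameters are hypothetical stand-ins for the paper's NPC behaviours:

```python
import random

def evaluate(genome):
    """Return (fitness, feature descriptor) for a genome (toy stand-ins)."""
    fitness = -sum((x - 0.5) ** 2 for x in genome)   # toy objective
    features = (sum(genome) / len(genome),           # stand-in feature 1
                max(genome) - min(genome))           # stand-in feature 2
    return fitness, features

def feature_cell(features, bins=5):
    """Discretise the feature descriptor (values in [0, 1]) into a cell."""
    return tuple(min(bins - 1, int(f * bins)) for f in features)

def map_elites(iterations=2000, genome_len=4, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell -> (fitness, genome)
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # Mutate an elite chosen uniformly at random from the archive.
            _, parent = archive[rng.choice(list(archive))]
            genome = [min(1.0, max(0.0, x + rng.gauss(0, 0.1)))
                      for x in parent]
        else:
            genome = [rng.random() for _ in range(genome_len)]
        fitness, features = evaluate(genome)
        cell = feature_cell(features)
        # Keep the genome only if it beats the current elite of its cell.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)
    return archive

archive = map_elites()
```

Each archive cell corresponds to a distinct region of behaviour space, which is what lets the approach "isolate" evolved NPCs by feature rather than collapse them onto one optimum.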
Adaptive Agents in 1v1 Snake Game with Dynamic Environment
This paper delves into the adaptability of Proximal Policy Optimization (PPO)-trained agents within dynamic environments. Typically, an agent is trained within a specific environment, learning to maximise reward acquisition and to navigate it effectively. However, alterations to this environment can lead to performance deficiencies. Existing research does not fully elucidate how the training of agents influences their adaptability in different environments and which parameters significantly impact this. This study aims to fill this gap, contributing to the creation of more versatile intelligent agents. The objective of this study is to explore how training agents in various environments affects their adaptability when introduced to unfamiliar environments. To this end, 36 models were trained using 36 different configurations to play a one-versus-one (1v1) Snake game. These models were subsequently compared against each configuration to measure their adaptability. The results reveal that map size substantially affects the adaptability of agents in different environments. Interestingly, the results showed that the most adaptive agents were not those trained on the most expansive and complex environment, but rather the simplest.
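The cross-evaluation described above can be expressed as a simple adaptability table: each trained model is scored in environments other than the one it was trained in, and its mean score is taken as its adaptability. This is one plausible reading of the protocol; the models, environments, and scoring function below are toy stand-ins, not the paper's PPO agents or Snake configurations:

```python
def adaptability(models, environments, score):
    """Mean score of each model across environments it was NOT trained for."""
    table = {}
    for model_id, model in models.items():
        scores = [score(model, env)
                  for env_id, env in environments.items()
                  if env_id != model_id]
        table[model_id] = sum(scores) / len(scores)
    return table

# Toy stand-ins: a "model" is just the map size it was trained on, and its
# score decays with the gap between training and evaluation map size
# (a hypothetical relationship, chosen only to make the sketch runnable).
models = {"8x8": 8, "16x16": 16, "32x32": 32}
environments = dict(models)
score = lambda trained, eval_size: 1.0 / (1 + abs(trained - eval_size))
ranking = adaptability(models, environments, score)
```

Under this toy scoring the mid-sized model ranks highest, which mirrors the kind of comparison the paper draws between agents trained on simpler versus more complex environments.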
Can You Feel It?: Evaluation of Affective Expression in Music Generated by MetaCompose
This paper describes an evaluation conducted on the MetaCompose music generator, which is based on evolutionary computation and uses a hybrid evolutionary technique that combines FI-2POP and multi-objective optimization. The main objective of MetaCompose is to create music in real-time that can express different mood-states. The experiment presented here aims to evaluate: (i) whether the perceived mood experienced by the participants of a music score matches the intended mood the system is trying to express, and (ii) whether participants can identify transitions in the mood expression that occur mid-piece. Music clips including transitions and with static affective states were produced by MetaCompose, and a quantitative user study was performed. Participants were tasked with annotating the perceived mood and moreover were asked to annotate changes in valence in real-time. The data collected confirms the hypothesis that people can recognize changes in music mood and that MetaCompose can express perceptibly different levels of arousal. With regard to valence, we observe that, while it is mainly perceived as expected, changes in arousal seem also to influence perceived valence, suggesting that one or more of the music features MetaCompose associates with arousal have some effect on valence as well.
