A composition algorithm based on crossmodal taste-music correspondences
While there is broad consensus about the structural similarities between language and music, comparably less attention has been devoted to semantic correspondences between these two ubiquitous manifestations of human culture. We have investigated the relations between music and a narrow and bounded domain of semantics: the words and concepts referring to taste sensations. In recent work, we found that taste words were consistently mapped to musical parameters: bitter is associated with low-pitched, continuous (legato) music; salty is characterized by silences between notes (staccato); sour is high-pitched, dissonant, and fast; and sweet is consonant, slow, and soft (Mesz et al., 2011). Here we extended these ideas, in a synergistic dialog between music and science, investigating whether music can be algorithmically generated from taste-words. We developed and implemented an algorithm that exploits a large corpus of classical and popular songs. New musical pieces were produced by choosing fragments from the corpus and modifying them to minimize their distance to the region in musical space that characterizes each taste. To test the capability of the produced music to elicit significant associations with the different tastes, musical pieces were produced and judged by a group of non-musicians. Results showed that participants could decode the taste-word of each composition well above chance. We also discuss how our findings can be expressed in a performance bridging music and cognitive science. © 2012 Mesz, Sigman and Trevisan. Mesz, B.; Sigman, M.; Trevisan, M.: Universidad de Buenos Aires, Facultad de Ciencias Exactas y Naturales, Argentina.
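The fragment-selection step described above can be sketched as a nearest-point search in a musical feature space. The sketch below is illustrative only: the feature set, the target values per taste, and all names are hypothetical, and the paper's additional step of *modifying* fragments to move them closer to the target region is omitted.

```python
import math

# Hypothetical feature space: (pitch, tempo, articulation continuity,
# consonance), each normalized to [0, 1]. The targets below merely
# illustrate the reported correspondences (e.g. bitter = low-pitched,
# legato); they are not the paper's actual values.
TASTE_TARGETS = {
    "bitter": (0.2, 0.4, 0.9, 0.5),   # low pitch, legato
    "salty":  (0.5, 0.5, 0.1, 0.5),   # staccato: low continuity
    "sour":   (0.9, 0.8, 0.5, 0.2),   # high pitch, fast, dissonant
    "sweet":  (0.5, 0.2, 0.7, 0.9),   # slow, soft, consonant
}

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def closest_fragment(corpus, taste):
    """Pick the corpus fragment whose features lie nearest the
    taste's target region in musical space."""
    target = TASTE_TARGETS[taste]
    return min(corpus, key=lambda frag: distance(frag["features"], target))

corpus = [
    {"name": "fragment_a", "features": (0.85, 0.75, 0.4, 0.25)},
    {"name": "fragment_b", "features": (0.3, 0.3, 0.8, 0.6)},
]
print(closest_fragment(corpus, "sour")["name"])   # → fragment_a
```

In this toy corpus, `fragment_a` (high-pitched, fast, dissonant) lands nearest the "sour" target, while `fragment_b` would be selected for "sweet".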
Analysing the impact of music on the perception of red wine via Temporal Dominance of Sensations
Several studies have examined how music may affect the evaluation of food and drink, but the vast majority have not observed how this interaction unfolds over time. This seems highly relevant, since both music and the consumer experience of food/drink are time-varying in nature. In the present study we sought to address this gap, using Temporal Dominance of Sensations (TDS), a method developed to record the dominant sensory attribute at any given moment in time, to examine the impact of music on the wine taster's perception. More specifically, we assessed how the same red wine might be experienced differently when tasters were exposed to various sonic environments (two pieces of music plus a silent control condition). The results revealed distinct patterns of dominant flavours for each sound condition, with significant differences in flavour dominance in each music condition as compared to the silent control condition. Moreover, musical correspondence analysis revealed that differences in the perceived dominance of acidity and bitterness in the wine were temporally correlated with changes in basic auditory attributes. Potential implications for the role of attention in auditory flavour modification and opportunities for future studies are discussed.
Marble melancholy: using crossmodal correspondences of shapes, materials, and music to predict music-induced emotions
Introduction: Music is known to elicit strong emotions in listeners, and, if primed appropriately, can give rise to specific and observable crossmodal correspondences. This study aimed to assess two primary objectives: (1) identifying crossmodal correspondences emerging from music-induced emotions, and (2) examining the predictability of music-induced emotions based on the association of music with visual shapes and materials.
Methods: To achieve this, 176 participants were asked to associate visual shapes and materials with the emotion classes of the Geneva Music-Induced Affect Checklist scale (GEMIAC) elicited by a set of musical excerpts in an online experiment.
Results: Our findings reveal that music-induced emotions and their underlying core affect (i.e., valence and arousal) can be accurately predicted by the joint information of musical excerpt and features of visual shapes and materials associated with these music-induced emotions. Interestingly, valence and arousal induced by music have higher predictability than discrete GEMIAC emotions.
Discussion: These results demonstrate the relevance of crossmodal correspondences in studying music-induced emotions. The potential applications of these findings in the fields of sensory interaction design, multisensory experiences and art, as well as digital and sensory marketing, are briefly discussed.
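The prediction setup this study describes, inferring valence and arousal from the shape and material features associated with a musical excerpt, can be sketched with a toy nearest-neighbour predictor. All features, feature values, and labels below are invented for illustration; the study's actual models and feature sets are not specified here.

```python
import math

# Invented training data: each stimulus is described by crossmodal
# features (shape roundness, material hardness, both in [0, 1]) and
# labelled with the core affect it induced, as (valence, arousal).
TRAINING = [
    ((0.9, 0.2), (0.8, 0.3)),   # round, soft:  positive valence, calm
    ((0.1, 0.9), (-0.6, 0.8)),  # angular, hard: negative valence, aroused
    ((0.5, 0.5), (0.1, 0.5)),   # neutral exemplar
]

def predict_affect(features):
    """Predict (valence, arousal) for a new stimulus by returning the
    affect of the nearest training stimulus in feature space."""
    nearest = min(TRAINING, key=lambda pair: math.dist(pair[0], features))
    return nearest[1]

print(predict_affect((0.8, 0.3)))  # → (0.8, 0.3): nearest the round/soft exemplar
```

A continuous model (e.g. regression over joint audio and shape/material features) would be closer to the paper's reported approach of predicting valence and arousal as continuous dimensions; the nearest-neighbour lookup is used here only to keep the sketch self-contained.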
