
    Melody generator: A device for algorithmic music construction

    This article describes the development of an application for generating tonal melodies. The goal of the project is to assess our current understanding of tonal music by means of algorithmic music generation. The method consists of four stages: 1) selection of music-theoretical insights; 2) translation of these insights into a set of principles; 3) conversion of the principles into a computational model taking the form of an algorithm for music generation; 4) testing the "music" generated by the algorithm to evaluate the adequacy of the model. As an example, the method is implemented in Melody Generator, an algorithm for generating tonal melodies. The program has a structure suited for generating, displaying, playing and storing melodies, functions which are all accessible via a dedicated interface. The actual generation of melodies is based in part on constraints imposed by the tonal context, i.e. by meter and key, the settings of which are controlled by means of parameters on the interface. It is further based on a set of construction principles, including the notion of a hierarchical organization and the idea that melodies consist of a skeleton that may be elaborated in various ways. With these aspects implemented as specific sub-algorithms, the device produces simple but well-structured tonal melodies.
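    The skeleton-plus-elaboration idea described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' program: the key representation, the choice of chord tones, and the passing-tone rule are all assumptions made for the example.

    ```python
    import random

    # One octave of C major as MIDI pitch numbers (an assumed key representation).
    C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]

    def generate_skeleton(length, key=C_MAJOR, seed=0):
        """Place one chord tone per strong beat; begin and end on the tonic."""
        rng = random.Random(seed)
        chord_tones = [key[0], key[2], key[4], key[7]]  # tonic triad + octave
        inner = [rng.choice(chord_tones) for _ in range(length - 2)]
        return [key[0]] + inner + [key[0]]

    def elaborate(skeleton, key=C_MAJOR):
        """Elaborate the skeleton by inserting a scale step into large gaps."""
        melody = [skeleton[0]]
        for prev, nxt in zip(skeleton, skeleton[1:]):
            between = [p for p in key if min(prev, nxt) < p < max(prev, nxt)]
            if between:
                melody.append(between[len(between) // 2])  # a passing tone
            melody.append(nxt)
        return melody

    skeleton = generate_skeleton(6)
    melody = elaborate(skeleton)
    ```

    The two-stage structure mirrors the abstract: the skeleton satisfies the tonal constraints, and the elaboration stage adds surface detail without breaking them.
    
    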

    Accents in equitone sequences

    Full text link

    Internal representation of simple temporal patterns

    In this study the imitation of several periodically repeating simple temporal patterns, consisting of two or more intervals varying in their duration ratios, has been investigated. The errors that subjects typically made in their imitations, and the systematic changes that occurred during repeated imitations, indicate that both musically trained and untrained subjects map temporal sequences onto an interval structure, the nature of which is revealed by studying which patterns are correctly and which incorrectly reproduced. A "beat-based" model for the perception of temporal sequences is proposed. This model states that the first step in the processing of a temporal sequence consists of a segmentation of the sequence into equal intervals bordered by events. This interval is called the beat interval. How listeners select this beat interval is only partly understood. In a second step, intervals smaller than the beat interval are expressed as a subdivision of the beat interval in which they occur. The number of within-beat structures that can be represented in the model is, however, limited. Specifically, only beat intervals that are subdivided into either equal intervals or intervals in a 1:2 ratio fit within the model. The partially hierarchical model proposed, though in need of further elaboration, shows why the number of temporal patterns that can be correctly conceptualized is limited. The relation of the model to other models is discussed. More than 3 decades ago, Fraisse (1946) discovered a remarkable phenomenon in the production and perception of durations. He found that subjects who were asked to produce, by tapping, temporal patterns consisting of 2-6 taps basically used only two durations. These two durations, called a long duration and a short duration, are distinct from each other; the longer duration is typically at least twice as long as the shorter one.
    Fraisse (1946) reported a range of long/short ratios that varied from 2.18 to 3.25 depending on the length and complexity of the patterns tapped. Subsequently, he described some experiments in which subject
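    The beat-based rule summarized in the abstract (a pattern fits the model only if each beat interval is subdivided into equal intervals or intervals in a 1:2 ratio) can be stated as a short predicate. This is a sketch under stated assumptions, not the paper's implementation: durations are taken as integers, and the beat interval is given rather than selected by the model.

    ```python
    from fractions import Fraction

    def within_beat_ok(intervals):
        """Test one beat's subdivision: all equal, or only durations in a 1:2 ratio."""
        if len(intervals) == 1:
            return True                      # the beat itself, undivided
        if len(set(intervals)) == 1:
            return True                      # equal subdivision
        ratios = sorted({Fraction(d, min(intervals)) for d in intervals})
        return ratios == [Fraction(1), Fraction(2)]

    def representable(pattern, beat):
        """Segment `pattern` at multiples of `beat` and test every segment."""
        segment, elapsed = [], 0
        for dur in pattern:
            segment.append(dur)
            elapsed += dur
            if elapsed == beat:
                if not within_beat_ok(segment):
                    return False
                segment, elapsed = [], 0
            elif elapsed > beat:
                return False                 # an interval straddles a beat boundary
        return elapsed == 0                  # pattern must end on a beat
    ```

    Under this rule a pattern like short-short-long (1-1-2 within a beat of 4) is representable, while a 1:3 subdivision is not, matching the model's claim that only a limited set of within-beat structures can be conceptualized.
    
    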


    Time, Rhythms and Tension: In Search of the Determinants of Rhythmicity
