The use of electronic drum samples is widespread in contemporary music production, with music producers having an unprecedented number of samples available to them. The task of organizing and selecting from these large collections can be challenging and time-consuming, which points to the need for improved methods for user interaction. This paper presents a system that computationally characterizes and organizes drum machine samples in two dimensions based on sound similarity. The goal of the work is to support the development of intuitive drum sample browsing systems.

Current music emotion recognition (MER) systems rely on emotion data averaged across listeners and over time to infer the emotion expressed by a musical piece, often neglecting time- and listener-dependent factors. These limitations can restrict the efficacy of MER systems and cause misjudgements. In a live music concert setting, fifteen audience members annotated perceived emotion in valence-arousal space over time using a mobile application. Analyses of inter-rater reliability yielded widely varying levels of agreement in the perceived emotions. A follow-up lab study to uncover the reasons for such variability was conducted, where twenty-one listeners annotated their perceived emotions through a recording of the original performance and offered open-ended explanations. Thematic analysis reveals many salient features and interpretations that can describe the cognitive processes. Some of the results confirm known findings of music perception and MER studies. Novel findings highlight the importance of less frequently discussed musical attributes, such as musical structure, performer expression, and stage setting, as perceived across different modalities. Musicians are found to attribute emotion change to musical harmony, structure, and performance technique more than non-musicians. We suggest that listener-informed musical features can benefit MER in addressing emotional perception variability by providing reasons for listener similarities and idiosyncrasies.

Sound synthesiser controls typically correspond to technical parameters of signal processing algorithms rather than intuitive sound descriptors that relate to human perception of sound. This makes it difficult to realise sound ideas in a straightforward way. Cross-modal mappings, for example between gestures and sound, have been suggested as a more intuitive control mechanism. A large body of research shows consistency in human associations between sounds and shapes. However, the use of drawings to drive sound synthesis has not been explored to its full extent. This paper presents an exploratory study that asked participants to sketch visual imagery of sounds with a monochromatic digital drawing interface, with the aim to identify different representational approaches and determine whether timbral sound characteristics can be communicated reliably through visual sketches. Results imply that the development of a synthesiser exploiting sound-shape associations is feasible, but a larger and more focused dataset is needed in follow-up studies.
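The drum-sample work above describes a pipeline of computing per-sample audio features and laying samples out in two dimensions by similarity. The paper excerpt does not specify its feature set or projection method, so the following is only a minimal sketch of that general idea, assuming hand-picked spectral descriptors and a PCA projection; the actual system may use entirely different features and dimensionality reduction:

```python
import numpy as np

def spectral_features(signal, sr=44100, n_fft=2048):
    """Summarize a drum sample with a few coarse spectral descriptors."""
    spectrum = np.abs(np.fft.rfft(signal[:n_fft], n=n_fft))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    weights = spectrum / (spectrum.sum() + 1e-12)
    centroid = float((freqs * weights).sum())  # brightness
    spread = float(np.sqrt(((freqs - centroid) ** 2 * weights).sum()))
    # Spectral flatness: geometric mean / arithmetic mean (noisiness)
    flatness = float(np.exp(np.mean(np.log(spectrum + 1e-12))) /
                     (spectrum.mean() + 1e-12))
    return np.array([centroid, spread, flatness])

def organize_2d(samples, sr=44100):
    """Project z-scored feature vectors onto their first two principal axes."""
    feats = np.array([spectral_features(s, sr) for s in samples])
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-12)
    _, _, vt = np.linalg.svd(feats, full_matrices=False)  # PCA via SVD
    return feats @ vt[:2].T  # (n_samples, 2) layout coordinates

# Toy "drum samples": a decaying low sine (kick-like) and two noise bursts
rng = np.random.default_rng(0)
t = np.arange(4096) / 44100
kick = np.sin(2 * np.pi * 60 * t) * np.exp(-t * 40)
snare = rng.normal(size=4096) * np.exp(-t * 30)
hat = rng.normal(size=4096) * np.exp(-t * 80)
coords = organize_2d([kick, snare, hat])
print(coords.shape)  # (3, 2)
```

The resulting 2-D coordinates could back a browsing interface where perceptually similar samples sit near each other; a real system would likely use richer features (e.g. MFCCs or learned embeddings) and a nonlinear projection.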