University of Plymouth
Plymouth PL4 8AA
Tel: +44 (0)1752 232579
23 April 2009:
Speaker: Noris Mohd Norowi
Venue: Room 303, Roland Levinsky Building
Interest in automated genre classification systems is growing following the increase in digital music collections. Most of these systems have been researched and developed to classify Western musical genres such as pop, rock or classical. However, adapting these systems to the classification of Traditional Malay Music (TMM) genres such as Gamelan, Inang and Zapin is difficult due to the differences in musical structures and modes. This study investigates the effects of various factors and audio feature set combinations on the classification of TMM genres. Results from experiments conducted in several phases show that factors such as dataset size, track length and location, together with various combinations of audio feature sets comprising Short Time Fourier Transform (STFT), Mel-Frequency Cepstral Coefficients (MFCCs) and Beat Features, affect classification. Based on parameters optimised for TMM genres, classification performance was evaluated against three groups of human subjects: experts, trained and untrained. The performances of machine and humans were shown to be comparable.
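To give a flavour of the kind of pipeline such experiments involve, here is a minimal sketch (not the study's actual code): crude spectral statistics stand in for the STFT/MFCC/Beat feature sets named above, and a query clip is assigned to the nearest class centroid. The genre labels and synthetic test tones are purely illustrative.

```python
import numpy as np

def spectral_features(signal, frame_len=512):
    """Crude stand-in for STFT-based features: mean and standard
    deviation of the magnitude spectrum over fixed-length frames."""
    n_frames = len(signal) // frame_len
    frames = signal[:n_frames * frame_len].reshape(n_frames, frame_len)
    mags = np.abs(np.fft.rfft(frames, axis=1))
    return np.concatenate([mags.mean(axis=0), mags.std(axis=0)])

def nearest_centroid(train_feats, train_labels, query):
    """Classify a query feature vector by its closest class centroid."""
    labels = sorted(set(train_labels))
    centroids = {g: np.mean([f for f, l in zip(train_feats, train_labels)
                             if l == g], axis=0)
                 for g in labels}
    return min(labels, key=lambda g: np.linalg.norm(query - centroids[g]))

# Toy data: two synthetic "genres" with different dominant frequencies.
rng = np.random.default_rng(0)
t = np.arange(4096) / 22050.0

def tone(freq):
    return np.sin(2 * np.pi * freq * t) + 0.1 * rng.standard_normal(t.size)

train = [spectral_features(tone(220)), spectral_features(tone(880))]
labels = ["gamelan-like", "zapin-like"]
query = spectral_features(tone(225))  # spectrally close to the 220 Hz class
print(nearest_centroid(train, labels, query))
```

A real system would replace the toy features with MFCCs and beat histograms and the centroid rule with a trained classifier; the factors the study varies (dataset size, track length and excerpt location) correspond to how `train` and `query` are built.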
20 March 2009:
Venue: Room 301, Roland Levinsky Building
The idea of this talk is to briefly review every stage of the research and present some of its outcomes so far. We will discuss some approaches to modelling musical performances, their limitations, and ways to overcome them. One of our experiments indicated that listeners could differentiate whether music was performed by a human musician or generated by a computer. But why? What does a human performance have that even the most “intelligent” computer-generated performance lacks? The same experiment gave us a hint towards solving the puzzle: the amateur musician was more easily identified than the professional one. If you can guess the reason for that, you might better understand the way our body behaves when playing a guitar, the mechanics of the guitar itself, and how the most innate of human properties could be used to simulate music performance with a human feel.
4 September 2008:
Venue: Room 215, Babbage Building
For several years I have been composing with different musical structures, different mathematical models, and also sometimes without either. Is structure a guarantee of quality in form and expression, or of a level of abstraction? What stages are there in the process of composing, and what decisions does the composer have to take? What makes interactive music different? (http://www.pernoergaard.dk/eng/bagscene/joergen.html)
23 July 2008:
Speaker: Arne Eigenfeldt
Venue: Room 309, Roland Levinsky Building
Complexity in realtime music performance systems has reached such levels that former techniques of constrained randomness are no longer proving adequate. Dr. Arne Eigenfeldt, of Simon Fraser University, Canada, will discuss his research into encoding musical knowledge in software, with specific reference to Kinetic Engine, a realtime rhythm ensemble that composes complex polyphonic rhythms using multiple agents.
15 May 2008:
Speaker: Tony Belpaeme
Venue: Roland Levinsky Building, Room 309
23 April 2008:
Speaker: Ian Pace
Venue: Sherwell Building, Upper Lecture Theatre
This seminar will be given at the piano, and followed by a half-hour performance of the pieces used as examples.
20 Mar 2008:
Venue: Roland Levinsky Building, Room 008
The word "Plunderphonics" was coined by composer and improviser John Oswald in 1985. It refers to acts of direct audio piracy - the wholesale lifting of recorded music for the purpose of creating new compositions. Since 1985, user-friendly technological developments have made this whole field much easier, the possibilities greater, and the issues wider. In this talk Sam Richards will summarise the field, play some examples, look into some aesthetic and cultural implications, and introduce and play his recent recorded suite, "Four Ripoffs". Sam Richards is an improviser, composer, pianist, folklorist, writer and University of Plymouth lecturer. His musical background includes work with Cornelius Cardew, Hugh Davis, Alfred Nieman, Ewan MacColl, Peggy Seeger, and touring extensively in Britain and Europe. His published books are "Sonic Harvest: Towards Musical Democracy" and "John Cage As..."
13 Mar 2008:
Venue: Smeaton 206 (Future Music Lab)
Cinema has established itself around the idea of reality simulation, with the image in motion as its main feature; for this reason, the script establishes the key for the chain of still images within the film. Current predominant audio-visual production adheres to this format of representing movement. Historically a later process, the sound dimension conforms to this pattern. Bodyweave LAB proposes the development of an online digital interface that operates simultaneously on the sequencing of still images and sound units. It will allow new articulations of these elements, detaching them from the script structure, with the intent of engaging participants in a choreographic and ludic process of aesthetic investigation.
27 Feb 2008:
Venue: Cookworthy Building, Room 402
If the mental representations involved in the perception of music and language are in some way shared, this would constrain theorising about what kinds of evolutionary explanations could be used to account for their origins. In this talk, I'll sketch a theory of sequential representations that could be applied to both music and language, but which isn't motivated in terms of selection pressures specifically relating to either. In essence, the idea is that the brain attempts to exploit parallelisms in sequential input to generate representations with minimal redundancy. The most economical representation of input would be achieved when it can be grouped hierarchically into a binary branching tree structure, a form characteristic of sentences, which can also be found in a broad range of musical styles from pop music to traditional Irish jigs.

I will argue that in both music and language, grammatical markers govern how smaller elements can be combined into larger ones. In the case of language, the markers aren't overtly visible in the form of words, but are what determine whether they're nouns, verbs, and so on. By contrast, the relevant markers contained in a piece of music are overt, and perceived as themes. I will argue that in both cases, it is patterns of repetition in these markers that trigger structure building processes. In support of this, I'll provide some evidence that these processes can be interfered with in the case of language by introducing conflicts between overt and covert repetitions.

Bringing together evidence from a broad range of sources, I'll also provide examples of the same structural principles in domains that are neither wholly musical nor wholly linguistic, including phenomena as disparate as rhetoric, poetry, cinema and religious practices. The evidence suggests that these principles play a fundamental role in organising our perception of patterns generally.
23 Jan 2008:
Speaker: David Plans Casal (Brunel)
Venue: Roland Levinsky Building, Room 209
Musical improvisation is driven mainly by the unconscious mind, engaging the dialogic imagination to reference the entire cultural heritage of an improvisor in a single flash. This workshop will introduce a case study of evolutionary computation techniques, in particular genetic co-evolution, as applied to the frequency domain using MPEG7 techniques, in order to create an artificial agent that mediates between an improvisor and her unconscious mind, to probe and unblock improvisatory action in live music performance or practice.
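To give a concrete flavour of evolutionary computation in the frequency domain, here is a deliberately simplified sketch: a plain genetic algorithm (not the co-evolutionary variant or the MPEG-7 features the workshop describes) evolving magnitude spectra towards a hypothetical target spectrum. All parameters and the fitness function are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
N_BINS, POP, GENS = 16, 30, 200

# Target: all spectral energy concentrated in one bin (toy objective).
target = np.zeros(N_BINS)
target[3] = 1.0

def fitness(individual):
    """Higher is better: negative distance to the target spectrum."""
    return -np.linalg.norm(individual - target)

# Individuals are magnitude spectra, initialised at random.
pop = rng.random((POP, N_BINS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    best = pop[np.argsort(scores)[-POP // 2:]]            # select top half
    parents = best[rng.integers(0, len(best), (POP, 2))]  # pair up parents
    cut = rng.integers(1, N_BINS, POP)                    # one-point crossover
    children = np.where(np.arange(N_BINS) < cut[:, None],
                        parents[:, 0], parents[:, 1])
    children += 0.05 * rng.standard_normal(children.shape)  # mutation
    pop = np.clip(children, 0.0, 1.0)

champion = max(pop, key=fitness)
print(round(-fitness(champion), 3))  # remaining distance to target
```

In a co-evolutionary setting, the fixed `target` would be replaced by a second, co-evolving population (or by live input from the improviser), so that fitness is defined relative to the other side rather than to a static goal.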
13 Dec 2007:
Speaker: Torsten Anders
Venue: Babbage Building, Room 410
Strasheela is a highly expressive constraint-based music composition system. The Strasheela user declaratively states a music theory and the computer generates music which complies with this theory. A theory is formulated as a constraint satisfaction problem (CSP) by a set of rules (constraints) applied to a music representation in which some aspects are expressed by variables (unknowns). Music constraint programming is style-independent and is well-suited for highly complex theories (e.g. a fully-fledged theory of harmony).
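The rule-as-constraint idea can be sketched in a few lines (a hypothetical toy, not Strasheela's actual API): each note is a variable over a pitch domain, each rule is a predicate, and a backtracking search finds a melody that satisfies every rule.

```python
# Hypothetical toy illustrating rules as constraints (not Strasheela's API).
PITCHES = range(60, 72)  # candidate values: MIDI C4..B4
LENGTH = 8               # number of note variables

def no_repeat(melody, i):
    """Rule: adjacent notes must differ."""
    return i == 0 or melody[i] != melody[i - 1]

def small_steps(melody, i):
    """Rule: melodic intervals of at most a major third."""
    return i == 0 or abs(melody[i] - melody[i - 1]) <= 4

def ends_on_tonic(melody, i):
    """Rule: the last note must be C4."""
    return i != LENGTH - 1 or melody[i] == 60

RULES = [no_repeat, small_steps, ends_on_tonic]

def solve(melody=()):
    """Depth-first search: extend the partial melody one note at a
    time, pruning as soon as any rule (constraint) is violated."""
    i = len(melody)
    if i == LENGTH:
        return list(melody)
    for p in PITCHES:
        candidate = melody + (p,)
        if all(rule(candidate, i) for rule in RULES):
            result = solve(candidate)
            if result is not None:
                return result
    return None

print(solve())
```

A real constraint system propagates constraints to prune domains rather than naively backtracking, which is what makes fully-fledged theories (such as complete harmony textbooks) tractable; the declarative shape of the problem, however, is the same.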
9 Nov 2007:
Speaker: Sue Denham
Venue: Babbage Building, Room 410
EmCAP is a multinational project funded by the EU to investigate the emergence of cognition through active perception. The project brings together work on innate and adult perceptual capabilities, particularly as they relate to musically relevant aspects of sounds, such as pitch and rhythmic structures, neurocomputational modelling of auditory perception and perceptual organisation, and implementations for applications in music technology. I will present an overview of the work on the project and then highlight in more detail some of the interesting discoveries and advances we have made during the course of the past year.
Details of the seminars in:
- 2006/2007 academic year.
- 2005/2006 academic year.
- 2004/2005 academic year.