Computer Music Research


University of Plymouth
The House
Drake Circus
Plymouth PL4 8AA
United Kingdom
Tel: +44 (0)1752 232579


Seminars (2009/2010)

Seminars presented by visitors and by members of the Computer Music Research team. Each seminar will be followed by an informal discussion open to the audience. Members of the University's academic community, partner colleges and collaborating institutions are welcome. For more information contact Alexis Kirke.

Note: the programme may change, so please consult this web site regularly for updates.



09 October 2009
Topic: A-Life for Music: on Making Music with Computer Models of Living Systems
Speaker: Eduardo Miranda
Venue: Room 214 Rolle Building
Time: 10:00 – 11:30

The field of Artificial Life, or A-Life, studies all phenomena characteristic of natural living systems through computational modelling, wetware-hardware hybrids and other artificial media. Its scope is rather large, ranging from the investigation of the emergence of cognitive processes in natural or artificial systems to the development of life or life-like properties from inorganic components. A number of musicians, in particular composers, have started to turn to A-Life for inspiration and working methodology. For instance, a number of techniques to compose music with the aid of computers have been developed based on, or inspired by, A-Life models. In this seminar I will briefly review some of these techniques and then introduce my own work in this field. I will attempt to demonstrate why I find A-Life useful, inspiring and interesting for music composition. I shall focus particularly on a compositional method that I have developed using an A-Life modelling paradigm known as "cellular automata".
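To give a flavour of the kind of technique the seminar covers, here is a minimal sketch of composing with a cellular automaton. It is purely illustrative and not Prof Miranda's actual system: an elementary one-dimensional CA (rule 90) is evolved for a few generations, and the live cells of each generation are mapped to pitches of a scale to form a chord sequence. The rule number, grid width and pitch mapping are all assumptions chosen for the example.

```python
RULE = 90          # elementary CA rule number (0-255); rule 90 = left XOR right
WIDTH = 16         # number of cells per generation
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, MIDI note numbers

def step(cells, rule=RULE):
    """Advance one generation of an elementary CA (wrap-around edges)."""
    out = []
    for i in range(len(cells)):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % len(cells)]
        idx = (left << 2) | (centre << 1) | right   # 3-cell neighbourhood -> 0..7
        out.append((rule >> idx) & 1)               # look up next state in the rule
    return out

def generation_to_chord(cells):
    """Map each live cell to a pitch from the scale (cell position mod scale size)."""
    return [SCALE[i % len(SCALE)] for i, alive in enumerate(cells) if alive]

cells = [0] * WIDTH
cells[WIDTH // 2] = 1          # single seed cell in the middle
score = []
for _ in range(8):             # eight generations -> eight chords
    score.append(generation_to_chord(cells))
    cells = step(cells)
```

From a single seed, rule 90 grows the familiar Sierpinski-triangle pattern, so the chords thicken and thin in a self-similar way over the generations.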


23 October 2009
Topic: A-Life for Music: on Making Music with Computer Models of Living Systems II
Speaker: Eduardo Miranda
Venue: Room 002 Babbage Building
Time: 10:00 – 11:30

Prof Miranda will present a continuation of the topic started in the last seminar.


06 November 2009
Topic: Artificial Social Composition: A Multi-Agent System for Composing Music Performances by Emotional Communication
Speaker: Alexis Kirke
Venue: Room 207 Roland Levinsky Building
Time: 10:00 – 11:30

There are systems available that can compose music algorithmically, and others that can generate a computer expressive performance of a fixed piece of music; however, few combine these two elements. We present a multi-agent system in which agents attempt to communicate their own (user-initialised) "affective" states with music, and to influence the states of others. As a by-product of this process they compose music. This system: (a) composes music in which its own expressive performance is implicit, so it can be played by a computer sequencer or MIDI player and sound significantly less inhuman than the output of an algorithmic composition approach that is not combined with expressive performance; (b) allows users to specify compositions using a novel emergent, affect-based modality; (c) applies research into emotional performance to combined performance/composition; and (d) allows users to specify performances/compositions in simplified ways. We also present the first steps towards a novel multi-human interface for the system, initially with a single agent. This agent can have human EEG recordings embedded into it, and then use simple EEG emotion-detection algorithms to track the agent's emotional states and generate an emotional performance/composition.
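The core loop of such a system can be sketched very simply. The toy code below is an illustration of the idea, not Kirke's implementation: each agent holds an affective state (valence, arousal); when it "performs", expressive parameters such as tempo and loudness are derived from that state, and hearing a performance pulls a listener's state towards what the performance expresses. The parameter mappings and the `influence` factor are assumptions.

```python
class Agent:
    def __init__(self, valence, arousal):
        self.valence = valence   # -1 (negative) .. +1 (positive)
        self.arousal = arousal   # 0 (calm) .. 1 (excited)

    def perform(self):
        """Derive simple expressive parameters from the affective state."""
        tempo = 60 + 80 * self.arousal        # bpm: calm ~60, excited ~140
        loudness = 0.3 + 0.7 * self.arousal   # normalised dynamic level
        mode = "major" if self.valence >= 0 else "minor"
        return {"tempo": tempo, "loudness": loudness, "mode": mode}

    def listen(self, performance, influence=0.2):
        """Shift this agent's state towards what the performance expresses."""
        heard_arousal = (performance["tempo"] - 60) / 80
        heard_valence = 1.0 if performance["mode"] == "major" else -1.0
        self.arousal += influence * (heard_arousal - self.arousal)
        self.valence += influence * (heard_valence - self.valence)

a = Agent(valence=0.8, arousal=0.9)
b = Agent(valence=-0.5, arousal=0.1)
for _ in range(10):            # repeated exchanges: arousal states converge
    b.listen(a.perform())
    a.listen(b.perform())
```

Logging each agent's `perform()` output during the exchanges yields the sequence of tempo/loudness/mode settings that would drive the emergent performance/composition.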


20 November 2009
Topic: Music Neurotechnology for Sound Synthesis using Artificial Spiking Neurons
Speakers: John Matthias and Eduardo Miranda
Venue: Room 011 Roland Levinsky Building
Time: 10:00 – 11:30

Music Neurotechnology is a new research area emerging at the crossroads of Neurobiology, Engineering Sciences and Music. Examples of ongoing research in this area include the development of brain-computer interfaces to control music systems, and systems for automatic classification of sounds informed by the neurobiology of the human auditory apparatus. In this seminar we introduce neurogranular sampling, a new sound synthesis technique based on spiking neuronal networks (SNN). We focus on an SNN model developed by Izhikevich, which reproduces the spiking and bursting behaviour of known types of cortical neurons. The neurogranular sampler works by taking short segments (or sound grains) from sound files and triggering them whenever one of the neurons fires.
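The idea can be sketched in a few lines. This is an illustrative toy, not the authors' code: a handful of Izhikevich neurons (regular-spiking parameters, Euler-integrated at 1 ms steps under a noisy input current) are simulated, and each spike triggers a short grain copied from a source signal into the output buffer at the spike time. The grain length, input current and per-neuron grain choice are assumptions.

```python
import numpy as np

def izhikevich_spikes(n_neurons=4, t_ms=1000, I=10.0,
                      a=0.02, b=0.2, c=-65.0, d=8.0, seed=0):
    """Return a list of (time_ms, neuron_index) spike events."""
    rng = np.random.default_rng(seed)
    v = -65.0 * np.ones(n_neurons)     # membrane potential
    u = b * v                          # recovery variable
    spikes = []
    for t in range(t_ms):              # 1 ms Euler steps
        drive = I + rng.normal(0, 2, n_neurons)   # noisy input current
        v += 0.04 * v**2 + 5 * v + 140 - u + drive
        u += a * (b * v - u)
        fired = v >= 30.0              # spike threshold
        for i in np.flatnonzero(fired):
            spikes.append((t, int(i)))
        v[fired] = c                   # reset after spike
        u[fired] += d
    return spikes

def neurogranular_sample(source, spikes, sr=44100, grain_ms=30):
    """Mix a short grain of `source` into the output at each spike time."""
    grain_len = int(sr * grain_ms / 1000)
    out = np.zeros(sr + grain_len)     # 1 second plus room for the last grain
    for t_ms, i in spikes:
        start = int(sr * t_ms / 1000)
        src_at = (i * 1000) % max(1, len(source) - grain_len)  # grain per neuron
        out[start:start + grain_len] += source[src_at:src_at + grain_len]
    return out

source = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100)  # 1 s test tone
audio = neurogranular_sample(source, izhikevich_spikes())
```

Changing the neuron parameters (a, b, c, d) changes the firing patterns (regular spiking, bursting, and so on), which in turn changes the rhythmic texture of the granular output.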



04 December 2009
Topic: Computer Music Wetware Project
Speakers: Anna Troisi, Antonino Chiaramonte and Eduardo Miranda
Venue: Room 304 Roland Levinsky Building
Time: 10:00 – 11:30

The field of Computer Music has evolved in tandem with the field of Computer Science. Computers were programmed to play music as early as the beginning of the 1950s, and it is likely that future developments in Computer Science will continue to have an impact on music. ICCMR is exploring ways in which unconventional modes of computation may provide new directions for future developments in Computer Music. In short, unconventional computation takes the computation (or part of it) into the real world, thereby harnessing the immense parallelism and non-algorithmic openness of physical systems. There has been growing interest in research into the development of hybrid wetware-silicon devices for non-linear computations using cultured brain cells. The ambition of this project is to harness the intricate dynamics of in vitro neuronal networks to build a wetware "semi-living" computer music instrument. The instrument will function in two modes: MIDI controller mode and audio mode. In this seminar we will introduce the techniques that we have developed to produce sound from the behaviour of the living neurons.



18 December 2009
Topic: Experimental Music as an Abundant Economy of the Imagination
Speaker: Sam Richards
Venue: Room 301 Roland Levinsky Building
Time: 10:00 – 11:30

In this presentation the experimental approach to music is historically and socially contextualised as an antidote to the theology of the individual which has marked post-Renaissance thought, culture, arts and music, and which reached a high point in Romanticism and Modernism. The deeper implications of the experimental attitude are drawn out in terms of their cultural, social, economic and political relevance.




29 January 2010
Topic: An Artificial Intelligence Approach to Concatenative Sound Synthesis
Speaker: Noris Mohd Norowi 
Venue: Room 210 Roland Levinsky Building
Time: 10:00 – 11:30

Concatenative synthesis uses a large database of source sounds, segmented into units, and a unit selection algorithm, which produces the sequence of units that best matches the sound or phrase to be synthesised, called the target. The selection is performed according to a set of descriptors of the units, which are characteristics extracted automatically from the source sounds, or higher-level descriptors attributed to them manually. In this seminar I will report on my research into using Artificial Intelligence to produce meaningful selections of units to synthesise target sounds.
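The unit selection step can be illustrated with a small sketch. This is an assumption-laden toy, not Norowi's system: each database unit and each target frame is a feature vector (say, spectral centroid and RMS energy); for each target frame we greedily pick the unit minimising a target cost plus a simple concatenation cost that discourages jumps between dissimilar consecutive units. The descriptors, costs and `concat_weight` are illustrative choices.

```python
import numpy as np

def select_units(db_feats, target_feats, concat_weight=0.5):
    """Greedy unit selection.
    db_feats: (n_units, n_feats) descriptors of the database units.
    target_feats: (n_frames, n_feats) descriptors of the target.
    Returns the chosen unit index for each target frame."""
    chosen, prev = [], None
    for tf in target_feats:
        target_cost = np.linalg.norm(db_feats - tf, axis=1)
        if prev is None:
            cost = target_cost
        else:
            # concatenation cost: descriptor distance from the previous unit
            concat_cost = np.linalg.norm(db_feats - db_feats[prev], axis=1)
            cost = target_cost + concat_weight * concat_cost
        prev = int(np.argmin(cost))
        chosen.append(prev)
    return chosen

# toy example: 2-D descriptors (e.g. spectral centroid and RMS energy)
db = np.array([[0.1, 0.2], [0.5, 0.5], [0.9, 0.8]])
target = np.array([[0.12, 0.18], [0.48, 0.52], [0.88, 0.79]])
print(select_units(db, target))   # → [0, 1, 2]
```

A full system would replace the greedy choice with a Viterbi-style search over the whole sequence, which is where smarter selection strategies come in.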


12 Feb 2010
Topic: Articulating Noise and the Breakdown of the Interpretive Order
Speaker: Mike McInerney
Venue: Room 203 Smeaton Building
Time: 10:00 – 11:30

In The Languages of Art (1976), Nelson Goodman defined the features of a notational scheme which are necessary for it to function as part of a symbol system – a system whose notation refers directly and precisely to its corresponding subject matter. Though there is much that is counter-intuitive about Goodman’s specifications, his research demonstrates that any system of musical notation defines its vocabulary of ‘musical’ sounds and processes, and can only make use of sounds, and concepts of sound, from within that system. The sonic other – noise – lies outside the arena of prescriptive notation. The composer who wishes to work with sound in all its richness must either work with recorded sound or re-consider the roles of composer, interpreter and score. Taking analytical cues from Derrida, Gadamer and C. S. Peirce, among others, it is possible to re-evaluate the work of Anestis Logothetis (1921 - 1993) as a creative and perceptive response to the problems of sound and notation. Over a period of more than 40 years he developed a system of notation and practice of interpretation which, in expanding the sphere of permitted sound, brought to the fore the matter of interpretative gathering around a score which might otherwise pass unobserved. His oeuvre of more than 100 beautifully drawn scores and considerable polemical writing on sound and interpretation reveal an aesthetic which permits an expansion of the sonic vocabulary, makes possible greater focus on sonic nuance and retains the faithful reading whilst encouraging a greater stress upon the autonomy and independent musical practice of the performer. This seminar attempts to lay out the landscape of Logothetis’ work and explain its relevance to contemporary anxieties and curiosity about noise and identity. It will draw upon my own experience of the work as an interpreting performer and continuing research into, and translation from, his theoretical writings on music.

26 February 2010
Topic: Approaches to Using Rules as a Composition Method
Speaker: Örjan Sandred (University of Manitoba, Canada)
Venue: Room 011 Roland Levinsky Building
Time: 10:00 – 11:30

This seminar will give an overview of the author’s experience in using rule-based computing in music composition. After discussing both imposed and voluntary constraints in existing music, a very short description of a rule-based computer system will be given. The concept of musical dimensions will be discussed, and used to illustrate the complexity of implementing musical structures in a computer system. A central part of the seminar will focus on different approaches to the design of musical rules. This will be followed by a discussion of the relationship between pitch and rhythm, and the role of motifs and gestures. Finally, two examples from one of the author’s compositions will illustrate how rules can be used to formalise music.


16 March 2010
Topic: Cellular Automata Sound Synthesis with the Multitype Voter Model
Speaker: Jaime Serquera
Venue: Room 304 Roland Levinsky Building
Time: 10:00 – 11:30

In this seminar I will report on my research into sound synthesis using cellular automata, specifically the multitype voter model. The mapping process adopted is based on digital signal processing analysis of the automata's evolution and consists of mapping histograms onto spectrograms. The main problem with cellular automata is the difficulty of controlling them; consequently, sound synthesis methods based on these computational models normally present a highly aleatoric output. I have achieved a significant degree of control, to the point of being able to predict the type of sounds that can be obtained, and have developed a flexible sound design process with emphasis on controlling the complexity of the spectrum over time.
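The histogram-to-spectrogram mapping can be sketched as follows. This is a rough illustration under stated assumptions, not Serquera's implementation: a multitype voter model is run on a small grid (each cell copies the state of a randomly chosen von Neumann neighbour), and the normalised histogram of cell states at each generation is taken as one spectral frame, with the population of state k controlling the amplitude of partial k. Grid size, state count and step count are arbitrary example values.

```python
import numpy as np

def voter_step(grid, rng):
    """Each cell copies the state of a randomly chosen 4-neighbour (torus)."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    h, w = grid.shape
    new = np.empty_like(grid)
    for y in range(h):
        for x in range(w):
            dy, dx = offsets[rng.integers(4)]
            new[y, x] = grid[(y + dy) % h, (x + dx) % w]
    return new

def histogram_frames(n_states=8, size=16, steps=64, seed=1):
    """One normalised state-histogram per generation -> one spectral frame each."""
    rng = np.random.default_rng(seed)
    grid = rng.integers(0, n_states, size=(size, size))
    frames = []
    for _ in range(steps):
        counts = np.bincount(grid.ravel(), minlength=n_states)
        frames.append(counts / grid.size)   # amplitude of each partial
        grid = voter_step(grid, rng)
    return np.array(frames)                 # shape: (steps, n_states)

frames = histogram_frames()
```

Because the voter model drifts towards consensus, the histograms sharpen over time: energy gradually concentrates in fewer partials, which is one way the spectral complexity evolves predictably.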


26 March 2010
Topic: The Notes Between the Notes in Some Forms of Popular and Jazz Music
Speaker: Bhesham Sharma
Venue: Room 206 Roland Levinsky Building
Time: 10:00 – 11:30

Although much has been written about popular music and jazz, one element that has yet to receive its deserved attention is the persistence of microtonal gestures. Indeed, one antecedent to both popular and jazz music, African American traditional music of the Southern States, still reflects certain musical practices that bear connections to what we might call non-Western musical practices. In this discussion, we will explore a select number of Southern American songs, particularly of the 1930s and 1940s, and highlight their relationship to earlier American and West African musical practices. We will then explore how this microtonal continuum has informed the recorded performances of artists such as Ornette Coleman and Whitney Houston.


Past Seminars

18 June 2009
Topic: Electroacoustic Music as Devotion to Nature
Speaker: Anna Troisi
Venue: Room 301, Roland Levinsky Building

In a world where it has become difficult to remember the origin of music, it seems hard to move into electroacoustic music as into a natural world of sounds. This is because of the assumption that every alteration of nature produces something that is far from nature. If we ask whether a violin sound is more "natural" than a theremin sound, I would answer that both are human inventions, made from materials taken from the world in which we live. Every instrument is therefore part of our natural world. In my personal way of moving into music I try to create musical experiences, rather than just "music", using what I call natural instruments, which I build myself. This is my way of playing electroacoustic instruments as if they were acoustic.


Details of the seminars in:

- 2007/2009 academic year.
- 2006/2007 academic year.
- 2005/2006 academic year.
- 2004/2005 academic year.