
 


University of Plymouth
The House
Drake Circus
Plymouth PL4 8AA
United Kingdom
Tel: +44 (0)1752 232579

Reaching us...

 


Seminars (2010/2011)

Seminars presented by visitors and by members of the Computer Music Research team. Each seminar will be followed by an informal discussion open to the audience. Members of the University's academic community, partner colleges and collaborating institutions are welcome. For more information contact Alexis Kirke.

Note: the programme may change, so please consult this web site regularly for updates.

 

TERM 1


07 October 2010

Topic: Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music

Speaker: Alexis Kirke

Venue: RLB 011

Time: 14:00 – 15:30

Abstract: We present a novel application of Multi-Agent Systems (MAS) to computer-aided composition. The application is novel for two reasons. Firstly, because it applies MAS whose agents utilise intermediate levels of processing – “Intermediate MAS”. This differentiates it from MAS made up of low-processing agents (often called swarm MAS) and from MAS made up of heavy-duty processing agents. Most MAS for music creation have either treated the agents as particles moving in a musical parameter space (low-processing MAS) or have investigated how agents can work together simultaneously as separate artificial musicians to produce parallel parts of the same piece of music, utilising more heavy-duty processing for the agents. The MAS presented in this talk – the Intermediate Performance Composition System (IPCS, pronounced “ipp-siss”) – has agents that are simpler in terms of their musical intelligence and interaction than heavy-duty MAS, but not so simple as to be classed as low-processing swarm systems. IPCS agents do not work as parallel composers/improvisers; they take it in turns to communicate with each other, and as a result a number of the agents develop separate compositions through the process of communication.
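To give a flavour of the turn-taking interaction described above, a minimal Scala sketch follows. It is illustrative only: the phrase representation and the pitch-averaging rule applied when a message is received are assumptions for this sketch, not the actual IPCS behaviour.

```scala
import scala.util.Random

// Minimal sketch of turn-taking musical agents (illustrative only, not IPCS itself).
// Each agent keeps a short phrase (MIDI pitches) and a growing composition.
object TurnTakingAgents {

  final case class Agent(id: Int, phrase: Vector[Int], composition: Vector[Int])

  // A receiving agent reacts to an incoming phrase by nudging its own phrase
  // towards the sender's pitches and appending the result to its composition.
  // This averaging rule is an assumption chosen for the sketch.
  def receive(agent: Agent, incoming: Vector[Int]): Agent = {
    val merged = agent.phrase.zip(incoming).map { case (a, b) => (a + b) / 2 }
    agent.copy(phrase = merged, composition = agent.composition ++ merged)
  }

  def main(args: Array[String]): Unit = {
    val rng = new Random(42)
    // Start with four agents holding random phrases around middle C.
    var agents = Vector.tabulate(4) { i =>
      Agent(i, Vector.fill(4)(60 + rng.nextInt(12)), Vector.empty)
    }

    // Agents take it in turns: on each turn one agent sends its phrase to another.
    for (turn <- 1 to 20) {
      val sender = agents(turn % agents.size)
      val receiverIdx = {
        val idx = rng.nextInt(agents.size)
        if (idx == sender.id) (idx + 1) % agents.size else idx
      }
      agents = agents.updated(receiverIdx, receive(agents(receiverIdx), sender.phrase))
    }

    // Each agent ends up with its own composition grown through communication.
    agents.foreach(a => println(s"Agent ${a.id}: ${a.composition.mkString(" ")}"))
  }
}
```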



04 November 2010

Topic: Trends in Sound and Music Computing Research I

Speaker: Noris Morowi

Venue: RLB 302

Time: 14:00 – 15:30

Abstract: Sound and Music Computing (SMC) research approaches the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modelling and generating sound and music through computational approaches. The central focus of SMC research is sound and music. Sound is the resonance of objects and materials that we can listen to. Music is the intended organisation of sounds for particular uses in social and cultural contexts. The sound and music communication chain covers all aspects of the relationship between sonic energy and meaningful information, both from sound to sense (as in musical content extraction or perception), and from sense to sound (as in music composition or sound synthesis). This definition is generally considered to include all types of sounds and human communication processes. The seminar will focus on the SMC Summer School 2010, which the speaker attended in Barcelona this past summer. Morowi will report on the latest trends and developments in this field. This seminar has two parts. The second part takes place on 18 November.

 


18 November 2010

Topic: Trends in Sound and Music Computing Research II

Speaker: Noris Morowi

Venue: Rolle 214

Time: 14:00 – 15:30

Abstract: This is the second part of the seminar begun by Morowi on 04 November; please refer to the abstract of that seminar.

 


02 December 2010

Topic: Tracing the Compositional Process

Speaker: Hanns Holger Rutz

Venue: RLB 302

Time: 14:00 – 15:30

Abstract: Composition is viewed as a process that has its own temporal dimension. This process can sometimes be highly non-linear, and sometimes it is carried out in realtime during a performance. A model is proposed that unifies creational and performance time and that traces the history of the creation of a piece. This model is based on a transformation that enhances data structures to become persistent. Confluent persistence allows navigation to any previous version of a piece, the creation of version branches at any point, and the combination of different versions with each other. This concept is extended to integrate two important aspects: retroactivity and multiplicities. Three representative problems are posed: how to define dependencies on entities that change over time, how to introduce changes ex-post that affect future versions, and how to continue working on parallel versions of a piece. Solutions based on our test implementation in the Scala language are presented. Our approach opens new possibilities in the area of music analysis and can conflate disparate notions of composition such as tape composition, interactive sound installation, and live improvisation: they can be represented by the same data structure, and both offline and realtime manipulations happen within the same transactional model.
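As a rough illustration of how persistent (immutable) data structures support navigating to earlier versions of a piece, branching, and recombining branches, here is a small Scala sketch. Names, the version-tree layout and the melding rule are assumptions made for the sketch; the system described in the talk uses confluently persistent structures and a transactional model that this toy omits.

```scala
// Minimal sketch of version navigation and branching over an immutable score
// (illustrative only; the actual system described in the talk is far richer).
object VersionedScore {

  final case class Note(onset: Double, pitch: Int, dur: Double)

  // A version is an immutable list of notes; the version tree maps
  // version ids to (parent, score). Names and structure are assumptions.
  final case class Versions(history: Map[Int, (Option[Int], Vector[Note])], next: Int) {

    // Derive a new version from `parent` by applying an edit to its score.
    def edit(parent: Int)(f: Vector[Note] => Vector[Note]): (Int, Versions) = {
      val score = history(parent)._2
      val id    = next
      (id, Versions(history + (id -> (Some(parent), f(score))), next + 1))
    }

    // "Confluent" combination: merge two versions into a new one.
    def meld(a: Int, b: Int): (Int, Versions) = {
      val merged = (history(a)._2 ++ history(b)._2).sortBy(_.onset)
      val id     = next
      (id, Versions(history + (id -> (Some(a), merged)), next + 1))
    }
  }

  def main(args: Array[String]): Unit = {
    val v0 = Versions(Map(0 -> (None, Vector(Note(0.0, 60, 1.0)))), next = 1)

    // Two parallel branches edited from the same ancestor...
    val (v1, s1) = v0.edit(0)(_ :+ Note(1.0, 64, 1.0))
    val (v2, s2) = s1.edit(0)(_ :+ Note(1.0, 67, 0.5))

    // ...which can later be combined, while every earlier version stays reachable.
    val (v3, s3) = s2.meld(v1, v2)
    println(s3.history(v3)._2)
    println(s3.history(0)._2) // the original version is still intact
  }
}
```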

 


16 December 2010

Topic: Game of Life Music

Speakers: Eduardo Miranda and Alexis Kirke

Venue: RLB 208

Time: 14:00 – 15:30

Abstract: The speakers report on the outcomes of their research into rendering music from the Game of Life cellular automaton, or GoL. Music is a time-based art form, where sequences of musical notes and rhythms form patterns of sonic structures organised in time. The authors suggest that GoL is appealing for music because it produces sequences of coherent patterns, some of which can be very complex, yet controlled by remarkably simple rules. The talk introduces three methods for rendering music from GoL. The first is based on a Cartesian representation of music in a two-dimensional plane, where the coordinates represent sets of three musical notes. The second method extends the first by using a three-dimensional space, whose coordinates represent sets of four notes instead of three. The last rendering method uses polar coordinates to map “living” cells into the musical domain.
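A minimal Scala sketch of the first rendering idea follows: run the Game of Life and map each living cell's Cartesian coordinates to a set of three notes. The particular coordinate-to-triad mapping below is an assumption made for illustration, not the authors' mapping.

```scala
// Run the Game of Life and render each generation as note triples (sketch only).
object GameOfLifeMusic {

  type Grid = Set[(Int, Int)] // coordinates of living cells

  def neighbours(c: (Int, Int)): Seq[(Int, Int)] =
    for (dx <- -1 to 1; dy <- -1 to 1 if dx != 0 || dy != 0) yield (c._1 + dx, c._2 + dy)

  // One Game of Life generation: survival with 2-3 neighbours, birth with exactly 3.
  def step(grid: Grid): Grid = {
    val candidates = grid ++ grid.flatMap(neighbours)
    candidates.filter { c =>
      val n = neighbours(c).count(grid.contains)
      n == 3 || (n == 2 && grid.contains(c))
    }
  }

  // Map a living cell at (x, y) to three MIDI pitches: a root derived from x,
  // a third whose quality depends on y, and a fifth (assumed mapping).
  def cellToNotes(c: (Int, Int)): Seq[Int] = {
    val (x, y) = c
    val root   = 48 + ((x % 24) + 24) % 24
    val third  = if (y % 2 == 0) 3 else 4
    Seq(root, root + third, root + 7)
  }

  def main(args: Array[String]): Unit = {
    var grid: Grid = Set((1, 0), (2, 1), (0, 2), (1, 2), (2, 2)) // a glider
    for (generation <- 0 until 8) {
      val notes = grid.toSeq.sorted.flatMap(cellToNotes)
      println(s"generation $generation -> notes ${notes.mkString(" ")}")
      grid = step(grid)
    }
  }
}
```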

 

 

TERM 2

 

13 January 2011

Topic: Neurogranular Sampler 

Speaker: John Matthias

Venue: RLB 206

Time: 14:00 – 15:00

Abstract: Music Neurotechnology is a new research area that is emerging at the crossroads of Neurobiology, Engineering Sciences and Music. Examples of ongoing research into this new area include the development of brain-computer interfaces to control music systems and systems for automatic classification of sounds informed by the neurobiology of the human auditory apparatus. In this talk we introduce neurogranular sampling, a new sound synthesis technique based on spiking neuronal networks (SNN). We have implemented a neurogranular sampler using the SNN model developed by Izhikevich, which reproduces the spiking and bursting behaviour of known types of cortical neurons. The neurogranular sampler works by taking short segments (or sound grains) from sound files and triggering them whenever one of the neurons fires.
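The basic mechanism can be sketched in Scala as follows. This is not the authors' implementation: the grain table and the input current are stand-ins, audio I/O is omitted, and firing neurons merely report the grain they would trigger.

```scala
import scala.util.Random

// Sketch of a neurogranular sampler: a small network of Izhikevich neurons is
// simulated, and each time a neuron fires it would trigger a short sound grain.
object NeurogranularSketch {

  // Izhikevich model state and parameters (regular-spiking values a, b, c, d).
  final case class Neuron(v: Double, u: Double,
                          a: Double = 0.02, b: Double = 0.2,
                          c: Double = -65.0, d: Double = 8.0)

  // One Euler step (dt in ms) of v' = 0.04v^2 + 5v + 140 - u + I, u' = a(bv - u);
  // returns the updated neuron and whether it fired (v crossed +30 mV).
  def stepNeuron(n: Neuron, i: Double, dt: Double): (Neuron, Boolean) = {
    val v1 = n.v + dt * (0.04 * n.v * n.v + 5.0 * n.v + 140.0 - n.u + i)
    val u1 = n.u + dt * (n.a * (n.b * n.v - n.u))
    if (v1 >= 30.0) (n.copy(v = n.c, u = u1 + n.d), true)
    else            (n.copy(v = v1,  u = u1),       false)
  }

  def main(args: Array[String]): Unit = {
    val rng        = new Random(1)
    val numNeurons = 8
    val dt         = 0.5 // ms
    // Each neuron is associated with a grain: an (offset, length) region of a
    // sound file, chosen arbitrarily here (in samples).
    val grains  = Vector.tabulate(numNeurons)(k => (k * 2000, 1000))
    var neurons = Vector.fill(numNeurons)(Neuron(v = -65.0, u = -65.0 * 0.2))

    for (stepIdx <- 0 until 2000) {
      val timeMs = stepIdx * dt
      neurons = neurons.zipWithIndex.map { case (n, k) =>
        val input         = 10.0 + 3.0 * rng.nextGaussian() // noisy drive current
        val (next, fired) = stepNeuron(n, input, dt)
        if (fired) {
          val (offset, len) = grains(k)
          println(f"t=$timeMs%6.1f ms  neuron $k fired -> play grain [$offset, ${offset + len})")
        }
        next
      }
    }
  }
}
```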

 


27 January 2011

Topic: Rethinking the SuperCollider Client

Speaker: Hanns Holger Rutz

Venue: RLB 206

Time: 14:00 – 15:00

Abstract: We present ScalaCollider, a new client framework to connect to the SuperCollider sound synthesis server. It builds on top of the general-purpose language Scala. Scala's ambition is to allow for the development of scalable systems, being equally comfortable both for small-scale scripting and large-scale modular projects. Following an overview and comparison of the currently available clients for SuperCollider, we introduce the most important features of Scala, and show how its specific language elements can be exploited to design an elegant client that supports UGen graph composition, handles proxy objects, and restores part of the clarity lost in the entanglement of the original SuperCollider client's class library. The problems of type-safety and approaches to concurrency are discussed, and an outlook on ScalaCollider-Proc is given, a high-level extension for declarative sound process specification.
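As a toy illustration of the kind of expression-based UGen graph composition the talk discusses, consider the following sketch. This is not ScalaCollider's actual API; it only shows how Scala's operator methods and implicit conversions allow a synthesis graph to be written and composed as an ordinary expression.

```scala
import scala.language.implicitConversions

// Toy sketch (not ScalaCollider's API) of expression-based UGen graph building.
object UGenGraphSketch {

  sealed trait UGen {
    def *(that: UGen): UGen = BinOp("*", this, that)
    def +(that: UGen): UGen = BinOp("+", this, that)
  }
  final case class Constant(value: Double)             extends UGen
  final case class SinOsc(freq: UGen)                  extends UGen
  final case class BinOp(op: String, a: UGen, b: UGen) extends UGen

  // Numeric literals become constant UGens, so `SinOsc(440) * 0.2` type-checks.
  implicit def doubleToUGen(x: Double): UGen = Constant(x)
  implicit def intToUGen(x: Int): UGen       = Constant(x.toDouble)

  def main(args: Array[String]): Unit = {
    // The expression simply builds a data structure describing the synthesis
    // graph, which a real client would then translate and send to the server.
    val graph = SinOsc(440) * 0.2 + SinOsc(443) * 0.2
    println(graph)
  }
}
```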

 


10 March 2011

Topic: Electro-acoustic music notation

Speaker: Christian Dimpker

Venue: RLB 206

Time: 14:00 – 15:00

Abstract: In contrast to serious instrumental music, most electro-acoustic works are not based on notation. Composers working in this field abandoned written elaboration early in its history. However, because traditional musicology rests on the study of scores, problems have arisen concerning the analysis of these non-notated works. The general processes of electro-acoustic music are well understood and explained in the essential literature on the topic, but the application of these techniques in the works themselves remains obscured by the lack of an accessible visual depiction. The development of a coherent notation system could address these problems by providing access to the complex processes for musicologists, composers and listeners.


Details of the seminars in:

- 2009/2010 academic year.
- 2007–2009 academic years.
- 2006/2007 academic year.
- 2005/2006 academic year.
- 2004/2005 academic year.