Computer Music Research


University of Plymouth
The House
Drake Circus
Plymouth PL4 8AA
United Kingdom
Tel: +44 (0)1752 232579



Seminars (2012/2013)

Seminars presented by visitors and by members of the Computer Music Research team. Each seminar will be followed by an informal discussion open to the audience. Members of the University's academic community, partner colleges and collaborating institutions are welcome.



04 October 2012 (week 10)
Speaker: Mrs Noris Mohd Norowi (Plymouth University)
Topic: Issues in Concatenative Sound Synthesis
Venue: Roland Levinsky 207
Time: 14:00 – 15:30

Abstract: Concatenative Sound Synthesis is a data-driven sound synthesis method that uses a large corpus of source sounds. This seminar will present a technical overview of a concatenative sound synthesis system and will discuss several issues that exist in current systems, namely the challenges in developing an order-dependent feature selection process and the handling of homosonic and equidistant sound segments during unit selection. Solutions to these challenges, i.e. the inclusion of a robust algorithm that automatically assigns consistent weights to all features through the Analytic Hierarchy Process, and the use of concatenative distance, will also be presented.
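The weight-assignment idea mentioned above can be illustrated with a minimal sketch of the Analytic Hierarchy Process. This is not the speaker's implementation: the feature names and pairwise judgements below are hypothetical, and the weights are derived with the standard geometric-mean method.

```python
# Illustrative AHP sketch: deriving consistent weights for audio
# features from a pairwise-comparison matrix. Feature names and
# judgement values are hypothetical, for illustration only.

def ahp_weights(matrix):
    """Derive priority weights from a square pairwise-comparison
    matrix using the geometric-mean method: take the geometric
    mean of each row, then normalise so the weights sum to 1."""
    n = len(matrix)
    gmeans = []
    for row in matrix:
        prod = 1.0
        for value in row:
            prod *= value
        gmeans.append(prod ** (1.0 / n))
    total = sum(gmeans)
    return [g / total for g in gmeans]

# Hypothetical judgements: pitch is 3x as important as loudness and
# 5x as important as brightness; loudness is 2x brightness.
# Entry [i][j] is the importance of feature i relative to feature j,
# so the matrix is reciprocal: matrix[j][i] == 1 / matrix[i][j].
pairwise = [
    [1.0,   3.0, 5.0],   # pitch
    [1 / 3, 1.0, 2.0],   # loudness
    [1 / 5, 0.5, 1.0],   # brightness
]

weights = ahp_weights(pairwise)  # weights sum to 1; pitch largest
```

The point of AHP here is that a human (or a ranking procedure) only has to make simple pairwise judgements; the method then turns them into a single consistent weight vector that can be applied uniformly during unit selection.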

18 October 2012 (week 12)
Speaker: Mr Christian Dimpker (Plymouth University)
Topic: Klanggruppen: A composition for violoncello
Venue: Fitzroy 212
Time: 14:00 – 15:30

Abstract: »Klanggruppen« [sound groups] is a composition for violoncello alone, which was recently premiered. In this seminar the piece will be analysed in terms of the compositional techniques used to construct the work and the extended playing techniques that shape its form. It will be shown how traditional methods and techniques, such as the baroque postulation of the unity of mood within a movement, the twelve-tone technique or pitch class matrices, may be transferred in order to compose a piece that does not intend to produce affect and mainly makes use of concrete sounds. Additionally, an insight into notation systems that enable the depiction of unconventional manners of sound production will be provided.

01 November 2012 (week 14)
Speaker: Dr Alexis Kirke (Plymouth University)
Topic: Open Outcry: a Semi-Deterministic 'Reality Opera' where Traders exchange Stocks live by Call-and-Response Singing
Venue: Roland Levinsky 301
Time: 14:00 – 15:30

Abstract: The opera Open Outcry, by Alexis Kirke and Greg B. Davies, involves 12 operatic performers trading real money and competing for profits in an artificial market by singing to each other, thus creating music that acoustically expresses the behaviour of the market in real time. The opera will be performed on 15 November 2012 at The Mansion House in the City of London, sponsored by Barclays and directed by Alessandro Talevi. The musical trading phrases were composed by Alexis using genetic algorithm computer music techniques. The market was designed by Greg, a Behavioural Finance expert at Barclays. This talk will explain the background of the project as well as some of the techniques used.

15 November 2012 (week 16)
Speaker: Sam Richards (Plymouth University)
Topic: Ideologies of the First and Last Draft
Venue: Rolle Building 015
Time: 14:00 – 15:30

Abstract: "Go with the first thought" is a common idea in psychotherapy, automatic writing and musical improvisation. Why? The assumption is that the first utterance has a special significance. What are the sources of this idea, which, incidentally, only arises in its modern form during the Industrial Age? Sam Richards will suggest some historical sources, including references to mediumship, hypnosis, surrealism and beat poetry. We have long known that "free" improvisation could only be free if we could wipe our brains clean before every performance. But what is the ideology that holds it together? And how does that impact on how we play and teach improvisation?

29 November 2012 (week 18)
Speaker: Dr Duncan Williams (Plymouth University)
Topic: What's timbre got to do with it?
Venue: Roland Levinsky 209
Time: 14:00 – 15:30

Abstract: Timbre is a somewhat elusive attribute of sound - most dictionaries define timbre by what it is not, rather than what it is. As listeners we have the ability to decode musical timbre both autonomously and intuitively, but find it very difficult to describe, let alone quantify. This seminar will present some of the methods by which psychoacousticians approach the task of qualifying and quantifying timbre, with some suggestions as to why such techniques might prove useful for practising musicians and sound engineers. Examples from psychoacoustic testing, musicology, and the popular recording industry will be used to illustrate these applications, and the scope of work still to be done.

13 December 2012 (week 20)
Speaker: Prof Eduardo Miranda (Plymouth University)
Topic: The Algorithmic Composition Aspects of Sound to Sea
Venue: Roland Levinsky 210
Time: 14:00 – 15:30

Abstract: Eduardo R Miranda introduces Sound to Sea, his major new work that revisits the magnificent British choral tradition through a myriad of different cultural references, including the literary works of Horace, Shakespeare and Mark Twain and the music of Elgar, Mozart, Messiaen and Stravinsky. In this talk Prof Miranda will focus on the composition of the second movement, Raster Plot, which was composed with data generated from a computer simulation of a network of neurones.


10 January 2013 (week 24)
Speaker: Joe Browning (SOAS, University of London)
Topic: Materials, Networks, Mimesis: Some perspectives on new 'nature music' for shakuhachi
Venue: Babbage 405
Time: 14:00 – 15:30

Abstract: This paper presents one aspect of ongoing research into the significance of nature – as idea and material entity – in the global shakuhachi (Japanese bamboo flute) scene. It uses case studies of three composers to explore music written for the shakuhachi around the turn of the millennium in the USA, focussing on compositions inspired in various ways by the natural world and inhabited by the sounds of landscapes, birds and other creatures. Drawing on the ideas of Michael Taussig and Georgina Born, I use ethnographic material to outline several interconnected arguments. I focus on the compositional process, as described by the composers themselves, including their accounts of how musical materials, instruments and technological processes can exceed their control and expectations. I argue that the act of composition is affiliative – creating new connections between entities including composers, instruments, various musical repertoires, natural phenomena, and technologies – and that all these entities exert a kind of social agency on each other. I also suggest that complex and multi-layered musical mimesis is at the heart of this affiliative dynamic. This includes not just the imitation of environmental sounds such as birdsong, but also the translation into sound of sights and movements related to natural phenomena, the transformation of musical material via technological processes, and the echoes in these new compositions of sounds and stories from the traditional shakuhachi repertoire.

These ideas suggest alternative perspectives on composition, emphasising the process rather than the finished piece, and foregrounding the network of entities involved, rather than placing the composer at the creative and analytical centre. And together they help to explain the role of new music in reworking traditional ideas about the naturalness of the shakuhachi, and its connections with Japanese landscapes, as the instrument travels around the world.

24 January 2013 (week 26)
Speaker: Dr Marcelo Gimenes (UNICAMP - Brazil & University of Huddersfield)
Topic: Motivational States in Artificial Musical Societies
Venue: Babbage 416
Time: 14:00 – 15:30

Abstract: Dr Gimenes investigates the emergence and self-organisation of musical knowledge and styles in virtual societies by means of computer simulation. A new interactive music system (CoMA - Comunidades Musicais Autônomas, i.e., Autonomous Musical Communities) is currently being implemented in order to enable artificial agents to interact independently of external control. This talk introduces one of the system's main models, which is responsible for determining the behaviour of the artificial agents and allowing different dynamics of social interaction that have the potential to directly influence the evolution of musical styles.

07 February 2013 (week 28)
Speaker: Mr Joel Eaton (Plymouth University)
Topic: Mapping brainwaves to direct real-time notation
Venue: Babbage 405
Time: 14:00 – 15:30

Abstract: ICCMR has a history of research into brain-computer music interfacing (BCMI), but where are we now? How did we get here? And where are we heading? The search for meaning in brainwave information has long been a primary focus, as a crucial attribute for control in BCMI research. Utilising the minuscule, and often unpredictable, electrical signals within our brains as a means of control introduces a number of challenges. This seminar will present methods of dealing with some of these obstacles, and how we can best draw meaning from the EEG information we can currently generate. Examples of two works currently in development will be used to illustrate these factors, which might prove useful for future neurofeedback and BCMI composition.

21 February 2013 (week 30)
Speaker: Dr Matthias Mauch (Queen Mary University of London)
Topic: Making Musical Sense and Science from Digital Data
Venue: Roland Levinsky 206
Time: 14:00 – 15:30

Abstract: Beat-tracking musical recordings, automatic chord identification, downbeat detection, piano note transcription, singing pitch estimation — as a Music Informatics researcher it is my job to teach these tasks to computers. In my talk I'm going to showcase some funky examples of how the resulting technology can be used to aid music listening and making, including SongPrompter, from my time in Japan, and Driver's Seat. One of the greatest applications of Music Informatics is to make science, so the last part of my talk will focus on two recent projects: the first on singing intonation, and the second on the evolution of the pop charts.

07 March 2013 (week 32)
Speaker: Dr David Bessell (Plymouth University)
Topic: Dynamic Convolution Modelling Synthesis
Venue: Roland Levinsky 207
Time: 14:00 – 15:30

Abstract: David Bessell will talk about two methods of audio synthesis, one primarily related to percussion sounds and one related to synthesis of the singing voice. These two approaches to synthesis have in common some hybrid mixing of techniques and concepts taken from physical modelling and frequency domain/convolution processing. In each case these techniques contain some interesting variants on more traditional synthesis architectures using similar means.

21 March 2013 (week 34)
Speaker: Tim Blackwell (Goldsmiths College University of London)
Topic: Is it complicated enough yet?
Venue: Babbage 405
Time: 14:00 – 15:30

Abstract: Quite possibly. This talk considers the impact of complexity science and of the mathematics of patterns on computational composition and performance. Music, insofar as it is aural structure, can vary between the tightly organised and the chaotic. Somewhere between these regimes lie unexplored domains of complexity. A measure, or set of measures, for these domains would be of great value. For example, the fitness impasse of evolutionary music could be negotiated, and live algorithms (autonomous computer performers) might enjoy a natural machine aesthetic. I outline a current study of human perception of musical complexity and how complexity science in general might influence computational music activities, and indeed help us avoid the situation alluded to in the title.

Details of the seminars in:

- 2011/2012 academic year.
- 2010/2011 academic year.
- 2009/2010 academic year.
- 2007–2009 academic years.
- 2006/2007 academic year.
- 2005/2006 academic year.
- 2004/2005 academic year.