

University of Plymouth
The House
Drake Circus
Plymouth PL4 8AA
United Kingdom
Tel: +44 (0)1752 232579




Brief Overview of our Current Research

The EPSRC-funded Digital Music Research UK Roadmap identified six key research themes, which are central to ICCMR’s research strategy.

ICCMR welcomes postgraduate students wishing to join the projects listed below. However, if you have ideas for new projects that might fit our expertise, please do not hesitate to contact us.


Music: Biology, Creativity and Computing

[Image: Physarum polycephalum]

The development of new technologies, in the broader sense of the term, for music is at the heart of ICCMR’s aims. However, this is a means to an end, which is twofold: on the one hand, our research aims to gain a better understanding of the impact of technology on creativity; on the other hand, it looks into ways in which music mediated by technology may contribute to human development and wellbeing, in particular with respect to health and disability. Computing technology has been pivotal to the music industry over the last 70 years, and future technological developments are likely to continue to shape music. An important emerging development in computing research is the ever-tighter coupling between silicon machines and biological ones. Future computing technology will increasingly interface directly with our bodies (e.g., with the nervous system), and media other than silicon will increasingly be harnessed to act as computers (e.g., bacteria-based computing).

(Leading researcher: Eduardo Miranda)


Biomedical Applications of Audio Technology

[Image: In Vitro]

This research combines perceptually driven signal processing and statistical processing techniques to develop new audio solutions for biomedical applications. Current activity includes developing routines for meaningful sonification in the early detection and analysis of cell-level motility, using multi-modal modelling to raise awareness of biological conditions (tumours, motor neurone disease), and psychological/psychoacoustic testing to inform perceptual metering and subsequent perceptually targeted signal processing routines. Prosodic discrimination and real-time timbral enhancement are also being investigated to develop improved automated routines for targeted speech and signal processing (hearing aids, forensic audio). There is a strong cross-over between these interests and the worlds of sound recording (stereophony, spatial audio, field recording) and perceptual audio evaluation (commercial recording, acoustic optimisation). Current collaborators include Harvard University, the University of York, and the University of California, Los Angeles.
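
To make the sonification idea concrete, here is a minimal parameter-mapping sketch in Python: per-frame cell speeds (hypothetical values standing in for real tracking data) are mapped onto pitch and rendered as a sequence of tones. The ranges and mapping are illustrative assumptions, not the routines under development.

    # Minimal parameter-mapping sonification of cell motility. The speeds
    # below are hypothetical stand-ins for values extracted from microscopy
    # tracking data.
    import numpy as np
    import wave

    SR = 44100                                         # audio sample rate (Hz)
    speeds = np.array([0.2, 0.5, 1.4, 0.9, 2.1, 0.3])  # hypothetical um/s per frame

    # Map speed linearly onto pitch: faster motion -> higher tone.
    f_lo, f_hi = 220.0, 880.0                          # audible mapping range (Hz)
    freqs = f_lo + (speeds - speeds.min()) / np.ptp(speeds) * (f_hi - f_lo)

    tone_len = 0.25                                    # seconds of audio per frame
    t = np.linspace(0, tone_len, int(SR * tone_len), endpoint=False)
    audio = np.concatenate([0.5 * np.sin(2 * np.pi * f * t) for f in freqs])

    with wave.open("motility.wav", "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        w.writeframes((audio * 32767).astype(np.int16).tobytes())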

(Leading researcher: Duncan Williams)


Jazz Studies and Popular Music

[Image: Katherine Williams]

This research addresses the parameters of improvisation and composition, and how these can be affected by common perceptions of jazz. Duke Ellington’s recorded output and working methods are taken as a starting point for an investigation into commonly held beliefs about jazz history and scholarship. When pieces of music involve improvisation, what level of input can be attributed to the composer? And to the performer? And what role has recording technology played in making improvisation a repeatable artifact? These questions invite a rethinking of the philosophical implications of jazz fans and scholars owning collections of improvised jazz. Katherine Williams was recently (June 2015) awarded the Ella Fitzgerald Charitable Trust/Jazz Education Network Jazz Research Fellowship to support this research. We are also interested in extending this research into the domains of popular music and digital cultures, including the figure of the singer-songwriter, gender, and music and geography. ICCMR welcomes research students or academic collaborations on these topics.

(Leading researcher: Katherine Williams)


Music as Computational Media

[Image: Cloud Chamber]

Music and other media are normally thought of as forms of entertainment. Using algorithmic composition techniques, computers can create music, and there are computer languages, such as SuperCollider, designed to help people create music. However, can we use music to make computations? Can we build programming languages with music? In this research we have developed Pulsed Melodic Affective Processing (PMAP), which allows MIDI to be used to perform calculations. Rather than binary pulses running through the processing of a robot or AI, in PMAP systems there are melodies. This enables computations involving artificial emotions, as well as potential benefits for human-computer interaction. We are currently developing a computer language called MUSIC (Music-Utilizing Script Input Code) that allows a computer to be programmed using musical structures. This has applications in teaching programming, particularly to those with accessibility needs.
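
As a rough illustration of the PMAP idea, the sketch below encodes data streams as lists of MIDI notes and implements an AND-like gate whose output is only as ‘positive’ as its least positive input. The valence measure and gate rule are our own simplifying assumptions, not the published PMAP specification.

    # Data flows as melodies rather than binary pulses. A stream is a list of
    # MIDI note numbers; its "valence" is the fraction of notes in a major
    # scale. These encodings are simplifying assumptions for illustration.
    C_MAJOR = {0, 2, 4, 5, 7, 9, 11}           # pitch classes of C major

    def valence(stream):
        """Fraction of notes in the major scale: ~1.0 reads as 'happy'."""
        return sum(1 for n in stream if n % 12 in C_MAJOR) / len(stream)

    def affective_and(a, b):
        """AND-like gate: at each step, pass on whichever input note has the
        lower valence, so the output tracks the least positive input."""
        return [na if valence([na]) <= valence([nb]) else nb
                for na, nb in zip(a, b)]

    happy = [60, 64, 67, 72]                   # C major arpeggio (valence 1.0)
    sad   = [60, 63, 67, 70]                   # C minor flavour (valence 0.5)

    out = affective_and(happy, sad)
    print(valence(happy), valence(sad), valence(out))  # output tracks the minimum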

(Leading researcher: Alexis Kirke)


Bio-computer Music

[Image: Immersive Sound Wall]

The field of computer music is evolving in tandem with advances in computer science. We at ICCMR are interested in how the developing field of unconventional computation may provide new pathways for music and music technologies. Research in unconventional computation searches for novel algorithms and computing architectures inspired by, or physically implemented in, chemical, biological and physical systems. The plasmodium of Physarum polycephalum is a unicellular organism, visible to the unaided eye, with a myriad of diploid nuclei, which can be used as a biological computing substrate. The organism is amorphous and, although it has no brain or central point of control, is able to respond to the environmental conditions that surround it. Our research aims to develop computational paradigms that harness Physarum polycephalum to create and work with music.
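
For illustration only, the sketch below shows one simple way a slow voltage oscillation recorded from a plasmodium might be turned into note events; the signal here is synthesised and the pentatonic mapping is an assumption, not ICCMR’s published method.

    # Treat a (synthesised) slow voltage oscillation, standing in for a
    # recording from the plasmodium, as a control signal and quantise it
    # onto a pentatonic scale as note events.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 600, 600)               # 10 minutes, 1 sample/s
    voltage = 5 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 0.5, t.size)

    scale = [60, 62, 65, 67, 70]               # C minor pentatonic (MIDI)
    lo, hi = voltage.min(), voltage.max()

    notes = []
    for v in voltage[::10]:                    # one note every 10 s
        idx = int((v - lo) / (hi - lo) * (len(scale) - 1))
        notes.append(scale[idx])
    print(notes[:12])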

(Leading researchers: Edward Braund, Eduardo Miranda)


Assistive Music Neurotechnology

We are developing Brain-Computer Music Interface (BCMI) technology aimed at special needs and Music Therapy, in particular for people with severe physical disability but intact brain function. At present there are a number of systems available for recreational music making and Music Therapy for people with physical disabilities, but these systems are controlled primarily with gestural devices, which are not suitable for those with more complex physical conditions. Severe brain injury, spinal cord injury and Locked-in Syndrome result in weak, minimal or no active movement. For many people with disabilities, BCMI technology has the potential to enable more active participation in recreational and therapeutic activities. ICCMR is internationally known for its groundbreaking work in the field of BCMI. We have implemented a number of proof-of-concept systems, which have attracted the attention of the scientific community and press worldwide. We are currently collaborating with the medical community to establish protocols for the usage of our systems and to test them in real clinical scenarios.
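
As a hedged illustration of the kind of signal chain a BCMI can build on, the sketch below estimates alpha-band (8-12 Hz) power from a short EEG window and uses it to choose between two musical phrases. The synthetic ‘EEG’, band choices and threshold are placeholders; real systems require per-user calibration.

    # Estimate alpha-band power from a 2 s EEG window and use it to select
    # a musical phrase. The "EEG" here is synthetic.
    import numpy as np

    FS = 256                                   # EEG sample rate (Hz)
    rng = np.random.default_rng(1)
    eeg = rng.normal(0, 1, FS * 2)             # 2 s of stand-in EEG noise
    eeg += 1.5 * np.sin(2 * np.pi * 10 * np.arange(eeg.size) / FS)  # add alpha

    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, 1 / FS)
    alpha_power = spectrum[(freqs >= 8) & (freqs <= 12)].mean()
    baseline = spectrum[(freqs >= 4) & (freqs <= 30)].mean()

    phrase = "phrase A" if alpha_power > 2 * baseline else "phrase B"
    print(round(alpha_power / baseline, 1), "->", phrase)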

(Leading researchers: Eduardo Miranda, Joel Eaton)

Brain-Computer Music Interface for Monitoring and Inducing Affective States

Dynamic Convolution Spectrum

BCMI-MIdAS (Brain-Computer Music Interface for Monitoring and Inducing Affective States) is a collaborative project between the Universities of Plymouth and Reading. The work is funded by two EPSRC grants, with additional support from the host institutions. The project uses coupled EEG-fMRI to inform a brain-computer interface for music. Its central purpose is to develop technology for building innovative intelligent systems that can monitor our affective state and induce specific affective states through music, automatically and adaptively. This is a highly interdisciplinary project, addressing several technical challenges at the interface between science, technology and performing arts/music, and incorporating computer-generated music and machine learning.
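
The closed loop at the heart of the monitor-and-induce idea can be caricatured in a few lines: a simulated listener’s arousal drifts toward a tempo-dependent level, and a proportional controller nudges the tempo to steer measured arousal toward a target. The dynamics, gains and arousal model below are invented purely for illustration.

    # Toy closed-loop sketch: monitor a (simulated) affective state and
    # adapt a musical parameter to induce a target state.
    import random

    target_arousal = 0.7      # desired state in [0, 1]
    tempo = 90.0              # BPM, the parameter the system adapts
    arousal = 0.3             # simulated listener state

    for step in range(20):
        # "Physiology": arousal relaxes toward a tempo-dependent level, plus noise.
        induced = (tempo - 60) / 120           # 60 BPM -> 0.0, 180 BPM -> 1.0
        arousal += 0.3 * (induced - arousal) + random.gauss(0, 0.02)
        # "Controller": proportional update of the musical parameter.
        tempo += 40 * (target_arousal - arousal)
    print(round(tempo), round(arousal, 2))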

(Leading researchers: Eduardo Miranda, Alexis Kirke, Duncan Williams, Slawomir Nasuto (University of Reading))


Articulatory Vocal Synthesis


Speech synthesis plays an increasingly prominent role in Human-Computer Interaction (HCI). The area is currently dominated by two methods: sample-based concatenative synthesis and formant synthesis based on additive synthesis. Articulatory synthesis is a less popular method that employs physical models of the human vocal apparatus to simulate the physical phenomena of speech production. One reason for its less prevalent use is the complex non-linear relationship between the parametric controls of these models and their resulting output, which makes it very difficult to build usable Text-To-Speech (TTS) algorithms around them, and impractical to program them manually. It is because of this complexity that articulatory synthesis holds the potential to overcome the limitations of the two dominant approaches, and in the future it may allow for more expressive and dynamic speech synthesis. To overcome the hurdle of how best to control these synthesisers, our research is looking into the use of evolutionary computing algorithms to evolve suitable parameters for them.
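
A toy version of this evolutionary approach is sketched below: a simple evolution strategy searches for two formant centre frequencies whose crudely modelled spectral envelope matches a target vowel’s. The envelope model, target values and population settings are illustrative stand-ins for a real articulatory synthesiser.

    # Evolve two formant frequencies toward a target spectral envelope.
    import numpy as np

    rng = np.random.default_rng(2)
    freqs = np.linspace(50, 4000, 200)

    def envelope(f1, f2):
        """Toy spectral envelope: two resonant peaks with fixed bandwidths."""
        bw = 80.0
        return (1 / (1 + ((freqs - f1) / bw) ** 2) +
                1 / (1 + ((freqs - f2) / bw) ** 2))

    target = envelope(700, 1100)                   # roughly an /a/-like pair

    pop = rng.uniform(200, 3000, size=(30, 2))     # 30 candidate (F1, F2) pairs
    for gen in range(100):
        fitness = [-np.sum((envelope(*ind) - target) ** 2) for ind in pop]
        elite = pop[np.argsort(fitness)[-10:]]     # keep the 10 best
        children = elite[rng.integers(0, 10, 20)] + rng.normal(0, 30, (20, 2))
        pop = np.vstack([elite, children])         # mutated offspring + elite

    best = max(pop, key=lambda i: -np.sum((envelope(*i) - target) ** 2))
    print(best)                                    # converges near (700, 1100)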

(Leading researchers: Jared Drayton, Eduardo Miranda)


Evolutionary Computer Music


ICCMR is a pioneer in adopting a computational neo-Darwinian approach to studying and making music. We are developing Evolutionary Computation and Artificial Life techniques to model the evolution of music in surrogate societies of artificial agents and robotic simulations. These systems are programmed with the cognitive and physical abilities deemed necessary to evolve music, rather than with preconceived musical rules, knowledge and procedures. We have developed a computational model that simulates the role of imitation in the development of music. This model has recently been implemented as a robotic simulation, which made an impact in the scientific community, resulting in press coverage by New Scientist. We are currently developing a more sophisticated model inspired by cognition theory to simulate the role of complexity in the evolution of music. We are also investigating the role of emotions in sound-based communication systems.
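
In the spirit of the imitation model described above, the following minimal sketch gives five agents small repertoires of three-note motifs and lets repeated imitation games drive their repertoires toward convergence. The distance measure and adoption rule are deliberate simplifications, not the model itself.

    # Agents imitate each other's motifs; repertoires gradually converge.
    import random

    random.seed(3)

    def motif():
        return tuple(random.randint(60, 72) for _ in range(3))

    def distance(a, b):
        return sum(abs(x - y) for x, y in zip(a, b))

    agents = [[motif() for _ in range(3)] for _ in range(5)]

    for _ in range(2000):
        singer, hearer = random.sample(agents, 2)
        sung = random.choice(singer)
        nearest = min(hearer, key=lambda m: distance(m, sung))
        if distance(nearest, sung) > 0:        # imperfect match: adopt the motif
            hearer.remove(nearest)
            hearer.append(sung)

    shared = set(agents[0]).intersection(*map(set, agents[1:]))
    print(len(shared), "motifs shared by all agents")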

(Leading researchers: Eduardo Miranda, Alexis Kirke)


Cultural Evolution of Humpback Whale Song

Humpback whale song is one of the most impressive vocal displays in the animal kingdom. It is also constantly evolving, changing from season to season. This project seeks to better understand the interaction between these creatures that allows such complex song to emerge. To achieve this, we plan to apply the agent-based modelling and evolutionary computation methods developed here at ICCMR. This is a collaborative project: researchers at ICCMR and the Marine Institute are working closely with marine biologists at the School of Biology, University of St Andrews, Scotland, and the Cetacean Ecology and Acoustics Laboratory (CEAL) at The University of Queensland, Australia. Through the analysis of humpback whale song recordings, the project seeks to create a rule set that can be implemented in a computational agent-based model, in effect creating a number of ‘virtual whales’ that interact in a digital ocean. This project is funded by The Leverhulme Trust.
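
A caricature of the planned ‘virtual whales’ is sketched below: agents scattered across a 2-D ocean each hold a song (a sequence of unit labels) and occasionally copy a unit from an audible neighbour, with a small chance of innovation, so the population’s song drifts over time. All parameters and rules are invented for illustration.

    # 'Virtual whales' in a digital ocean: song units spread by imitation
    # within hearing range, with rare innovation.
    import random

    random.seed(4)
    UNITS = "abcdefg"                          # labels for song units
    whales = [{"pos": (random.uniform(0, 100), random.uniform(0, 100)),
               "song": [random.choice(UNITS) for _ in range(6)]}
              for _ in range(8)]

    def in_range(w1, w2, radius=40):
        (x1, y1), (x2, y2) = w1["pos"], w2["pos"]
        return (x1 - x2) ** 2 + (y1 - y2) ** 2 <= radius ** 2

    for step in range(500):
        listener, singer = random.sample(whales, 2)
        if in_range(listener, singer):
            i = random.randrange(6)
            if random.random() < 0.05:         # rare innovation
                listener["song"][i] = random.choice(UNITS)
            else:                              # imitation of a neighbour
                listener["song"][i] = singer["song"][i]

    print(["".join(w["song"]) for w in whales])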

(Leading researchers: Michael Mcloughlin, Eduardo Miranda, Alexis Kirke, Simon Ingram (Marine Institute))


Gesture, Performance and Musical Experience


Gestures and body movements play a key role in music cognition, and ecological knowledge of the gestural repertoire of a traditional musical instrument contributes to the formation of multimodal embodied musical meaning. The objective of this project is to understand the ways in which the musical gestures of a person playing an instrument can affect the musical experience of listeners/observers, and to use motion-capture technologies to analyse the relationship between the body movements of the performer and musical features. The outcomes of the analysis will then be employed to explore the use of gesture itself as a musical feature in composition, and to find new methods of controlling electronic music parameters in live performances.
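
One analysis step of this kind can be sketched briefly: compute a simple ‘quantity of motion’ feature from (here synthetic) motion-capture marker positions and map it onto an electronic-music parameter such as a filter cutoff. The marker data, frame rate and mapping below are illustrative assumptions.

    # Quantity of motion from mocap markers, mapped to a filter cutoff.
    import numpy as np

    rng = np.random.default_rng(5)
    FPS = 120                                  # mocap frame rate
    frames = rng.normal(0, 1, (FPS * 2, 10, 3)).cumsum(axis=0)  # 2 s, 10 markers

    # Quantity of motion: per-frame marker displacement summed over markers.
    disp = np.linalg.norm(np.diff(frames, axis=0), axis=2)  # (frames-1, markers)
    qom = disp.sum(axis=1)                                  # one value per frame

    # Map the smoothed, normalised feature onto a 200 Hz - 8 kHz cutoff range.
    smooth = np.convolve(qom, np.ones(12) / 12, mode="same")
    norm = (smooth - smooth.min()) / np.ptp(smooth)
    cutoff_hz = 200 * (8000 / 200) ** norm     # exponential (perceptual) mapping
    print(cutoff_hz[:5].round(1))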

(Leading researchers: Federico Visi, Rodrigo Schramm (Visiting Fellow from UFRGS, Brazil))


Out of the Lab into the Real World

Converting basic research outcomes into real-world applications through practice-based research is pivotal to our success. ICCMR’s highly interdisciplinary research environment facilitates this by bringing together scientists/engineers and musicians/composers. The outcomes of this research include scholarly articles on the use of new technology in music, and musical compositions and/or live performances applying new concepts, methodologies and technologies. We have used our systems based on neo-Darwinian evolutionary theory to compose a number of successful pieces, such as Grain Streams for piano and live electronics, which has been performed in concerts in Annecy, Buenos Aires, Porto Alegre, Banff, Gothenburg, Edinburgh and Chicago, to name but a few. More recently, we applied our model of the spiking behaviour of brain activity to implement the Fragmented Orchestra, a prize-winning music installation that spanned 24 sites in the UK. New compositions and performances are currently in development applying our brain-computer interfacing technology, new sound synthesis methods (e.g., the in vitro neural networks technique and concatenative synthesis), and our new evolutionary models using complexity and machine-simulated emotion.

(Leading researchers: Simon Ible, Eduardo Miranda, Alexis Kirke)


Retro Currents in Contemporary Audio Synthesis


This is a practice-based exploration of retro trends, focussing on analogue modular synthesis and featuring collaborations with industry-leading practitioners such as legendary music producer Flood, Hollywood composer and sound designer Mel Wesson, and producer Ed Buller. One of the more curious aspects of recent developments in audio synthesis and signal processing is the increasingly strong emphasis on so-called ‘old’ technologies. This manifests in a variety of ways, from the meticulous reproduction of hardware interfaces in digital plug-in design to the revived manufacture of analogue hardware for signal processing and synthesis. Perhaps these technologies never truly disappeared, but the significant recent trend for their widespread revival can be seen in the ‘eurorack’ modular synthesizer format, the ‘lunchbox’ format for analogue signal processing, and the continuing escalation of second-hand prices for classic analogue synthesis hardware such as the Moog modular and the VCS3.

(Leading researcher: David Bessell)


Exploring Interactions through Open Source Methodologies


The aim of this research is to discover new potential in interactive sound and light systems by exploring them as mapped territories of information that resonate with, interfere with, or infect their surroundings. These territories currently include the area of Human-Computer Interaction (HCI) and continue down to the level of circuit components. The research questions the potential of noise in these systems, starting from information theory and entropy, and moves on to exploring conscious/unconscious interactions of both the flesh and the ‘machinic’. Within these systems, information is coded/decoded to cross territorial borders where interactions can be accessible or hidden through practices such as steganography and cryptography. This research is practice-based, covering workshops (theory and practice), installations and performance through Open Source methodologies.

(Leading researcher: David Strang)