University of Plymouth
Brief Overview of our Current Research
The EPSRC-funded Digital Music Research UK Roadmap identified six key research themes (http://music.york.ac.uk/dmrn/roadmap/), which are central to ICCMR’s research strategy.
ICCMR welcomes post-graduate students wishing to join the projects listed below. However, if you have ideas for new projects that might fit our expertise, please do not hesitate to contact us.
Music: Biology, Creativity and Computing
The development of new technologies for music, in the broader sense of the term, is at the heart of ICCMR's aims. However, this is a means to an end, which is twofold: on the one hand, our research aims to gain a better understanding of the impact of technology on creativity; on the other hand, it looks into ways in which music mediated by technology may contribute to human development and wellbeing, in particular with respect to health and disability. Computing technology has been pivotal to the music industry over the last 70 years, and future technological developments are likely to continue to shape music. An important emerging development in computing research is the ever-tighter coupling between silicon machines and biological ones. Future computing technology will increasingly interface directly with our bodies (e.g., with the nervous system), and media other than silicon will increasingly be harnessed to act as computers (e.g., bacteria-based computing).
(Leading researcher: Eduardo Miranda)
Science of Heritage: Artificial Intelligence in Curatorial Practice
How is heritage experienced physically and emotionally? This project addresses this question by combining (a) material culture approaches to the museum, collection and object with (b) pioneering biomedical techniques for measuring participants' emotional responses. The project, developed in collaboration with the V&A Museum in London, aims to develop AI methods to capture and analyse qualitative and quantitative data on participants' experience of museum collections. The objective is to gain a better understanding of how such data can be harnessed to form a more comprehensive understanding of curatorial practices. The research addresses the lack of scientifically produced data on the value of heritage, broadly speaking, in terms of understanding the physiological and emotional responses towards heritage as it is situated and presented in museums. The project brings cutting-edge technologies for measuring bio-signals (that is, actively or passively experienced bio-physiological responses, from heart rate to neurophysiological cues) to the domain of heritage. This project is partially funded by the AHRC.
Smart Clothes for Performance
The field of wearable technology is expanding rapidly. Driven by advances in smart and e-textiles, innovative design developments in fashion, digital art and performance are at the forefront of this research. The objective of our project is to examine the possibilities of creating wearable technology that captures expressive movement in performance through streamlined, intuitive interfaces controlled by the performer. The investigation involves performances in which people wear intelligent garments outfitted with sensors integrated into the fabric and AI technology that renders movement into sound: the garments become intelligent interactive musical systems. Such garments can be built with enhanced technological capabilities that function as an extension of the body, acting as an instrument adept at autonomously making musical compositions from gestures in a contemporary performance setting.
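As a toy illustration of the kind of sensor-to-sound mapping such a garment might perform, the sketch below renders a normalised accelerometer reading into a MIDI note. The mapping is entirely hypothetical, chosen for clarity rather than taken from the project's actual garments.

```python
def gesture_to_midi(accel, pitch_range=(48, 84), vel_range=(40, 127)):
    """Map a normalised accelerometer reading to a MIDI note (toy mapping).

    accel = (x, y, z) with each axis in [-1, 1]. The vertical axis picks
    the pitch; overall movement energy picks the velocity. A hypothetical
    mapping for illustration, not the project's garment firmware.
    """
    x, y, z = accel
    lo, hi = pitch_range
    pitch = round(lo + (y + 1) / 2 * (hi - lo))      # height of gesture -> pitch
    energy = min(1.0, (x * x + y * y + z * z) / 3)   # motion energy -> loudness
    vlo, vhi = vel_range
    velocity = round(vlo + energy * (vhi - vlo))
    return pitch, velocity

# A still garment yields a quiet middle pitch; a vigorous gesture a loud high one.
note_rest = gesture_to_midi((0, 0, 0))
note_move = gesture_to_midi((1, 1, 1))
```

In a real garment the reading would arrive continuously from fabric-integrated sensors, and the mapping would be learned or designed per performer rather than fixed.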
Creative Quantum Computing
Quantum computing promises new ways of thinking about computer music and the arts through processes such as superposition and entanglement. These properties also offer the potential for significant speed-ups in searching music databases or optimising musical structures. There have been notable previous attempts to use the equations of quantum mechanics for generating sound, and to sonify simulated quantum processes. For new forms of computation to be useful in computer music, however, actual hardware must eventually be employed, and this has rarely happened with quantum computer music. One reason is that such hardware is currently not easy to access; another is that the hardware available requires some understanding of quantum computing (QC) theory. We were the first to use actual quantum computing hardware in computer music, and our research continues to investigate novel applications that both exploit the advantages of QCs and provide new insights into the unique properties of the processes inside them.
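To give a flavour of how superposition can drive musical output, the sketch below simulates repeated measurements of a qubit in equal superposition (applying the Born rule in software, not running on real quantum hardware) and maps the outcomes to MIDI pitches. The two pitches and the seed are arbitrary assumptions for the example.

```python
import math
import random

def measure_qubit(alpha, beta, rng):
    """Collapse a qubit state alpha|0> + beta|1> to a classical bit.

    Probabilities follow the Born rule: P(0) = |alpha|^2, P(1) = |beta|^2.
    """
    p0 = abs(alpha) ** 2
    return 0 if rng.random() < p0 else 1

def sonify_superposition(n_notes, pitches=(60, 67), seed=42):
    """Map measurements of an equal superposition to MIDI pitches.

    With alpha = beta = 1/sqrt(2), each pitch occurs with probability 0.5,
    so the melody is genuinely probabilistic rather than composed.
    """
    rng = random.Random(seed)
    amp = 1 / math.sqrt(2)
    return [pitches[measure_qubit(amp, amp, rng)] for _ in range(n_notes)]

melody = sonify_superposition(8)
```

On actual hardware the same idea would submit a circuit per note and read the measurement result back, which is where access and QC-theory knowledge become the hurdles described above.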
Assistive Music Technology for Dementia
Music can have a profound impact on people living with dementia. The brain processes relating to music seem unexpectedly resistant to neurodegeneration. Music has been used to trigger positive emotions and enable memory recall, bringing families closer together and temporarily bringing people with late-stage dementia out of previously non-responsive states. We have combined this with the fact that radio listening is most common in the age range most affected by dementia to generate research in the field of dementia, broadcasting and music. Two radio programmes on these topics have been broadcast, co-produced and co-written by the BBC and ICCMR. We are also looking at ways to combine artificial intelligence with music and broadcasting to create unique assistive technologies for people living with dementia, including ways of helping people perform vital daily tasks, reducing episodes of agitation, and enabling them to remember key pieces of information using music.
Jazz Studies and Popular Music
This research addresses the parameters of improvisation and composition, and how these can be affected by common perceptions of jazz. Duke Ellington's recorded output and working methods are taken as a starting point for an investigation into commonly held beliefs about jazz history and scholarship. When pieces of music involve improvisation, what level of input can be attributed to the composer? And to the performer? And what role has recording technology played in making improvisation a repeatable artefact? These questions invite a rethinking of the philosophical implications of jazz fans and scholars owning collections of improvised jazz. Katherine Williams was recently (June 2015) awarded the Ella Fitzgerald Charitable Trust/Jazz Educators Network Jazz Research Fellowship to support this research. We are also interested in extending this research into the domains of popular music and digital cultures, and into the figure of the singer-songwriter, gender, and music and geography. ICCMR welcomes research students and academic collaborations on these topics.
(Leading researcher: Katherine Williams)
Music as Computational Media
Music and other media are normally thought of as forms of entertainment. Using algorithmic composition techniques, computers can create music, and there are computer languages, such as SuperCollider, designed to help people create music. But can we use music to make computations? Can we build programming languages with music? In this research we have developed Pulsed Melodic Affective Processing (PMAP), which allows MIDI to be used to perform calculations. Rather than binary pulses running through the processing of a robot or AI, in PMAP systems there are melodies. This enables computations involving artificial emotions, as well as offering potential human-computer interaction benefits. We are currently developing a computer language called MUSIC (Music-Utilizing Script Input Code) that allows a computer to be programmed using musical structures. This has applications in teaching programming, particularly for those with accessibility needs.
(Leading researcher: Alexis Kirke)
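The idea of computing with melodic streams can be sketched as follows. This is a toy illustration of melody-as-data, not the published PMAP specification: the pitch values, the AND-style gate and the valence/arousal encoding are all assumptions made for the example.

```python
import random

def pmap_encode(valence, arousal, length=16, seed=0):
    """Encode an affective state as a melodic pulse stream (toy model).

    valence in [0, 1] controls the proportion of 'major' pitches (64 vs 63);
    arousal in [0, 1] controls note density via inter-onset rests.
    """
    rng = random.Random(seed)
    pitch_stream = [64 if rng.random() < valence else 63 for _ in range(length)]
    rests = round((1 - arousal) * 3)  # 0 (fast/excited) .. 3 (slow/calm)
    return pitch_stream, rests

def pmap_and(stream_a, stream_b):
    """AND-like gate: the output is 'major' only where both inputs are major."""
    return [64 if a == 64 and b == 64 else 63 for a, b in zip(stream_a, stream_b)]

def pmap_decode(stream):
    """Recover an approximate valence as the fraction of major pitches."""
    return sum(1 for p in stream if p == 64) / len(stream)

happy, _ = pmap_encode(1.0, 0.9)
sad, _ = pmap_encode(0.0, 0.2)
combined = pmap_and(happy, sad)   # a 'happy' and a 'sad' stream gate to 'sad'
```

The point of the exercise is that the data flowing through such a gate remains audible as melody, so a human listener can monitor the state of the computation by ear.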
Biocomputing & Music
The field of computer music is evolving in tandem with advances in computer science. We at ICCMR are interested in how the developing field of unconventional computation may provide new pathways for music and music technologies. Research in unconventional computation searches for novel algorithms and computing architectures inspired by, or physically implemented in, chemical, biological and physical systems. The plasmodium of Physarum polycephalum is a unicellular organism, visible to the unaided eye, containing a myriad of diploid nuclei, which can be used as a biological computing substrate. The organism is amorphous and, although it has no brain or central point of control, it is able to respond to the environmental conditions that surround it. Our research aims to develop computational paradigms that harness Physarum polycephalum to create and work with music.
(Leading researchers: Edward Braund, Eduardo Miranda)
Assistive Music Neurotechnology
We are developing Brain-Computer Music Interface (BCMI) technology aimed at special needs and Music Therapy, in particular for people with severe physical disability but intact brain function. At present there are a number of systems available for recreational music-making and Music Therapy for people with physical disabilities, but these systems are controlled primarily with gestural devices, which are not suitable for those with more complex physical conditions. Severe brain injury, spinal cord injury and Locked-in Syndrome result in weak, minimal or no active movement. For many people with such disabilities, BCMI technology has the potential to enable more active participation in recreational and therapeutic opportunities. ICCMR is well known internationally for its groundbreaking work in the field of BCMI. We have implemented a number of proof-of-concept systems, which have attracted the attention of the scientific community and press worldwide. We are currently collaborating with the medical community to establish protocols for the use of our systems and to test them in real clinical scenarios.
(Leading researchers: Eduardo Miranda, Satvik Venkatesh)
Brain-Computer Music Interface for Monitoring and Inducing Affective States
The BCMI-MIdAS (Brain-Computer Music Interface for Monitoring and Inducing Affective States) is a collaborative project between the Universities of Plymouth and Reading. The work is funded by two EPSRC grants, with additional support from the host institutions. The project aims to use coupled EEG-fMRI to inform a Brain-Computer Interface for music. The central purpose of the project is to develop technology for building innovative intelligent systems that can monitor our affective state, and induce specific affective states through music, automatically and adaptively. This is a highly interdisciplinary project, which will address several technical challenges at the interface between science, technology and performing arts/music, incorporating computer-generated music and machine learning.
(Leading Researchers: Eduardo Miranda, Alexis Kirke, Duncan Williams, Slawomir Nasuto (University of Reading))
Articulatory Vocal Synthesis
Speech synthesis is playing an ever more prominent role in human-computer interaction (HCI). Currently the area is dominated by two methods: sample-based concatenative synthesis and formant synthesis based on additive synthesis. Articulatory synthesis is a less popular method that employs physical models of the human vocal apparatus to simulate the physical phenomena of speech production. One reason for its limited adoption is the complex non-linear relationship between the parametric controls of these models and their resulting output, which makes it very difficult to build usable text-to-speech (TTS) algorithms around them and impractical to program them manually. It is precisely because of this complexity, however, that articulatory synthesis holds the potential to overcome the limitations of the two dominant approaches, and in the future it may allow for more expressive and dynamic speech synthesis. To overcome the hurdle of how best to control these synthesisers, our research is looking into the use of evolutionary computing algorithms to evolve suitable parameters for them.
(Leading researchers: Jared Drayton, Eduardo Miranda)
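A minimal sketch of this evolutionary approach, assuming a deliberately simplified resonance model in place of a real articulatory synthesiser: a (1+1) evolution strategy mutates formant-like parameters and keeps a child only when its output matches a target spectrum better. The model, parameter ranges and mutation sizes are all illustrative assumptions.

```python
import random

def formant_response(params, freqs):
    """Toy resonance model: a sum of Lorentzian-like peaks.

    params = [(centre_freq, bandwidth), ...]; a stand-in for the complex
    non-linear vocal-tract models used in practice.
    """
    return [sum(1.0 / (1.0 + ((f - c) / bw) ** 2) for c, bw in params)
            for f in freqs]

def fitness(params, target, freqs):
    """Negative squared error between the model and target spectra (0 is best)."""
    got = formant_response(params, freqs)
    return -sum((g - t) ** 2 for g, t in zip(got, target))

def evolve(target, freqs, generations=500, seed=1):
    """(1+1) evolution strategy: mutate, keep the child only if it is fitter."""
    rng = random.Random(seed)
    parent = [(rng.uniform(200, 3000), rng.uniform(50, 300)) for _ in range(2)]
    best = fitness(parent, target, freqs)
    for _ in range(generations):
        child = [(max(50, c + rng.gauss(0, 50)), max(20, bw + rng.gauss(0, 10)))
                 for c, bw in parent]
        f = fitness(child, target, freqs)
        if f > best:
            parent, best = child, f
    return parent, best

# Target: two formant-like peaks, loosely inspired by an open vowel.
freqs = list(range(100, 4000, 100))
target = formant_response([(700, 130), (1200, 100)], freqs)
evolved, score = evolve(target, freqs)
```

In the research itself the fitness function would compare the synthesiser's audio output against recorded speech, but the search loop follows the same select-and-mutate pattern.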
Evolutionary Computer Music
ICCMR is a pioneer in adopting a computational neo-Darwinian approach to study and make music. We are developing Evolutionary Computation and Artificial Life techniques to model the evolution of music in surrogate societies of artificial agents and robotic simulations. These systems are programmed with the cognitive and physical abilities deemed necessary to evolve music, rather than with preconceived music rules, knowledge and procedures. We developed a computational model that simulates the role of imitation in the development of music. This model has recently been implemented as a robotic simulation, which made an impact in the scientific community, resulting in press coverage by New Scientist. We are currently developing a more sophisticated model inspired by cognition theory to simulate the role of complexity in the evolution of music. We are also investigating the role of emotions in sound-based communication systems.
(Leading researchers: Eduardo Miranda, Alexis Kirke, Marcelo Gimenes)
Cultural Evolution of Humpback Whale Song
Humpback whale song is one of the most impressive vocal displays in the animal kingdom. It is also constantly evolving, changing from season to season. This project seeks to understand more about the interactions between these creatures that allow this complex song to emerge. To achieve this, we plan to implement the agent-based modelling and evolutionary computation methods developed here at ICCMR. In this collaborative project, researchers at ICCMR and the Marine Institute are working closely with marine biologists at the School of Biology, University of St. Andrews, Scotland, and the Cetacean Ecology and Acoustics Laboratory (CEAL) at The University of Queensland, Australia. Through the analysis of humpback whale song recordings, the project seeks to derive a rule set that can be implemented in a computational agent-based model, in effect creating a number of 'virtual whales' that interact in a digital ocean. This project is funded by The Leverhulme Trust.
(Leading researchers: Eduardo Miranda, Alexis Kirke, Simon Ingram (University of Plymouth Marine Institute))
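The agent-based approach can be sketched as below. The imitation and mutation rules here are placeholders for the rule sets the project aims to derive from real recordings; the unit alphabet, population size and innovation rate are assumptions for the example.

```python
import random

def simulate_song_evolution(n_whales=10, n_seasons=5, song_len=8, seed=7):
    """Toy agent-based model of song transmission (illustrative only).

    Each 'virtual whale' holds a song as a list of unit labels. Each season,
    every whale imitates a randomly chosen neighbour's song, occasionally
    mutating one unit - a crude stand-in for rules learned from recordings.
    """
    rng = random.Random(seed)
    units = list("ABCDE")
    whales = [[rng.choice(units) for _ in range(song_len)]
              for _ in range(n_whales)]
    for _ in range(n_seasons):
        new_generation = []
        for _ in range(n_whales):
            model = whales[rng.randrange(n_whales)][:]  # imitate a neighbour
            if rng.random() < 0.1:                      # occasional innovation
                model[rng.randrange(song_len)] = rng.choice(units)
            new_generation.append(model)
        whales = new_generation
    return whales

pod = simulate_song_evolution()
```

Even this crude model shows the qualitative effect of interest: imitation drives the pod's songs to converge, while occasional innovation keeps the shared song drifting from season to season.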
Retro Currents in Contemporary Audio Synthesis
This is a practice-based exploration of retro trends, focussing on analogue modular synthesis and featuring collaborations with industry-leading practitioners such as legendary music producer Flood, Hollywood composer and sound designer Mel Wesson, and producer Ed Buller. One of the more curious aspects of developments in audio synthesis and signal processing is the emergence of an increasingly strong emphasis on so-called 'old' technologies. This manifests in a variety of ways, from the meticulous reproduction of hardware interfaces in digital plug-in design to the revival of the manufacture of analogue hardware for signal processing and synthesis. Perhaps these technologies never truly disappeared entirely, but the recent significant trend for their widespread revival can be seen in the form of 'Eurorack' modular synthesisers, the 'lunchbox' format for analogue signal processing, and the continuing escalation of second-hand prices for classic analogue synthesis hardware such as the Moog modular and the VCS3.
(Leading researcher: David Bessell)