Eduardo R. Miranda

Eduardo is Professor of Computer Music and head of the ICCMR. He studied for an MSc in Music Technology at the University of York, UK. Subsequently he received a PhD on the topic of sound design with Artificial Intelligence from the University of Edinburgh, UK. Before joining the University of Plymouth, he worked at Sony Computer Science Laboratory in Paris as a research scientist in the fields of AI, speech and evolution of language. Eduardo is a composer working at the crossroads of music and science. His distinctive music is informed by his unique background as a classically trained composer and Artificial Intelligence (AI) scientist with an early involvement in electroacoustic and avant-garde pop music.

Quantum Computer Music

Computers are essential for the functioning of our society. Yet, despite the incredible power of existing machines, computing technology is progressing beyond today’s conventional models: Quantum Computing, built on the principles of quantum mechanics, is emerging as a disruptive, game-changing technology. It is advancing rapidly, but access still requires tools and expertise that are largely confined to scientific laboratories. ICCMR is working to facilitate early access for potential users beyond the scientific community, in particular those in the creative economy. We are collaborating with Rigetti Computing, Berkeley, USA, to develop approaches to making music with quantum computers and to build bespoke programming tools for musicians.
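
By way of illustration, here is a minimal sketch of the general idea in Python: sample measurement outcomes from a qubit in superposition and map the resulting bits to pitches. It uses a hand-rolled one-qubit statevector simulation rather than ICCMR’s tools or Rigetti’s actual programming stack, and the bit-to-pitch mapping is an arbitrary assumption.

```python
# Minimal sketch: mapping quantum measurement statistics to notes.
# This is NOT ICCMR's or Rigetti's toolchain -- just a hand-rolled
# one-qubit statevector simulation to illustrate the general idea.
import numpy as np

rng = np.random.default_rng(42)

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def measure(state, shots):
    """Sample measurement outcomes (0 or 1) from a qubit state."""
    p0 = abs(state[0]) ** 2
    return rng.choice([0, 1], size=shots, p=[p0, 1 - p0])

# Prepare |0>, apply H to get an equal superposition, then measure.
state = H @ np.array([1.0, 0.0])
outcomes = measure(state, shots=16)

# Map each outcome bit to a pitch: 0 -> C4, 1 -> G4 (arbitrary choice).
PITCHES = {0: 60, 1: 67}            # MIDI note numbers
melody = [PITCHES[b] for b in outcomes]
print(melody)
```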

Shakespeare, CERN and Game of Thrones combine for new Contemporary Music Festival Opera

The opera features a libretto written in Vōv – a language created by Hollywood ‘conlanger’ David Peterson. Peterson is responsible for the Game of Thrones tongues Dothraki and High Valyrian and many others developed for the big and small screen, and will open the festival with a talk entitled ‘On Designing Languages for Would-be Worlds’.

Project offers new lease of life to disabled former musicians

Researchers at the University of Plymouth are bringing together a group of people who, because of mental health or physical injury, are no longer able to play their instruments. They are particularly interested in hearing from current or ex-military and other services (police, fire etc) personnel who would like to take up playing again.

OK computer: decoding the composer behind music generated from brainwaves and outer space

In a wide-ranging conversation, Eduardo talks to us about how computer music has developed since Charles Babbage originated the concept of a programmable machine, what motivates his research which links technology to humankind, and why studying at Plymouth is the perfect place to compose the music of the future and create the technology it is played on.

Sube Banerjee

Sube Banerjee is Executive Dean of the Faculty of Health. Before joining the University of Plymouth, Sube served as Professor of Dementia and Associate Dean at Brighton and Sussex Medical School, directing its Centre for Dementia Studies.

Clinically he works as an old age psychiatrist. He was trained at St Thomas’, Guy’s and the Maudsley Hospitals. Before joining BSMS in 2012 he was Professor of Mental Health and Ageing at the Institute of Psychiatry, King’s College London. He served as the UK Department of Health’s senior professional advisor on dementia, leading the development of its National Dementia Strategy.

Sube is active in health system development and works with industry and governments on health systems, policy and strategies to improve health for older adults with complex needs and those with dementia. An active researcher, he focusses on quality of life in dementia, evaluation of new treatments and services, and the interface between policy, research and practice. He has been awarded national and international awards for work in policy and research in dementia.

Alexis Kirke

Alexis is a Senior Research Fellow in Computer Music. He studied for a BSc (Hons) in Mathematics at the University of Plymouth. He subsequently received a PhD in the field of Artificial Neural Networks and a PhD in the field of Computer Music, both from the University of Plymouth. Alexis’ research interests include applications of music and the arts to technology and Human-Computer Interaction, computational modelling of musical performance, assistive technology for dementia, and unconventional computation.

Jörg Fachner

Dr. Jörg Fachner is Professor of Music, Health and the Brain and Co-Director of the Cambridge Institute of Music Therapy Research at Anglia Ruskin University in Cambridge. Having worked for over 25 years on music therapy, psychology and medical research projects in Germany and Finland, he is a specialist in interdisciplinary research topics across the social, medical and music sciences. He is keen to use technology to investigate social interaction in music, and is PI of social neuroscience projects in the UK and Austria that use EEG hyperscanning to investigate the dyadic brain activity of therapists and patients in music therapy. His research was featured in the 2019 BBC One documentary My Dementia Choir, and he is in demand as a music and science presenter around the world.

Helen Odell-Miller

Dr Helen Odell-Miller OBE is Professor of Music Therapy and Director of the Cambridge Institute for Music Therapy Research at Anglia Ruskin University, Cambridge. Over 40 years, her research and clinical work have helped establish music therapy as a profession worldwide, specifically through innovative approaches for older people living with dementia and for adults with mental health issues. She founded music therapy in the adult NHS mental health service in Cambridge, and is currently Principal Investigator for Homeside, a large five-country randomised controlled trial investigating music and reading for people living at home with their family carer. She was one of the Commissioners for the Music and Dementia Strategy in the UK, produced by the International Longevity Centre and launched at the House of Lords, London, in 2018. She has published widely, including edited and co-edited books and journal articles, has appeared in the media, and has been an invited keynote speaker around the world.

RadioMe – The Next Generation of Radio?

Senior Research Fellow Alexis Kirke, Professor Eduardo Miranda and BBC Editor Mark Grinnell introduce Radio Me – a new Engineering and Physical Sciences Research Council (EPSRC)-funded project to improve life for people living at home with dementia.

Multimillion pound project will see AI remixing radio to help people living with dementia

A £2.7 million project is to use artificial intelligence to adapt and personalise live radio, with the aim of transforming life for people living alone with dementia.

Radio Me will address key causes of hospital admission for people with dementia, such as agitation and not taking medication correctly. As a result, it is hoped quality of life will improve, and people will be able to remain living independently at home for longer.

Multimillion pound project will see AI remixing radio to help people living with dementia

Radio listeners are used to getting information about the travel and weather – but now they could get personalised medical support, thanks to an innovative new scheme.

Radio Me, a project clinically led by the Centre of Dementia Studies at Brighton and Sussex Medical School (BSMS), aims to improve the lives of people living alone with dementia, allowing them to remain living independently at home for longer.

A user switching on the radio in the morning might find their usual local station. However, at a point dictated by the electronic diary, a DJ-like voice could override the real DJ and remind the listener to have a drink, take medicine, attend a memory café or anything else.
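
As a rough illustration of the “electronic diary” mechanism described above, the Python sketch below queues timed reminders and releases any that are due, standing in for the point at which the synthesised voice would override the broadcast. The class and method names are hypothetical, not part of Radio Me.

```python
# Hypothetical sketch of the "electronic diary" idea described above:
# at scheduled times a reminder interrupts the live radio stream.
# Class and method names are illustrative, not Radio Me's actual API.
import datetime as dt
import heapq

class ReminderDiary:
    def __init__(self):
        self._queue = []                      # (time, message) min-heap

    def add(self, when, message):
        heapq.heappush(self._queue, (when, message))

    def due(self, now):
        """Pop and return all reminders whose time has passed."""
        out = []
        while self._queue and self._queue[0][0] <= now:
            out.append(heapq.heappop(self._queue)[1])
        return out

diary = ReminderDiary()
diary.add(dt.datetime(2020, 5, 1, 9, 0), "Please have a drink of water.")
diary.add(dt.datetime(2020, 5, 1, 12, 30), "Time to take your medicine.")

now = dt.datetime(2020, 5, 1, 12, 31)
for message in diary.due(now):
    # In Radio Me the live audio would be ducked and a synthesised
    # DJ-like voice would speak; here we just print the reminder.
    print("[over the radio]", message)
```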

David Moffat

David Moffat is a Research Assistant in Audio Signal Processing and Artificial Intelligence. He received an MSc in Digital Music Processing from Queen Mary University of London and a BSc in Artificial Intelligence and Computer Science from the University of Edinburgh. David previously worked as a postdoc in the Audio Engineering Group of the Centre for Digital Music at Queen Mary University of London. His research focuses on intelligent and assistive mixing and audio production tools, built with semantic tools and machine learning.

Alex Street

Alex is a neurologic music therapist and senior research fellow at the Cambridge Institute for Music Therapy Research. His research focuses on designing and delivering interventions to improve neurological function, mood and quality of life for people with various neurological conditions in acute, subacute and community settings. His research has been published in several peer-reviewed scientific journals and he has presented internationally. Alex has a particular interest in developing and implementing technology to improve accessibility and the self-delivery of exercises, and to increase treatment dosage.

Gözel Shakeri

I am a post-doctoral researcher at the School of Computing Science. I successfully defended my PhD thesis, “Multimodal Feedback for Mid-air Gestures when Driving”, in late February 2020. My research interests lie in Human-Computer Interaction, Sustainable Food Interaction, Signal Processing, and Machine Learning.

Satvik Venkatesh

Satvik holds a Bachelor of Technology in Information and Communication Technology from SASTRA, India, and a ResM in Computer Music from ICCMR. He is currently studying for a PhD at ICCMR on the topic of intelligent and assistive mixing and audio for live radio broadcast. His research interests include Brain-Computer Music Interfaces, Unconventional Computing, and Artificial Intelligence for music. Satvik is also an accomplished musician and performer.

Edward Braund

Edward studied for an MRes in Computer Music and a PhD on the topic of biocomputing for music, both at ICCMR. He is currently a Lecturer in Computing, Audio, and Music Technology. His current research explores the information-processing abilities of chemical, biological, and physical systems to develop new types of processors, sensors, and actuators. Recent developments on this front include a method for producing biological memristors, approaches to implementing logic gates on biological substrates, an interactive bioprocessor for musical improvisation, and a range of biological sensors.

Nuria Bonet

Nuria studied for a MusB (Hons) in Music and a MusM in Electroacoustic Composition at the University of Manchester. She subsequently took an MSc in Acoustics and Music Technology at the University of Edinburgh. In 2018 Nuria received her PhD in Computer Music from ICCMR. She currently teaches at the University of Plymouth and conducts research into organology, sonification and the interplay between music and science at ICCMR.

RadioMe

ICCMR’s EPSRC-funded project RadioMe is aimed at developing broadcasting technology to improve the lives of people living with dementia. Given the popularity of radio among the age group most likely to be living with dementia, we are developing a way to seamlessly ‘remix’ live digital broadcasts so that listeners receive personalised reminders, information and music. Using sensors to measure physical signs such as heart rate, as well as wireless speakers and an Internet connection, RadioMe output will be produced in users’ homes by AI software to be created at ICCMR. The RadioMe project is developed in partnership with the University of Glasgow, Anglia Ruskin University, Alzheimer’s Society, BBC Radio Devon, MHA Care, Bauer Media and CereProc.
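
To make the sensing idea concrete, here is a deliberately simple, hypothetical sketch of how a smoothed heart-rate reading might steer music selection. The baseline, threshold and playlist names are illustrative assumptions, not the project’s actual algorithm.

```python
# Illustrative sketch only: choosing calmer music when a wearable
# sensor suggests agitation. Thresholds and names are assumptions,
# not the RadioMe project's actual algorithm.
from statistics import mean

RESTING_BPM = 70          # assumed per-user baseline
AGITATION_MARGIN = 15     # bpm above baseline treated as agitation

def select_playlist(recent_bpm):
    """Pick a playlist from a smoothed heart-rate reading."""
    smoothed = mean(recent_bpm)
    if smoothed > RESTING_BPM + AGITATION_MARGIN:
        return "calming"       # e.g. slow-tempo, familiar songs
    return "usual_station"     # pass the live broadcast through

print(select_playlist([88, 92, 90]))   # -> calming
print(select_playlist([68, 72, 71]))   # -> usual_station
```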

Research expertise to be shared with undergraduates through new Computing, Audio and Music Technology degree, launching 2020

Students will become experts in recording, mixing, mastering, acoustics, digital audio workstations, audio processing, sound synthesis, and many other areas. However, as well as providing learners with solid skills in traditional music and audio technology, the research-led degree will allow students to design and programme their own software. This unique aspect of the course means students will not be bound by the limitations of what is available commercially.

Pioneering partnership to develop Quantum Computing for creative applications

ICCMR is a research partner of Rigetti Computing, developing research into creativity and musical applications of Quantum Computing.

ICCMR’s research is awarded a prize at Prix Ars Electronica 2019

ICCMR’s research into Biocomputing and creativity received an Honorary Mention at the prestigious Prix Ars Electronica, Linz, Austria.

ICCMR is awarded a multimillion-pound research grant from EPSRC

ICCMR’s new RadioMe Project is aimed at developing broadcasting technology to improve the lives of people suffering from dementia. The project is in partnership with the University of Glasgow, Brighton and Sussex Medical School, Anglia Ruskin University, Alzheimer’s Society, BBC, MHA Care, Bauer Media and CereProc.

Grand prize of the European Commission for Innovation in Technology, Industry and Society stimulated by the Arts, ICCMR award

The composition Biocomputer Rhythms, by Prof Eduardo Miranda, won an Honorary Mention at STARTS, an initiative of the European Commission to foster arts & sciences connections.

ICCMR’s research is featured in science documentary Humanity 4.0

Prof Eduardo Miranda talked to EBS TV in South Korea about ICCMR’s ground-breaking research into Music Neurotechnology.

Hedy Hurban

Hedy is a costume designer and composer, originally from Toronto, Canada. She has created original costumes and music for feature films that combine traditional concepts with contemporary materials and digital devices. Hedy holds a BFA in Visual Arts from York University, Canada, and a ResM in Computer Music from ICCMR. She is currently studying for a PhD at ICCMR. Her research examines how smart technology can be integrated into garments to harness the body as a musical instrument.

Impact of ICCMR research featured in Volvo film for Sky Atlantic

Ground-breaking research into Brain-Computer Music Interfacing is featured in the film Music of the Mind, part of Sky Atlantic’s Human Made Stories series.

Da Vinci Edition in Japan releases a CD of computer-aided symphonies made in ICCMR

Two computer-aided symphonies by Prof Eduardo Miranda are now available on CD with recordings of the premieres at our very own Peninsula Arts Contemporary Music Festival.

A new book edited by ICCMR’s Prof Eduardo Miranda is published

A pioneering new book on musical applications of unconventional computing has just been published by Springer. 

Rachel Horrel

Rachel graduated with a BA (Hons) in Music from the University of Plymouth and is currently studying for a ResM in Computer Music at ICCMR. Her research focuses on musical composition with Brain-Computer Music Interfacing systems. Rachel is the music director and conductor of the University of Plymouth Concert Band.

Richard Abrahams

Richard Abrahams holds a BA (Hons) in Music Composition from Dartington College of Arts and a ResM in Computer Music from ICCMR. He is currently studying for a PhD at ICCMR, conducting research into neurological synaesthesia. He is developing a computer model for studying colour and sound as a single, dualistic sensory experience, and an artificial-synaesthesia system for music composition.

Dieter Hearle

Dieter is a ResM in Computer Music student and holds a BA (Hons) in Music from the University of Plymouth. His background is as a practising and performing musician who has played in many bands as a guitarist and bassist; his current band is the Plymouth alternative rock group Black Tree Suns. Dieter is conducting research into the sonification of seafaring data. For his master’s project he is developing a musical composition with auditory renditions of data relayed from buoys along the coast of Plymouth.

Archer Endrich

Born in the USA, Archer is a composer of both acoustic and electroacoustic music. He migrated to the UK in 1971 and completed a doctorate in Music Composition at the University of York. He has been coordinator and administrator of the Composers Desktop Project (CDP) since its inception in 1987. CDP is one of the most comprehensive suites of software tools for sound transformation and composition ever developed, and he has authored most of its reference documentation and tutorials. Archer is a Visiting Research Fellow at ICCMR, where he conducts research into electroacoustic music composition and sound design.

Linas Baltas

Linas graduated from the Lithuanian Academy of Music and Theatre with a BA in Music Composition; his subsequent postgraduate studies there include a Master’s in Music Composition and a Licentiate of Arts in Music Composition. His compositions are regularly performed at contemporary art festivals in Lithuania, Germany, the USA and the UK. He is currently a Visiting Research Fellow at ICCMR, where he conducts research on contemporary music composition.

Clive Mead

Clive holds a BA in Music Production from the University of Brighton and has over 25 years of experience as an artist, composer and producer. He has a background in electronic dance music and has also written and produced music in numerous styles for film and TV. He is a specialist in re-creating vintage music styles and produces sample packs in various genres for the industry’s leading sample publisher. His ResM research at ICCMR is focused on exploring the relationship between the technology available during different time periods and the composition and production process.

Ben Payne

Ben Payne holds a BA (Hons) in Sound & Music Production and an MRes in Computer Music from the University of Plymouth. He is currently undertaking a PhD in the field of immersive audio environments and virtual reality at ICCMR.

Samuel Pearce-Davies

Samuel Pearce-Davies is a solo musician and music programmer specialising in neural networks for AI composition. Having graduated from Falmouth University with a BA (Hons) in Creative Music Technology and subsequently completed a ResM in Computer Music at Plymouth, Sam is now studying for a PhD at ICCMR funded by the 3D3 Centre for Doctoral Training. His research focuses on artificial neural networks’ ability to transpose learned features from one data source to another, and how this can be applied in a sonic context.

Srishti Singh

Srishti is studying for a Bachelor of Technology in Electronics and Communication at the Vellore Institute of Technology, Chennai, India. She is currently an international placement student at ICCMR, where she is investigating Quantum Computing applications in music.

Carlos Tarjano Santos

Carlos holds a Bachelor’s degree from the Federal Centre for Technological Education of Rio de Janeiro, Brazil, and a Master’s in Production Engineering from Universidade Federal Fluminense, also in Rio de Janeiro. He is pursuing a PhD at Universidade Federal Fluminense on the topic of emulating real-world instruments and singing voices with neural networks. He is currently a Visiting Research Fellow at ICCMR, where he is developing part of his doctoral thesis.

Cândida Borges

Cândida Borges is a Brazilian contemporary musician and transmedia artist. She is currently studying for a PhD at ICCMR, University of Plymouth, UK, and is a Visiting Scholar at Columbia University (New York, USA) and a Fellow Researcher at Universidad de Antioquia (Medellin, Colombia). Cândida has been an Associate Professor of Music at the Federal University of the State of Rio de Janeiro (UNIRIO) since 2009, and an invited professor at international institutions. She holds a BA (2000) and an MA (2005) in Piano Performance from the Federal University of Rio de Janeiro (UFRJ), and completed a specialization in Electronic Music Production at SAE Institute NYC (2013). Classically trained, she has made music for films, ballet, theatre and collaborations with DJs and producers worldwide, and especially for her own career as a singer-songwriter. Based in New York City to explore a multicultural environment for her PhD project, her artistic work addresses migration, borders and new technology.

Alberto Tates

Born in Tulcan, Ecuador, Alberto graduated with a BA in Software Engineering from Universidad de las Americas. He subsequently received a PgDip in Artificial Intelligence from the University of Essex, UK. Alberto is currently studying for a ResM in Computer Music at ICCMR, where he is conducting research into Brain-Computer Music Interfacing.

Musical Biocomputing

ICCMR is studying the electrical properties of organisms in order to build innovative electronic systems. We are developing electronic components grown out of biological material. We are learning how to control an organism known as Physarum polycephalum to build bioprocessors, and we are using them to build biocomputers for new kinds of Artificial Intelligence for musical creativity. At the core of ICCMR’s biocomputers are biomemristors: memristors made with Physarum polycephalum. The memristor is a relatively unknown electronic component: a resistor with memory. It is exciting because its behaviour is comparable to that of biological neurones and certain processes in the brain, which is paving the way for the development of brain-like processors.
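
To illustrate what “a resistor with memory” means, the sketch below numerically integrates the classic HP linear ion-drift memristor model. This is a standard textbook model, not a characterisation of ICCMR’s Physarum-based biomemristors.

```python
# A minimal numerical sketch of memristor behaviour ("a resistor with
# memory") using the classic HP linear ion-drift model -- a standard
# textbook model, not ICCMR's biomemristor characterisation.
import numpy as np

R_ON, R_OFF = 100.0, 16e3     # resistance limits (ohms)
D = 10e-9                     # device thickness (m)
MU = 1e-14                    # ion mobility (m^2 s^-1 V^-1)

dt = 1e-4
t = np.arange(0, 2, dt)
v = np.sin(2 * np.pi * 1.0 * t)         # 1 Hz driving voltage

w = D / 2                               # internal state (doped width)
current = np.zeros_like(t)
for k, vk in enumerate(v):
    M = R_ON * (w / D) + R_OFF * (1 - w / D)   # memristance
    i = vk / M
    current[k] = i
    w += MU * R_ON / D * i * dt                # state drifts with charge
    w = min(max(w, 0.0), D)                    # keep w physical

# Plotting v against current would show the memristor's signature
# pinched hysteresis loop: resistance depends on the charge history.
```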

Music Neurotechnology

Imagine if you could play a musical instrument with signals detected directly from your brain. Would it be possible to generate music that represents brain activity? What would the music of our brains sound like? These are some of the questions addressed by our research into Music Neurotechnology. Recent advances in the neurosciences have led to a deeper understanding of the behaviour of both individual biological neurones and large groups of them. We are developing systems for musical creativity using biologically informed computational models of the brain, and ICCMR is a world pioneer of Brain-Computer Music Interfacing (BCMI) research. BCMI technology allows a person to control bespoke musical instruments and systems by means of commands expressed through brain signals, which are detected with brain-scanning technology. We are interested in developing BCMI for people with special needs and for music therapy, in particular for people with severe physical disabilities.
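
As a toy illustration of the BCMI principle, the following sketch extracts a simple spectral feature from a synthetic EEG epoch and turns it into a binary musical command. Real BCMIs use far more robust signal processing; the feature choice and command mapping here are assumptions for illustration only.

```python
# A toy sketch of the BCMI idea: extract a simple feature (alpha-band
# power) from an EEG epoch and map it to a musical command. Synthetic
# data and the two-command mapping are assumptions for illustration.
import numpy as np

FS = 256                                   # sampling rate (Hz)

def band_power(epoch, lo, hi, fs=FS):
    """Mean spectral power of `epoch` between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(epoch)) ** 2
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    band = (freqs >= lo) & (freqs < hi)
    return spectrum[band].mean()

rng = np.random.default_rng(0)
t = np.arange(FS) / FS                     # one second of "EEG"
epoch = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(FS)

alpha = band_power(epoch, 8, 12)           # relaxed, eyes-closed rhythm
beta = band_power(epoch, 13, 30)           # engaged, alert rhythm

# One binary "command": relaxed (alpha-dominant) plays a low drone,
# engaged (beta-dominant) plays a higher figure.
command = "drone" if alpha > beta else "figure"
print(command, round(alpha, 1), round(beta, 1))
```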

Origins and Evolution of Music

ICCMR is a pioneer in adopting a computational neo-Darwinian approach to studying and making music. We are developing Evolutionary Computation and Artificial Life techniques to model the evolution of music. These systems are programmed with the cognitive and physical abilities deemed necessary to evolve music, rather than with preconceived musical rules, knowledge and procedures. We are developing computational models that simulate the role of imitation in the development of music. ICCMR collaborates with the University of Plymouth’s Marine Institute, the University of St. Andrews’ School of Biology, and the Cetacean Ecology and Acoustics Laboratory (CEAL) at The University of Queensland, Australia, to understand how humpback whales evolve their songs.
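
Here is a minimal sketch of the evolutionary approach, under stated assumptions: melodies are genomes drawn from a scale, and fitness rewards closeness to a “heard” melody, loosely standing in for imitation. The fitness function and parameters are illustrative, not ICCMR’s actual models.

```python
# A minimal sketch of the evolutionary approach: melodies as genomes,
# fitness as closeness to an imitated "heard" melody. The fitness
# function is an illustrative stand-in, not ICCMR's actual models.
import random

random.seed(1)
SCALE = [60, 62, 64, 65, 67, 69, 71, 72]        # C major, MIDI numbers
TARGET = [60, 64, 67, 72, 67, 64, 60, 64]       # melody to be imitated

def fitness(melody):
    # Higher when the melody matches the imitated target more closely.
    return -sum(abs(a - b) for a, b in zip(melody, TARGET))

def mutate(melody, rate=0.2):
    return [random.choice(SCALE) if random.random() < rate else n
            for n in melody]

population = [[random.choice(SCALE) for _ in TARGET] for _ in range(50)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                    # truncation selection
    population = [mutate(random.choice(parents)) for _ in range(50)]

print(max(population, key=fitness))              # approaches TARGET
```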

Artificial Intelligence Musicianship

Artificial Intelligence (AI) technology permeates the music industry, ranging from management systems for recording studios to recommendation systems for the online commercialisation of music. ICCMR is developing technology to enable machines to listen and create sounds interactively. We are interested in developing AI that harnesses human musical creativity rather than AI that replaces musicians. To this end we are building tools and models for AI musicianship, combining computational models of audio analysis, interaction and sound synthesis. We are pioneering the development of AI tools for generating sound effects from spoken descriptions and images.
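
One of the simplest machine-listening building blocks behind such interactive systems is onset detection. The sketch below implements a naive energy-based detector on a synthetic signal; it illustrates the general technique, not ICCMR’s actual tools, and the frame size and threshold are assumptions.

```python
# A small sketch of machine listening: an energy-based onset detector,
# one of the simplest analysis steps behind interactive music systems
# (illustrative parameters, not ICCMR's tools).
import numpy as np

FS = 22050

def detect_onsets(signal, frame=512, threshold=4.0):
    """Return sample indices where frame energy jumps sharply."""
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = (frames ** 2).sum(axis=1)
    onsets = []
    for k in range(1, len(energy)):
        if energy[k] > threshold * max(energy[k - 1], 1e-12):
            onsets.append(k * frame)
    return onsets

# Synthetic test: silence, then a burst of a 440 Hz tone at 0.5 s.
t = np.arange(FS) / FS
signal = np.where(t > 0.5, np.sin(2 * np.pi * 440 * t), 0.0)
print(detect_onsets(signal))    # one onset near the burst at 0.5 s
```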

Smart Garments for Opera

ICCMR is championing the development of smart garments for musical performance and opera through innovative fashion design with e-textiles and new materials. We are developing garments that can capture expressive movement in performance and use this information to control music systems, lighting and stage design components. We are experimenting with performances in which people wear intelligent garments outfitted with sensors integrated into the fabric, together with AI technology that renders movement into sound. The garments thus become intelligent, interactive musical systems: an extension of the body as a musical instrument.
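
As a concrete example of the kind of movement-to-sound mapping such a garment could drive, the sketch below maps accelerometer magnitude to pitch and loudness. The sensor ranges and the mapping itself are illustrative assumptions.

```python
# Sketch of a movement-to-sound mapping of the kind a sensing garment
# could drive: accelerometer magnitude controls pitch and loudness.
# Sensor ranges and the mapping itself are illustrative assumptions.
import math

def motion_to_note(accel_xyz, lo_hz=110.0, hi_hz=880.0, max_g=2.0):
    """Map acceleration magnitude (in g) to frequency and amplitude."""
    magnitude = math.sqrt(sum(a * a for a in accel_xyz))
    x = min(magnitude / max_g, 1.0)            # normalise to [0, 1]
    freq = lo_hz * (hi_hz / lo_hz) ** x        # exponential pitch map
    amp = 0.2 + 0.8 * x                        # gentler motion = quieter
    return freq, amp

print(motion_to_note((0.1, 0.0, 0.1)))   # slow gesture: low, quiet
print(motion_to_note((1.2, 0.9, 0.6)))   # vigorous gesture: high, loud
```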

BCMI-MIdAS

Website: http://neuromusic.soc.plymouth.ac.uk/bcmi-midas/index.html

About

BCMI-MIdAS (Brain-Computer Music Interface for Monitoring and Inducing Affective States) is a collaborative project between the Universities of Plymouth and Reading. The work is funded by two 54-month EPSRC grants, with additional support from the host institutions. The project aims to use coupled EEG-fMRI to inform a Brain-Computer Interface for music. The principal investigators are, jointly, Professor Slawomir Nasuto and Professor Eduardo Miranda.

The central purpose of the project is to develop technology for building innovative intelligent systems that can monitor our affective state and induce specific affective states through music, automatically and adaptively. This is a highly interdisciplinary project, which will address several technical challenges at the interface between science, technology and performing arts/music (incorporating computer-generated music and machine learning).

Research questions

Research questions which will be investigated by the project include:

  • How can music change affective states and what are the specific musical traits (i.e., the parameters of a piece of music) that elicit such states?
  • How can we control such traits in a piece of music in order to induce specific affective states in a participant?
  • How can we effectively detect information about affective states induced by music in the EEG signal, going beyond EEG asymmetry and characterising information contained in synchronisation patterns? (See the sketch after this list.)
  • How can we use the EEG to monitor the affective state induced by music on-line (i.e., in “real-time”)?
  • How can we produce a generative music system capable of generating music embodying musical traits aimed at inducing specific affective states, observable in the EEG of the participant?
  • How can we build an intelligent adaptive system for monitoring and inducing affective states through music on-line?
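
To make the third question above concrete, here is a minimal sketch of the two feature families it names: a frontal alpha-asymmetry index and a phase-locking value (PLV) as a basic synchronisation measure. Synthetic signals stand in for real EEG recordings, and the feature definitions are standard textbook forms rather than the project’s actual pipeline.

```python
# A minimal sketch of the two feature families named above: frontal
# alpha asymmetry and a phase-synchronisation index (PLV) between two
# channels. Synthetic signals stand in for real EEG recordings.
import numpy as np
from scipy.signal import hilbert

FS = 256
t = np.arange(4 * FS) / FS
rng = np.random.default_rng(7)

# Two synthetic "frontal" channels sharing a 10 Hz alpha component.
alpha = np.sin(2 * np.pi * 10 * t)
left = 0.8 * alpha + 0.5 * rng.standard_normal(t.size)
right = 1.2 * alpha + 0.5 * rng.standard_normal(t.size)

def alpha_power(x):
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(x.size, 1 / FS)
    return spectrum[(freqs >= 8) & (freqs < 12)].mean()

# Classic asymmetry index: log power difference between hemispheres.
asymmetry = np.log(alpha_power(right)) - np.log(alpha_power(left))

# Phase-locking value: 1 = perfectly synchronised phases, 0 = none.
phase_diff = np.angle(hilbert(left)) - np.angle(hilbert(right))
plv = np.abs(np.exp(1j * phase_diff).mean())

print(f"asymmetry={asymmetry:.2f}  PLV={plv:.2f}")
```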