The Xth Sense Wins Guthman Prize

The Xth Sense, by Marco Donnarumma, a biophysical, wearable interactive technology for musical performance and responsive milieux, has been awarded first prize at the prestigious Guthman New Musical Instrument Competition as “the world’s most innovative new musical instrument.” The GNMIC is an annual event that seeks out the world’s best new ideas in musicality, design, and engineering. It takes place at Georgia Tech, Atlanta, US.

This critically acclaimed event seeks to provide a fertile platform for the advancement of research on New Interfaces for Musical Expression (NIME).

The Xth Sense is a new and original biophysical interactive system based on free, open source tools. The project’s goal is to investigate exploratory applications of biological sounds, namely muscle sounds, in musical performance and responsive milieux.

Complete information and a blog documenting the research can be viewed on-line.

The distribution of the framework is crucial to the investigation; the public launch of the XS is scheduled for around April 2012. While the software and the complete hardware documentation will be freely downloadable online, the biophysical sensor DIY kit will be available for purchase. This consists of a low-cost pack that enables anyone to build, hack and extend this novel wearable device.

The central principle underpinning the Xth Sense (XS) is not to “interface” the human body with an interactive system, but rather to approach the human body as an actual and complete instrument in itself. Augmented musical instruments and physical computing techniques are generally based on the relation user>controller>system: the performer interacts with a control interface (a physical controller or sensor system) and modifies the results and/or rules of a computing system. This approach can constrain, and perhaps dictate, the kinetic expression of a performer, leaving less room for his/her physical energy and non-verbal communication. Moreover, because the sonic outcome of such performances is often digitally synthesised, the overall performance can lack “liveness”.

The XS completely transcends the paradigm of the user interface by creating sonic matter and control data directly from the performer’s body. There is no mediation between body movements and music, because the raw sonic material originates within the fibres of the body, and the sound manipulations are driven by the varying amounts of energy produced by the performer.

Technical description

The XS fosters a new and authentic interaction between humans and machines. By enabling a computer to sense and interact with the muscular potential of human tissues, the XS approaches the biological body as a means for computational artistry. During a performance, muscle movements and blood flow produce subcutaneous mechanical oscillations, which are nothing but low-frequency sound waves (mechanomyographic signals, or MMG). Two microphone sensors capture the sonic matter created by the performer’s limbs and send it to a computer, which develops an understanding of the performer’s kinetic behaviour by *listening* to the friction of the flesh. Specific gestures, force levels and patterns are identified in real time; according to this information, the computer algorithmically manipulates the sound of the flesh and diffuses it through a variety of multi-channel sound systems.
The neural and biological signals that drive the performer’s actions become analogous expressive matter, for they emerge as tangible sound.
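The XS framework itself is written in Pure Data, so the following is only an illustrative sketch, not the project’s code. It shows the general sensing idea described above: treating the MMG signal as audio and tracking its frame-by-frame RMS amplitude as a rough estimate of muscular effort. The function name and parameters are hypothetical.

```python
import numpy as np

def mmg_envelope(signal, sr, frame_ms=50):
    """Estimate muscular effort from an MMG audio signal.

    Splits the signal into short frames and returns one RMS
    amplitude per frame; stronger contractions yield larger values.
    (Illustrative only -- the XS uses a Pure Data framework.)
    """
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    frames = signal[:n * frame].reshape(n, frame)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Synthetic example: a weak contraction followed by a strong one,
# both modelled as low-frequency (25 Hz) oscillations, as MMG
# energy sits well below the audible range.
sr = 8000
t = np.linspace(0, 1, sr, endpoint=False)
weak = 0.1 * np.sin(2 * np.pi * 25 * t[: sr // 2])
strong = 0.8 * np.sin(2 * np.pi * 25 * t[: sr // 2])
env = mmg_envelope(np.concatenate([weak, strong]), sr)
print(env.max() > 4 * env.min())  # stronger gesture -> much larger envelope
```

An envelope like this is the kind of control signal that can then drive the sound processing in real time.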

The XS can be played as a traditional musical instrument, i.e. analog sounds can be produced and modified by adequately exciting the instrument, but it can also be used as a gestural controller to drive audio synthesis or sample processing. The XS can be used in both modes simultaneously. The most interesting performance feature of the system is the possibility of expressively controlling a multi-layered processing of the MMG audio signal simply by exerting different amounts of kinetic energy. For instance, stronger and wider gestures could be analysed and mapped so as to generate sharp, higher resonant frequencies coupled with a very short reverb time, whereas weaker and more confined gestures could be deployed to produce gentle, lower resonances with a longer reverb time.
The form and color of the sonic outcome are continuously shaped in real time with very low latency (measured at 2.5 ms), so the relationship between the perceived force and spatiality of a gesture and the resulting sound is neat, transparent and fully expressive. From the exclusive real-time processing of the muscle sounds, through the resampling of pre-recorded sounds, to the audio manipulation of traditional musical instruments, the XS is the first musical instrument of its kind to offer such flexibility at a very low cost and with free and open technology.
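The energy-to-parameter mapping described above can be sketched as a simple function. This is a hypothetical illustration following the example in the text (stronger gestures yield higher resonances and shorter reverb, weaker gestures the opposite); the parameter ranges are invented, not the XS presets.

```python
def map_energy(e, e_max=1.0):
    """Map normalized kinetic energy e in [0, 1] to two processing
    parameters, following the mapping example described in the text:
    strong gesture -> high resonant frequency, short reverb time;
    weak gesture   -> low resonance, long reverb time.
    Ranges are illustrative only.
    """
    e = max(0.0, min(e / e_max, 1.0))
    resonance_hz = 200 + e * (4000 - 200)  # 200 Hz .. 4 kHz
    reverb_s = 5.0 - e * (5.0 - 0.2)       # 5 s .. 0.2 s
    return resonance_hz, reverb_s

strong = map_energy(0.9)  # high resonance, short reverb
weak = map_energy(0.1)    # low resonance, long reverb
print(strong, weak)
```

In a real patch, a mapping like this would run continuously on the MMG envelope, so the processing parameters track the performer’s effort moment to moment.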

The work was developed at the SLE, Sound Lab Edinburgh – the audio research group at The University of Edinburgh, and was kindly supported by the Edinburgh Hacklab and Dorkbot ALBA. The project was finalized during an Artistic Development Residency at Inspace, Edinburgh. Inspace kindly sponsored the work by providing technical and logistical support, and organizing a public vernissage for the official launch of the project within the artistic research program “Non-Bio Boom”.

The XS technology was awarded the first prize at the Margaret Guthman Musical Instrument Competition (Georgia Tech, US 2012) as “the world’s most innovative new musical instrument”.
The research was awarded a PRE travel grant, which facilitated a related presentation at the International Computer Music Conference (ICMC) 2011, and an International grant by Creative Scotland for a presentation at the academic conference KEAMS/SICMF 2011 in Seoul, South Korea.

Additional Information

The use of open source technologies is an integral aspect of the research. The biosensing wearable device was designed and implemented by Marco Donnarumma, with the support of Andrea Donnarumma and Marianna Cozzolino. The Pure Data-based framework for real-time analysis and processing of biological sounds was designed and coded by the author on a Linux machine, with inspiring advice from Martin Parker, Sean Williams, Owen Green, Jaime Oliver, and Andy Farnell.

Related works

Since its inception in March 2011, the first piece for the XS, titled “Music for Flesh II” (MFII), has toured South Korea, Mexico, Norway, the UK, Italy and Germany, and has been presented at several major academic conferences, among them the International Computer Music Conference (ICMC, UK) and the Linux Audio Conference (IRL).

Also in March 2011, the author was commissioned for a new work development residency at Inspace, UK. During the residency the XS was deployed in the implementation of Non-Bio Boom: a Musicircus, a biosensing, participatory sound environment for eight audio channels and multiple users.

In May 2011 the system was employed as the central technology in the project Raw/Roar, a two-week artistic residency that involved a team of five dancers and three composers directed by the author. The residency focused on the creation of an intermedia dance piece for enhanced bodies, which was premiered at Dansehallerne, DK. The project was commissioned by the Danish National School of Theatre and Contemporary Dance and supported by The Danish Arts Council and Augustinus Fonden.

Pictures courtesy of Chris Scott.

Feb 21, 2012
What is this?

Networked_Music_Review (NMR) is a research blog that focuses on emerging networked musical explorations.